no-problem/9902/hep-ex9902015.html
# A 16-channel Digital TDC Chip with internal buffering and selective readout for the DIRC Cherenkov counter of the BABAR experiment

## 1 Introduction

The circuit was built for the ring imaging Cherenkov counter (the DIRC) of the BABAR experiment, presently under construction at SLAC (Stanford Linear Accelerator Center) around the interaction point of the PEP-II collider to observe CP violation in the decays of $`B`$ mesons. The accelerator is an asymmetric $`\text{e}^+\text{e}^-`$ collider with beam energies of 9 GeV (electrons) and 3.1 GeV (positrons). The most recent review of the physics prospects of the BABAR experiment can be found in reference.

The paper starts by recalling the context (sect. 2) through a brief description of the experiment. The timing features of the collider and the detector, as well as the properties of the signal and the background, are listed, and it is explained how the TDC requirements were derived from them. Next comes a detailed account of the TDC implementation (sect. 3). The two main building blocks of the circuit, the time measuring section and the selective readout, are described in turn. Some of the chip design methods are described at the end of that section. Finally, sect. 4 is devoted to the description of the measurements that were made on the chips to understand their performance.

## 2 Context

### 2.1 The DIRC in the BABAR experiment

The physics requires full (better than 3 $`\sigma`$) charged hadron ($`\pi`$/K) identification for tracks with momenta between 0.7 and 4.2 GeV/c, spread according to the peculiar kinematics of the asymmetric storage ring. This is achieved by means of a ring imaging Cherenkov counter (the DIRC), which is briefly described in sect. 2.1.1. The main features of the photodetectors and collider relevant for the design of the electronics are the following. The photodetectors are phototubes (PMT) spread across so large an area that each is hit by at most one photon per event. Hence no amplitude measurement is needed to characterize a hit PMT; a hit is therefore essentially defined by its timing. The photodetector signal at the input of the electronics chain is described in sect. 2.1.2. PEP-II is a B factory, i.e. a very high luminosity (3 $`\times`$ 10<sup>33</sup> cm<sup>-2</sup>s<sup>-1</sup>) machine, able to produce the rare CP violating $`B`$ meson decays. That implies a high counting rate, both from physics events and from machine background, which requires that the PMT timing resolution not be compromised, hence the use of high precision TDCs tied to the ultra stable machine RF clock. That also implies intricate trigger and data acquisition schemes (see sect. 2.1.3) with high bandwidth demands, leading to designs where useless data are discarded as early as possible. The requirements for the design of the TDC within the DIRC electronics chain are described in sects. 2.2 and 2.3.

#### 2.1.1 The DIRC

The radiators of the DIRC are long (4.9 m) quartz bars with a rectangular section (3.5 cm wide and 1.7 cm thick), arranged as a 12-sided prism which approximates a cylinder at a radius of 90 cm around the beam axis. Tracking devices inside that cylinder measure the trajectories and momenta of the charged particles. Some of those particles emit Cherenkov light in the quartz.
The Cherenkov photons with sufficiently grazing incidence upon the bar faces propagate towards the "backward" end of the bars (the "forward" end is equipped with a mirror), undergoing total reflections which preserve their initial direction up to a 16-fold discrete symmetry. The image finally expands inside a "standoff" box full of water towards a detection surface (1.2 m away from the bar ends) covered by 10751 photomultiplier tubes (PMT) with 1 inch diameter photocathodes. An optimized coupling between the bars and the water volume is obtained by means of quartz "wedges", which maximize the Cherenkov photon acceptance at the price of an increased number of possible paths. Particle identification is obtained from the velocity measured via the Cherenkov angle. The soundness of the DIRC concept was proven with a large scale prototype at CERN in 1995-96.

#### 2.1.2 The photodetectors and their signal

The photodetectors are ETL 9125 photomultiplier tubes, which are fast and sensitive to single photoelectrons. Their characteristics, measured on the delivered PMTs, are detailed in reference. The timing resolution is 1.5 ns rms; the tubes with a resolution above 1.8 ns were discarded. The tubes are operated at a typical gain of $`1.7\times 10^7`$, which corresponds to high voltage settings between 900 and 1400 V. An average (single photoelectron) peak to valley ratio of 2.1 is measured, the range being between 1.7 (the required minimum to accept a tube) and 3. The analog electronics allows operation at a threshold of $`\sim`$ 10% of the single photoelectron peak, which translates to a 2 mV signal at the input of the frontend. Under those conditions the PMTs have single photoelectron detection efficiencies above 90% and noise rates below 1 kHz.

#### 2.1.3 Trigger and data acquisition

The dominant noise contribution for the DIRC comes from particles lost by the machine. At the time of the design, the rates were, somewhat optimistically, estimated (using a safety factor of ten) to be below 100 kHz on each tube. Enough memory has to be included on the frontend to store the data while the trigger decision is made. The Level 1 (L1) trigger, built from Drift Chamber, Calorimeter, and Muon Detector primitives, has a latency of 12 μs and an uncertainty (jitter) of less than one μs. Suppressing in the frontend the data stored during the latency but outside the resolution window eases the bandwidth requirements for the communication channel between the frontend and the data acquisition downstream by a factor of 10. Note that the physics events come much less frequently (100 Hz overall), adding, within a 1 μs window, 50% more hits concentrated in a 60 ns interval.

### 2.2 The TDC requirements

The DIRC frontend electronics requirements were derived from the anticipated environment. Of relevance for the TDC circuit are:

* a timing resolution well below that of the PMTs,
* a reliable time measuring scale,
* the ability to store enough data to cope with the timing structure of the experiment, namely the duration of physics events and the characteristics of the trigger,
* the ability to discard data not in time with the trigger,
* the capacity to respond to random input rates of 100 kHz with less than a percent deadtime loss, and similarly for a coherent rate of 10 kHz on the 16 channels of a circuit,
* enough diagnostics available in the output data.
### 2.3 The TDC within the DIRC frontend electronics

The DIRC digital TDC chip is the main building block of the DIRC frontend electronics. It receives 16 outputs from two 8-channel analog chips with zero-crossing discriminators, which time the PMT pulses. 64 PMT channels belong to a DIRC Frontend Board (DFB), which thus comprises 8 analog chips and 4 TDC chips. The data and control signals to and from the trigger and data acquisition systems travel on 1 Gbit/s optical fibers connected to one DIRC Crate Controller (DCC) board per crate of 14 DFBs. The clocks and commands needed by the frontend chips are distributed using a custom backplane, the PDB (Protocol Distribution Board). See the reference for more details. On any Level 1 trigger (L1) occurrence, the digitized time data associated with this trigger are transferred to a Multi-Event Buffer (MEB) on the DFB and stay there until a readout request (Readout Strobe), originated in the central control and timing system, initiates readout into the data acquisition system.

## 3 TDC Implementation

To match the requirements, a 16-channel integrated circuit accepting TTL input pulses has been built. It has been manufactured by ATMEL-ES2 using a 0.7 μm CMOS process. The die size is 36 mm<sup>2</sup>, and the dissipation is less than 100 mW when all 16 channels fire at 100 kHz. After a summary of the performance actually achieved (sect. 3.1), the global architecture (sect. 3.2) and the details of the timing (sect. 3.3) and selective readout (sect. 3.4) implementations are described in turn. Finally, technical details about the scan test (sect. 3.5) and the chip layout (sect. 3.6) are given.

### 3.1 Performance

The performance is better than required. The TDC uses an external precision clock. For BABAR, a 59.5 MHz clock is derived from the storage ring radiofrequency. The chip can however be used with clock frequencies ranging from 45 to 90 MHz. The time measurement is performed with a 0.5 ns binning (1/32 of the external clock period) over a 32 μs full scale. The double hit resolution is 32 ns (conversion time). The dead time loss associated with the storing and sorting of the data is well below $`10^{-3}`$ for the specified input rates. The acceptance window parameters are programmable between 64 ns and 16 μs (8 bits) for the latency and between 64 ns and 2 μs (5 bits) for the width. Read and Write operations can be simultaneous. A bit pattern flagging the overloaded channels is output for every trigger.

### 3.2 Architecture

A block diagram of the chip is shown in Figure 1. It mostly shows the timing section described in sect. 3.3. The Readout Control box on the figure incorporates the selective readout section (sect. 3.4). The time measurement proceeds in two steps. On each of the 16 channels, a fine time measurement on 5 bits within a period of the external clock is achieved using voltage-controlled digital delay lines (sect. 3.3.1). The control voltages which synchronize the delay lines to the clock period are provided by an extra, identical calibration channel. Thus delay drifts due to temperature, power supply or process variations are compensated by construction. This calibration (sect. 3.3.2) is fully transparent while the TDC is operated. An 11-bit synchronous counter, common to all channels, counts the clock ticks to provide the coarse time measurement.
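As a rough illustration of this two-step scheme, the sketch below reconstructs a hit time from a latched coarse count and a 5-bit fine code. The convention that the fine code measures the interval between the hit and the next clock edge (and is therefore subtracted) is an assumption made for illustration only; the actual on-chip encoding is defined by the design described above.

```python
T_CLK_NS = 1000.0 / 59.5   # external clock period in ns (59.5 MHz)
N_TAPS = 32                # fine bins per clock period, lsb ~ 0.5 ns

def timestamp_ns(coarse_count, fine_code):
    """Combine the 11-bit counter value latched by the hit with the
    5-bit delay-line code latched on the next clock edge.  Here the
    fine code is taken to count how far the hit preceded that edge,
    so it is subtracted from the coarse time (assumed convention)."""
    lsb = T_CLK_NS / N_TAPS            # ~0.53 ns at 59.5 MHz
    return coarse_count * T_CLK_NS - fine_code * lsb

# Example: hit latched at coarse count 100 with fine code 12
print(timestamp_ns(100, 12))
```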
In order to allow data-driven operation and asynchronous readout occurring at any trigger time, sixteen dual port FIFOs allow data to be written from the TDC section. The selective readout uses three levels of buffering in FIFO memories to sort the data in time with an incoming trigger and make them available for readout (sect. 3.4). Each FIFO overload during a trigger window is reported at the end of each data block as a sixteen bit pattern.

### 3.3 Time measurement

The TDC section integrates one 60 MHz counter, 16 digital delay lines with 32 taps of 500 ps delay each, and a calibration channel made of a delay line identical to that of a measuring channel. An incoming signal latches the counter state in an 11-bit register. It also propagates through the delay line. The next positive clock edge latches the state of the delay line in a 32-bit register, the result being binary encoded to five bits. This method is one of those described in reference.

#### 3.3.1 The fine time measurement

The basic cell of a delay line is composed of two CMOS inverters, one of which has its current limited by a voltage controlled resistor (Fig. 2). A complete line is made of 32 cells used for the measurement, preceded and followed by 4 cells (underflow and overflow). The control voltage levels are the result of a continuously running calibration process which locks the calibration channel to the external clock. These analog controls are common to all channels, assuming sufficient process uniformity within the chip. The feasibility of the design was known from measurements made on previous TDC chips using the same technology.

#### 3.3.2 Phase locking

The calibration channel (Fig. 3), coupled with a state machine and two control voltage generators (Fig. 4), tunes two analog voltage levels to lock the total delay of the chain to the external clock period (gain) and to minimize the time offset of the line. The state machine schedules clock pulses to be sent to the calibration channel inputs. In an offset subcycle, a given clock pulse is sent to both the start and stop inputs and the offset control is adjusted until a zero digitization is obtained. Alternately, a gain subcycle consists of sending one clock pulse to the start input and the subsequent one to the stop input. It ends when the full-scale digitization is reached. This process is intrinsically convergent, and no loss of lock has been observed; therefore, it is not monitored. Calibration is internally activated at 100 kHz, which gives the best linearity results.

### 3.4 Selective Readout

#### 3.4.1 Overview

The block diagram of the selective readout (Fig. 5) shows the 3 levels of buffering implemented using FIFOs. The first level consists of the above mentioned 4-word deep channel FIFOs. They are emptied by a continuous read process at 30 MHz running in the selective readout processor described in the next section (3.4.2). A 32-word deep FIFO called $`\text{FIFO}_\text{L}`$ (L standing for latency), shared by all channels, receives the data sorted in chronological order (oldest first). As soon as the current time reaches that of the earliest trigger the oldest data could stem from, those data are transferred into the output FIFO, $`\text{FIFO}_\text{O}`$, where they remain for at most one trigger resolution window. At any given time, $`\text{FIFO}_\text{O}`$ thus contains all the data compatible with a trigger that could occur then.
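A minimal software analogy of this buffering scheme is sketched below: hits become eligible for the output buffer once they are exactly one trigger latency old, and a trigger simply snapshots that buffer. The nanosecond time units and list-based "FIFOs" are illustrative simplifications, not the chip's actual word formats or depths.

```python
def fifo_o_contents(hits_ns, now_ns, latency_ns, width_ns):
    """Hits that belong in FIFO_O at time now_ns: those recorded one
    trigger latency ago, inside the programmed window width."""
    lo = now_ns - latency_ns
    hi = lo + width_ns
    return [t for t in sorted(hits_ns) if lo <= t < hi]

def read_trigger(hits_ns, t_trig_ns, latency_ns, width_ns):
    """Emulate the packet produced when an L1 trigger empties FIFO_O."""
    data = fifo_o_contents(hits_ns, t_trig_ns, latency_ns, width_ns)
    return {"header": t_trig_ns, "data": data, "trailer": "overload flags"}

# 12 us latency, 200 ns window: hits at 650 and 700 ns are kept
print(read_trigger([650, 700, 4000], 12600, 12000, 200))
```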
When an L1 trigger does occur, an automatic readout signal empties $`\text{FIFO}_\text{O}`$ and outputs a data packet with a header (containing the L1 time), as many words as are present in $`\text{FIFO}_\text{O}`$, and a trailer with flags indicating the overloaded channel numbers for that trigger. The readout process is sequenced at 30 MHz. It always terminates before another L1 arrives (another L1 comes no sooner than 1.5 μs in BABAR). While data are read out, the process filling $`\text{FIFO}_\text{O}`$ keeps working. There is no associated deadtime provided the rate remains far from saturation, i.e. less than 96 hits (the sum of the depths of the FIFOs in series) during one trigger latency. Detailed simulation studies have been performed to determine the selective readout parameters, namely the $`\text{FIFO}_\text{L}`$ depth, the width of the time slices, and the characteristics of the comparators used by the fast sort algorithm described next. The test bench results (see 4.3) validate the simulation for the rates specified in the requirements, for which the time to move the data through the FIFOs contributes very little.

#### 3.4.2 Fast sort

A dichotomic algorithm, pictured in Fig. 6, selects the oldest data in the channel FIFOs during 256 ns time windows using 2-bit comparators. The comparisons have a 64 ns precision and the result of one round is available after 18 ns. They are performed on time slices delayed by at least one slice width with respect to the current time (given by the synchronous counter), to avoid carry problems when the counter wraps around. The width of 256 ns is fixed by the response time of the comparator tree and the required maximum input occupancy.

### 3.5 Fault simulation

A scan path has been implemented. The Built-In Self Test generator of the silicon manufacturer has also been used for its FIFOs. 18k test vectors, some of which were written by hand, have been used, for a fault coverage of 90% of the chip.

### 3.6 Layout

The layout (Fig. 7) has been done using the style most appropriate to each function.

TDC section and channel FIFOs (full custom). A stick layout symbolic editor, into which the silicon manufacturer's design rules were input, was used to draw the sections critical for timing or silicon area: the delay chains, the fast counter, the charge pump and associated controls, and the synchronization logic. All analog sections have been simulated with HSPICE before and after layout. Sufficient margins were used to ensure the required behaviour over a temperature range of 20 ± 15 °C with 10% supply voltage variations. A compact layout is obtained for the full custom part, which occupies about half the chip area.

Latency and Output FIFOs. The latency and output FIFOs have been generated using the automated tool of the silicon manufacturer, as black boxes to be filled when making the masks. Test vectors have also been generated automatically. A model for these FIFOs has been written in the Verilog hardware description language, from which the associated counters and glue logic were merged into the standard cells generated by the compiler Synergy. In this case as well, a post-layout simulation has checked that the design accommodates temperature, voltage supply and process variations within the safety margins recommended by the manufacturer.

Random logic. The random logic has been implemented as standard cells, using the library of the manufacturer. A Verilog model was also provided.
Verilog models of the TDC sections have been written to simulate the chip globally. The full die size is 36 mm<sup>2</sup>.

## 4 Performance tests

The test bench described in section 4.1 was used to study the integral and differential linearities of the timing measurements (sect. 4.2) and the selective readout performance (sect. 4.3). The locking frequency range of the calibration (from 45 to 90 MHz), the cross talk between channels (none was found) and the sensitivity to the environment were also studied with the bench. The temperature coefficient is estimated to be $`\sim`$ 2.5 ps per °C and the supply voltage coefficient to be $`\sim`$ 500 ps per V. The manufacturer was given an array of 18k test vectors to check the digital functionality of the chip. Only chips which passed that test were delivered (1250 parts). Of those, 97% matched the specifications. A further selection was finally made to sift out the best 805 chips (672 parts plus spares) for the DIRC. Further tests of the TDC were performed on the DFBs as part of the global commissioning of the DIRC electronics. At present the BABAR experiment, including the DIRC, is taking cosmic ray data, and the TDCs perform satisfactorily as part of the DIRC system.

### 4.1 Test bench

Fig. 8 pictures the test bench. It uses 16 phototubes that can be illuminated by an LED whose light output is adjustable to vary the rates. The TDC channels can be fired either by the discriminated pulses from the PMTs or by precisely timed signals from a pulse generator (LeCroy 9210). The time base (59.5 MHz external clock) is produced by another precision pulser. A custom made four layer printed circuit board with the TDC, a 4k 22-bit word FIFO and a fast readout sequencer is interfaced to a computer running LabView.

### 4.2 Time measurement

The linearity of the time measurement was tested locally and globally. For the differential linearity, both random and deterministic methods were used. In the latter, the delay between start and stop is varied in 10 ps steps across one delay line range (from 0 to 16 ns). The measured time plotted against the set time showed the expected step curve. The difference between the measured and the set times had a standard deviation close to the expected 0.29 lsb (lsb/$`\sqrt{12}`$, where lsb denotes the least significant bit) and never worse than 0.73 lsb or 383 ps, a figure well within the specifications. In the random method, the TDC channels are fired at an average rate above 100 kHz by PMTs at random times with respect to the clock, and the linearity is inferred from the deviation from uniformity of the distribution of the 5 least significant bits of the measured times. A typical result is shown in Fig. 9. Worse results, where the last bin is up to two times too wide, were obtained for the edge channels of some chips, in particular those numbered 14 and 15. This unexpected nonlinearity is presumably a residual layout effect. The local measurements could only test the fine time measurement. To prove that the fine and coarse scales match seamlessly, a global procedure was devised. The generator is run at 500 kHz, asynchronously from the trigger, to produce double pulses with a time spacing of 15 clock periods plus 520 ps (which brings the measured difference to the limit between 32 and 33 lsb).
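Before turning to the results of this global test, the code-density analysis underlying the random method can be sketched as follows. This is an illustrative reconstruction, assuming only that hit times uniform with respect to the clock populate each of the 32 fine codes in proportion to its effective bin width.

```python
import numpy as np

def dnl_from_code_density(fine_codes, n_bins=32):
    """Estimate the differential non-linearity from the 5 LSBs of
    many hits arriving at random with respect to the clock: each
    code's relative occupancy measures its effective bin width."""
    counts = np.bincount(np.asarray(fine_codes), minlength=n_bins)
    widths = counts / counts.mean()   # effective bin widths, in lsb
    return widths - 1.0               # DNL per code, in lsb

# Example with synthetic, perfectly uniform hit times:
rng = np.random.default_rng(0)
codes = rng.integers(0, 32, size=200_000)
print(dnl_from_code_density(codes).round(3))  # ~0 everywhere
```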
In this global test, the measured time difference is recorded for 12000 triggers, enough to test the transition for 90% of the synchronous counter codes. A few occurrences of a mismatch between the coarse and fine time measurements were found in those and in further tests (performed on the DFB frontend boards): 3% of the channels were anomalous when the input rate was 30 times the rate specified in the requirements. Careful analysis revealed that the affected channels were edge channels with too wide a bin 31. It turns out that the two problems, the existence of too wide bins for the edge channels and the occasional slipping of the synchronous counter, are correlated. The remedy is to phase lock only 31 bins of the delay lines (instead of 32) to the external clock. Doing so, slightly worse results are obtained for the linearity (the average is 73 ps). However, they correspond to timing resolutions (196 ps on average) well within specifications. And, most importantly, the probability of a carry problem becomes negligible for the specified input rates. The differential linearity statistics obtained with the random method in the two cases are displayed in Fig. 10.

### 4.3 Selective readout

The selective readout process has been checked by sending to one input a 500 kHz signal one latency before the trigger, while the other 15 inputs received random PMT pulses at rates up to 2 MHz. No loss of data is observed. The signal is observed on its input channel while the others exhibit the expected accidental rate. The histogram of the time difference between the trigger and the recorded time measurement shows a peak at the expected value above a flat distribution, again compatible with the accidental level (see Fig. 11), with the expected width corresponding to the trigger resolution setting and the fast sort algorithm properties. The system detecting the channels with overloaded input has been tested satisfactorily. An experimental determination of the dead time loss is obtained from the ratio of the rate of all overloaded channels to the rate of good time measurements. A comparison with the simulated computation (Fig. 12) shows good agreement for the two extreme cases that were studied: the case where all inputs receive independent random pulses at the specified rate, and the case where all inputs are simultaneously fired by one and the same random input pulse at the specified rate.

## 5 Conclusion

The digital TDC chip described in this paper is a major building block of the DIRC frontend electronics, since it captures the PMT hits and selects those in time with the trigger. The time is measured from an external reference clock with a typical frequency of 60 MHz, with a 0.5 ns lsb over a range of 32 μs. The data driven architecture makes it possible to eliminate background data as early as possible on the frontend, without resorting to pipelines. It is a mixed analog and digital IC which was produced at ATMEL-ES2 using a 0.7 μm CMOS process with an excellent yield (97%). The 1213 good parts have performance above the specifications. The DIRC detector equipped with them is presently taking cosmic ray data at SLAC. The time distribution of cosmic ray tracks (Fig. 13) conforms to expectations.

## Acknowledgements

This work has been supported by the Institut National de Physique Nucléaire et de Physique des Particules, IN2P3, of the Centre National de la Recherche Scientifique, CNRS (France).
It benefited greatly from discussions and collaborative work within the BaBar and DIRC Electronics communities as well as within LPNHE. One of the authors was supported by the government of China while at LPNHE Paris as a foreign visitor.
no-problem/9902/hep-ph9902317.html
# Measuring WWZ and WW𝛾 coupling constants with Z⁰-pole data

## 1 Introduction

One of the most prominent goals of the LEP 2 program performed at the Large Electron Positron Collider (LEP) is the precise measurement of the couplings between the neutral electroweak bosons $`\mathrm{Z}^0`$, $`\gamma`$ and the charged boson $`\mathrm{W}^\pm`$. Analogous measurements were performed at the TEVATRON, probing mainly the coupling between the photon and the $`\mathrm{W}^\pm`$. These two measurements were the first able to prove the non-Abelian character of the electroweak part of the Standard Model. Even more precise determinations will be possible at future hadron or electron-positron colliders. However, before the LEP 2 program with centre-of-mass energies above the W-pair production threshold of about 161 GeV, LEP was running at energies around the $`\mathrm{Z}^0`$ pole at 91 GeV, allowing very precise measurements of fermion pair production properties. The experiments at LEP-1 and also at SLAC measure radiative corrections to the $`\mathrm{Z}^0`$ff vertex. These radiative corrections involve contributions with WWV (V=$`\mathrm{Z}^0`$, $`\gamma`$) vertices, as shown in figure 1 a) and b), and WWV-independent contributions (figure 1 c, d). Therefore precise measurements of fermion-pair production allow the determination of the WWV coupling constants. This was noted already at the beginning of the LEP era.

The phenomenological effective Lagrangian of the WWZ and WW$`\gamma`$ vertices, respecting only Lorentz invariance, contains 14 triple gauge coupling constants (TGCs) as free parameters. All of these can be accommodated in the Standard Model by requiring SU(2)$`\times`$U(1) gauge invariance, if one considers higher dimensional SU(2)$`\times`$U(1) gauge invariant operators. Neglecting higher dimensional operators automatically leads to relations between the TGCs. The model discussed in the following neglects operators of dimension higher than six. Loop corrections in this model lead to a logarithmic divergence of low energy observables. However, it was shown that three dimension-six operators that induce non-standard TGCs do not have this property. Assuming the existence of a light Higgs boson, created by the Higgs-doublet field $`\mathrm{\Phi}`$, one can apply a linear realization of the SU(2)$`\times`$U(1) symmetry. One then obtains, in addition to the SM Lagrangian, the following three terms:

$$\Delta\mathcal{L} = i g'\,\frac{\Delta\kappa_\gamma - \cos^2\theta_W\,\Delta g_Z^1}{m_W^2}\,(D_\mu\Phi)^\dagger B^{\mu\nu}(D_\nu\Phi) + i g\,\frac{\cos^2\theta_W\,\Delta g_Z^1}{m_W^2}\,(D_\mu\Phi)^\dagger\,\vec{\tau}\cdot\vec{\hat{W}}^{\mu\nu}(D_\nu\Phi) + i g\,\frac{\lambda_\gamma}{6 m_W^2}\,\vec{\hat{W}}{}^{\mu}_{\ \nu}\cdot\big(\vec{\hat{W}}{}^{\nu}_{\ \rho}\times\vec{\hat{W}}{}^{\rho}_{\ \mu}\big). \quad (1)$$

In this model the TGC relations are:

$$\Delta\kappa_\gamma = -\frac{\cos^2\theta_W}{\sin^2\theta_W}\left(\Delta\kappa_Z - \Delta g_Z^1\right), \quad (2)$$

$$\lambda_\gamma = \lambda_Z. \quad (3)$$

The remaining nine coupling constants are zero. The SM predicts that all 14 parameters are zero.
The TGCs $`\mathrm{\Delta}\kappa_V`$ and $`\mathrm{\Delta}g_V^1`$ parametrise the deviations of $`\kappa_V`$ and $`g_V^1`$ from their SM expectation of unity:

$$\Delta\kappa_V = \kappa_V - 1 \quad (4)$$

$$\Delta g_V^1 = g_V^1 - 1 \quad (5)$$

In almost all models the electromagnetic gauge invariance is taken for granted, such that $`\mathrm{\Delta}g_\gamma^1`$, the deviation of the W charge from the unit charge, is always zero. The parameter $`\lambda_\gamma`$ is also set to zero in our analysis, since we are not aware of any computation of the dependence of $`ϵ_1`$, $`ϵ_2`$ and $`ϵ_3`$ on $`\lambda_\gamma`$.

## 2 Analysis and Results

The preliminary measurements of electroweak parameters performed at LEP 1, SLAC and the TEVATRON are listed in table 1. The SM predictions agree well with these measurements. The analysis of this data set proceeds in two steps. In the first step, the $`ϵ`$ parameters $`ϵ_1`$, $`ϵ_2`$, $`ϵ_3`$ and $`ϵ_b`$:

$$\epsilon_1 = \Delta\rho \quad (6)$$

$$\epsilon_2 = \cos^2\theta_W^0\,\Delta\rho + \frac{\sin^2\theta_W^0\,\Delta r_W}{\cos^2\theta_W^0 - \sin^2\theta_W^0} - 2\sin^2\theta_W^0\,\Delta k' \quad (7)$$

$$\epsilon_3 = \cos^2\theta_W^0\,\Delta\rho + \left(\cos^2\theta_W^0 - \sin^2\theta_W^0\right)\Delta k' \quad (8)$$

$$\epsilon_b = \frac{g_A^b}{g_A^l} - 1 \qquad\text{and}\qquad \epsilon_b = \frac{g_V^b}{g_A^l}\left(1 - \frac{4}{3}(1+\Delta k')\sin^2\theta_W^0\right)^{-1} - 1 \quad (9)$$

where

$$\sin^2\theta_W^0\,\cos^2\theta_W^0 = \frac{\pi\,\alpha(m_Z^2)}{\sqrt{2}\,G_F\,m_Z^2} \quad (10)$$

are extracted. These parameters are very sensitive to radiative corrections, and thus to the influence of physics beyond the SM, hence also very sensitive to non-SM TGCs. It is interesting to note that $`ϵ_2`$ and $`ϵ_b`$ do not, at the one-loop level, depend on the yet unknown Higgs mass $`m_H`$. Here $`\mathrm{\Delta}\rho`$ stands for radiative corrections to the $`\rho`$-parameter, $`\mathrm{\Delta}r_W`$ describes corrections to the $`G_F`$-$`M_W`$ relation, and $`\mathrm{\Delta}k'`$ relates $`\mathrm{sin}^2\theta_W^0`$ to the effective electroweak mixing angle. As the fermion coupling constants depend on the $`ϵ`$-parameters, one can extract these from the measurements reported in table 1 (except the top-quark mass), which all depend on $`g_V`$, $`g_A`$ or $`\mathrm{sin}^2\theta_W`$. A simultaneous fit to all four parameters and, in addition, to the electromagnetic coupling constant $`\alpha_{em}(m_Z)`$, the strong coupling constant $`\alpha_s(m_Z)`$ and $`m_Z`$ gives the numbers quoted in table 2. The computation of the SM expectations shows that these values are in good agreement with the measured ones, and they are also in good agreement with other recent computations. One finds strong correlations between $`ϵ_b`$ and $`\alpha_s`$ as well as between $`ϵ_1`$ and $`ϵ_3`$. The latter is visible in figure 2, showing the two-dimensional contours of each pair of $`ϵ`$-parameters. These contour curves are compared with the evolution of the $`ϵ`$-parameters as a function of the TGC coupling constants.
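As a quick numerical illustration of the definition in Eq. (10), reconstructed here as the usual tree-level relation $`\sin^2\theta_W^0\cos^2\theta_W^0 = \pi\alpha(m_Z^2)/(\sqrt{2}G_F m_Z^2)`$, the snippet below solves the resulting quadratic; the input values are typical ones assumed for illustration, not the entries of table 1.

```python
import math

alpha_mz = 1.0 / 128.9   # alpha(m_Z^2), illustrative value
G_F = 1.16637e-5         # Fermi constant [GeV^-2]
m_Z = 91.187             # Z mass [GeV]

rhs = math.pi * alpha_mz / (math.sqrt(2.0) * G_F * m_Z**2)
# sin^2 * (1 - sin^2) = rhs  ->  take the root below 1/2
sin2_theta0 = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * rhs))
print(sin2_theta0)   # ~0.231
```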
The dependence of the $`ϵ`$-parameters on the WWV couplings is given by the following equations:

$$\frac{12\pi}{\alpha}\,\Delta\epsilon_1 = \left\{\left[\frac{27}{2} - \tan^2\theta_W\right]\frac{m_Z^2}{m_W^2}\ln\frac{\Lambda^2}{m_W^2} + \frac{9}{2}\,\frac{m_Z^2 m_H^2}{m_W^4}\left[\ln\frac{\Lambda^2}{m_H^2} + \frac{1}{2}\right]\right\}\Delta\kappa_\gamma + \left\{\left[\tan^2\theta_W - \cot^2\theta_W\right] - \frac{9}{2}\,\frac{m_H^2}{m_W^2}\left[\ln\frac{\Lambda^2}{m_H^2} + \frac{1}{2}\right]\right\}\Delta g_Z^1 \quad (11)$$

$$\frac{12\pi}{\alpha}\,\Delta\epsilon_2 = \frac{m_Z^2}{m_W^2}\ln\frac{\Lambda^2}{m_W^2}\,\sin^2\theta_W\,\Delta\kappa_\gamma + \cot^2\theta_W\,\ln\frac{\Lambda^2}{m_W^2}\,\Delta g_Z^1 \quad (12)$$

$$\frac{12\pi}{\alpha}\,\Delta\epsilon_3 = \left\{\left[\cos^4\theta_W - 7\cos^2\theta_W - \frac{3}{4}\right]\frac{m_Z^2}{m_W^2}\ln\frac{\Lambda^2}{m_W^2} - \frac{3}{4}\,\frac{m_H^2}{m_W^2}\left[\ln\frac{\Lambda^2}{m_W^2} + \frac{1}{2}\right]\right\}\Delta\kappa_\gamma + \left\{10\cos^2\theta_W + \frac{3}{2}\right\}\ln\frac{\Lambda^2}{m_W^2}\,\Delta g_Z^1 \quad (13)$$

$$\Delta\epsilon_b = \frac{m_Z^2 m_t^2}{64\pi^2 m_W^4}\,\ln\frac{\Lambda^2}{m_W^2}\,\Delta\kappa_\gamma - \left[\frac{\cot^2\theta_W}{64\pi^2}\,\frac{m_Z^2 m_t^2}{m_H^4}\,\ln\frac{\Lambda^2}{m_W^2} + \frac{3\cot^2\theta_W}{32\pi^2}\,\frac{m_t^2}{m_W^2}\,\ln\frac{\Lambda^2}{m_W^2}\right]\Delta g_Z^1 \quad (14)$$

These expressions are based on the constraints between TGCs quoted earlier. All non-standard contributions are logarithmically divergent. The coupling parameters used here are defined in terms of the new physics scale $`\mathrm{\Lambda}`$ and a form factor f coming from the new physics effect, e.g.

$$\Delta g_Z^1 = \frac{m_Z^2}{\Lambda^2}\,f. \quad (15)$$

Thus the coupling parameters vanish in the limit of a large new physics scale, $`\mathrm{\Lambda}\to\infty`$. The new physics scale in the following measurement is set to 1 TeV. In addition a Higgs mass of 300 GeV is assumed. A fit using equations 11 to 14 and the differences between the measured values of the $`ϵ`$-parameters and the ones expected in the SM, as given in table 2, is used to determine the TGC coupling parameters $`\mathrm{\Delta}g_Z^1`$ and $`\mathrm{\Delta}\kappa_\gamma`$. The errors on the SM predictions of the $`ϵ`$-parameters are included, neglecting their correlations. The $`\chi^2`$ curve of a fit to each of these coupling constants, setting the other to its SM value of zero, is shown in figure 3.
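The structure of such a one-parameter fit can be sketched as follows. The linear response coefficients K_i stand for the bracketed factors of Eqs. (11)-(14) evaluated at Λ = 1 TeV and m_H = 300 GeV; all numbers below are placeholders for illustration, not the values used in this analysis.

```python
import numpy as np

# Illustrative placeholders, NOT the paper's numbers:
K = np.array([2.0e-3, -5.0e-4, 1.0e-3, 1.0e-4])   # d(eps_i)/d(Delta kappa_gamma)
d_eps = np.array([1.0e-4, -2.0e-4, 5.0e-5, 0.0])  # measured minus SM epsilons
err = np.array([1.0e-3, 1.5e-3, 1.0e-3, 2.0e-3])  # errors, correlations neglected

def chi2(kappa):
    """Chi-square of Delta kappa_gamma against the four epsilon shifts."""
    return float(np.sum(((d_eps - K * kappa) / err) ** 2))

# For a linear model the minimum and 1-sigma error are analytic:
w = K / err
kappa_hat = np.sum(w * (d_eps / err)) / np.sum(w**2)
sigma_kappa = 1.0 / np.sqrt(np.sum(w**2))
print(kappa_hat, sigma_kappa, chi2(kappa_hat))
```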
One finds the following results:

$$\Delta g_Z^1 = 0.017 \pm 0.018 \quad (16)$$

or

$$\Delta\kappa_\gamma = 0.016 \pm 0.019. \quad (17)$$

If both couplings are allowed to vary in the fit, one finds the contour plot in figure 4. The corresponding numerical values of the TGC parameters are

$$\Delta g_Z^1 = 0.013 \pm 0.027, \qquad \Delta\kappa_\gamma = 0.005 \pm 0.029, \quad (18)$$

with a correlation of 75.5 percent. The SM expectation of zero for both parameters agrees well with this measurement. As 1 TeV is the lower limit of the new physics scale and the couplings depend inversely on $`\mathrm{\Lambda}`$, the errors decrease with increasing $`\mathrm{\Lambda}`$. Higher Higgs masses also decrease the errors on the TGCs, while a lower Higgs mass increases them. Assuming a 100 GeV Higgs, the error on $`\mathrm{\Delta}g_Z^1`$ increases by 1% of the error, while the one-dimensional error on $`\mathrm{\Delta}\kappa_\gamma`$ increases to 0.033. The error of 5 GeV on $`m_t`$, as quoted in table 1, has a negligible impact on the result. The results presented above are more precise than recent direct measurements by the LEP and TEVATRON collaborations: $`\mathrm{\Delta}g_Z^1=0.00_{-0.11}^{+0.12}`$ and $`\mathrm{\Delta}\kappa_\gamma =0.28_{-0.27}^{+0.33}`$. There the parameters are negatively correlated, with -54 percent. The direct measurement is however more suitable for a general test of the TGCs, while the indirect measurement tests TGCs only in particular models. Recent computations also parametrise the dependence of $`ϵ_b`$ on the coupling constants $`\lambda_\gamma`$ and $`g_Z^5`$, giving access to a more general view of the TGC couplings. Computations of the dependence of $`ϵ_1`$, $`ϵ_2`$ and $`ϵ_3`$ on the TGCs $`\lambda_\gamma`$ and $`g_Z^5`$ would be most useful for measuring these coupling constants more precisely as well.

## 3 Acknowledgements

We are very grateful to S. Riemann for bringing the possibility of the indirect measurement of TGCs to our attention. We thank F. Caravaglios and G. Altarelli for clarifying discussions on the $`ϵ`$ parameters and T. Hebbeker, W. Lohmann and T. Riemann for useful comments.
no-problem/9902/astro-ph9902266.html
# High Energy Cosmic Rays from Neutrinos

## Abstract

We address a class of models in which neutrinos, having a small mass, originate the highest energy cosmic rays by interacting with the relic cosmic neutrino background. Assuming lepton number symmetry and an enhanced neutrino density in clusters (halos) of arbitrary size, we make an analytical calculation of the required neutrino fluxes. We show that the parameter space of these models is heavily constrained by horizontal air shower searches. Marginal room is left for models with exceptionally flat neutrino spectral indices, neutrino masses in the 0.1 eV range and supercluster scale halos of order 50 Mpc size. Our constraints do not apply to models with lepton number asymmetry.

The detection of ultra high energy cosmic rays (UHECR) above the Greisen-Zatsepin-Kuzmin (GZK) cutoff has stirred the research activity in cosmic acceleration mechanisms. Above the GZK cutoff protons rapidly lose energy through photoproduction in the cosmic microwave background, and therefore sources must be relatively nearby. The known objects in our "extragalactic neighborhood", within a few tens of Megaparsecs, have difficulty accommodating the most commonly invoked stochastic acceleration mechanisms, on dimensional grounds. Moreover, such UHECR deviate little in the magnetic fields encountered over this length scale, and no obvious astrophysical candidates are seen in the arrival directions of the few detected events. Several possibilities have been considered to explain these events, in particular that cosmic ray production arises through ultra high energy (UHE) neutrino interactions with the cosmic neutrino background. UHE neutrinos could come from cosmic distances and interact with the relic neutrinos in our halo. The final stable products of these interactions would be gamma rays and protons (besides secondary neutrinos), which would constitute the high energy end (above 10<sup>19</sup> eV) of the cosmic ray spectrum. The idea is attractive because it avoids the constraint that source candidates must be at distances below 50 Mpc, but it requires large fluxes of very high energy neutrinos (above 10<sup>21</sup> eV), without getting into the details of the UHE neutrino production mechanism. Models involving annihilation of topological defects and heavy relic decays could, for instance, produce these neutrinos rather naturally. Bounds for some of these models have already been discussed in the literature, based on neutrino and photon flux measurements.

Only the resonance peak of the $`Z^0`$ production interactions with the cosmic neutrino background can provide any significant secondary particle flux. In the massless neutrino case an energy $`E_\nu^{res}\sim 10^{16}`$ GeV is required to produce $`Z^0`$'s at resonance, since relic neutrinos have energies of order 2 K $`\approx 1.7\times 10^{-4}`$ eV. This possibility would imply either neutrino fluxes exceeding current limits from Horizontal Air Showers, as will be shown below, or unnaturally fine tuned neutrino energy spectra. If neutrinos are massive, a possibility that is becoming increasingly realistic in the light of recent results by Super-Kamiokande, the neutrino beam energy necessary to produce $`Z^0`$'s at resonance in the interactions with the relic neutrinos becomes inversely proportional to the neutrino mass. Moreover, background neutrinos would tend to accumulate in an extended halo, as pointed out in Ref., increasing their local density with respect to the cosmological value and hence the probability of nearby interactions.
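The resonance energies quoted above follow from setting the center of mass energy squared to $`M_Z^2`$; a minimal numerical check, under the stated approximations, is:

```python
M_Z = 91.19e9          # Z boson mass in eV

# Massless neutrinos: s ~ 2 E_nu E_relic, with E_relic ~ 2 K
E_relic = 1.7e-4       # eV
print(M_Z**2 / (2 * E_relic) / 1e9, "GeV")   # ~2e16 GeV

# Massive, non-relativistic relics: s = 2 m_nu E_nu
for m_nu in (0.1, 1.0, 10.0):                # eV
    E_res = M_Z**2 / (2 * m_nu)              # eV
    print(m_nu, E_res / 1e9, "GeV")          # ~4e12 * (1 eV / m_nu) GeV
```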
The idea has already been discussed by several authors. Using an incoming neutrino flux of spectral index 2, Waxman has discussed the models from general energy density arguments, using limits on the local neutrino density from Pauli exclusion, to conclude that a new class of models would have to be invoked to accelerate the neutrinos themselves and that the energy required is comparable to the total photon luminosity of the Universe. The calculation depends on the spectral index and the energy cutoffs of the assumed neutrino spectrum. Yoshida et al. have recently computed the particle spectra for several case studies after detailed propagation of all secondary products in the extragalactic magnetic fields and through the cosmic microwave background, assuming clustering on supergalactic scales. These cases support the possibility that the produced UHECR, neutrinos and gamma rays are compatible with neutrino observations and bounds.

In this paper we further discuss this idea, analyzing the model dependence on the assumed neutrino spectral index. We establish the neutrino fluxes first using energetic considerations similar to those of Ref. and then by analytically calculating the proton and photon secondary spectra. If large fluxes of high energy neutrinos exist, they should have been detected. Indeed, by requiring that the neutrinos produce the observed UHECR, one expects a much larger flux of neutrinos, as pointed out in Ref., since the probability of interacting in the neutrino halo is small. In a phenomenological approach we leave the neutrino spectral index and the local neutrino density enhancement in the halo as free parameters. We will show that in a large region of the two dimensional parameter space, the required neutrino flux is heavily constrained by existing data on horizontal showers.

Assuming neutrinos are massive (of order 1 eV), we consider a neutrino of energy $`E_\nu`$. This neutrino can interact with an antineutrino from the cosmic background, with a center of mass energy squared $`s=2m_\nu E_\nu`$. The cross section for this process is maximal near the $`Z^0`$ resonance, of width $`\mathrm{\Gamma}_Z`$, which occurs for neutrino energies of $`E_\nu \simeq 4\times 10^{12}\,(1\,\text{eV}/m_\nu)`$ GeV. In hadronic decays the $`Z^0`$ produces high energy particles, mostly pions, which further decay, so that only high energy photons and protons (and neutrinos) eventually reach the Earth. The final particle spectra are given by a convolution of the quark fragmentation functions and the in-flight decays of all the intermediate particles. These spectra have to be propagated through the photon cosmic background, the IR background, galactic fields, etc., which alter the arrival fluxes of high energy photons and protons. As shown by Yoshida et al., the final particle spectra agree well with observations and can explain the UHECR spectrum for a wide range of spectral indices, $`\gamma \lesssim 2`$, assumed for the original neutrino flux. The survival probability of a UHE neutrino in the relic neutrino background is in general given by $`𝒫_𝒮(E_\nu)=e^{-\tau_\nu}`$, with $`\tau_\nu`$ being the opacity.
Considering only the resonant $`Z^0`$ production cross section $`\sigma_{\nu\overline{\nu}}`$ in a matter dominated Universe, the opacity can be well approximated by the redshift integral:

$$\tau_\nu\left(E_{\nu 0}\right) \simeq \frac{c}{H_0}\,n_{\overline{\nu}0}\int_0^{z_{max}}\frac{dz\,(1+z)^2\,\sigma_{\nu\overline{\nu}}\left[2m_{\overline{\nu}}E_{\nu 0}(1+z)\right]}{\sqrt{\Omega_{M0}(1+z)^3+\left[1-\Omega_{M0}-\Omega_{\Lambda 0}\right](1+z)^2+\Omega_{\Lambda 0}}} \quad (1)$$

Here $`H_0`$ is the Hubble constant, $`E_\nu`$ is the interacting neutrino energy, and the subscript 0 is used to indicate the present value of a redshift varying quantity. $`\mathrm{\Omega}_M`$ and $`\mathrm{\Omega}_\mathrm{\Lambda}`$ are respectively the matter density and the cosmological constant terms in the Friedmann equation, expressed in dimensionless units. The energy integral over the relic neutrino spectral density has been eliminated under the assumption that the neutrinos are non-relativistic, so that $`n_{\overline{\nu}}`$ is the relic antineutrino number density and the argument of the interaction cross section is the redshift varying center of mass energy of the collision. If one integrates this expression to the galaxy formation era, $`z_{max}\approx 5`$, assuming no clustering of the relic neutrinos, the uncertainty in the numerical value of the opacity is dominated by the lack of precise knowledge of $`\mathrm{\Omega}_{M0}`$ and $`H_0`$. For incoming neutrinos having the appropriate energy to interact resonantly, the opacity obtained ranges from 0.05 to 0.3 for cosmological scenarios with $`0.1<\mathrm{\Omega}_{M0}<1`$ and a cosmological constant parameter in the range $`0<\mathrm{\Omega}_{\mathrm{\Lambda}0}<0.7`$. Although models in which the UHECR are produced by UHE neutrinos can require energy densities comparable to the luminosity of the Universe, they would not necessarily have strong observable effects provided the opacity is low. If the opacities were larger, as one could expect for a mechanism generating the neutrinos at higher redshifts, there could be other observable consequences, such as low energy photons above current experimental limits. Whatever the origin of the interacting neutrinos, if one requires a neutrino flux well exceeding that of other particles such as electrons, photons, protons and neutrons, models can be found which are consistent with the low energy photon flux bound, as shown by the examples in Ref.

In order to explain the UHECR spectrum near the Earth, the production rate within the absorption distance of the cosmic rays in the CMB ($`\sim`$50 Mpc) is fixed by data. If the local relic neutrino density is known, this normalizes the neutrino flux. We introduce a local density enhancement factor, $`10^\xi`$, to account for possible clustering effects, and assume a halo radius $`D`$; otherwise the probability of interaction for a neutrino is very small, and the neutrino flux needed to produce the cosmic rays must be enormous, in conflict with both energy considerations and experimental neutrino bounds. The survival probability of the incoming neutrino flux is given by a local opacity factor $`\tau_{D\nu}`$, obtained by integrating Eq. (1) up to the halo limit, taken to be $`D`$. This probability has a large resonance peak at $`E_\nu^{res}=M_Z^2/(2m_\nu)`$ of width $`\delta E_\nu =E_\nu \mathrm{\Gamma}_Z/M_Z`$, where $`M_Z`$ and $`\mathrm{\Gamma}_Z`$ are the $`Z`$ mass and width respectively.
As long as $`D`$ is below 50 Mpc, the upper $`z`$ limit is small ($`z<0.01`$) and the opacity at the resonant energy can be well approximated by the ("static") expression:

$$\tau_{D\nu}\left(E_{\nu 0}\right) \simeq D\,\sigma_{\nu\overline{\nu}}\left(2m_{\overline{\nu}}E_{\nu 0}\right)n_{\overline{\nu}0} \simeq 1.3\times 10^{-5}\;10^\xi\,\frac{D}{1\,\text{Mpc}}. \quad (2)$$

The probability of interacting locally in the halo is given by $`𝒫_I(E_\nu)=1-e^{-\tau_{D\nu}}\approx\tau_{D\nu}`$. As long as the halo size is not extremely large, $`D\lesssim 50`$ Mpc, the interaction probability $`𝒫_I`$ is small except for very large density enhancement factors. Taking the local neutrino flux entering the halo region to be $`\varphi(E_\nu)`$, the energy flux injected through resonant $`Z^0`$ production is simply given by:

$$\mathcal{F}_\nu = \int dE_\nu\,E_\nu\,\mathcal{P}_I(E_\nu)\,\varphi(E_\nu) \quad (3)$$

$$\simeq 1.3\,[E_\nu^{res}]^2\,\mathcal{P}_I(E_\nu^{res})\,\varphi(E_\nu^{res})\,\frac{\Gamma_Z}{M_Z}, \quad (4)$$

where the last expression corresponds to the common approximation used for integrating the resonant cross section, and the factor 1.3 makes the expression numerically exact for a neutrino spectral index of 2. Following Waxman, Eq. (4) can be equated to the produced energy flux of cosmic rays to obtain $`\varphi(E_\nu^{res})`$:

$$\mathcal{F}_p = \int_{E_{min}}^{E_{max}} dE\,E\,\varphi_p(E), \quad (5)$$

where $`\varphi_p(E)`$ is the high energy cosmic ray flux tail assumed to be due to this mechanism. For a neutrino spectral index $`\gamma`$, the flux is:

$$\varphi_\nu(E) = \frac{\mathcal{F}_p}{1.3\,[E_\nu^{res}]^2\,\mathcal{P}_I(E_\nu^{res})\,\Gamma_Z/M_Z}\left[\frac{E_\nu}{E_\nu^{res}}\right]^{-\gamma}. \quad (6)$$

The important point is that $`\varphi(E_\nu^{res})`$ is inversely proportional to the interaction probability at the resonance peak, $`𝒫_I(E_\nu^{res})`$, and to its width $`\delta E_\nu`$. One should expect extrapolations of this flux with a fairly constant spectral index $`\gamma`$ both below and above the resonant energy, because experience with astrophysical fluxes and theoretical considerations strongly support a neutrino flux spanning a few decades in energy. High energy fluxes of weakly interacting particles are severely constrained by existing experiments. For the range of energies considered here, the strongest limit is given by the Fly's Eye group. The non-observation of horizontal air showers allows a limit to be placed on the integrated flux of any weakly interacting particle, provided the neutrino flux can be extrapolated to the effective energy threshold of the Fly's Eye bound, $`E_F\approx 10^8`$ GeV:

$$\Phi_\nu(E_F) = \int_{E_F}^{\infty} dE\,\varphi_\nu(E)\left[1-\mathcal{P}_I(E_\nu)\right] \le \Phi_0. \quad (7)$$

Fixing the neutrino mass, assuming $`\gamma >1`$, choosing a conservative (high) value of $`E_{min}=5\times 10^{19}`$ eV, and 2.5 for the proton spectral index in Eq. (5), we can constrain the region of allowed values of $`\gamma`$ and $`𝒫_I`$, using Eq. (6) for $`\varphi_\nu`$. This is shown in fig. 1 for $`m_\nu =0.1,1`$, and 10 eV. The figure shows that there is a critical spectral index $`\gamma =2.15`$ above which the model is ruled out for all masses in the 0.1-10 eV range. That is, if $`\gamma >2.15`$, horizontal showers should have been observed even in the event that all neutrinos in the resonant energy range were converted to UHECR. If $`\gamma \lesssim 1.2`$, however, a very low (depending on the neutrino mass) conversion probability could be allowed by the data.
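The constraint logic behind figs. 1 and 2 can be sketched numerically as below. The normalization F_p and the bound Φ₀ are left as inputs, since their values and units come from the UHECR data and the Fly's Eye limit; everything else follows Eqs. (2), (6) and (7), with the survival factor dropped since the interaction probability is small.

```python
GAMMA_Z, M_Z = 2.495, 91.19          # Z width and mass, GeV

def interaction_prob(xi, D_mpc):
    """Eq. (2): local interaction probability at the resonance peak."""
    return 1.3e-5 * 10.0**xi * D_mpc

def flux_above_threshold(E_F, E_res, gamma, P_I, F_p):
    """Integrate Eq. (7) for the power-law flux of Eq. (6),
    dropping the survival factor (P_I << 1); requires gamma > 1."""
    phi_res = F_p / (1.3 * E_res**2 * P_I * GAMMA_Z / M_Z)
    return phi_res * E_res / (gamma - 1.0) * (E_F / E_res) ** (1.0 - gamma)

# Example: m_nu = 1 eV -> E_res ~ 4.2e21 eV; Fly's Eye threshold ~ 1e17 eV.
# F_p = 1.0 is a placeholder normalization, not a measured value.
P_I = interaction_prob(xi=2.5, D_mpc=5.0)
print(P_I, flux_above_threshold(1e17, 4.2e21, 2.0, P_I, F_p=1.0))
# the model survives only if this integrated flux stays below Phi_0
```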
Using Eq. (2) to substitute the probability $`𝒫_I(E_\nu)`$ into the experimental limit expression in Eq. (7), we get a region of allowed parameter space ($`\xi`$, $`D`$) for any given value of $`m_\nu`$ and $`\gamma`$. This is shown in fig. 2, where the limits are given as the continuous lines for different values of the neutrino spectral index. Further restrictions apply in this parameter plot. The maximum density is constrained by the Fermi distribution to be:

$$n_{\nu 0} \le 1.5\times 10^3\left(\frac{m_\nu}{1\,\text{eV}}\right)^3\left(\frac{v}{220\,\text{km/s}}\right)^3\,\text{cm}^{-3}, \quad (8)$$

where $`v`$ is the characteristic neutrino velocity in the halo. However, given the strong dependence on the neutrino mass and the unknown velocity, we take the density enhancement as a free parameter. In addition, if the total number of background neutrinos in the Universe is fixed, the density enhancement factors in the halos, their sizes and the maximum total number of halos are related. For constant density halos, assuming that no neutrinos are outside them, the number of neutrino clusters of a given size and a given enhancement factor within a Hubble radius is simply $`N_c=10^{-\xi}(R_H/D)^3`$, where $`R_H`$ is the Hubble radius. One can now easily see that the maximum number of halos can be read off a slant coordinate, shown as dashed lines in Figs. 2 and 3. The three lines correspond to a single halo, to $`3\times 10^4`$ halos and to $`10^{10}`$ halos. The shaded region above the upper line implies less than 1 halo within a Hubble radius, which is meaningless. The $`3\times 10^4`$ and $`10^{10}`$ halo lines roughly correspond to one halo per supercluster (every $`4\times 10^6`$ Mpc<sup>3</sup>) and to one per galaxy (every 500 Mpc<sup>3</sup>) respectively. Note that this is a maximum number of halos; for instance, below the lower curve models can still have clusters around all galaxies, as long as the population of neutrinos in between the halos is non-zero. Notice that protons are attenuated in the CMB over an energy loss distance of about 50 Mpc. This means that for $`D>50`$ Mpc the region of the halo outside a sphere of this radius centered on us can be ignored for the production of the local UHECR spectrum. Halo sizes exceeding 50 Mpc should therefore be considered in these plots as having an effective size of 50 Mpc.

The approach is very conservative. Eq. (4) neglects the fractions of the $`Z^0`$ production energy which go into particles that cannot be UHECR. The $`Z^0`$ decay will produce a particle flux following a typical fragmentation spectrum, and the decay of the unstable particles will add low energy particles which cannot contribute to the UHECR. Neither can the neutrinos from $`Z^0`$ and pion decays contribute, nor the part of the high energy particles that is degraded by the showering developed in the intergalactic medium. The proton energy flux in Eq. (5) depends on the limits of the integration. It has been conservatively estimated by setting them close to the UHE part of the CR spectrum. As the observed cosmic ray flux spectral index at these energies is about 2.5, the lower integration limit gives the dominant contribution to the integral. The upper limit is not so relevant, and is in any case bounded by the neutrino resonance energy, which in turn depends on the neutrino mass. For harder spectral indices, closer to 2, one gets a similar result using an upper limit of order $`E_{max}=100E_{min}`$, as in Ref.
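A couple of the bookkeeping relations above are easy to check numerically; the sketch below assumes a Hubble radius of 3000 Mpc, a value not specified in the text.

```python
def max_halos(xi, D_mpc, R_H_mpc=3000.0):
    """Maximum number of constant-density neutrino halos in a Hubble
    volume if no relic neutrinos remain outside the halos:
    N_c = 10**(-xi) * (R_H / D)**3."""
    return 10.0**(-xi) * (R_H_mpc / D_mpc) ** 3

def pauli_density_limit_cm3(m_nu_ev, v_kms):
    """Eq. (8): maximum phase-space density allowed by Fermi statistics."""
    return 1.5e3 * m_nu_ev**3 * (v_kms / 220.0) ** 3

print(max_halos(2.5, 5.0))                 # ~7e5 halos
print(pauli_density_limit_cm3(1.0, 220.0)) # ~1.5e3 cm^-3
```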
We have also done an analytical calculation of the proton and photon fluxes originating from the neutrino-antineutrino annihilation, again using $`𝒫_I(E_\nu)`$ obtained from Eq. (2). The flux of protons is given by:

$$\varphi_p(E_p) = \int_0^{\infty} dE_\nu\,\varphi_\nu(E_\nu)\,\frac{d\sigma(\nu\overline{\nu}\to p)}{dE_p}\,\overline{X}, \quad (9)$$

where $`d\sigma/dE_p`$ is the cross section for the $`\nu\overline{\nu}`$ annihilation to produce a proton of energy $`E_p`$ and $`\overline{X}`$ is the column depth of neutrinos in the halo. The cross section $`d\sigma/dE_p`$ can be written as the convolution:

$$\frac{d\sigma(\nu\overline{\nu}\to p)}{dz} = \int_z^1 dy\,\frac{d\sigma(\nu\overline{\nu}\to q)}{dy}\,f(z/y), \quad (10)$$

where $`z`$ is the fraction of the energy taken by the proton, $`f(x)`$ is the fragmentation function of a quark into a hadron, and $`d\sigma(\nu\overline{\nu}\to q)/dy`$ is the inclusive cross section for quark production in $`\nu\overline{\nu}`$ interactions. At the $`Z^0`$ resonance the non-resonant channels can be neglected at this level of precision. We use Hill's fragmentation function, $`f(x)=N\,(15/16)\,x^{-1.5}(1-x)^2`$, with $`N=0.03`$ for baryons. For calculating the secondary UHE photon spectrum similar expressions apply, with $`N=0.32`$ and an additional integral over $`\pi^0`$ decay. These integrals are evaluated numerically. We normalize the photon plus proton spectrum to the observed cosmic ray flux at $`E>5\times 10^{19}`$ eV. We neglect the interaction of the produced high energy particles with the IR and CMBR, which would increase the neutrino flux needed. If we again apply the Fly's Eye limit, the parameter space becomes more restricted, leaving less room for the conjecture, as can be seen in fig. 3. No model with $`\gamma >2`$ is allowed, in agreement with Ref., but there is still room for harder spectral indices. The natural assumption of one halo per galaxy, forcing small halo sizes and bounding the possible density enhancements, is ruled out for any injection spectrum. This is unfortunate, since a clear experimental signature of relatively small halo sizes would be a cosmic ray anisotropy due to the asymmetric position of the solar system within the halo, given by the ratio of our position to the galactic halo radius $`D`$. Assuming 10% sensitivity to anisotropy, a future experiment such as the Auger Observatories could test models with halos of order 100 kpc. The picture is however not complete, since neither Pauli blocking nor total mass constraints have been included. These must be related to the mass density bounds that exist on different scales. The two case studies in Ref. use a halo size of 5 Mpc and density enhancements of 300 and 1000 ($`\xi \simeq 2.5`$ and $`\xi =3`$). As stated in Ref., they are not in conflict with horizontal air shower data; this is because of the very hard spectral indices used, $`\gamma \simeq 1.2`$. It is interesting to notice, however, that the maximum number of halos corresponding to these models is $`7\times 10^5`$ and $`2\times 10^5`$ respectively, implying that most galaxies cannot have an associated neutrino halo, even though the size of these halos is of order the average intergalactic distance.
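A minimal numerical version of the convolution in Eq. (10) is sketched below; the flat inclusive quark spectrum is a placeholder, since the true $`d\sigma(\nu\overline{\nu}\to q)/dy`$ at the $`Z^0`$ resonance is not reproduced here.

```python
import numpy as np

def hill(x, N=0.03):
    """Hill's fragmentation function N*(15/16)*x**-1.5*(1-x)**2
    (N = 0.03 for baryons, N = 0.32 for the photon/pion chain)."""
    return N * (15.0 / 16.0) * x**-1.5 * (1.0 - x) ** 2

def dsigma_dz(z, n_steps=2000):
    """Eq. (10) with a flat placeholder quark spectrum dsigma/dy = 1."""
    y = np.linspace(z, 1.0, n_steps)
    return np.trapz(hill(z / y), y)

for z in (0.01, 0.1, 0.5):
    print(z, dsigma_dz(z))
```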
The total mass in neutrinos of such enhanced density halos is, on the other hand:
$$M_\nu =\frac{4}{3}\pi m_\nu n_\nu D^3=1.2\times 10^{10}\,D^3\,10^\xi \left(\frac{m_\nu }{1\text{eV}}\right)M_{\odot },$$ (11)
with $`D`$ in Mpc. Assuming a 1 eV neutrino mass, this gives at least $`M_\nu =4.5\times 10^{14}M_{\odot }`$ for the most favourable case, with density enhancement 300. Although this mass is suggestive of supercluster scales, its size fits closer to the Local Group. Unfortunately, the case studies imply a total mass in neutrinos that clearly exceeds the dynamical mass measurements associated with the Local Group. For $`\mathrm{\Omega }=1`$ the mass within a sphere of radius $`R`$ given by $`H_0R=390`$ km s<sup>-1</sup> is determined to be $`M_G=5.7\times 10^{12}M_{\odot }`$ . The mass increases by about $`10\%`$ for $`\mathrm{\Omega }=0.1`$. Although the horizontal shower data and the halo mass constraints can both be met by a model with low neutrino mass ($`m_\nu <0.1`$ eV) and spectral index $`\gamma <1.2`$, Pauli blocking arguments would then imply orbital velocities exceeding $`1000`$ km s<sup>-1</sup>, so that a constant density model is inconsistent. The only other possibility left for these models is an even larger halo size. A halo size of $`50`$ Mpc, a neutrino spectral index of $`\gamma \simeq 1.2`$ and a neutrino mass of order $`0.1`$ eV seems viable. This case corresponds to a halo mass of order $`10^{15}M_{\odot }`$ and orbital velocities in the 1000 km s<sup>-1</sup> range. Increasing the halo size helps mostly by removing the strong constraint on the total mass. In summary, horizontal shower limits, mass constraints and Pauli blocking mechanisms leave very little room for UHE neutrinos to explain the origin of the UHECR. Only very large halos, relatively low neutrino masses and unusually flat spectral indices are marginally allowed. Up to now we have assumed that there is absolute lepton number symmetry, and in such cases the density parameter for the neutrinos is fixed by the neutrino mass:
$$\mathrm{\Omega }_\nu =\frac{1}{h^2}\frac{m_\nu }{93\text{eV}}.$$ (12)
It is however remarkable that if there is lepton number asymmetry, as recently suggested in Ref. , density enhancement comes rather naturally and is distributed uniformly over the whole Universe. Assuming that neutrinos are degenerate, $`\mathrm{\Omega }_\nu \simeq 0.01`$ and $`m_\nu \simeq 0.07`$ eV, they obtain an enhancement factor of $`\sim 30`$. Our analysis has been repeated considering the interactions within a sphere of $`50`$ Mpc, revealing that the model is completely consistent with the Fly's Eye limit on neutrino fluxes, provided the spectral index of the UHE neutrinos satisfies $`\gamma \lesssim 1.8`$. In any case there is no experimental signature provided by anisotropy. In the end, however, it is fortunate that these models can be further tested by experiment. A promising signature lies in the identification of photons as a significant component of the UHECR. This issue can be addressed by future experiments such as the Auger Observatories . Most importantly, the fact that horizontal showers provide such a strict bound on these models also implies that future neutrino experiments, having much larger acceptance for neutrinos than Fly's Eye, should be able to detect the postulated UHE neutrino fluxes. Here the Auger Observatories may also play a role, together with other high energy neutrino detectors in construction or planning stages.
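The mass formula of Eq. (11) is reproduced by the following minimal helper (a sketch; $`D`$ in Mpc, result in solar masses):

```python
import math

def halo_mass_Msun(D_Mpc, xi, m_nu_eV=1.0):
    """Total neutrino mass of a constant-density halo, Eq. (11)."""
    return 1.2e10 * D_Mpc**3 * 10.0**xi * m_nu_eV

for enh in (300.0, 1000.0):           # the two case studies, m_nu = 1 eV
    M = halo_mass_Msun(5.0, math.log10(enh))
    print(f"enhancement {enh:5.0f}: M_nu = {M:.1e} M_sun")
```

For the enhancement-300 case this returns $`4.5\times 10^{14}M_{\odot }`$, the value quoted above, about two orders of magnitude above the Local Group mass $`M_G`$.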
We would like to thank Concha Gonzalez-Garcia for many discussions after reading the manuscript, Edward Baltz and Joseph Silk for discussions on mass bounds, and Graciela Gelmini, Alexander Kusenko, Ken D. Olum, Günter Sigl, and Alex Vilenkin for helpful conversations. This work was supported in part by Xunta de Galicia (XUGA-20602B98), by CICYT (AEN99-0589-C02-02), and by the Fundación Pedro Barrie de la Maza.
# ELECTRICAL CONDUCTION IN THE EARLY UNIVERSE

## 1 Introduction

The strong magnetic fields measured in many spiral galaxies, $`B\sim 2\times 10^{-6}`$ G, are conjectured to be produced primordially; proposed mechanisms include fluctuations during an inflationary universe or at the GUT scale, and plasma turbulence during the electroweak transition or in the quark-gluon hadronization transition. The production and later diffusion of magnetic fields depends crucially on the electrical conductivity, $`\sigma _{el}`$, of the matter in the universe; typically, over the age of the universe, $`t`$, fields on length scales smaller than $`L\sim (t/4\pi \sigma _{el})^{1/2}`$ are damped. The electrical conductivity was estimated in in the relaxation time approximation as $`\sigma _{el}\sim n\alpha \tau _{el}/m`$ with $`m\sim T`$ and relaxation time $`\tau _{el}\sim 1/(\alpha ^2T)`$, where $`\alpha =e^2/4\pi `$. In Refs. the relaxation time was corrected with the Coulomb logarithm. A deeper understanding of the screening properties in QED and QCD plasmas has made it possible to calculate a number of transport coefficients, including viscosities, diffusion coefficients, momentum stopping times, etc., exactly in the weak coupling limit . However, the calculation of processes that are sensitive to very singular forward scatterings remains problematic. For example, the calculated color diffusion and conductivity , even with dynamical screening included, remain infrared divergent due to color exchange in forward scatterings. Also the quark and gluon damping rates at non-zero momenta calculated by resumming ring diagrams exhibit infrared divergences whose resolution requires more careful analysis including higher order diagrams as, e.g., in the Bloch-Nordsieck calculation of Ref. . Charge exchanges through $`W^\pm `$, processes similar to gluon color exchange in QCD, are important in forward scatterings at temperatures above the $`W`$ mass, $`M_W`$.

## 2 Electrical conductivities in high temperature QED

The electrical conductivity in the electroweak symmetry-broken phase at temperatures below the electroweak boson mass scale, $`M_W\gg T\gg m_e`$, is dominated by the currents of charged leptons $`\ell =e^{-},\mu ^{-},\tau ^{-}`$ and anti-leptons $`\overline{\ell }=e^{+},\mu ^{+},\tau ^{+}`$. In the broken-symmetry phase, weak interactions between charged particles, which are generally smaller by a factor $`(T/M_W)^4`$ compared with photon-exchange processes, can be ignored. The primary effect of strong interactions is to limit the drift of strongly interacting particles, and we need consider only electromagnetic interactions between charged leptons and quarks. Transport processes are most simply described by the Boltzmann kinetic equation for the distribution functions of particle species $`i`$, of charge $`e`$. We use the standard methods of linearizing around equilibrium, with a collision term whose interactions are given by perturbative QED. We refer to for details of the calculation, but note the essential physics of Debye screening of the longitudinal (electric) interactions and Landau damping of the transverse (magnetic) interactions. The electrical conductivity for charged leptons is, to leading logarithmic order,
$$\sigma _{el}^{(\ell \overline{\ell })}\equiv j_{\ell \overline{\ell }}/E=\frac{3\zeta (3)}{\mathrm{ln}2}\frac{T}{\alpha \mathrm{ln}(C/\alpha N_l)},m_e\ll T\ll T_{QGP},$$ (1)
where the constant $`C\sim 1`$ in the logarithm gives the next to leading order terms.
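Numerically, the leading-log formula is straightforward to evaluate. The sketch below assumes $`C=1`$ and $`N_l=1`$ (both assumptions; $`C`$ is only fixed at next-to-leading order):

```python
import math

ZETA3 = 1.2020569031595943   # Riemann zeta(3)
ALPHA = 1.0 / 137.036        # fine-structure constant

def sigma_over_T(N_l=1, C=1.0):
    """Leading-log conductivity sigma_el / T from Eq. (1)."""
    return 3.0 * ZETA3 / (math.log(2.0) * ALPHA * math.log(C / (ALPHA * N_l)))

print(f"sigma_el / T ~ {sigma_over_T():.0f}")   # of order 10^2
```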
Note that the number of lepton species drops out except in the logarithm. The above calculation, taking only electrons as massless leptons ($`N_l=1`$), gives a first approximation to the electrical conductivity in the temperature range $`m_e\ll T\ll T_{QGP}`$, below the hadronization transition, $`T_{QGP}\simeq 150`$ MeV, at which hadronic matter undergoes a transition to a quark-gluon plasma. Thermal pions and muons in fact also reduce the conductivity by scattering electrons, but they do not become significant current carriers because their masses are close to $`T_{QGP}`$. For temperatures $`T>T_{QGP}`$, the matter consists of leptons and deconfined quarks. The quarks themselves contribute very little to the current, since strong interactions limit their drift velocity. However, they are effective scatterers, and thus modify the conductivity (see and Fig. 1):
$$\sigma _{el}=\frac{N_l}{N_l+3\sum _q^{N_q}Q_q^2}\sigma _{el}^{(\ell \overline{\ell })},T_{QGP}\ll T\ll M_W.$$ (2)
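The quark-scattering suppression factor in Eq. (2) is a simple charge sum. The snippet below evaluates it for three lepton species and five relativistic quark flavors; which flavors actually count depends on the temperature, so the flavor list is an assumption:

```python
def quark_suppression(N_l, quark_charges):
    """Prefactor N_l / (N_l + 3 sum_q Q_q^2) of Eq. (2)."""
    return N_l / (N_l + 3.0 * sum(Q * Q for Q in quark_charges))

Q_udscb = [2/3, -1/3, -1/3, 2/3, -1/3]     # u, d, s, c, b
print(f"suppression factor = {quark_suppression(3, Q_udscb):.2f}")  # ~0.45
```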
## 3 The symmetry-restored phase

To calculate the conductivity well above the electroweak transition, $`T\gg T_c\sim 100`$ GeV , where the electroweak symmetries are fully restored, we describe the electroweak interactions by the standard model Weinberg-Salam Lagrangian with minimal Higgs ($`\varphi `$). At temperatures below $`T_c`$, the Higgs mechanism naturally selects the representation $`W^\pm `$, $`Z^0`$, and $`\gamma `$ of the four intermediate vector bosons. At temperatures above the transition – where $`\varphi `$ vanishes for a sharp transition, or tends to zero for a crossover – we consider driving the system with external vector potentials $`A^a=B,W^1,W^2,W^3`$, which give rise to corresponding “electric” fields $`E_i^a\equiv F_{i0}^a=\partial _iA_0^a-\partial _0A_i^a`$ for $`A^a=B`$ and $`F_{i0}^a=\partial _iA_0^a-\partial _0A_i^a-gϵ_{abc}A_i^bA_0^c`$ for $`A^a=W^1,W^2,W^3`$. One can equivalently drive the system with the electromagnetic and weak fields derived from $`A`$, $`Z^0`$, and $`W^\pm `$, as when $`T\ll T_c`$, or any other rotated combination of these. We consider here only the weak field limit and ignore the nonlinear driving terms. The self-couplings between gauge bosons are important, however, in the scattering processes in the plasma determining the conductivity. The electroweak fields $`A^b`$ act on the matter to generate currents $`J_a`$ of the various particles present in the plasma, such as left and right-handed leptons and their antiparticles, quarks, vector bosons, and Higgs bosons. The Higgs and vector boson contributions are negligible. Therefore the significant terms in the currents are $`J_B^\mu =\frac{g^{\prime }}{2}(\overline{L}\gamma ^\mu YL+\overline{R}\gamma ^\mu YR)`$ and $`J_{W^i}^\mu =\frac{g}{2}\overline{L}\gamma ^\mu \tau _iL`$. We define the conductivity tensor $`\sigma _{ab}`$ in general by
$$𝐉_a=\sigma _{ab}𝐄^b.$$ (3)
The electroweak $`U(1)\times SU(2)`$ symmetry implies that the conductivity tensor, $`\sigma _{ab}`$, in the high temperature phase is diagonal in this representation , as can be seen directly from the (weak field) Kubo formula, which relates the conductivity to (one-boson irreducible) current-current correlation functions. The construction of the conductivity in terms of the Kubo formula assures that the conductivity, and hence the related entropy production in electrical conduction, is positive. Then $`\sigma =\mathrm{Diag}(\sigma _{BB},\sigma _{WW},\sigma _{WW},\sigma _{WW})`$. Due to the isospin symmetry of the $`W`$-interactions the conductivities $`\sigma _{W^iW^i}`$ are the same, $`\sigma _{WW}`$, but differ from the $`B`$-field conductivity, $`\sigma _{BB}`$. The calculation of the conductivities $`\sigma _{BB}`$ and $`\sigma _{WW}`$ in the weak field limit parallels that done for $`T\ll T_c`$. The main difference is that weak interactions are no longer suppressed by a factor $`(T/M_W)^4`$, and the exchange of electroweak vector bosons must be included. The conductivity, $`\sigma _{BB}`$, for the abelian gauge field $`B`$ can be calculated similarly to the electrical conductivity at $`T\ll T_c`$. Although the quarks and $`W^\pm `$ are charged, their drifts in the presence of an electric field do not contribute significantly to the electrical conductivity. Charge flow of the quarks is stopped by strong interactions, while flows of the $`W^\pm `$ are similarly stopped by $`W^++W^{-}\to Z^0`$, via the triple boson coupling. Charged Higgs bosons are likewise stopped via $`W^\pm \varphi ^{*}\varphi `$ couplings. These particles do, however, affect the conductivity by scattering leptons. Particle masses have negligible effect on the conductivities. These considerations imply that the $`B`$ current consists primarily of right-handed $`e^\pm `$, $`\mu ^\pm `$ and $`\tau ^\pm `$, interacting only through exchange of uncharged vector bosons $`B`$, or equivalently $`\gamma `$ and $`Z^0`$. Because the left-handed leptons interact through $`𝐖`$ as well as through $`B`$, they give only a minor contribution to the current. They are, however, effective scatterers of right-handed leptons. The resulting conductivity is (see for details)
$$\sigma _{BB}=\frac{9}{19}\mathrm{cos}^2\theta _W\sigma _{el},T\gg T_c.$$ (4)
Applying a $`W^3`$ field to the electroweak plasma drives the charged leptons and neutrinos oppositely, since they couple through $`g\tau _3W_3`$. In this case exchanges of $`W^\pm `$ dominate the interactions: charge is transferred in the singular forward scatterings, and Landau damping is not sufficient to screen the interaction; a magnetic mass, $`m_{mag}\sim g^2T`$, analogous to the QCD magnetic (gluon) mass, will provide an infrared cutoff. We expect that $`\sigma _{WW}\sim \alpha \sigma _{BB}`$. This effect of $`W^\pm `$ exchange is analogous to the way gluon exchange in QCD gives strong stopping and reduces the “color conductivity” significantly ; similar effects are seen in spin diffusion in Fermi liquids . The electrical conductivity is found from $`\sigma _{BB}`$ and $`\sigma _{WW}`$ by rotating the $`B`$ and $`W^3`$ fields and currents by the Weinberg angle; using Eq. (3), $`(J_A,J_{Z^0})=\mathcal{R}(\theta _W)\sigma \mathcal{R}(-\theta _W)(A,Z^0)`$, where $`\mathcal{R}`$ denotes the rotation by the Weinberg angle. Thus the electrical conductivity is given by
$$\sigma _{AA}=\sigma _{BB}\mathrm{cos}^2\theta _W+\sigma _{WW}\mathrm{sin}^2\theta _W;$$ (5)
$`\sigma _{el}/T`$ above the electroweak transition differs from that below mainly by a factor $`\mathrm{cos}^4\theta _W\simeq 0.6`$.
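The rotation to the photon channel, Eq. (5), can be checked with a few lines (the value of $`\mathrm{sin}^2\theta _W`$ is an assumed input):

```python
SIN2_THW = 0.23
COS2_THW = 1.0 - SIN2_THW

def sigma_AA(sigma_BB, sigma_WW):
    """Electrical conductivity, Eq. (5), from the (B, W^3) conductivities."""
    return sigma_BB * COS2_THW + sigma_WW * SIN2_THW

# with sigma_WW suppressed by ~alpha, sigma_AA ~ cos^2(theta_W) sigma_BB, and
# sigma_el / T changes across the transition by roughly cos^4(theta_W):
print(f"cos^4(theta_W) = {COS2_THW**2:.2f}")   # ~0.6, the quoted net factor
```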
## 4 Summary and Outlook

Typically, $`\sigma _{el}\sim T/(\alpha \mathrm{ln}(1/\alpha ))`$, where the logarithmic dependence on the coupling constant arises from Debye and dynamical screening of small momentum-transfer interactions. In the quark-gluon plasma, at $`T\gtrsim T_{QGP}\simeq 150`$ MeV, the additional stopping on quarks reduces the electrical conductivity from that in the hadronic phase. In the electroweak symmetry-restored phase, $`T\gg T_c`$, interactions between leptons and $`W^\pm `$ and $`Z^0`$ bosons reduce the conductivity further. It does not vanish (as one might have imagined to result from singular unscreened $`W^\pm `$-exchanges), and is larger than previous estimates, within an order of magnitude. The current is carried mainly by right-handed leptons, since they interact only through exchange of $`\gamma `$ and $`Z^0`$. From the above analysis we can infer the qualitative behavior of other transport coefficients. The characteristic electrical relaxation time, $`\tau _{el}\sim (\alpha ^2\mathrm{ln}(1/\alpha )T)^{-1}`$, defined from $`\sigma _{el}\sim e^2n\tau _{el}/T`$, is a typical “transport time” which determines the relaxation of transport processes when charges are involved. Right-handed leptons interact through $`Z^0`$ exchanges only, whereas left-handed leptons may change into neutrinos by $`W^\pm `$ exchanges as well. Since $`Z^0`$ exchange is similar to photon exchange when $`T\gg T_c`$, the characteristic relaxation time is similar to that for electrical conduction, $`\tau _\nu \sim (\alpha ^2\mathrm{ln}(1/\alpha )T)^{-1}`$ (except for the dependence on the Weinberg angle). Thus the viscosity is $`\eta \sim \tau _\nu T^4\sim T^3/(\alpha ^2\mathrm{ln}(1/\alpha ))`$. For $`T\ll M_W`$ the neutrino interaction is suppressed by a factor $`(T/M_W)^4`$; in this regime neutrinos have the longest mean free paths and dominate the viscosity. The electrical conductivity of the plasma in the early universe is sufficiently large that large-scale magnetic flux present in this period does not diffuse significantly over timescales of the expansion of the universe. The time for magnetic flux to diffuse over a distance scale $`L`$ is $`\tau _{diff}\sim \sigma _{el}L^2`$. Since the expansion timescale $`t_{exp}`$ is $`\sim 1/(t_{\mathrm{Planck}}T^2)`$ (in natural units), where $`t_{\mathrm{Planck}}\sim 10^{-43}`$ s is the Planck time, one readily finds that $`\frac{\tau _{diff}}{t_{exp}}\sim \alpha x^2\frac{\tau _{el}}{t_{\mathrm{Planck}}}\gg 1`$, where $`x=L/ct_{exp}`$ is the diffusion length scale in units of the distance to the horizon. As described in Refs. , sufficiently large domains with magnetic fields in the early universe would survive to produce the primordial magnetic fields observed today.
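As an order-of-magnitude check of this flux-freezing argument, the following sketch evaluates $`\tau _{diff}/t_{exp}`$; the numerical inputs ($`T`$, the scale fraction $`x`$, $`t_{\mathrm{Planck}}`$) are illustrative assumptions:

```python
import math

ALPHA    = 1.0 / 137.036
HBAR_GEV = 6.582e-25          # hbar in GeV s, converts 1/T to seconds
T_PLANCK = 1.0e-43            # Planck time in s (order of magnitude)

def tau_el(T_GeV):
    """Relaxation time tau_el ~ (alpha^2 ln(1/alpha) T)^-1, in seconds."""
    return HBAR_GEV / (ALPHA**2 * math.log(1.0 / ALPHA) * T_GeV)

def diffusion_ratio(x, T_GeV):
    """tau_diff / t_exp ~ alpha x^2 tau_el / t_Planck, with x = L / c t_exp."""
    return ALPHA * x**2 * tau_el(T_GeV) / T_PLANCK

print(f"{diffusion_ratio(1e-6, 100.0):.1e}")   # >> 1 even for tiny scales
```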
## Acknowledgments

To my collaborator G. Baym, and for discussions with C.J. Pethick and J. Popp.
# Spontaneous Transitions in Quantum Mechanics

## I Introduction

We reconsider in this paper the problem of spontaneous pair creation in static external fields. In the original version , the problem was addressed to high energy physicists. The experimental test was done by comparing the theoretical predictions with the experimental results coming from heavy ion collision experiments. As is stated in , there was no agreement between the two results, one of the possible causes being the large effects of non-adiabatic processes. In the past few years, experimental results showed that the transport properties of semiconductors with high symmetry may change drastically if a certain critical value of the external electric field is exceeded. A particular example is a quasi-one dimensional semiconductor, cooled down below the Peierls transition temperature. It is known that, below this critical temperature, a gap opens in the single-particle excitation spectrum. Moreover, the experimental results show the existence of a threshold value of the applied electric field where the transport properties change drastically. The two elements: the existence of the gap in the one-particle Hamiltonian spectrum, and the existence of the critical value of the applied electric field above which the conductivity is practically reduced to zero, are strong arguments for the idea that we are facing here the phenomenon of spontaneous pair creation. We agree that there are many theories which, more or less, explain this phenomenon. While most of them involve interacting quantum fields, our hope is that an effective potential can be written down such that, for applied electric fields above the threshold value, the overcritical part of the conjecture applies. If this is true, then there may be another way to experimentally test the theory, this time with a better control on the time variations of the external fields and thus on the non-adiabatic processes. In some situations, the threshold value of the electric field can be small. This means that, experimentally, we are not forced to switch off the applied field (to protect the sample). This shows one of the qualitative differences between the two experimental settings: in the heavy ion collisions, the quantum system is perturbed by the electric fields produced during the collisions, so we have no control on the “switch on” or “switch off” of the interaction. In contradistinction, for a semiconductor with a low critical value of the electric field, we have total control on how slowly the interaction is introduced. Because of a technical difficulty, in , the definition of overcritical external fields was slightly modified in order to prove the existence of the overcritical external fields. We propose another approach to the problem which avoids this technical difficulty. However, this doesn't mean that the problem of spontaneous pair creation is solved, but, in the light of the last observation, the new approach seems to be more appropriate for the problem of spontaneous pair creation in semiconductors.

## II Description of the problem

Because the results on scattering problems involving periodic Schrodinger operators are much poorer than those involving Dirac operators, we will treat the problem at the level of first quantization. We show that, above the critical value of the interaction, electrons can spontaneously transit between two different energy bands.
If the scattering operator can be implemented in the second quantization, this result is equivalent to spontaneous pair creation of electrons and holes. For simplicity, we will discuss here the case of a self-adjoint operator, $`H_0`$, defined on some dense subspace $`𝒟\left(H_0\right)`$ of the Hilbert space $`\mathcal{H}`$, whose spectrum consists of two absolutely continuous, bounded, disjoint parts. We denote the lower and upper parts by $`\sigma _{-}`$ and $`\sigma _+`$ respectively. Let $`H_\lambda =H_0+\lambda V`$ be the perturbed operator, where we assume that $`𝒟\left(H_\lambda \right)=𝒟\left(H_0\right)`$, $`𝒟\left(H_0\right)\subset 𝒟\left(V\right)`$, and that the perturbation leaves $`\sigma _{-}`$ and $`\sigma _+`$ unchanged. Our interest is in the case when, as $`\lambda `$ increases, some eigenvalues emerge from $`\sigma _+`$ and move continuously to $`\sigma _{-}`$, and there is a critical value, $`\lambda _c`$, at which the lowest eigenvalue touches $`\sigma _{-}`$ and then disappears in the lower continuum spectrum. We study the scattering problem of the pair $`(H_0,H_\lambda )`$ in the adiabatic switching formalism for both cases: $`\lambda <\lambda _c`$ and $`\lambda >\lambda _c`$. Let us consider a function $`\phi :\mathbb{R}\to \mathbb{R}_+`$, $`\phi \in 𝒞^{\infty }`$, such that
$$\phi \left(s\right)=\{\begin{array}{cc}1&\text{for }\left|s\right|<1\\ 0&\text{for }\left|s\right|>2\end{array}$$ (1)
and, for a pair of positive numbers $`\epsilon =(\epsilon _1,\epsilon _2)`$, we consider the adiabatic switching factor:
$$\phi _\epsilon \left(s\right)=\{\begin{array}{cc}\phi \left(\epsilon _1s\right)&\text{for }s<0\\ \phi \left(\epsilon _2s\right)&\text{for }s\ge 0\end{array}.$$ (2)
One can consider that $`\epsilon _1`$ controls the “switch on” process and $`\epsilon _2`$ controls the “switch off” process. Note that $`\phi _\epsilon `$ is also of class $`𝒞^{\infty }`$. For the time dependent Hamiltonian $`H_{\epsilon ,\lambda }\left(t\right)=H_0+\lambda \phi _\epsilon \left(t\right)V`$ and the time independent Hamiltonian $`H_\lambda =H_0+\lambda V`$, we denote by
$$W_{\epsilon ,\lambda }^\pm =s\text{-}\underset{T\to \pm \infty }{lim}U_{\epsilon ,\lambda }^{*}(T,0)e^{-iTH_0}$$ (3)
and
$$W_\lambda ^\pm =s\text{-}\underset{T\to \pm \infty }{lim}e^{iTH_\lambda }e^{-iTH_0}$$ (4)
the adiabatic and static Moller operators. The notation $`U_{\epsilon ,\lambda }(T,T^{\prime })`$ stands for the propagator corresponding to $`H_{\epsilon ,\lambda }\left(t\right)`$. We suppose that, for $`\lambda \in [0,\lambda _0]`$, $`\lambda _0>\lambda _c`$, these operators exist and the adiabatic Moller operators converge strongly to the static operators. In addition, we consider that the static Moller operators are locally complete on $`\sigma _{-}`$, i.e. $`Range\left[P_{H_\lambda }\left(\sigma _{-}\right)W_\lambda ^\pm \right]=P_{H_\lambda }\left(\sigma _{-}\right)\mathcal{H}`$. We will discuss later why the situation is different in the case when the Moller operators are only weakly complete (in the sense of ). With these assumptions, one can define the unitary scattering matrix $`S_\lambda =\left(W_\lambda ^{-}\right)^{*}W_\lambda ^+`$ and the adiabatic version, $`S_{\epsilon ,\lambda }=\left(W_{\epsilon ,\lambda }^{-}\right)^{*}W_{\epsilon ,\lambda }^+`$. It is known that the adiabatic scattering operator converges weakly to the static scattering operator in the adiabatic limit, $`\epsilon \to 0`$. Let us denote by $`P_{H_\lambda }\left(\mathrm{\Omega }\right)`$ the spectral projection of $`H_\lambda `$ corresponding to some $`\mathrm{\Omega }\subset \mathbb{R}`$.
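A concrete $`𝒞^{\infty }`$ realization of $`\phi `$ is easy to construct with the standard $`e^{-1/x}`$ mollifier. The Python sketch below is one possible choice satisfying the defining conditions (1)-(2); the paper itself does not fix any particular $`\phi `$:

```python
import numpy as np

def _smoothstep(u):
    """C-infinity step: 0 for u <= 0, 1 for u >= 1, smooth in between."""
    def h(v):
        return np.where(v > 0.0, np.exp(-1.0 / np.maximum(v, 1e-300)), 0.0)
    return h(u) / (h(u) + h(1.0 - u))

def phi(s):
    """A C-infinity phi with phi = 1 for |s| < 1 and phi = 0 for |s| > 2."""
    return 1.0 - _smoothstep(np.abs(s) - 1.0)

def phi_eps(s, eps1, eps2):
    """Adiabatic switching factor of Eq. (2)."""
    s = np.asarray(s, dtype=float)
    return np.where(s < 0.0, phi(eps1 * s), phi(eps2 * s))
```

Since both branches equal 1 in a neighborhood of $`s=0`$, the piecewise definition does not spoil smoothness there.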
Spontaneous excitations (transfer from $`P_{H_0}\left(\sigma _{-}\right)\mathcal{H}`$ to $`P_{H_0}\left(\sigma _+\right)\mathcal{H}`$ and vice-versa) are forbidden in the static case by the fact that the scattering matrix $`S_\lambda `$ commutes with the unperturbed Hamiltonian, and in consequence $`P_{H_0}\left(\sigma _\pm \right)S_\lambda P_{H_0}\left(\sigma _{\mp }\right)=0`$. The key observation is that $`S_{\epsilon ,\lambda }`$ does not commute with the unperturbed Hamiltonian and, because $`S_{\epsilon ,\lambda }`$ goes only weakly to the static scattering operator, we still have a chance for $`\underset{\epsilon \to 0}{lim}\left\|P_{H_0}\left(\sigma _\pm \right)S_{\epsilon ,\lambda }P_{H_0}\left(\sigma _{\mp }\right)\right\|>0`$. Indeed, it was proven in that this is the case if one considers a discontinuous switching factor, $`\phi _\delta `$, with $`\underset{\delta \to 0}{lim}\phi _\delta `$ a smooth function. Moreover, it was shown that
$$\underset{\epsilon _1=\epsilon _2\to 0}{lim}\left\|P_{H_0}\left(\sigma _\pm \right)S_{\epsilon ,\lambda }P_{H_0}\left(\sigma _{\mp }\right)\right\|=1-o\left(\delta \right)$$ (5)
provided $`\lambda >\lambda _c`$. We will prove in the next section that
$$\underset{\epsilon _1\to 0}{lim}\underset{\epsilon _2\to 0}{lim}\left\|P_{H_0}\left(\sigma _\pm \right)S_{\epsilon ,\lambda >\lambda _c}P_{H_0}\left(\sigma _{\mp }\right)\right\|=1\text{,}$$ (6)
but with $`\phi `$ of class $`𝒞^{\infty }`$. As was already pointed out in the previous section, this version may be more appropriate for the case of pair creation in semiconductors.

## III The result

Our main result is:

###### Theorem 1

Under the conditions enunciated in the previous sections, for $`\lambda \in \left[0,\lambda _0\right]`$, $`\lambda _0>\lambda _c`$, and $`H\left(t\right)`$ of class $`𝒞^3`$ with respect to $`t`$ (in the sense of ):
$$\underset{\epsilon _1\to 0}{lim}\underset{\epsilon _2\to 0}{lim}\left\|P_{H_0}\left(\sigma _{-}\right)S_{\epsilon ,\lambda }P_{H_0}\left(\sigma _+\right)\right\|=\{\begin{array}{cc}0&\text{ if }\lambda <\lambda _c\\ 1&\text{ if }\lambda >\lambda _c\end{array}\text{.}$$ (7)

Proof. The under-critical part ($`\lambda <\lambda _c`$) results directly from the adiabatic theorem. In this situation, the order of the limits is unimportant. Note that the under-critical case was proven in full generality for Dirac operators in . We now start the proof of the overcritical part ($`\lambda >\lambda _c`$), which follows closely . We will denote by $`E_g\left(t\right)`$ and $`\psi _g\left(t\right)`$ the lowest eigenvalue of $`H_{\epsilon ,\lambda }\left(t\right)`$ and one of its eigenvectors. (Without loss of generality, we can suppose that the eigenvalues do not change their order during the switching.) Any constant which depends on $`\epsilon _{1,2}`$ and goes to zero as $`\epsilon _{1,2}`$ goes to zero will be denoted by $`o\left(\epsilon _{1,2}\right)`$. Our task is to find a vector $`\varphi `$, $`\left\|\varphi \right\|=1`$, such that
$$\left\|P_{H_0}\left(\sigma _{-}\right)S_{\epsilon ,\lambda }P_{H_0}\left(\sigma _+\right)\varphi \right\|>1-o(\epsilon _1,\epsilon _2)\text{.}$$ (8)
Let $`\phi \left(s_0\right)=\lambda _c/\lambda `$, $`s_0>0`$, and $`0<\delta <1`$ such that $`E_g\left(-\left(s_0+\delta \right)/\epsilon _1\right)`$ exists.
From the adiabatic theorem applied on $`(-2/\epsilon _1,-\left(s_0+\delta \right)/\epsilon _1)`$ we get
$$\left\|P_{H_0}\left(\sigma _+\right)U_{\epsilon ,\lambda }(-2/\epsilon _1,-\left(s_0+\delta \right)/\epsilon _1)\psi _g\left(-\left(s_0+\delta \right)/\epsilon _1\right)\right\|>1-o\left(\epsilon _1\right)$$ (9)
and we will choose $`\varphi _{\epsilon _1}^{\prime }=U_{\epsilon ,\lambda }(-2/\epsilon _1,-\left(s_0+\delta \right)/\epsilon _1)\psi _g\left(-\left(s_0+\delta \right)/\epsilon _1\right)`$, where the index $`\epsilon _1`$ emphasizes that this vector depends only on $`\epsilon _1`$. Again, from the adiabatic theorem on $`(-\left(s_0+\delta \right)/\epsilon _1,0)`$, we have
$$\left\|P_{H_\lambda }\left(\sigma _{-}\right)U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\|>1-o\left(\epsilon _1\right)\text{.}$$ (10)
Because $`W_\lambda ^\pm `$ are complete, there exists $`\stackrel{~}{\varphi }_{\epsilon _1}\in P_{H_0}\left(\sigma _{-}\right)\mathcal{H}`$, $`\left\|\stackrel{~}{\varphi }_{\epsilon _1}\right\|\le 1`$, such that:
$$W_\lambda ^+\stackrel{~}{\varphi }_{\epsilon _1}=P_{H_\lambda }\left(\sigma _{-}\right)U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\text{.}$$ (11)
In fact, $`\stackrel{~}{\varphi }_{\epsilon _1}`$ is given by:
$$\stackrel{~}{\varphi }_{\epsilon _1}=P_{H_0}\left(\sigma _{-}\right)\left(W_\lambda ^+\right)^{*}U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\text{.}$$ (12)
Thus we can continue:
$$\left\|P_{H_0}\left(\sigma _{-}\right)e^{iH_02/\epsilon _2}U_{\epsilon ,\lambda }(2/\epsilon _2,0)U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\|$$ (13)
$`\ge \left|\left\langle \stackrel{~}{\varphi }_{\epsilon _1},e^{iH_02/\epsilon _2}U_{\epsilon ,\lambda }(2/\epsilon _2,0)U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\rangle \right|`$
$`\ge \left|\left\langle W_\lambda ^+\stackrel{~}{\varphi }_{\epsilon _1},U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\rangle \right|-\left|\left\langle \left[U_{\epsilon ,\lambda }^{*}(2/\epsilon _2,0)e^{-iH_02/\epsilon _2}-W_\lambda ^+\right]\stackrel{~}{\varphi }_{\epsilon _1},U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\rangle \right|`$
$`=\left\|P_{H_\lambda }\left(\sigma _{-}\right)U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\|^2-\left|\left\langle \left[W_{\epsilon _2,\lambda }^+-W_\lambda ^+\right]\stackrel{~}{\varphi }_{\epsilon _1},U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\rangle \right|`$
$`>1-o\left(\epsilon _1\right)-\left|\left\langle \left[W_{\epsilon _2,\lambda }^+-W_\lambda ^+\right]\stackrel{~}{\varphi }_{\epsilon _1},U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\rangle \right|\text{,}`$
by using inequality (10).
Finally, choosing $`\varphi =e^{-iH_02/\epsilon _1}\varphi _{\epsilon _1}^{\prime }`$, it follows from (9) that:
$$\left\|P_{H_0}\left(\sigma _{-}\right)S_{\epsilon ,\lambda }P_{H_0}\left(\sigma _+\right)\varphi \right\|>\left\|P_{H_0}\left(\sigma _{-}\right)e^{iH_02/\epsilon _2}U_{\epsilon ,\lambda }(2/\epsilon _2,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\|-o\left(\epsilon _1\right)\text{.}$$ (14)
Further, from inequality (13),
$$\left\|P_{H_0}\left(\sigma _{-}\right)S_{\epsilon ,\lambda }P_{H_0}\left(\sigma _+\right)\varphi \right\|\ge 1-o\left(\epsilon _1\right)-\left|\left\langle \left[W_{\epsilon _2,\lambda }^+-W_\lambda ^+\right]\stackrel{~}{\varphi }_{\epsilon _1},U_{\epsilon ,\lambda }(0,-2/\epsilon _1)\varphi _{\epsilon _1}^{\prime }\right\rangle \right|\text{.}$$ (15)
Because $`\stackrel{~}{\varphi }_{\epsilon _1}`$ does not depend on $`\epsilon _2`$, the statement of the theorem follows from the strong convergence of the adiabatic Moller operator to the static Moller operator. $`\mathrm{}`$

Following , one can second quantize our problem by considering $`P_{H_0}\left(\sigma _\pm \right)\mathcal{H}`$ as the spaces of particles and antiparticles (holes). If $`S_{\epsilon ,\lambda }`$ can be implemented in the Fock space, then one can follow the method of to show that this result is equivalent to spontaneous pair creation. We want to point out that the local completeness of the Moller operators is essential in the proof of the above theorem. Supposing that they are only weakly locally complete (i.e. $`Ran\left[P_{H_\lambda }\left(\sigma _{-}\right)W_\lambda ^{-}\right]=Ran\left[P_{H_\lambda }\left(\sigma _{-}\right)W_\lambda ^+\right]\subsetneq P_{H_\lambda }\left(\sigma _{-}\right)\mathcal{H}_{a.c.}\left(H_\lambda \right)`$), the eigenvector $`\psi _g\left(-\left(s_0+\delta \right)/\epsilon _1\right)`$ may be trapped in $`P_{H_\lambda }\left(\sigma _{-}\right)\mathcal{H}\cap \left[Ran\,W_\lambda ^+\right]^{\perp }`$ under the evolution $`U_\epsilon `$. Unfortunately, it follows from that this is not a rare case. Moreover, because of the infinite dimensionality of this subspace, the weak convergence
$$w\text{-}\underset{\epsilon _1\to 0}{lim}P_{H_\lambda }\left(\sigma _{-}\right)U_{\epsilon ,\lambda }(0,-1/\epsilon _1)=0$$ (16)
cannot be used to show that the vector escapes from $`P_{H_\lambda }\left(\sigma _{-}\right)\mathcal{H}\cap \left[Ran\,W_\lambda ^+\right]^{\perp }`$ after a long period of time. The conclusion is that during the ”switch on” process, the eigenvector is most likely trapped and stays in $`P_{H_\lambda }\left(\sigma _{-}\right)\mathcal{H}\cap \left[Ran\,W_\lambda ^+\right]^{\perp }`$. Then there is no way of defining a vector similar to $`\stackrel{~}{\varphi }_{\epsilon _1}`$, so the above proof cannot be applied. Because $`\left(W_{\epsilon ,\lambda }^{-}\right)^{*}`$ converges only weakly to $`\left(W_\lambda ^{-}\right)^{*}`$, there is no direct argument against the possibility that the ”switch off” process brings this vector back to $`P_{H_0}\left(\sigma _+\right)\mathcal{H}`$.

## IV Conclusions

The last observation shows that even in this simplified form, the problem of spontaneous transitions is not trivial. A deep question about the subject is under what conditions the same result is true regardless of the order of the limits, in particular, for $`\epsilon _1=\epsilon _2`$. In the case when the Moller operators are complete (or locally complete on $`\sigma _{-}`$), the result of the last section reduces this problem to the study of the properties of $`\stackrel{~}{\varphi }_{\epsilon _1}`$.
One might expect that
$$\int _0^{\infty }dt\left\|Ve^{-itH_0}\stackrel{~}{\varphi }_{\epsilon _1}\right\|<M\text{,}$$ (17)
with $`M`$ independent of $`\epsilon _1`$, in which case it is straightforward that the order of the limits is unimportant. To prove a relation like (17), one has to prove that $`\stackrel{~}{\varphi }_{\epsilon _1}`$ belongs to a set of vectors for which the Cook criterion is valid, together with uniform estimates. From the definition of $`\stackrel{~}{\varphi }_{\epsilon _1}`$, one can see that this problem can be reduced to the study of the evolution of the eigenvector $`\psi _g\left(-\left(s_0+\delta \right)/\epsilon _1\right)`$, which does not depend on $`\epsilon _1`$. In most cases, the Schwartz space may be chosen as the set of vectors for which the Cook criterion holds. Unfortunately, to prove that the evolution of $`\psi _g\left(-\left(s_0+\delta \right)/\epsilon _1\right)`$ belongs to this space is almost impossible. A much easier task is to prove that it belongs to some Sobolev space $`W^{k,p}`$. If this step is accomplished, we think that the $`W^{k,p}`$ estimates of may be used to complete the proof, at least for large dimensions.

###### Acknowledgements

The author gratefully acknowledges support, under the direction of J. Miller, by the State of Texas through the Texas Center for Superconductivity and the Texas Higher Education Coordinating Board Advanced Technology Program, and by the Robert A. Welch Foundation. The author also acknowledges extremely helpful observations pointed out by one of the referees.
# Stability in Force-Free Electrodynamics

## 1 Introduction

It has been suggested that the luminosity of pulsars, Kerr black holes in external magnetic fields, relativistic accretion disks, and gamma-ray bursts comes originally in the form of a Poynting jet. We show that Poynting jets are subject to Kelvin-Helmholtz and screw instabilities. Generically, Poynting jets should lose their stability as they propagate away from their sources, because the guiding magnetic field decreases more quickly than the toroidal magnetic field. Both the guiding magnetic field $`B`$ and the Poynting flux density $`S`$ decrease as $`1/A`$, where $`A`$ is the cross section of the Poynting jet, and we assume that $`A`$ grows as the jet propagates away from the source. Since the toroidal magnetic field and the radial (in local cylindrical coordinates) electric field scale as $`S^{1/2}`$, these fields decrease more slowly than the guiding field $`B`$. The stabilizing effect of the guiding field decreases, causing the jet to lose stability. Instabilities probably lead to a turbulent cascade of energy to smaller scales. The ultimate energy sink may be particle acceleration and/or pair production. We investigate the instabilities in the framework of force-free electrodynamics (FFE). FFE has been applied to pulsars, black holes, accretion disks, and gamma-ray bursts (Goldreich & Julian 1969, Scharlemann & Wagoner 1973, Blandford 1976, Blandford & Znajek 1977, Michel 1973, Thompson & Blaes 1998, Meszaros & Rees 1997). By definition, FFE applies when the energy-momentum tensor is dominated by the electromagnetic fields. Particles carry charges and currents but have zero inertia. We present in §2 different formulations of FFE that are used in the stability calculations. We show in §3 that electric-like tangential discontinuities are Kelvin-Helmholtz unstable. We prove in §4 that all non-trivial cylindrically symmetrical magnetic configurations are screw unstable. We give in §5.1 the eigenmode equation for a cylindrical Poynting jet. As an illustration of applications of this equation, we show in §5.2 that the Poynting jet of the Goldreich-Julian pulsar is screw unstable if the Poynting luminosity exceeds the classical “magneto-dipole” value.

## 2 Force-free electrodynamics (FFE)

FFE is applicable if electromagnetic fields are strong enough to produce pairs and baryon contamination is prevented by strong gravitational fields (Blandford & Znajek 1977). Pulsars, Kerr black holes in external magnetic fields, relativistic accretion disks, and gamma-ray bursts are the astrophysical objects whose luminosity might come originally in a pure electromagnetic form describable by FFE. FFE is classical electrodynamics supplemented by the force-free condition:
$$\partial _t𝐁=-\nabla \times 𝐄,$$ (1)
$$\partial _t𝐄=\nabla \times 𝐁-𝐣,$$ (2)
$$\rho 𝐄+𝐣\times 𝐁=0.$$ (3)
$`\nabla \cdot 𝐁=0`$ is the initial condition. The speed of light is $`c=1`$; $`\rho =\nabla \cdot 𝐄`$ and $`𝐣`$ are the charge and current densities multiplied by $`4\pi `$. The electric field is everywhere perpendicular to the magnetic field, $`𝐄\cdot 𝐁=0`$ (if $`\rho \ne 0`$, $`𝐄\cdot 𝐁=0`$ follows from (3); if $`\rho =0`$, this condition is an independent basic equation of FFE). The electric field component parallel to the magnetic field should vanish because charges are freely available in FFE. It is also assumed that the electric field is everywhere weaker than the magnetic field, $`E^2<B^2`$.
Then equation (3) means that it is always possible to find a local reference frame where the field is a pure magnetic field, and the current is flowing along this field. FFE is Lorentz invariant. Equation (3) can be written in the form of an Ohm's law. The current perpendicular to the local magnetic field can be calculated from equation (3). The parallel current is determined from the condition that electric and magnetic fields remain perpendicular during the evolution described by the Maxwell equations (1), (2). We thus obtain the following non-linear Ohm's law:
$$𝐣=\frac{(𝐁\cdot \nabla \times 𝐁-𝐄\cdot \nabla \times 𝐄)𝐁+(\nabla \cdot 𝐄)𝐄\times 𝐁}{B^2}.$$ (4)
Equations (1), (2), (4) form an evolutionary system (the initial condition $`𝐄\cdot 𝐁=0`$ is assumed). It therefore makes sense to study the stability of equilibrium electromagnetic fields in FFE. One can also study linear waves and their nonlinear interactions in the framework of FFE (Thompson & Blaes 1998). It is convenient to introduce a formulation of FFE similar to magnetohydrodynamics (MHD); then we can use the familiar techniques of MHD to test the stability of magnetic configurations. To this end, define a field $`𝐯=𝐄\times 𝐁/B^2`$, which is similar to velocity in MHD. Then $`𝐄=-𝐯\times 𝐁`$ and equation (1) becomes the “frozen-in” law
$$\partial _t𝐁=\nabla \times (𝐯\times 𝐁).$$ (5)
From $`𝐯=𝐄\times 𝐁/B^2`$ and from equations (1)-(3), one obtains the momentum equation
$$\partial _t(B^2𝐯)=(\nabla \times 𝐁)\times 𝐁+(\nabla \times 𝐄)\times 𝐄+(\nabla \cdot 𝐄)𝐄,$$ (6)
where $`𝐄=-𝐯\times 𝐁`$. Equations (5), (6) are the usual MHD equations, except that the density is equal to $`B^2`$ and there are order $`v^2`$ corrections in the momentum equation.

## 3 Tangential discontinuities

The planar Poynting jet $`𝐄_0=(V(x),0,0)`$, $`𝐁_0=(0,U(x),B(x))`$ is a stationary solution of FFE if
$$B^2+U^2-V^2=\mathrm{const}.$$ (7)
The eigenmodes of this jet are $`\mathrm{exp}(-i\omega t+ik_yy+ik_zz)`$. As shown in Appendix A, the eigenmode equation for the x-displacement $`\xi `$, defined by $`\delta B_x=i(Uk_y+Bk_z)\xi `$, is
$$(F\xi ^{\prime })^{\prime }+(\omega ^2-k_y^2-k_z^2)F\xi =0.$$ (8)
Here the prime denotes the x-derivative, and $`F=(\omega B+k_yV)^2-(k_zB+k_yU)^2+(k_zV-\omega U)^2`$. To illustrate applications of the eigenmode equation (8), consider a tangential discontinuity $`𝐄_0=(V,0,0)`$, $`𝐁_0=(0,U,B)`$ at $`x<0`$, and $`𝐄_0=0`$, $`𝐁_0=(0,0,B_0)`$ at $`x>0`$. As follows from equation (7), $`B_0^2=B^2+U^2-V^2`$. The eigenmode equation (8) is solved by $`\xi =\mathrm{exp}(-\kappa |x|)`$, $`\kappa ^2=k_y^2+k_z^2-\omega ^2`$. From the continuity of $`F\xi ^{\prime }`$, $`F(-0)+F(+0)=0`$, which gives the dispersion law
$$(\omega B+k_yV)^2-(k_zB+k_yU)^2+(k_zV-\omega U)^2+(\omega ^2-k_z^2)B_0^2=0.$$ (9)
It follows from the dispersion law (9) that magnetic-like discontinuities, $`|U|>|V|`$, are stable and electric-like discontinuities, $`|V|>|U|`$, are unstable (the simplest way to see this is to use a Lorentz boost along $`z`$ to remove either $`U`$ or $`V`$).
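Since the dispersion law (9) is a quadratic in $`\omega `$, the stability statement can be verified directly: a complex root pair signals an unstable surface mode. The sketch below does this for two representative parameter sets (the numbers are illustrative, and the pure-$`U`$ and pure-$`V`$ cases suffice by the boost argument above):

```python
import numpy as np

def surface_modes(B, U, V, ky, kz):
    """Roots in omega of dispersion relation (9); Im(omega) != 0 => unstable."""
    B0sq = B**2 + U**2 - V**2                 # pressure balance, Eq. (7)
    a = B**2 + U**2 + B0sq                    # omega^2 coefficient
    b = 2.0 * ky * V * B - 2.0 * kz * V * U
    c = (ky * V)**2 - (kz * B + ky * U)**2 + (kz * V)**2 - kz**2 * B0sq
    return np.roots([a, b, c])

print("magnetic-like:", surface_modes(1.0, 0.8, 0.0, 1.0, 0.2))  # real: stable
print("electric-like:", surface_modes(1.0, 0.0, 0.8, 1.0, 0.2))  # complex: unstable
```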
## 4 Magnetic configurations

A force-free magnetic field ($`(\nabla \times 𝐁)\times 𝐁=0`$) with a zero electric field is a stationary FFE solution. We will study the stability of force-free magnetic configurations using the MHD formulation of FFE given by equations (5) and (6). Linear perturbations, denoted $`𝐛`$ and $`𝐯`$, $`\propto \mathrm{exp}(-i\omega t)`$, satisfy
$$-i\omega 𝐛=\nabla \times (𝐯\times 𝐁),$$ (10)
$$-i\omega B^2𝐯=(\nabla \times 𝐁)\times 𝐛+(\nabla \times 𝐛)\times 𝐁.$$ (11)
Define the displacement $`𝝃`$ by $`𝐯=-i\omega 𝝃`$. From (10), $`𝐛=\nabla \times (𝝃\times 𝐁)`$. Now equation (11) can be written as
$$\omega ^2B^2𝝃=-(\nabla \times 𝐁)\times [\nabla \times (𝝃\times 𝐁)]-\{\nabla \times [\nabla \times (𝝃\times 𝐁)]\}\times 𝐁\equiv \widehat{𝐊}𝝃.$$ (12)
Since the operator $`\widehat{𝐊}`$ is self-adjoint (Kadomtsev 1966 and references therein), the frequency is given by the variational principle
$$\omega ^2=\mathrm{min}\frac{\int d^3r\,𝝃\cdot \widehat{𝐊}𝝃}{\int d^3r\,B^2𝝃^2}.$$ (13)
The equilibrium field $`𝐁`$ is unstable if the potential energy
$$W\equiv \int d^3r\,𝝃\cdot \widehat{𝐊}𝝃,$$ (14)
is negative for some displacement $`𝝃`$. Consider the case of cylindrical symmetry, coordinates $`(r,\theta ,z)`$. The equilibrium field $`𝐁=(0,U(r),B(r))`$ should satisfy $`BB^{\prime }+Ur^{-1}(rU)^{\prime }=0`$, where the prime denotes the r-derivative. For an eigenmode $`\mathrm{exp}(im\theta +ikz)`$, the potential energy reduces to (Kadomtsev 1966)
$$W\propto \int dr(f\xi ^{\prime 2}+g\xi ^2),$$ (15)
where $`\xi `$ is the radial component of the displacement. The other two components of the displacement vector, $`\xi _\theta `$ and $`\xi _z`$, were chosen to minimize the energy for a given $`\xi `$. The functions of radius $`f`$ and $`g`$ are given by
$$f=r\frac{(krB+mU)^2}{k^2r^2+m^2},$$ (16)
$$g=r^{-1}\left(\frac{k^2r^2+m^2-1}{k^2r^2+m^2}(krB+mU)^2+\frac{2k^2r^2}{(k^2r^2+m^2)^2}(k^2r^2B^2-m^2U^2)\right).$$ (17)
We now show that magnetic configurations with a non-zero toroidal field $`U`$ are screw unstable. Screw means that, for a given sign of $`k`$, one sign of $`m`$ (here $`m=-1`$, but not $`m=+1`$) is unstable. Kadomtsev (1966) gives a clear discussion of the screw instability in plasmas, and shows that the screw mode is the most dangerous mode (if the plasma is unstable, it is screw unstable). Assume that $`U(0)=0`$, $`U^{\prime }(0)>0`$, $`U\to 0`$ for $`r\to \infty `$, and $`U`$ is positive in between. Assume that $`B`$ is everywhere positive. Let $`k`$ be positive and small. Take $`m=-1`$. Let $`r_0`$ be the first zero of $`krB-U`$. Take a trial function $`\xi =1`$ for $`r<r_0`$, and zero at $`r>r_0`$. We can choose this generalized function $`\xi `$ in such a way that the first term in the energy integral (15) vanishes. The second term is an integral of $`g`$ from $`0`$ to $`r_0`$. For small $`k`$, $`g=k^2r(krB-U)(3krB+U)`$ is negative on this interval, so the energy (15) is negative and the configuration is screw unstable.
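The sign of the trial-function energy is easy to check numerically for any admissible profile. The sketch below uses an illustrative toroidal profile (with $`U(0)=0`$, $`U^{\prime }(0)>0`$, $`U\to 0`$ at infinity, as assumed above) and confirms that the small-$`k`$ energy integral is negative:

```python
import numpy as np

B  = 1.0
U0 = 0.5
U  = lambda r: U0 * r * np.exp(1.0 - r)       # illustrative profile

def screw_energy(k, n=4000):
    """W ~ int_0^{r0} g dr with g = k^2 r (krB - U)(3krB + U) (m = -1 mode)."""
    r0 = 1.0 - np.log(k / U0)                 # first zero of krB - U for this U
    r = np.linspace(1e-6, r0, n)
    g = k**2 * r * (k * r * B - U(r)) * (3.0 * k * r * B + U(r))
    return np.trapz(g, r)

for k in (0.05, 0.1, 0.2):
    print(f"k = {k:4.2f}:  W ~ {screw_energy(k):+.3e}")   # negative: unstable
```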
## 5 Poynting jets

Poynting jets become observable only after the instabilities convert the Poynting flux into kinetic energy of charged particles. The FFE instabilities may also limit the Poynting luminosity of the sources. Thus, the stability of Poynting jets in FFE is of interest for astrophysical applications. Unlike the purely magnetic case, the eigenvalue problem is not self-adjoint. We first give the eigenmode equation for cylindrical Poynting jets. We then analyze the stability of a slowly rotating Goldreich-Julian pulsar. We will show that the Poynting jet is screw unstable if the current density exceeds the charge density times the speed of light (since charges of either sign are available, the current density can, in principle, exceed the charge density times the speed of light).

### 5.1 The eigenmode equation

The cylindrical Poynting jet $`𝐄_0=(V(r),0,0)`$, $`𝐁_0=(0,U(r),B(r))`$, in cylindrical coordinates $`(r,\theta ,z)`$, is a stationary solution of FFE if
$$BB^{\prime }+Ur^{-1}(rU)^{\prime }-Vr^{-1}(rV)^{\prime }=0,$$ (18)
where the prime denotes the r-derivative. The eigenmodes are $`\mathrm{exp}(-i\omega t+im\theta +ikz)`$. Proceeding along the same lines as in Appendix A, we obtain the eigenmode equation for the r-displacement $`\xi `$, defined by $`\delta B_r=i(mU/r+Bk)\xi `$,
$$(f\xi ^{\prime })^{\prime }=g\xi .$$ (19)
Here
$$f=rF/\kappa ^2,$$ (20)
$$g=-r^{-1}E/\kappa ^2,$$ (21)
$$\kappa ^2=k^2+m^2r^{-2}-\omega ^2,$$ (22)
$$F=c_1^2-c_2^2+c_3^2,$$ (23)
$$E=(1-\kappa ^2r^2)F-2\frac{k^2-\omega ^2}{\kappa ^2}G-4H-rH^{\prime },$$ (24)
$$G=2B(\omega c_1-kc_2)-F,$$ (25)
$$H=c_3^2,$$ (26)
$$c_1=\omega B+mV/r,$$ (27)
$$c_2=kB+mU/r,$$ (28)
$$c_3=kV-\omega U.$$ (29)
The eigenmode equation (19) reduces to (8) in the planar limit, $`r\to \infty `$, $`m\to \infty `$, $`m/r=\mathrm{const}=k_y`$. In the purely magnetic case, $`V=0`$, for $`\omega =0`$, the functions $`f`$ and $`g`$ reduce to (16), (17). The eigenmode equation is invariant under Lorentz boosts in the z-direction: since $`(\omega ,k)`$, $`(U,V)`$, and $`(c_1,c_2)`$ are vectors, one can show that $`f`$ and $`g`$ are scalars.

### 5.2 Stability of the Goldreich-Julian pulsar model

To illustrate how equation (19) can be applied, we derive the stability criterion for a slowly rotating Goldreich-Julian (1969) pulsar. The pulsar is called slow if $`\mathrm{\Omega }R\ll c`$, where $`\mathrm{\Omega }`$ is the neutron star (NS) rotation frequency and $`R`$ is the NS radius; $`c=1`$ is the speed of light. The pulsar luminosity is carried by a narrow Poynting jet, of radius $`a\sim (\mathrm{\Omega }R/c)^{1/2}R`$, emanating from the polar cap region. Since $`a\ll R`$, the jet is approximately cylindrical. In the vicinity of the axis, the radial electric field is given by $`V=\mathrm{\Omega }Br`$, where $`B`$ is the NS magnetic field. The toroidal magnetic field can be written in the form $`U=\mathrm{\Omega }_U(r)Br`$. We will assume that $`\mathrm{\Omega }_U(r)`$ decreases from some positive value $`\mathrm{\Omega }_U`$ at $`r=0`$ to $`0`$ at $`r>a`$. The value of $`\mathrm{\Omega }_U`$ is not known, but it was suggested that $`\mathrm{\Omega }_U\sim \mathrm{\Omega }`$. The following analytical derivation of the sufficient condition for the screw instability has been confirmed by numerical simulations of the eigenmode equation. Take $`k=(1-ϵ)\mathrm{\Omega }_U`$, where $`ϵ`$ is a small positive number. If $`\mathrm{\Omega }=0`$, the screw mode ($`m=-1`$) is unstable and has a small growth rate. What happens when we “turn on” $`\mathrm{\Omega }`$? Since $`f/g\sim 1/(k^2a^2)`$, the eigenmode is approximately a step-function, $`\xi =1`$ at $`r<r_0`$, $`\xi =0`$ at $`r>r_0`$, where $`r_0`$ is the first zero of $`c_2`$, and the real part of the eigenfrequency is $`\mathrm{Re}\,\omega =\mathrm{\Omega }`$, so that $`|c_1|\ll |c_2|`$. The instability criterion is similar to the purely magnetic case of §4,
$$\int _0^{r_0}dr\,g<0.$$ (30)
To leading order in $`\mathrm{\Omega }R/c`$,
$$g=(k^2-\mathrm{\Omega }^2)(kB-U/r)^2+2(k^2-\mathrm{\Omega }^2)(k^2B^2-U^2/r^2)-3\mathrm{\Omega }^2(kB-U/r)^2.$$ (31)
The second term in $`g`$ is the largest in absolute value. This term is negative if $`\mathrm{\Omega }<\mathrm{\Omega }_U`$. Therefore the jet is unstable so long as $`\mathrm{\Omega }<\mathrm{\Omega }_U`$. It follows that the luminosity of a stable Goldreich-Julian pulsar does not exceed $`L\sim a^2cUV\sim (\mathrm{\Omega }R/c)^4cB^2R^2`$ – the “magneto-dipole” value.
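The criterion (30)-(31) lends itself to the same kind of numerical check. In the sketch below, $`\mathrm{\Omega }_U(r)`$ is given an assumed Gaussian fall-off (the text does not specify the profile); the integral changes sign near $`\mathrm{\Omega }=\mathrm{\Omega }_U`$, as stated:

```python
import numpy as np

B, a, Om_U0 = 1.0, 1.0, 1.0
Om_U = lambda r: Om_U0 * np.exp(-(r / a)**2)   # assumed profile

def criterion(Omega, eps=0.05, n=4000):
    """int_0^{r0} g dr with g from Eq. (31); negative => screw unstable."""
    k = (1.0 - eps) * Om_U0
    r0 = a * np.sqrt(-np.log(k / Om_U0))       # first zero of kB - U/r
    r = np.linspace(1e-6, r0, n)
    U_over_r = Om_U(r) * B                     # U/r for U = Om_U(r) B r
    g = ((k**2 - Omega**2) * (k * B - U_over_r)**2
         + 2.0 * (k**2 - Omega**2) * (k**2 * B**2 - U_over_r**2)
         - 3.0 * Omega**2 * (k * B - U_over_r)**2)
    return np.trapz(g, r)

for Om in (0.5, 0.9, 1.1):                     # Omega in units of Om_U
    print(f"Omega = {Om:3.1f}:  integral = {criterion(Om):+.3e}")
```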
Consider a planar Poynting jet $`𝐄_0=(V(x),0,0)`$, $`𝐁_0=(0,U(x),B(x))`$, $`BB^{\prime }+UU^{\prime }-VV^{\prime }=0`$, where the prime denotes the x-derivative. The eigenmodes are $`\mathrm{exp}(-i\omega t+ik_yy+ik_zz)`$. The FFE equations (1)-(3) for the perturbed fields and currents (denoted $`e`$, $`b`$, $`j`$) are
$$\omega b_x=k_ye_z-k_ze_y,$$ (A1)
$$\omega b_y=k_ze_x+ie_z^{\prime },$$ (A2)
$$\omega b_z=-k_ye_x-ie_y^{\prime },$$ (A3)
$$\omega e_x=-k_yb_z+k_zb_y-ij_x,$$ (A4)
$$\omega e_y=-k_zb_x-ib_z^{\prime }-ij_y,$$ (A5)
$$\omega e_z=k_yb_x+ib_y^{\prime }-ij_z,$$ (A6)
$$V^{\prime }e_x+V(e_x^{\prime }+ik_ye_y+ik_ze_z)-B^{\prime }b_z-U^{\prime }b_y+Bj_y-Uj_z=0,$$ (A7)
$$V^{\prime }e_y+U^{\prime }b_x-Bj_x=0,$$ (A8)
$$V^{\prime }e_z+B^{\prime }b_x+Uj_x=0.$$ (A9)
From (A8), (A9),
$$Vb_x+Ue_y+Be_z=0.$$ (A10)
From (A4), (A8),
$$V^{\prime }e_y+U^{\prime }b_x=iB(\omega e_x+k_yb_z-k_zb_y).$$ (A11)
From (A5), (A6), (A7),
$$(Bb_z+Ub_y-Ve_x)^{\prime }=i(c_1e_y+c_2b_x+c_3e_z),$$ (A12)
where we denoted $`c_1=\omega B+k_yV`$, $`c_2=k_zB+k_yU`$, $`c_3=k_zV-\omega U`$. For further derivations the following relationships are useful: $`c_1U-c_2V+c_3B=0`$, and $`c_1k_z-c_2\omega -c_3k_y=0`$. We solve (A1), (A10) by introducing the x-displacement $`\xi `$ (called the x-displacement because, according to (A13), $`b_x`$ is obtained from $`U`$ and $`B`$ by an x-displacement $`\xi `$):
$$b_x=ic_2\xi ,e_y=-ic_1\xi ,e_z=-ic_3\xi .$$ (A13)
Define
$$\eta =Bb_z+Ub_y-Ve_x.$$ (A14)
Equation (A12) takes the form
$$\eta ^{\prime }=(c_1^2-c_2^2+c_3^2)\xi .$$ (A15)
From (A2), (A3), (A13), (A14), (A11),
$$B(c_1\xi )^{\prime }=-(\omega V+k_yB)e_x+\omega Ub_y-\omega \eta ,$$ (A16)
$$(c_3\xi )^{\prime }=-k_ze_x+\omega b_y,$$ (A17)
$$(c_2U^{\prime }-c_1V^{\prime })\xi =c_1e_x-c_2b_y+k_y\eta .$$ (A18)
Excluding $`e_x`$ and $`b_y`$ from (A16)-(A18), we obtain
$$(c_1^2-c_2^2+c_3^2)\xi ^{\prime }=(k_y^2+k_z^2-\omega ^2)\eta .$$ (A19)
From (A15), (A19),
$$((c_1^2-c_2^2+c_3^2)\xi ^{\prime })^{\prime }=(k_y^2+k_z^2-\omega ^2)(c_1^2-c_2^2+c_3^2)\xi .$$ (A20)
# Thermodynamics of the quantum easy-plane antiferromagnet on the triangular lattice

## Abstract

The classical XXZ triangular-lattice antiferromagnet (TAF) shows both an Ising and a BKT transition, related to the chirality and the in-plane spin components, respectively. In this paper the quantum effects on the thermodynamic quantities are evaluated by means of the pure-quantum self-consistent harmonic approximation (PQSCHA), which allows one to deal with any spin value through classical MC simulations. We report the internal energy, the specific heat, and the in-plane correlation length of the quantum XX0 TAF, for $`S=1/2`$, 1, 5/2. The quantum transition temperatures turn out to be smaller the smaller the spin, and agree with the few available theoretical and numerical estimates.

A renewed interest has recently focused on triangular antiferromagnets (TAF) . Indeed, they turned out to describe the magnetic behavior of several real compounds as, for example, the stacked antiferromagnet $`\mathrm{NaTiO}_2`$ , the organic superconductors of the family $`\kappa `$-$`(\mathrm{BEDT}\text{-}\mathrm{TTF})_2\mathrm{X}`$ , and the K/Si(111):B interface . In this paper we investigate the thermodynamic properties of the quantum XXZ Heisenberg antiferromagnet on the triangular lattice, defined by the following Hamiltonian:
$$\widehat{\mathcal{H}}=\frac{J}{2}\underset{𝐢,𝐝}{\sum }\left(\widehat{S}_𝐢^x\widehat{S}_{𝐢+𝐝}^x+\widehat{S}_𝐢^y\widehat{S}_{𝐢+𝐝}^y+\lambda \widehat{S}_𝐢^z\widehat{S}_{𝐢+𝐝}^z\right),$$ (1)
where $`J`$ is the positive (antiferromagnetic) exchange constant, and $`(\widehat{S}_𝐢^x,\widehat{S}_𝐢^y,\widehat{S}_𝐢^z)`$ are the spin operators sitting on the sites $`𝐢`$ of a triangular lattice. They satisfy $`𝒮𝒰(2)`$ commutation relations and belong to the spin-$`S`$ representation, $`\widehat{𝐒}_𝐢^2=S(S+1)`$. The interaction is restricted to nearest neighbors and $`𝐝`$ runs over their relative displacements. The planar character of the system is due to the presence of the anisotropy $`\lambda \in [0,1)`$, energetically favoring configurations with the spins lying in the $`xy`$ plane (easy-plane). For $`\lambda =0`$ the spin components on the $`z`$ axis do not appear in the Hamiltonian and the model is known as the XX0 or quantum XY model. The minimum energy configuration of the classical counterpart of the Hamiltonian (1), for every value of $`\lambda \in [0,1]`$, consists of coplanar spins forming $`\pm 2\pi /3`$ angles between nearest neighbors, and this leads to a $`\sqrt{3}\times \sqrt{3}`$ periodic Néel state. In contrast to the isotropic case, where the plane in which the $`2\pi /3`$ structure lies can take any direction in spin space, in the XXZ model this structure must lie in the easy-plane. As a result, in the planar TAF the frustration causes an additional discrete two-fold degeneracy of the classical ground state, which is due to the chirality (or helicity), defined as the sign of rotation of the spins along the sides of each elementary triangle. The resulting degeneracy corresponds to the group $`SO(2)\times Z_2`$. As the Mermin and Wagner theorem only states that the sublattice magnetization must vanish at any non-zero temperature, long-range order can occur as far as the chirality is concerned, and an Ising-like phase transition is indeed observed , in addition to the usual Berezinskii-Kosterlitz-Thouless (BKT) critical behavior associated with the rotation symmetry in the $`xy`$ plane.
In fact, unlike the antiferromagnet on the square lattice where there is a general consensus about the ordered nature of the ground state even for $`S=1/2`$, in the frustrated cases the lack of exact analytical results is accompanied by difficulties in applying stochastic numerical methods, as their reliability is strongly limited by the well-known sign problem. Indeed only very recently a systematic size-scaling of the order parameter and of the spin gap has been performed using a new Quantum Monte Carlo technique , confirming the existence of Néel long-range order in the ground state as also suggested by the symmetry properties of the first excited states, evidenced by Bernu et al. . An even less clear situation is that concerning the finite temperature behavior. In fact an early numerical work , limited to lattice sizes up to 27 sites, indicated for the $`S=1/2`$ XX0 model a phase diagram similar to the classical one, in contrast with the high temperature expansion produced by Fujiki and Betts where no evidence for a phase transition was found. For the XXZ Hamiltonian, Momoi and Suzuki , applying an effective field theory, conjectured that the chiral phase transition should persist for every value of $`\lambda [0,1)`$, as in the classical case, and were able to estimate the transition temperature for $`\lambda =0`$, obtaining a value very close to that found in Ref. . Recently Suzuki and Matsubara using a quantum transfer Monte Carlo method to study clusters up to 24 sites, have claimed instead the absence of the chiral order at any finite temperature for $`\lambda 0.6`$. In this context where, at least up to now, quantum numerical methods cannot give satisfactory answers, the pure-quantum self-consistent harmonic approximation (PQSCHA) can provide an effective instrument to investigate the thermodynamics of quantum spin systems, as far as their ground state is ordered. The method is based on the path-integral formulation of quantum statistical mechanics, and has been successfully applied recently to a variety of unfrustrated spin models, both one- and two-dimensional . By the PQSCHA the evaluation of thermal averages in the quantum model can be reduced to the calculation of classical-like averages over a Boltzmann distribution defined by an effective Hamiltonian, which contains the contribution of the pure-quantum part of the fluctuations (approximated within a self consistent harmonic scheme) in its renormalized interaction parameters, which are temperature and spin dependent. As a result one can get accurate results on the quantum spin system using classical computational methods, like the transfer-matrix in the one-dimensional case and classical Monte Carlo simulations in the two-dimensional one. The first step of the derivation of the effective Hamiltonian for the easy-plane TAF, is to apply the unitary transformation which defines a spatially varying coordinate system pointing along the local Néel direction, namely $$𝒰=\mathrm{exp}\left(\frac{2\pi }{3}i\underset{𝐢\mathrm{B}}{}\widehat{S}_𝐢^z\frac{2\pi }{3}i\underset{𝐢\mathrm{C}}{}\widehat{S}_𝐢^z\right),$$ (2) where B and C labels two of the three sublattices. 
Unlike in the bipartite lattices, where the corresponding transformation maps the antiferromagnet into a model with an in-plane ferromagnetic exchange and an antiferromagnetic coupling along the $`z`$ axis, thus allowing the demonstration of the Lieb and Mattis theorem and computability with standard quantum Monte Carlo methods, in the triangular case the transformed Hamiltonian shows an extra current-like term which contains the effects of the frustration; its form is quite similar to that of the chiral order parameter , i.e., the physical quantity undergoing the order-disorder phase transition present in the classical model for every value of $`\lambda <1`$. From now on the derivation follows the same lines already described in Refs. . A point worth recalling is the use of the Villain transformation in order to represent the spin operators in terms of canonically conjugate variables, which is a necessary step in the derivation of the effective Hamiltonian. As is well known, this spin-boson transformation preserves the commutation rules but neglects the so-called kinematic interaction due to the limited spectrum of $`S^z`$, thus giving a better description when the system has a good easy-plane character and the spin states with large fluctuations of $`S^z`$ are less relevant to the thermodynamics. In the square lattice case such an approximation scheme turns out to be reliable up to some value of $`\lambda _\mathrm{M}<1`$ ($`\lambda _\mathrm{M}=0.58`$ in the extreme quantum case $`S=1/2`$), where the mapping with the Villain transformation breaks down and a different spin-boson transformation is needed. However, it provides accurate results for the critical temperatures even for $`\lambda =0.5`$; a similar behavior is also found in the case of the quantum TAF. Finally, we remind the reader that Weyl ordering, which is inherent to the PQSCHA, naturally leads to the definition of an effective classical spin length $`\stackrel{~}{S}=S+1/2`$ and thus to the natural energy scale $`ϵ=J\stackrel{~}{S}^2`$. Therefore in the following we use the reduced temperature $`t=k_BT/J\stackrel{~}{S}^2`$. In the case of the easy-plane TAF the effective Hamiltonian has the form $`\mathcal{H}_{\mathrm{eff}}=\overline{\mathcal{H}}+G(t)`$, where $`G(t)`$ is an additive uniform term, formally identical to that obtained for the square lattice, which is unessential in the calculation of the thermal averages, while
$$\overline{\mathcal{H}}=\frac{ϵ}{2}j_{\mathrm{eff}}\underset{𝐢,𝐝}{\sum }\left(s_𝐢^xs_{𝐢+𝐝}^x+s_𝐢^ys_{𝐢+𝐝}^y+\lambda _{\mathrm{eff}}s_𝐢^zs_{𝐢+𝐝}^z\right),$$ (3)
where $`(s_𝐢^x,s_𝐢^y,s_𝐢^z)`$ are unit vectors, i.e., classical spins.
Within the PQSCHA, quantum effects are embodied in the temperature and spin dependence of the renormalized couplings $`j_{\mathrm{eff}}(t,S,\lambda )`$ $`=`$ $`(1-{\displaystyle \frac{1}{2}}D_{\perp })^2e^{-\frac{1}{2}𝒟_{\parallel }},`$ (4) $`\lambda _{\mathrm{eff}}(t,S,\lambda )`$ $`=`$ $`\lambda (1-{\displaystyle \frac{1}{2}}D_{\perp })^{-1}e^{\frac{1}{2}𝒟_{\parallel }},`$ (5) with $`D_{\perp }`$ $`=`$ $`(2\stackrel{~}{S}N)^{-1}{\displaystyle \underset{𝐤}{\sum }}{\displaystyle \frac{b_𝐤}{a_𝐤}}ℒ(f_𝐤),`$ (6) $`𝒟_{\parallel }`$ $`=`$ $`(\stackrel{~}{S}N)^{-1}{\displaystyle \underset{𝐤}{\sum }}(1-\gamma _𝐤){\displaystyle \frac{a_𝐤}{b_𝐤}}ℒ(f_𝐤),`$ (7) where $`a_𝐤^2`$ $`=`$ $`{\displaystyle \frac{z}{2}}(1-\frac{1}{2}D_{\perp })\mathrm{e}^{-{\scriptscriptstyle \frac{1}{2}}𝒟_{\parallel }}(1+2\lambda _{\mathrm{eff}}\gamma _𝐤),`$ (8) $`b_𝐤^2`$ $`=`$ $`{\displaystyle \frac{z}{2}}j_{\mathrm{eff}}(1-\gamma _𝐤),`$ (9) $`f_𝐤=a_𝐤b_𝐤/(2\stackrel{~}{S}t)`$ , $`ℒ(x)=\mathrm{coth}x-x^{-1}`$ is the Langevin function, $`\gamma _𝐤=z^{-1}{\sum }_𝐝\mathrm{cos}(𝐤\cdot 𝐝)`$ , and $`𝐤`$ is a wavevector varying in the first Brillouin zone. $`D_{\perp }(S,\lambda ,t)`$ and $`𝒟_{\parallel }(S,\lambda ,t)`$ represent the pure-quantum square fluctuations of the out-of-plane and in-plane components of the spins, respectively. They are decreasing functions of temperature and spin, vanishing for $`t\to \mathrm{\infty }`$ and $`S\to \mathrm{\infty }`$, i.e., when the quantum part of the fluctuations is negligible with respect to the classical one. From the above equations we can infer that the PQSCHA approach is valid under the condition that second-order terms $`(D_{\perp })^2`$ can be neglected. One can take the criterion that the renormalization effects of quantum fluctuations must not reduce the effective exchange integral by much more than, say, $`50\%`$. Such a strong renormalization only occurs for $`S=1/2`$ and $`t\lesssim 0.2`$, while for higher spin values the PQSCHA is reliable at any temperature. In the XX0 model, $`\lambda _{\mathrm{eff}}=\lambda =0`$, and all the information about the quantum system is hence contained in the renormalization of the energy scale. In this case the critical properties of the quantum system at a temperature $`t`$ are essentially those of its classical counterpart at the effective temperature $`t_{\mathrm{eff}}=t/j_{\mathrm{eff}}(t,S)`$, and we have used the results of classical Monte Carlo (MC) simulations recently obtained for lattice sizes up to $`N=120\times 120`$ to calculate the corresponding quantum observables. Indicating the classical averages with the effective Hamiltonian as $`⟨\mathrm{\dots }⟩_{\mathrm{eff}}`$, the internal energy per spin can be calculated as $$e(t,S,\lambda )=\frac{⟨\widehat{ℋ}⟩}{Nϵ}=\frac{⟨\overline{ℋ}⟩_{\mathrm{eff}}}{Nϵ}+\frac{z}{2}\lambda 𝒟_{\perp },$$ (10) where $`𝒟_{\perp }`$ can be expressed as $`D_{\perp }`$, Eq. (6), with an extra factor $`\gamma _𝐤`$ in the summand. For $`\lambda =0`$ the above equation reduces to $$e(t,S)=j_{\mathrm{eff}}(t)e_{\mathrm{cl}}(t_{\mathrm{eff}}),$$ (11) where $`e_{\mathrm{cl}}(t)`$ is the internal energy per spin of the corresponding classical system. In Fig. 1 $`e(t,S)`$ is plotted for various values of the spin in the range of temperatures where the PQSCHA is expected to give reliable results. The energy curves flatten and rise with decreasing $`S`$, due to the increased quantum fluctuations. As noted above, the $`S=1/2`$ curve is reported only in its temperature range of validity. As a matter of fact, the extrapolation to the lowest temperatures gives the self-consistent spin-wave ground-state energy.
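To make the self-consistency of Eqs. (4)–(9) concrete, here is a minimal numerical sketch for the XX0 case ($`\lambda =0`$, so $`\lambda _{\mathrm{eff}}=0`$ and only the energy-scale renormalization $`j_{\mathrm{eff}}`$ survives). It is our own illustration: the signs of the exponentials follow the reconstruction adopted above, and the grid size, starting point and tolerance are arbitrary choices of ours.

```python
import numpy as np

def j_eff_xx0(t, S, L=48, tol=1e-10, itmax=500):
    """Self-consistent j_eff(t, S) of Eqs. (4)-(9) for the XX0 triangular AF."""
    St = S + 0.5                               # effective spin length S~ = S + 1/2
    z = 6.0                                    # triangular-lattice coordination number
    m1, m2 = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    # gamma_k = (1/3)[cos k.a1 + cos k.a2 + cos k.(a1-a2)] on an L x L zone grid
    g = (np.cos(2 * np.pi * m1 / L) + np.cos(2 * np.pi * m2 / L)
         + np.cos(2 * np.pi * (m1 - m2) / L)) / 3.0
    g = g[~((m1 == 0) & (m2 == 0))]            # the k = 0 summand vanishes; drop it
    Dperp = Dpar = 0.0
    for _ in range(itmax):
        jeff = (1 - 0.5 * Dperp) ** 2 * np.exp(-0.5 * Dpar)          # Eq. (4)
        ak = np.sqrt(0.5 * z * (1 - 0.5 * Dperp) * np.exp(-0.5 * Dpar)
                     * np.ones_like(g))        # Eq. (8) with lambda_eff = 0
        bk = np.sqrt(0.5 * z * jeff * (1 - g))                       # Eq. (9)
        fk = ak * bk / (2 * St * t)
        Lf = 1.0 / np.tanh(fk) - 1.0 / fk      # Langevin function
        Dperp_new = np.sum(bk / ak * Lf) / (2 * St * L * L)          # Eq. (6)
        Dpar_new = np.sum((1 - g) * ak / bk * Lf) / (St * L * L)     # Eq. (7)
        if abs(Dperp_new - Dperp) + abs(Dpar_new - Dpar) < tol:
            break
        Dperp, Dpar = Dperp_new, Dpar_new
    return (1 - 0.5 * Dperp) ** 2 * np.exp(-0.5 * Dpar)

# Effective classical temperature of the quantum model, t_eff = t / j_eff(t, S):
t, S = 0.3, 0.5
print(t / j_eff_xx0(t, S))
```

At high temperature the loop returns $`j_{\mathrm{eff}}`$ close to 1, as required by the vanishing of the pure-quantum fluctuations; note that the ambiguous sign in Eq. (5) does not enter here, since $`\lambda _{\mathrm{eff}}=0`$ for the XX0 model.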
The difference from the most refined estimates can be mainly attributed to $`1/S^2`$ constant contributions coming from the Villain transformation and to the use of the low-coupling approximation (LCA) . This term is not significant for $`S\ge 1`$. Consistently with this picture, the finite-size ($`N=120\times 120`$) peak of the specific heat (Fig. 2), obtained by numerical differentiation of the internal energy, moves towards lower temperatures and decreases in height as $`S`$ decreases. However, since the quantum renormalizations are essentially size-independent, the classical scaling with size is preserved, and a logarithmic divergence of the specific heat, connected with the Ising-like chirality phase transition, is therefore expected in the thermodynamic limit. By direct differentiation of Eq. (11) it is easily seen that, in order for the quantum specific heat to vanish in the zero-temperature limit, within our approximation we must have $`dj_{\mathrm{eff}}/dt\to |e_{\mathrm{cl}}(0)|^{-1}`$ as $`t\to 0`$, a condition which is fulfilled for every value of $`S`$, as can be verified analytically from the explicit expressions of the renormalization parameters; this is shown in the inset of Fig. 1. Most papers in the literature are mainly concerned with the chiral order-disorder transition, but the XXZ TAF also supports another kind of phase transition: the classical system displays BKT critical behavior as well. For this reason we have calculated the magnetic correlation length which governs the decay of the in-plane correlation functions in the high-temperature phase, whose expression within the PQSCHA reads $$⟨\widehat{S}_𝐢^x\widehat{S}_{𝐢+𝐫}^x+\widehat{S}_𝐢^y\widehat{S}_{𝐢+𝐫}^y⟩=G(𝐫)⟨s_𝐢^xs_{𝐢+𝐫}^x+s_𝐢^ys_{𝐢+𝐫}^y⟩_{\mathrm{eff}},$$ (12) where $`𝐢`$ and $`𝐢+𝐫`$ belong to the same sublattice and $`G(𝐫)`$ is bounded and essentially constant for large $`𝐫`$. As a consequence, the asymptotic behavior of the correlation functions in the critical region is the same as that of the effective classical spin system. In particular, for $`\lambda =0`$, the correlation length is simply found as $`\xi (t)=\xi _{\mathrm{cl}}(t_{\mathrm{eff}})`$. The result is reported in Fig. 3: as expected for a BKT transition, it displays a divergence at a temperature $`t_{\mathrm{BKT}}`$ which decreases with decreasing $`S`$, as a result of enhanced quantum fluctuations. Within the PQSCHA the quantum renormalizations cannot modify the critical behavior of the effective classical system , so that both the chirality and the BKT critical temperatures can be connected to their classical counterparts by the self-consistent relation $$\frac{t_{\mathrm{crit}}(S,\lambda )}{t_{\mathrm{crit}}^{\mathrm{cl}}\left(\lambda _{\mathrm{eff}}(t_{\mathrm{crit}},S,\lambda )\right)}=j_{\mathrm{eff}}(t_{\mathrm{crit}},S,\lambda ),$$ (13) which can be solved numerically. The critical temperatures obtained for $`\lambda =0`$ and $`\lambda =0.5`$ are reported for various values of the spin in Table I. In the $`S=1/2`$ case, although our theory begins to become unreliable when $`t\lesssim 0.2`$, we notice that for $`\lambda =0`$ the extrapolated value of the chiral critical temperature, $`t_\mathrm{c}=0.193`$, agrees remarkably well with those obtained by the size scaling of the QMC data ($`t_\mathrm{c}=0.195(1)`$) and by the effective field theory of Ref. ($`t_\mathrm{c}\simeq 0.20`$). L. C. acknowledges S. Sorella for continuous and fruitful discussions.
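Eq. (13) lends itself to a simple fixed-point solution. The sketch below (again our own illustration) reuses `j_eff_xx0` from the previous block; the classical critical temperature `t_cl`, known from classical Monte Carlo and not quoted numerically here, is treated as an input. For $`\lambda =0`$ the argument of $`t_{\mathrm{crit}}^{\mathrm{cl}}`$ is fixed, so the relation reduces to a one-dimensional fixed-point problem:

```python
def t_crit_quantum(S, t_cl, tol=1e-8, itmax=200):
    """Solve t_c = j_eff(t_c, S) * t_cl (Eq. (13) at lambda = 0) by iteration."""
    t = t_cl                                 # classical value as starting guess
    for _ in range(itmax):
        t_new = j_eff_xx0(t, S) * t_cl       # j_eff < 1: quantum fluctuations lower t_c
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t                                 # caution: for S = 1/2 the solution lands
                                             # near t ~ 0.2, where the PQSCHA degrades
```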
# Full Symmetry of Single- and Multi-wall Nanotubes ## I Introduction The single-wall carbon nanotubes are quasi-1D cylindrical structures , which can be imagined as cylinders rolled up from the 2D honeycomb lattice of a single atomic layer of crystalline graphite. Frequently, several single-wall tubes are coaxially arranged, making a multi-wall nanotube. Since their diameters are small (down to $`0.7\mathrm{nm}`$) in comparison to their lengths (up to tens of $`\mu m`$), the theoretical model of the extended (i. e. infinite, and hence without caps at the ends) nanotube is well justified. The symmetry of the nanotubes is relevant both for deep insight into the physical properties (quantum numbers, selection rules, optical activity, conducting properties, etc.) and for simplifying calculations. As for the single-wall tubes, symmetry studies started with the classification of the graphene tubes according to the fivefold, threefold or twofold axes of the related $`C_{60}`$ molecule , and gave just a part of their point group symmetry. The translational periodicity was discussed in the context of the nanotube metallic properties . Finally, the helical and rotational symmetries were found : the screw axis was characterized in terms of tube parameters, as well as the order of the principal rotational axis . The first goal of this paper is to complete this task, giving the full geometric symmetry of the extended single-wall nanotubes. Due to their 1D translational periodicity, the resulting groups are the line groups ; it appears that only two line group families are relevant: the 5th for the chiral, and the 13th for the armchair and the zig-zag nanotubes. The symmetry of the double- and multi-wall tubes has never been seriously studied, despite their importance for applications in nanodevices. Here, we present an exhaustive list of the symmetries of such tubes. Depending on the single-wall constituents and their relative arrangement, the resulting nanotube symmetry may be either a line group or an axial point group. In section II, the necessary notions about the line groups are first briefly summarized, and the relevant notation is introduced. Then, in subsection II A, the line groups of all the nanotubes are derived: the familiar symmetries of the original graphene lattice are transferred into the tubular geometry, and those which remain symmetries of the rolled-up lattice form the corresponding line group. Besides the rotational, translational and helical symmetries, the horizontal axes and (for zig-zag and armchair tubes) mirror and glide planes are also present. The symmetry groups of multi-wall nanotubes are studied in subsection II B. Note that among them there are also tubes that are not translationally periodic. Some possible applications of symmetry in the physics of nanotubes are discussed in the last section. ## II Symmetry of nanotubes The line groups contain all the symmetries of systems periodic in one direction and are usually used in the context of stereoregular polymers and quasi-1D subsystems of 3D crystals. It immediately follows that, being periodic along its axis, any extended single-wall nanotube has its symmetry described by one of the line groups. All the line group transformations leave the tube axis ($`z`$-axis, by convention) invariant. Consequently, such a transformation $`(P|t)`$ (Koster-Seitz symbol) is some point group operation $`P`$ preserving the $`z`$-axis, followed by the translation by $`t`$ along the $`z`$-axis.
Action on the point $`𝐫=(x,y,z)`$ gives $`(P|t)𝐫=(x^{},y^{},z^{})`$ with $$x^{}=P_{xx}x+P_{xy}y,y^{}=P_{yx}x+P_{yy}y,z^{}=P_{zz}z+t.$$ (1) Here, $`P_{ij}`$ are the elements of the $`3\times 3`$ matrix of $`P`$ in the Cartesian coordinates; those coupling $`z`$ to the other axes vanish. Such point operations are called axial, and they form seven types of axial point groups : $`𝐂_n`$, $`𝐒_{2n}`$, $`𝐂_{nh}`$, $`𝐂_{nv}`$, $`𝐃_n`$, $`𝐃_{nd}`$, $`𝐃_{nh}`$, where $`n=1,2,\mathrm{\dots }`$ is the order of the principal rotational axis. There are infinitely many line groups, since there is no crystallographic restriction on the order of the principal axis, and they are classified within 13 families. Each line group is a product $`𝐋=\mathrm{𝐙𝐏}`$ of one axial point group $`𝐏`$ and one infinite cyclic group $`𝐙`$ of generalized translations (screw-axis $`𝐓_q^r`$, pure translations $`𝐓=𝐓_1^0`$, or glide plane $`𝐓_c`$, generated by the transformations $`(I|a)`$, $`(C_q^r|\frac{n}{q}a)`$ and $`(\sigma _v|\frac{a}{2})`$, respectively ). Thus, to determine the full symmetry of a nanotube, both of these factors (having only the identity transformation in common) should be found. The point factor $`𝐏`$ should be distinguished from the isogonal point group $`𝐏_I`$ of the line group : only for the symmorphic groups, when $`𝐙=𝐓`$, are these groups equal; otherwise $`𝐏_I`$ is not a subgroup of $`𝐋`$. Due to the convention , $`2\pi /q`$ is the minimal angle of rotation performed by the elements of the line group (if the screw axis is nontrivial, it is followed by some fractional translation), as well as by its isogonal point group. The easiest way to determine the line group $`𝐋`$ of a system is first to find the subgroup $`𝐋^{(1)}`$ containing all the translations and the rotations around the principal axis (including the ones followed by fractional translations). Having the same screw axis ($`𝐓`$ is a special case) as $`𝐋`$, and the same order $`n`$ of the principal axis, this subgroup $`𝐋^{(1)}=𝐓_q^r𝐂_n`$ is the maximal subgroup from the first line group family. Then the symmetries complementing $`𝐋^{(1)}`$ to $`𝐋`$ should be looked for. To complete $`𝐙`$, it should be checked whether there is a vertical glide plane. Also, $`𝐂_n`$ is to be complemented to $`𝐏`$ by possible additional point group generators; at most two of them are to be chosen among the mirror planes, the horizontal rotational axes of order two, or the rotoreflection axis (refining the pure rotations already encountered in $`𝐂_n`$). ### A Single-wall nanotubes The elementary cell of the hexagonal honeycomb lattice (Fig. 1) is formed by the vectors $`\stackrel{}{a}_1`$ and $`\stackrel{}{a}_2`$ of the length $`a_0=2.461`$Å; within its area $`S_g=\sqrt{3}/2a_0^2`$ there are two carbon atoms, at the positions $`(\stackrel{}{a}_1+\stackrel{}{a}_2)/3`$ and $`2(\stackrel{}{a}_1+\stackrel{}{a}_2)/3`$. The single-wall nanotube $`(n_1,n_2)`$ is formed when the honeycomb lattice is rolled up in such a way that the chiral vector $`\stackrel{}{c}=n_1\stackrel{}{a}_1+n_2\stackrel{}{a}_2`$ becomes the circumference of the tube (its end and origin match). The tubes $`(n_1,0)`$ and $`(n_1,n_1)`$ are called zig-zag and armchair, respectively, while the others are known as the chiral ones. The chiral angle $`\theta `$ of the nanotube is the angle between the chiral vector $`\stackrel{}{c}`$ and the zig-zag direction $`\stackrel{}{a}_1`$.
When $`0\le \theta <\pi /3`$, all the tubes are encountered; in fact, for the zig-zag and the armchair nanotubes $`\theta `$ equals $`0`$ and $`\pi /6`$, respectively, and between these chiralities lie the chiral vectors of all the chiral nanotubes with $`n_1>n_2>0`$ (the tubes $`(n_2,n_1)`$, with $`\pi /6<\theta <\pi /3`$, are their optical isomers). There are $`n=\text{GCD}(n_1,n_2)`$ (the greatest common divisor) honeycomb lattice points lying on the chiral vector. The translations by $`s\stackrel{}{c}/n`$ in the chiral direction appear on the tube as the rotations by $`2s\pi /n`$ ($`s=0,1,\mathrm{\dots }`$) around the tube axis. Thus, the principal axis of order $`n`$ is a subgroup of the full symmetry of the tube $`(n_1,n_2)`$: $$𝐂_n,n=\text{GCD}(n_1,n_2).$$ (2) Obviously, $`n=n_1`$ for the zig-zag $`(n_1,0)`$ and the armchair $`(n_1,n_1)`$ nanotubes. To the primitive translation of the tube corresponds the vector $`\stackrel{}{a}=a_1\stackrel{}{a}_1+a_2\stackrel{}{a}_2`$ of the honeycomb lattice, the minimal one among the lattice vectors orthogonal to $`\stackrel{}{c}`$. Therefore, $`a_1`$ and $`a_2`$ are coprimes, yielding $$\stackrel{}{a}=-\frac{2n_2+n_1}{nℛ}\stackrel{}{a}_1+\frac{2n_1+n_2}{nℛ}\stackrel{}{a}_2,a=|\stackrel{}{a}|=\frac{\sqrt{3(n_1^2+n_2^2+n_1n_2)}}{nℛ}a_0,$$ (3) with $`ℛ=3`$ if $`\frac{n_1-n_2}{3n}`$ is an integer and $`ℛ=1`$ otherwise. For the zig-zag and the armchair tubes $`a=\sqrt{3}a_0`$ and $`a=a_0`$, respectively. The elementary cell of the tube is the cylinder of height $`a`$ and area $`S_t=a|\stackrel{}{c}|`$; it contains $`\frac{S_t}{S_g}=2\frac{n_1^2+n_2^2+n_1n_2}{nℛ}`$ elementary graphene cells . So, the translational group $`𝐓`$ of the nanotube is composed of the elements $`(I|ta)`$, $`t=0,\pm 1,\mathrm{\dots }`$ The encountered symmetries $`𝐓`$ and $`𝐂_n`$ originate from the honeycomb lattice translations: on the folded lattice the translations along the chiral vector become pure rotations, while those along $`\stackrel{}{a}`$ remain pure translations. These elements generate the whole nanotube from the sector of angle $`2\pi /n`$ of the elementary cell, with $`2\frac{n_1^2+n_2^2+n_1n_2}{n^2ℛ}`$ elementary graphene cells . This number is always greater than 1, pointing out that not all of the honeycomb lattice translations have been taken into account. The missing translations are neither parallel with nor orthogonal to $`\stackrel{}{c}`$; on the rolled-up sheet they are manifested as rotations (by fractions of $`2\pi /n`$) combined with translations (by fractions of $`a`$), yielding the screw axis of the nanotube. Its generator $`(C_q^r|\frac{n}{q}a)`$ corresponds to the vector $`\stackrel{}{z}=r\frac{\stackrel{}{c}}{q}+n\frac{\stackrel{}{a}}{q}`$ of the honeycomb lattice, which, together with the encountered translations, generates the whole honeycomb lattice. Thus, $`\stackrel{}{z}`$ can be chosen to form the elementary honeycomb cell together with the minimal lattice vector $`\stackrel{}{c}/n`$ along the chiral direction. The honeycomb cell area $`S_g`$ must be the product of $`|\stackrel{}{c}|/n`$ and the length $`na/q`$ of the projection of $`\stackrel{}{z}`$ onto $`\stackrel{}{a}`$: $`a|\stackrel{}{c}|/q=\sqrt{3}a_0^2/2`$. This gives the order $`q`$ of the screw axis. Finally, $`r`$ is found from the condition that the projections of $`\stackrel{}{z}`$ on $`\stackrel{}{a}_1`$ and $`\stackrel{}{a}_2`$ are coprime.
This completely determines the screw axis ($`\mathrm{Fr}[x]=x-[x]`$ is the fractional part of the rational number $`x`$, and $`\phi (m)`$ is the Euler function, giving the number of coprimes less than $`m`$): $$𝐙=𝐓_q^r,q=2\frac{n_1^2+n_1n_2+n_2^2}{nℛ},r=\frac{q}{n}\mathrm{Fr}\left[\frac{n}{qℛ}\left(3-2\frac{n_1-n_2}{n_1}\right)+\frac{n}{n_1}\left(\frac{n_1-n_2}{n}\right)^{\phi (\frac{n_1}{n})-1}\right].$$ (4) Especially, for both the zig-zag $`(n,0)`$ and the armchair $`(n,n)`$ tubes, $`q=2n`$ and $`r=1`$, i. e. $`𝐙=𝐓_{2n}^1`$. Note that $`q`$ is an even multiple of $`n`$. It is equal to the number of the graphene cells in the elementary cell of the tube, $`S_t/S_g`$. Consequently, $`q/n`$ is the number of the graphene cells in the sector, and it is always greater than 1; this means that all the single-wall tubes have nonsymmorphic symmetry groups. To resume, the translational symmetry of the honeycomb lattice appears as the group $`𝐋^{(1)}=𝐓_q^r𝐂_n`$ of symmetries of the nanotube, with $`q`$ and $`r`$ given by (4). Its elements $`(C_q^{rt}C_n^s|t\frac{n}{q}a)`$ ($`t=0,\pm 1,\mathrm{\dots }`$, $`s=0,\mathrm{\dots },n-1`$) generate the whole nanotube from any adjacent pair of the nanotube atoms. The group $`𝐋^{(1)}`$ contains all the symmetries previously considered in the literature . Note that the screw axis used here is somewhat different from the previously reported ones, due to the convention . With this convention $`2\pi /q`$ is the minimal rotation (followed by some fractional translation) in the group , providing that $`q`$ is the order of the principal axis of the isogonal point group. This explains why $`q`$ is equal to the number of the graphene cells contained in the elementary cell of the tube. Note that the translational period $`a`$ and the diameter $`D`$ of the tube are determined by the symmetry parameters $`q`$, $`n`$ and $`ℛ`$: $$a=\sqrt{\frac{3q}{2ℛn}}a_0,D=\frac{1}{\pi }\sqrt{\frac{ℛnq}{2}}a_0.$$ (5) Besides the translations, there are other symmetries of the honeycomb lattice: (a) perpendicular rotational axes through the centers of the hexagons (of order six), through the carbon atoms (of order three) and through the centers of the edges of the hexagons (of order two); (b) six vertical mirror planes through the centers of the hexagons formed by the atoms (or through the atoms); (c) two types of vertical glide planes — connecting the midpoints of the adjacent edges, and the midpoints of the next to nearest neighboring edges of the hexagons. Among the rotations, only those by $`\pi `$, leaving invariant the axis of $`\stackrel{}{a}`$, i. e. the $`z`$-axis of the tube, remain symmetries of the rolled-up lattice. Thus, two types of horizontal second-order axes emerge as symmetries of any nanotube (Fig. 1): $`U`$, passing through the centers of the deformed nanotube hexagons, and $`U^{}`$, passing through the midpoints of the adjacent atoms. Moreover, the first of these transformations is obtained when the second one is followed by the screw-axis generator: $`U=(C_q^r|\frac{n}{q}a)U^{}`$. Thus, any of them, say $`U`$, complements the principal tube axis $`𝐂_n`$ to the dihedral point group $`𝐃_n`$. This shows that at least the line group $`𝐓_q^r𝐃_n`$ (from the 5th family) is a symmetry group of any nanotube. Note that $`U^{}`$ just permutes the two carbon atoms in the elementary honeycomb cell, meaning that all the honeycomb atoms are obtained from an arbitrary one by the translations and the rotation $`U^{}`$. Analogously, the elements of the group $`𝐓_q^r𝐃_n`$ generate the whole nanotube from any of its atoms.
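Because Eqs. (4) and (5), as reconstructed above, are purely arithmetic, they are straightforward to check in code. The sketch below is our own illustration (not part of the paper): it computes $`n`$, $`ℛ`$, $`q`$, $`r`$, the period $`a`$ and the diameter $`D`$ for a tube $`(n_1,n_2)`$, and then builds one translational period of atomic positions as the orbit of the first atom, using the formulas that appear as Eqs. (6) and (7) below; exact rational arithmetic is used inside the Fr[·] bracket to avoid floating-point errors.

```python
from fractions import Fraction
from math import gcd, floor, pi, sqrt

def phi(m):
    """Euler function: number of integers in [1, m] coprime to m (phi(1) = 1)."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def helical_parameters(n1, n2, a0=2.461):
    """Symmetry parameters (n, R, q, r) and geometry (a, D) of the tube (n1, n2)."""
    n = gcd(n1, n2)
    R = 3 if (n1 - n2) % (3 * n) == 0 else 1
    q = 2 * (n1 * n1 + n1 * n2 + n2 * n2) // (n * R)
    # Eq. (4): r = (q/n) Fr[(n/(qR))(3 - 2(n1-n2)/n1) + (n/n1)((n1-n2)/n)**(phi(n1/n)-1)]
    x = (Fraction(n, q * R) * (3 - Fraction(2 * (n1 - n2), n1))
         + Fraction(n, n1) * Fraction(n1 - n2, n) ** (phi(n1 // n) - 1))
    r = int(Fraction(q, n) * (x - floor(x)))
    a = sqrt(3 * q / (2 * R * n)) * a0          # Eq. (5), translational period
    D = sqrt(R * n * q / 2) * a0 / pi           # Eq. (5), diameter
    return n, R, q, r, a, D

def tube_atoms(n1, n2, a0=2.461):
    """One translational period of atoms as the orbit of the first atom, Eqs. (6)-(7)."""
    n, R, q, r, a, D = helical_parameters(n1, n2, a0)
    phi0 = 2 * pi * (n1 + n2) / (n * q * R)     # angular coordinate of the first atom
    z0 = (n1 - n2) * a0 / sqrt(6 * n * q * R)   # its z coordinate
    atoms = []
    for t in range(q // n):                     # screw-axis steps within one period
        for s in range(n):                      # pure rotations C_n
            for u in (0, 1):                    # horizontal axis U
                ang = (-1) ** u * phi0 + 2 * pi * (r * t / q + s / n)
                z = (-1) ** u * z0 + t * n * a / q
                atoms.append((D / 2, ang % (2 * pi), z % a))
    return atoms                                # 2q atoms, cylindrical coordinates

print(helical_parameters(10, 10), len(tube_atoms(10, 10)))
```

For the armchair tube (10,10) this yields $`n=10`$, $`ℛ=3`$, $`q=20`$, $`r=1`$, $`a=a_0`$ and $`D\approx 13.6`$ Å, with $`2q=40`$ atoms per period, in line with the special cases quoted above.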
The action (1) of the group elements on the point $`𝐫_{000}=(\rho _0,\varphi _0,z_0)`$ (cylindrical coordinates) gives the points $$𝐫_{tsu}=(C_q^{rt}C_n^sU^u|t\frac{n}{q}a)𝐫_{000}=(\rho _0,(-1)^u\varphi _0+2\pi (\frac{rt}{q}+\frac{s}{n}),(-1)^uz_0+t\frac{n}{q}a),$$ (6) ($`u=0,1`$; $`s=0,\mathrm{\dots },n-1`$; $`t=0,\pm 1,\mathrm{\dots }`$); hereafter, the $`x`$-axis is assumed to coincide with the $`U`$-axis. Using (5), it can be shown that the coordinates of the first atom (positioned at $`\frac{1}{3}(\stackrel{}{a}_1+\stackrel{}{a}_2)`$ on the honeycomb) are $$𝐫_{000}^C=(\frac{D}{2},2\pi \frac{n_1+n_2}{nqℛ},\frac{n_1-n_2}{\sqrt{6nqℛ}}a_0).$$ (7) Substituting these values in (6), the coordinates of all the other atoms are obtained. Rolling up deforms any plane perpendicular to the graphene sheet, unless it is either parallel with $`\stackrel{}{c}`$ (when it becomes a horizontal plane) or orthogonal to $`\stackrel{}{c}`$ (giving a vertical plane). Thus, only the tubes whose chiral vectors are parallel or orthogonal to the enumerated mirror and glide planes obtain additional symmetries of these types. The zig-zag and armchair tubes are immediately singled out by simple inspection. Precisely, only in these cases is the chiral vector in a perpendicular mirror plane; when the sheet is rolled up, this plane becomes the horizontal mirror plane $`\sigma _h`$ of the corresponding nanotubes. Enlarging the previously found point symmetry group $`𝐃_n`$ by $`\sigma _h`$, the point group $`𝐃_{nh}`$ of the zig-zag and armchair tubes is obtained. Finally, taking into account the generalized translations (4), the full symmetry groups of the single-wall nanotubes are (besides the factorized notation, the international symbol is also given): $$𝐋_{\text{chiral}}=𝐓_q^r𝐃_n=𝐋q_p22,p=q\mathrm{Fr}\left[\frac{n}{q}\frac{\left(\frac{2n_2+n_1}{n}\right)^{\phi (\frac{2n_1+n_2}{n})-1}q-n_2}{2n_1+n_2}\right],$$ (9) $$𝐋_{\text{armchair}}=𝐋_{\text{zig-zag}}=𝐓_{2n}^1𝐃_{nh}=𝐋2n_n/mcm.$$ (10) Their isogonal point groups are $`𝐃_q`$ and $`𝐃_{2nh}`$. The line group $`𝐓_{2n}^1𝐃_{nh}`$ (13th family) contains various new symmetries (Fig. 2), being combinations of the ones mentioned above. In fact, when $`\sigma _h`$ has been added to the group $`𝐓_{2n}^1𝐃_n`$, the other mirror and glide planes parallel to and orthogonal to $`\stackrel{}{c}`$ are automatically included in the symmetry groups of the zig-zag and armchair nanotubes. These transformations can be seen as $`\sigma _h`$ followed by some of the elements from $`𝐓_{2n}^1𝐃_n`$. At first, there are $`n`$ vertical mirror planes (one of them is $`\sigma _v=\sigma _hU`$, and the others are obtained by pure rotations; by the previous convention, the $`\sigma _h`$ plane is the $`xy`$-coordinate plane). Bisecting the mirror planes, there are glide planes (e. g. the product $`(\sigma _v^{}|\frac{1}{2}a)=(C_{2n}|\frac{1}{2}a)\sigma _v`$), and the vertical rotoreflection axis of order $`2n`$ (generated by $`\sigma _vU^{}=C_{2n}\sigma _h^{}`$, the reflection in the $`\sigma _h^{}`$ plane, followed by the rotation by $`\pi /n`$). The vectors obtained from $`\stackrel{}{c}`$ by the rotations from the point symmetry group $`𝐂_{6v}`$ of the honeycomb lattice produce nanotubes which are essentially the same one, only viewed from rotated coordinate systems. Nevertheless, the vertical mirror plane image of $`\stackrel{}{c}`$ (e. g.
in the vertical plane bisecting the angle between $`\stackrel{}{a}_1`$ and $`\stackrel{}{a}_2`$) produces a tube which can be considered as the same one only in the coordinate system of the opposite orientation (the coordinate transformation involves the spatial inversion). Thus, the tubes $`(n_1,n_2)`$ and $`(n_2,n_1)`$ are optical isomers. Only the mirror image of a zig-zag or armchair tube is equivalent to the original, and these tubes have no optical isomers. Concerning the symmetry groups, if $`𝐓_q^r𝐂_n`$ corresponds to the tube $`(n_1,n_2)`$, then the group of the tube $`(n_2,n_1)`$ is $`𝐓_q^{\frac{q}{n}-r}𝐂_n`$ (although isomorphic, these groups are equal only when $`q=2n`$ and $`r=1`$, i. e. only for the zig-zag and the armchair tubes). ### B Double- and multi-wall nanotubes The symmetry of a multi-wall nanotube can now be found as the intersection of the symmetry groups of its single-wall constituents. This task will first be considered for double-wall tubes, and the results are then straightforwardly generalized to the multi-wall ones. The intersection of the line groups $`𝐋=\mathrm{𝐙𝐏}`$ and $`𝐋^{}=𝐙^{}𝐏^{}`$ has the form $`𝐋_2=𝐙_2(𝐏\cap 𝐏^{})`$. Thus, the intersection of the point groups is looked for independently of the generalized translations. As has been derived in (2), the tubes $`(n_1,n_2)`$ and $`(n_1^{},n_2^{})`$ are invariant under the rotations around their axes by the multiples of the angles $`2\pi /n`$ and $`2\pi /n^{}`$ ($`n=\text{GCD}(n_1,n_2)`$, $`n^{}=\text{GCD}(n_1^{},n_2^{})`$), respectively. The tube composed of these coaxially arranged components is invariant under the rotation by $`2\pi /N`$, the minimal common rotation of the components, and its multiples. Thus, the principal-axis subgroup of the double-wall nanotube is $`𝐂_N`$, with $`N=\text{GCD}(n,n^{})=\text{GCD}(n_1,n_2,n_1^{},n_2^{})`$. The horizontal second-order rotational axis $`U`$ (and $`U^{}`$) is also a symmetry of every single-wall nanotube. Nevertheless, such an axis remains a symmetry of the composite tube only if it is common to all of the components, and then the point symmetry is $`𝐃_N`$. Obviously, if a nanotube contains at least one chiral component, then $`𝐃_N`$ is its maximal point symmetry. Only the tubes composed exclusively of zig-zag and armchair single-wall components may have additional mirror and glide planes, as well as the rotoreflection axis. Analogously to the horizontal axis, these are symmetries of the whole tube only if they are common to all of the components (the rotoreflection axis appears only if the horizontal planes $`\sigma _h^{}`$ coincide). After the point symmetries are thereby completely determined, the more difficult study of the generalized translational factor $`𝐙_2`$ remains. First, note that it may be completely absent. Suppose that a double-wall tube has the translational period $`A`$. If the translational periods of its constituents are $`a`$ and $`a^{}`$, then $`A`$ is obviously the minimal length that is a multiple both of $`a`$ and of $`a^{}`$: $`A=\alpha a=\alpha ^{}a^{}`$, where $`\alpha `$ and $`\alpha ^{}`$ are positive coprimes (to assure minimality). Thus, the double-wall tube is translationally periodic if and only if the translational periods of its constituents are commensurate, i. e. only when $`a^{}/a`$ is rational.
On the contrary, if $`a^{}/a`$ is an irrational number, the composite tube is not translationally periodic, and $`𝐙_2`$ is trivial (the identical transformation only); the total symmetry reduces to the already found point group. In the commensurate case it remains to examine whether the translational group can be refined by a screw axis common to all of the single-wall components. The task is to determine the screw-axis generator $`(C_Q^R|F)`$ with maximal $`Q`$ appearing in both groups $`𝐋=𝐓_q^r𝐂_n`$ and $`𝐋^{}=𝐓_{q^{}}^{r^{}}𝐂_{n^{}}`$. Thus, one looks for the values of $`Q`$, $`R`$ and $`F`$ (in accordance with ) such that there exist integers $`t`$, $`s`$, $`t^{}`$ and $`s^{}`$ (enumerating the elements of $`𝐋`$ and $`𝐋^{}`$) satisfying $$(C_Q^R|F)=(C_q^{rt}C_n^s|tf)=(C_{q^{}}^{r^{}t^{}}C_{n^{}}^{s^{}}|t^{}f^{})\text{ with }F=\frac{N}{Q}A,f=\frac{n}{q}a,f^{}=\frac{n^{}}{q^{}}a^{}.$$ (11) Obviously, the fractional translation $`F`$ is a multiple $`F=\tau F^{}`$ of the minimal common fractional translation $`F^{}`$, implying $`A=\frac{Q}{N}\tau F^{}`$. Analogously to $`A`$, the translation $`F^{}`$ is found as the minimal length that is a multiple both of $`f`$ and $`f^{}`$; thus it is given by the unique solution in the coprimes $`\varphi `$ and $`\varphi ^{}`$ of the equation $`F^{}=\varphi f=\varphi ^{}f^{}`$. Since the translational periods of the single-wall components are multiples of their fractional translations, $`A`$ is a multiple of $`F^{}`$, i. e. $`A=\mathrm{\Phi }F^{}`$. With the help of number theory, it can be shown that only the tubes with the same $`ℛ`$ may be commensurate; then $`\alpha =\varphi ^{}=\sqrt{\frac{q^{}/n^{}}{\text{GCD}(q/n,q^{}/n^{})}}`$, $`\alpha ^{}=\varphi =\sqrt{\frac{q/n}{\text{GCD}(q/n,q^{}/n^{})}}`$ and $`\mathrm{\Phi }=\sqrt{\frac{qq^{}}{nn^{}}}`$. Thus, $`Q=\mathrm{\Phi }N/\tau `$, and one looks for the minimal $`\tau `$, to provide the finest screw axis. The translational part of (11) immediately shows that $`t=\tau \alpha ^{}`$ and $`t^{}=\tau \alpha `$. With these values substituted, the rotational part of (11) gives the equations: $$C_Q^R=C_q^{r\alpha ^{}\tau }C_n^s=C_{q^{}}^{r^{}\alpha \tau }C_{n^{}}^{s^{}}.$$ (12) The minimal $`\tau `$ for which the last equation is solvable in $`s`$ and $`s^{}`$ is $`\tau =\mathrm{\Phi }/\text{GCD}(r\alpha \frac{n^{}}{N}-r^{}\alpha ^{}\frac{n}{N},\mathrm{\Phi })`$. Finally, $`Q=N\text{GCD}(r\alpha \frac{n^{}}{N}-r^{}\alpha ^{}\frac{n}{N},\sqrt{\frac{qq^{}}{nn^{}}})`$, and $`R`$ is easily found from the first equation (12). All these results are immediately generalized to the multi-wall tubes. Note that the generalized translations and the principal rotational axis of the multi-wall nanotube depend only on the types of their single-wall components. On the contrary, the appearance of the mirror and glide planes and the horizontal axes in the common symmetry group is additionally determined by the relative positions of these components. It remains to give the summary of the symmetry groups of the multi-wall tubes. If at least one of the single-wall constituents is chiral, then in the commensurate case there are two possibilities: $`𝐓_Q^R𝐂_N`$, corresponding to the general mutual position, and $`𝐓_Q^R𝐃_N`$ in the special mutual positions with a common $`U`$-axis. Analogously, the tubes built of incommensurate components have their symmetry described by the point groups $`𝐂_N`$ or $`𝐃_N`$.
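The commensurability test and the common screw axis of Eqs. (11)–(12) can likewise be sketched in code. The function below (our own illustration) reuses `helical_parameters` from the earlier sketch and treats only the general mutual position, so the point factor is simply $`𝐂_N`$; special arrangements with common axes or planes are not detected.

```python
from math import gcd, isqrt

def double_wall_symmetry(tube1, tube2, a0=2.461):
    """Generalized translations of a coaxial pair in general mutual position."""
    n, R, q, r, a, D = helical_parameters(*tube1, a0)
    n_, R_, q_, r_, a_, D_ = helical_parameters(*tube2, a0)
    N = gcd(n, n_)                          # common principal axis C_N
    PhiSq = (q * q_) // (n * n_)            # Phi**2 = q q' / (n n')
    Phi = isqrt(PhiSq)
    if R != R_ or Phi * Phi != PhiSq:       # commensurability criterion
        return f"point group C_{N} (incommensurate)"
    G = gcd(q // n, q_ // n_)
    alpha = isqrt((q_ // n_) // G)          # A = alpha a = alpha' a'
    alpha_p = isqrt((q // n) // G)
    Q = N * gcd(abs(r * alpha * (n_ // N) - r_ * alpha_p * (n // N)), Phi)
    return f"line group T_{Q}^R C_{N} (commensurate; R follows from Eq. (12))"

print(double_wall_symmetry((9, 0), (17, 0)))
```

For the pair $`(9,0)\text{@}(17,0)`$ this returns $`Q=2`$ with $`N=1`$, matching the $`𝐓_2^1𝐂_1`$ entry in the list of examples below.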
If the nanotube is built of the zig-zag and armchair single-wall tubes $`(n,0)`$ (or $`(n,n)`$), $`(n^{},0)`$ (or $`(n^{},n^{})`$),…, the order of the principal rotational axis is $`N=\text{GCD}(n,n^{},\mathrm{\dots })`$. If the tube contains at least one single-wall tube of each type, no translational periodicity appears and its symmetry is described by a point group (Tab. I). On the other hand, for a tube composed of components of the same type (either zig-zag or armchair), the translational period is equal to that of the components. Two different situations may occur: if all the integers $`n/N`$, $`n^{}/N`$,…are odd (”odd” case), the translations are refined by the screw axis $`𝐓_{2N}^1`$; otherwise, if at least one of these integers is even (”even” case), no screw axis emerges. The analysis of the special arrangements of the constituents with common horizontal axes, mirror or glide planes, which increase the symmetry of the total system, is summarized in Table I. Note that, according to the various arrangements of the components, any of the line and axial point groups may be the resulting symmetry for the commensurate and incommensurate components, respectively. Here we give some realistic examples (interlayer distance of approximately $`3.4\text{Å}`$, and with diameters of the single-wall components from $`0.7\mathrm{nm}`$ to $`3\mathrm{nm}`$) of double-wall nanotubes, with their symmetry groups in general positions. With the trivial isogonal point group there are the commensurate tubes (the line group is $`\mathrm{𝐓𝐂}_1`$) $`(6,6)\text{@}(11,11)`$, $`(7,7)\text{@}(12,12)`$, $`(11,2)\text{@}(12,12)`$, $`(22,4)\text{@}(26,11)`$, $`(10,0)\text{@}(19,0)`$, $`(11,0)\text{@}(20,0)`$, $`(7,3)\text{@}(14,6)`$, $`(21,9)\text{@}(28,12)`$, $`(14,6)\text{@}(21,9)`$, and the incommensurate pairs (the total symmetry group is trivial $`𝐂_\mathrm{𝟏}`$) $`(9,0)\text{@}(10,10)`$, $`(15,0)\text{@}(14,14)`$, $`(11,2)\text{@}(21,0)`$, $`(13,4)\text{@}(14,14)`$, $`(10,4)\text{@}(19,4)`$, $`(6,4)\text{@}(17,1)`$, $`(9,2)\text{@}(19,0)`$, $`(10,0)\text{@}(17,3)`$, $`(5,5)\text{@}(17,0)`$, $`(6,6)\text{@}(19,0)`$, $`(8,8)\text{@}(23,0)`$, $`(24,9)\text{@}(35,6)`$, $`(25,7)\text{@}(38,0)`$, $`(27,0)\text{@}(31,8)`$, $`(17,17)\text{@}(30,13)`$, $`(10,0)\text{@}(11,11)`$, $`(14,0)\text{@}(13,13)`$, $`(17,0)\text{@}(15,15)`$, $`(6,4)\text{@}(13,7)`$, $`(8,6)\text{@}(21,0)`$, $`(10,0)\text{@}(15,6)`$, $`(16,2)\text{@}(15,15)`$. The commensurate tubes $`(9,0)\text{@}(17,0)`$, $`(9,6)\text{@}(15,10)`$, $`(13,0)\text{@}(21,0)`$, $`(5,5)\text{@}(9,9)`$, $`(7,7)\text{@}(11,11)`$, $`(11,0)\text{@}(19,0)`$ have the line group $`𝐓_2^1𝐂_1`$, while $`\mathrm{𝐓𝐂}_2`$ is the symmetry of $`(12,8)\text{@}(18,12)`$ and of $`(6,4)\text{@}(12,8)`$. The incommensurate tubes with the symmetry $`𝐂_2`$ are: $`(8,8)\text{@}(22,0)`$, $`(12,6)\text{@}(18,10)`$, $`(8,14)\text{@}(28,0)`$, $`(6,6)\text{@}(12,10)`$, $`(16,0)\text{@}(14,14)`$, $`(22,12)\text{@}(28,16)`$, $`(16,8)\text{@}(30,0)`$, $`(14,0)\text{@}(16,10)`$, $`(10,8)\text{@}(16,12)`$, $`(10,2)\text{@}(20,0)`$, $`(26,0)\text{@}(30,8)`$, $`(22,4)\text{@}(22,16)`$, $`(8,2)\text{@}(18,0)`$ and $`(8,8)\text{@}(16,10)`$.
Finally, examples with a higher-order principal axis are the commensurate tubes $`(5,5)\text{@}(10,10)`$ (with line group $`𝐓_4^0𝐂_5`$), $`(8,8)\text{@}(12,12)`$ ($`𝐓_4^0𝐂_4`$), $`(9,0)\text{@}(18,0)`$ ($`𝐓_9^0𝐂_9`$), $`(12,0)\text{@}(21,0)`$ ($`𝐓_1^0𝐂_3`$), $`(14,0)\text{@}(22,0)`$ ($`𝐓_4^1𝐂_2`$), and the incommensurate ones: $`(9,9)\text{@}(24,0)`$, $`(18,0)\text{@}(15,15)`$, $`(9,3)\text{@}(18,3)`$, $`(12,9)\text{@}(27,0)`$, $`(15,0)\text{@}(18,9)`$, $`(24,6)\text{@}(21,21)`$ (all with symmetry $`𝐂_3`$), $`(24,0)\text{@}(28,8)`$, $`(20,12)\text{@}(32,8)`$, $`(28,0)\text{@}(32,8)`$ ($`𝐂_4`$), $`(15,15)\text{@}(35,0)`$ ($`𝐂_5`$), $`(7,7)\text{@}(21,0)`$ ($`𝐂_7`$), $`(11,0)\text{@}(11,11)`$ ($`𝐂_{11}`$), $`(12,0)\text{@}(12,12)`$ ($`𝐂_{12}`$) and $`(13,0)\text{@}(13,13)`$ ($`𝐂_{13}`$). ## III Concluding remarks All the geometrical symmetries of the nanotubes have been found. In addition to the rotations, translations and screw-axes observed previously, the single-wall tubes always possess horizontal rotational axes; the zig-zag and armchair tubes have mirror and glide planes in addition. Thus, the full symmetry group is $`𝐓_q^r𝐃_n`$ for the single-wall chiral tubes and $`𝐓_{2n}^1𝐃_{nh}`$ for the zig-zag and armchair ones. The parameters $`q`$ and $`r`$ of the helical group have been found in a simple and closed form. Since $`2\pi /q`$ is the angle of the minimal rotation (combined with a fractional translation) performed by the symmetry group, the order of the principal axis of the isogonal group is $`q`$, and it is always even. Moreover, $`2q`$ is the number of the carbon atoms in the elementary translational cell of the tube. Let us only mention here that different tubes cannot have the same symmetry parameters $`q`$, $`r`$, $`n`$ and $`a`$. This profound property means that the line group is sufficient to reconstruct the tube (as is demonstrated by (6)), i. e. that the symmetry completely determines the geometry and all consequent characteristics of the nanotube. The symmetries of the multi-wall tubes are quite diverse: depending on the types of the single-wall components and their arrangements, all the line and axial point groups emerge: armchair and zig-zag tubes can be combined to make a prototype for any line or axial symmetry group. This immediately shows that the properties of the nanotubes may vary greatly, depending not only on the single-wall constituents, but also on their mutual positions. There are many physical properties based on symmetry, and the presented classification of the nanotubes according to their symmetry can be widely exploited. The most familiar consequence of symmetry, the special forms of the tensors related to the characteristics of the system, depends on the isogonal group. Further, the symmetry can be used to find good quantum numbers. We begin with the single-wall nanotubes. The translational periodicity is reflected in the conserved quasi-momentum $`k`$, taking values from the 1D Brillouin zone $`(-\pi ,\pi ]`$, or its irreducible domain $`[0,\pi ]`$. Also, the $`z`$-component of the quasi-angular momentum $`m`$ is the quantum number caused by the symmetry of the principal rotational axis; it takes on the integer values from the interval $`(-\frac{n}{2},\frac{n}{2}]`$, and characterizes the nanotube quantum states. The parity with respect to reversal of the $`z`$-axis, induced by the horizontal rotational axis $`U`$, is the last quantum number common to all the single-wall tubes. The even and the odd states with respect to this parity are conventionally denoted by $`+`$ and $`-`$.
For the zig-zag and the armchair tubes there is an additional vertical mirror plane parity, introducing the quantum numbers $`A`$ and $`B`$ to distinguish between the even and the odd states (the parity with respect to the horizontal mirror plane is dependent on the above discussed $`U`$ and $`\sigma _v`$ parities, $`\pm `$ and $`A/B`$). Concerning the multi-wall tubes, $`m`$ is again a quantum number. Again, the $`z`$-reversal and vertical mirror parities may appear, depending on the concrete symmetry of the nanotube. Nevertheless, the tubes with incommensurate components are not periodic, and in such cases the quasi-momentum $`k`$ is not an appropriate quantum number; it may be an interesting experimental question whether the approach of modulated systems can be applied to restore this quantity. A simple criterion of commensurability of the single-wall tubes has been derived: they must have the same $`ℛ`$, and $`\sqrt{\frac{qq^{}}{nn^{}}}`$ must be an integer. The involved symmetry parameters $`q`$ and $`n`$ are discrete, allowing an exact experimental check of commensurability. The enumerated quantum numbers may be used to discuss and predict many characteristics of the nanotubes, but the most sophisticated approach to the classification and properties of different quantum states is based on the irreducible representations of the corresponding line and point groups. Let us recall that these representations are labeled by the derived quantum numbers. The most exhaustive possible information on selection rules, comprising the conservation of quantum numbers, for the processes in the nanotubes has become available now that the full line (or point) group symmetry has been established. The dimension of an irreducible representation equals the degeneracy of the corresponding energy level. For the periodic tubes, the degeneracy of the energy bands is at most fourfold; nevertheless, if the time reversal symmetry of the (spin-independent) Hamiltonian is taken into account, the maximal degeneracy is eight-fold . Further, the possible degeneracies are only two-, four- and eight-fold. As for the multi-wall nanotubes with incommensurate components, the dimensions of the irreducible representations of the axial point groups are one, two and (if the time reversal symmetry is included) four, showing the possible degeneracies of the energy levels. Note that the maximal enumerated degeneracies (eight- and four-fold) are not possible for the tubes containing at least one chiral single-wall component. Moreover, the degeneracy of the multi-wall tube in the general position of its components is at most two-fold, and is caused by the time reversal symmetry exclusively. Also, the lattice dynamics can be studied. As mentioned above, the whole single-wall tube can be obtained from an arbitrary atom of it by the action of the elements of its symmetry group (1); in group theoretical language, this means that the whole nanotube is a single orbit of this group , and this is the orbit $`a_1`$ for the chiral, $`b_1`$ for the zig-zag, and $`d_1`$ for the armchair tubes . Thus, the normal modes (phonons) are already classified .
The dynamical representation of the chiral tube is decomposed onto the following irreducible components (the summation over $`k`$ is over the interval $`(0,\pi )`$; in the primed sum $`m`$ takes integer values from $`(0,\frac{n}{2})`$, otherwise from $`(-\frac{n}{2},\frac{n}{2}]`$; the components with $`m=n/2`$ appear only for $`n`$ even): $$D_{\mathrm{chiral}}^{\mathrm{dyn}}=3({}_{o}{}^{}A_{o}^{+}+{}_{o}{}^{}A_{o}^{}+{}_{\pi }{}^{}A_{o}^{+}+{}_{\pi }{}^{}A_{o}^{}+{}_{o}{}^{}A_{n/2}^{+}+{}_{o}{}^{}A_{n/2}^{}+{}_{\pi }{}^{}A_{n/2}^{+}+{}_{\pi }{}^{}A_{n/2}^{})+6\underset{m}{\sum }^{}({}_{\pi }{}^{}E_{m}^{}+{}_{o}{}^{}E_{m}^{})+6\underset{k,m}{\sum }{}_{k}{}^{}E_{m}^{}.$$ It can be seen that all of the $`6n`$ vibrational bands are doubly degenerate, as anticipated. As for the zig-zag and armchair tubes, the corresponding decompositions are (the summation in $`m`$ is over the integers from $`(0,n)`$, and in the primed sums from $`(0,\frac{n}{2})`$): $$D_{\mathrm{zig}\mathrm{zag}}^{\mathrm{dyn}}=2({}_{o}{}^{}A_{o}^{+}+{}_{o}{}^{}A_{o}^{}+{}_{o}{}^{}A_{n}^{+}+{}_{o}{}^{}A_{n}^{}+{}_{\pi }{}^{}E_{B}^{})+{}_{o}{}^{}B_{o}^{+}+{}_{o}{}^{}B_{o}^{}+{}_{o}{}^{}B_{n}^{+}+{}_{o}{}^{}B_{n}^{}+4{}_{\pi }{}^{}E_{A}^{}+$$ $$3\underset{m}{\sum }({}_{o}{}^{}E_{m}^{+}+{}_{o}{}^{}E_{m}^{})+2\underset{k}{\sum }[{}_{k}{}^{}E_{B_o}^{}+{}_{k}{}^{}E_{B_n}^{}+2({}_{k}{}^{}E_{A_o}^{}+{}_{k}{}^{}E_{A_n}^{})]+6\underset{m}{\sum }^{}{}_{\pi }{}^{}G_{m}^{}+6\underset{k,m}{\sum }{}_{k}{}^{}G_{m}^{}+3({}_{\pi }{}^{}E_{n/2}^{+}+{}_{\pi }{}^{}E_{n/2}^{}),$$ $$D_{\mathrm{armchair}}^{\mathrm{dyn}}=2({}_{o}{}^{}A_{o}^{+}+{}_{o}{}^{}B_{o}^{+}+{}_{o}{}^{}A_{n}^{+}+{}_{o}{}^{}B_{n}^{+})+{}_{o}{}^{}A_{o}^{}+{}_{o}{}^{}B_{o}^{}+{}_{o}{}^{}A_{n}^{}+{}_{o}{}^{}B_{n}^{}+3({}_{\pi }{}^{}E_{A}^{}+{}_{\pi }{}^{}E_{B}^{})+6\underset{k,m}{\sum }{}_{k}{}^{}G_{m}^{}$$ $$+3\underset{k}{\sum }({}_{k}{}^{}E_{A_o}^{}+{}_{k}{}^{}E_{B_o}^{}+{}_{k}{}^{}E_{A_n}^{}+{}_{k}{}^{}E_{B_n}^{})+\underset{m}{\sum }(4{}_{o}{}^{}E_{m}^{+}+2{}_{o}{}^{}E_{m}^{})+6\underset{m}{\sum }^{}{}_{\pi }{}^{}G_{m}^{}+3({}_{\pi }{}^{}E_{n/2}^{+}+{}_{\pi }{}^{}E_{n/2}^{}).$$ Analogous data can be directly found for each nanotube. This classification can be used to simplify the calculation of the vibrational bands ; the obtained bands are automatically labelled by the symmetry-based quantum numbers, meaning that Raman and IR spectra can be directly extracted by the selection rules. Note in this context that the Jahn-Teller theorem is proved both for the point and for the line groups . We thank dr Božović (OXXEL GmbH, Bremen) and dr G. Biczó (Central Research Institute for Chemistry, Budapest), who drew our attention to this subject. Also, we thank dr R. Kostić (Institute of Physics, Beograd) for some remarks.
# Reionization of the Intergalactic Medium and its Effect on the CMB ## 1 Introduction One of the most remarkable observational results of the last three decades in cosmology is the lack of the Gunn–Peterson trough in the spectra of high–redshift quasars and galaxies. This finding implies that the intergalactic medium (IGM) is highly ionized at least out to $`z\simeq 5`$, the redshift of the most distant known quasars and galaxies<sup>1</sup><sup>1</sup>1Currently the best lower limit on the reionization redshift comes from the detection of high–$`z`$ Ly$`\alpha `$ emission lines, implying $`z>5.64`$ (see discussion below).. Since the primeval plasma recombined at $`z\simeq 1100`$, its subsequent ionization requires some form of energy injection into the IGM, naturally attributable to astrophysical sources. The two most popular examples of such sources are an early generation of stars, residing in sub–galactic size clusters (hereafter “mini–galaxies”), or accreting massive ($`10^6\mathrm{M}_{\odot }`$) black holes in small ($`\lesssim 10^9\mathrm{M}_{\odot }`$) halos (hereafter “mini–quasars”). Reionization is interesting for a variety of reasons. First, the most fundamental questions are still unanswered: What type of sources caused reionization, and around what redshift did it occur? How were the reionizing sources distributed relative to the gas? What was the size, geometry, and topology of the ionized zones, and how did these properties evolve? Second, and more directly relevant to the topic of this review, reionization leaves distinctive signatures on the cosmic microwave background (CMB) through the interaction between the CMB photons and free electrons. Although it may complicate the extraction of cosmological parameters from CMB data, this “contamination” could yield important information on the ionization history of the IGM. Through reionization, the CMB is a useful probe of nonlinear processes in the high–redshift universe. Thomson scattering from free electrons affects the CMB in several ways. Because the scattering leads to a blending of photons from initially different lines of sight, there is a damping of the primary temperature anisotropy. On the other hand, a new secondary anisotropy is generated by what can be thought of as a Doppler effect: as photons scatter off free electrons, they pick up some of their peculiar momentum. Finally, the polarization dependence of the Thomson cross section creates new polarization from the initially anisotropic photon field. The inhomogeneity of reionization affects these processes in two different ways. First, the inhomogeneity of the medium affects how the mean ionization fraction evolves with time. This is important since the damping, Doppler and polarization effects all occur even in the idealized case of spatially homogeneous reionization. Second, the spatial fluctuations in the ionization fraction can greatly enhance the contribution from the Doppler effect at small angular scales. Polarization and damping are much less influenced by the inhomogeneity, as we will see below. The aim of the present review is to summarize our current knowledge of reionization, describe the possible connections to the CMB, and assess what we can hope to learn about the high–redshift IGM from forthcoming observations of the CMB anisotropies.
## 2 Reionization ### 2.1 Homogeneous Reionization and the Reionization Redshift If the IGM were neutral, its optical depth to Ly$`\alpha `$ absorption would be exceedingly high: $`\tau _{\mathrm{igm}}\simeq 10^5(\mathrm{\Omega }_\mathrm{b}h/0.03)[(1+z)/6]^{3/2}`$, wiping out the flux from any source at observed wavelengths shorter than $`\lambda _{\mathrm{Ly}\alpha }(1+z)`$. The spectra of high–redshift quasars and galaxies reveal Ly$`\alpha `$ absorption by numerous discrete Ly$`\alpha `$ forest clouds, separated in redshift, rather than the continuous Gunn–Peterson (GP) trough expected from a neutral IGM. The detection of the continuum flux in–between the Ly$`\alpha `$ clouds implies that $`\tau _{\mathrm{igm}}`$ is at most of order unity, or, equivalently, an upper limit on the average neutral fraction of $`x=n_{\mathrm{HI}}/n_{\mathrm{tot}}\lesssim 10^{-5}`$. The intergalactic medium is therefore highly ionized. Note that there is no sharp physical distinction between the intergalactic medium and the low column density ($`N_{\mathrm{HI}}\lesssim 10^{14}\mathrm{cm}^{-2}`$) Ly$`\alpha `$ forest (Reisenegger & Miralda-Escudé 1995). These Ly$`\alpha `$ clouds fill most of the volume of the universe, are highly ionized, and account for most of the baryons (implied by Big Bang nucleosynthesis) at high redshift (Weinberg et al. 1997; Rauch et al. 1997). The energy requirement of reionization can easily be satisfied either by the first mini–galaxies or by mini–quasars<sup>2</sup><sup>2</sup>2More exotic possibilities that we do not discuss here include primordial black holes (e.g. Gibilisco 1996); cosmic rays (e.g. Nath & Bierman 1993); winds from supernovae (e.g. Ostriker & Cowie 1981); or decaying neutrinos (Sciama 1993).. Nuclear burning inside stars releases several MeV per hydrogen atom, and thin–disk accretion onto a Schwarzschild black hole releases ten times more energy, while the ionization of a hydrogen atom requires only 13.6 eV. It is therefore sufficient to convert a fraction of $`\lesssim 10^{-5}`$ of the baryonic mass into either stars or black holes in order to ionize the rest of the baryons in the universe. In a homogeneous universe, the ratio of recombination time to Hubble time in the IGM is $`t_{\mathrm{rec}}/t_{\mathrm{Hub}}\simeq [(1+z)/11]^{-3/2}`$, so that each H atom would recombine at most a few times by the present time. Unless reionization occurred at $`z\gtrsim 10`$, recombinations therefore would not change the above conclusion, and the number of ionizing photons required per atom would be of order unity. Based on three-dimensional simulations, Gnedin & Ostriker (1997) argued that the clumpiness of the gas increases the average global recombination rate by a factor $`C\equiv <\rho ^2>/<\rho >^2\simeq 30\text{–}40`$ (the averages are taken over the simulation box). This would imply a corresponding increase in the necessary number of ionizing photons per atom. However, in a more detailed picture of reionization in an inhomogeneous universe (Gnedin 1998, Miralda-Escudé et al. 1999), the high-density regions are ionized at a significantly later time than the low density regions. If this picture is correct, then one ionizing photon per atom would suffice to reionize most of the volume (filled by the low-density gas); recombinations in the high-density regions would only contribute to the average recombination rate at a later redshift. The reionization redshift can be estimated from first principles.
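Both scalings quoted in this section are one-liners; the snippet below (our own convenience functions, with argument names of our choosing) simply evaluates them:

```python
def tau_gp(z, omega_b_h=0.03):
    """Gunn-Peterson optical depth of a fully neutral, homogeneous IGM."""
    return 1.0e5 * (omega_b_h / 0.03) * ((1.0 + z) / 6.0) ** 1.5

def t_rec_over_t_hub(z):
    """Ratio of IGM recombination time to Hubble time; ~1 at z = 10."""
    return ((1.0 + z) / 11.0) ** -1.5

for z in (5, 10, 15):
    print(z, tau_gp(z), t_rec_over_t_hub(z))
# At z >~ 10 the ratio drops below unity, so recombinations start to matter,
# consistent with the few-recombinations-per-atom argument above.
```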
In the simplest picture \[first discussed by Arons & Wingert (1972) in the context of quasars\], a homogeneous neutral IGM is percolated by the individual HII regions growing around each isolated UV source. The universe is reionized when the ionized bubbles overlap, i.e. when the filling factor of HII regions reaches unity<sup>3</sup><sup>3</sup>3The expansion of spherical ionization fronts around steady sources can be calculated analytically (Shapiro & Giroux 1987).. Several authors have studied this problem (Carr et al. 1984, Couchman & Rees 1986, Fukugita & Kawasaki 1994, Meiksin & Madau 1993; Shapiro et al. 1994, Tegmark et al. 1994; Aghanim et al. 1996; Haiman & Loeb 1997). Specifically, Haiman & Loeb (1998a) have used the Press–Schechter formalism to describe the formation of halos that potentially host ionizing sources, in order to estimate the redshift at which overlap occurs. In this study, star and quasar black hole formation was allowed only inside halos that can cool efficiently via atomic hydrogen, since molecular hydrogen would be photodissociated earlier, as argued by Haiman, Abel & Rees (1999) and Haiman, Rees & Loeb (1997). This requirement translates to a minimum virial temperature of $`10^4`$K, or halo masses above $`10^8\mathrm{M}_{\odot }[(1+z)/11]^{-3/2}`$ (see Fig. 1). For both types of sources, the total amount of light they produce was calibrated using data from redshifts $`z\lesssim 5`$. The efficiency of early star formation can be estimated from the observed metallicity of the intergalactic medium (Songaila & Cowie 1996, Tytler et al. 1995), utilizing the fact that these metals and the reionizing photons likely originate from the same stellar population. Similarly, the efficiency of black hole formation inside early mini–quasars can be constrained to match the subsequent evolution of the quasar luminosity function at redshifts $`z\lesssim 5`$ (Pei 1995). These assumptions generically lead to a reionization redshift $`8\lesssim z\lesssim 15`$. The uncertainty in the redshift reflects a range of cosmological parameters ($`H_0,\mathrm{\Omega }_m,\mathrm{\Omega }_b`$), CDM power spectra ($`n`$, $`\sigma _8`$), and efficiency parameters for the production and escape fraction of ionizing photons. In the stellar reionization scenario, the predicted redshift could be above or below this range if the initial mass function was strongly biased (relative to Scalo 1986) towards massive or low-mass stars. ### 2.2 Importance and Treatments of Inhomogeneities Although the naive picture of overlapping HII spheres may give a good estimate for the “overlap redshift”, it is likely to be a crude and incomplete description of the real process of reionization. The ionizing sources are expected to be located in the highly overdense regions that form the complex large scale structure of the universe - along filaments or sheets, or at their intersections (see Fig. 2). The ionizing radiation will have to cross the local dense structures before propagating into the IGM, which itself has significant density fluctuations. Several important consequences of these inhomogeneities can be noted. First, reionization could occur “outside-in” (Miralda-Escudé et al. 1999), if the radiation from a typical source escapes through a relatively narrow solid angle from its host, without causing significant local ionization. In this case, the low–density regions could be ionized significantly earlier than the high–density regions.
An important effect in this case could be the shadowing of ionizing radiation by the dense concentrations within the IGM. The same type of shadowing effect was found to be important at low redshift. Based on the number of Ly$`\alpha `$ absorbers, Madau, Haardt & Rees (1998) have concluded that as much as $`\sim 50`$% of the UV flux is absorbed by these systems. The redshift at which the IGM becomes optically thin to the ionizing continuum is therefore significantly delayed relative to the overlap epoch, when the ionized zones first percolate. An alternative possibility is that the ionizing sources are densely spaced along filaments, and their radiation ionizes most of the dense filament before escaping into the IGM. In this “inside-out” case, the description of ionized zones surrounding their host source would be more applicable. An important feedback during and after reionization is that the newly established UV background can photo–evaporate the gas from small ($`v_{\mathrm{circ}}\lesssim 10\mathrm{km}\mathrm{s}^{-1}`$) halos (Shapiro, Raga & Mellema 1997; Barkana & Loeb 1999). A related point, important for the detection of the GP trough, is that at the time of overlap of the HII zones, each individual HII region could still have a non–negligible Ly$`\alpha `$ optical depth, due to its residual HI fraction. The GP trough therefore disappears from the spectra of individual sources only below the later redshift at which the average Ly$`\alpha `$ optical depth of the IGM drops below unity (Haiman & Loeb 1998b; Miralda-Escudé et al. 1999; Shapiro et al. 1987). Finally, the inhomogeneity of the reionized IGM leads to the generation of anisotropies in the CMB which are strongly suppressed in the homogeneous case, as we shall see below. Reionization is now also being addressed by three-dimensional simulations, which can eventually shed light on some of the above issues. The first such studies approximated radiative transfer by assuming an isotropic radiation field, but were able to take into account the inhomogeneities in the gas distribution (Ostriker & Gnedin 1996; Gnedin & Ostriker 1997). The resulting reionization redshift of $`z\simeq 7`$ in a $`\mathrm{\Lambda }`$CDM model is in good agreement with the semi–analytical estimates, given the differences in the assumed cosmology, power spectra, and star formation efficiency. However, an isotropic radiation field is necessarily a crude approximation, at least in the beginning stages of reionization, when each source still has its own isolated ionized zone. In order to understand the progressive overlap of ionized zones, including effects such as self-shielding and shadowing by dense clumps, it is necessary to incorporate radiative transfer into three dimensional simulations (Norman et al. 1998). The numerical algorithms for the first such attempts are described in Abel, Norman & Madau (1998) and Razoumov & Scott (1998). Results from an approximate treatment (Gnedin 1998) also support the outside–in picture of reionization described above. ### 2.3 The Nature of Reionizing Sources Several authors have emphasized that the known population of quasars or galaxies provides $`\sim `$10 times fewer ionizing photons than necessary to reionize the universe (see, e.g. Shapiro et al. 1994, or Madau et al. 1999, and references therein). The two most natural candidates for the undetected ionizing sources are the mini–galaxies and mini–quasars expected to be associated with small ($`10^{8\text{–}10}\mathrm{M}_{\odot }`$) dark halos at $`z\sim 10`$.
As there are no compelling ab initio theoretical arguments to strongly favor one type of source over the other, the best hope is to distinguish these two possibilities observationally. One expects at least three major differences between mini–galaxies and mini–quasars: their spectra, their absolute brightness (or, alternatively, space density), and the angular size of their luminous regions. The flux from stellar populations drops rapidly with frequency above the ionization threshold of hydrogen, while the spectra of mini–quasars are expected to be harder and to extend into the X–ray regime. Stars could therefore not reionize HeII, while quasars with typical spectra (Elvis et al. 1994) would reionize HeII at approximately the same redshift as H (Haiman & Loeb 1998a). However, recent high-resolution spectra of $z \approx 3$ quasars have shown a widely fluctuating HeII/HI optical depth (Reimers et al. 1997). The authors interpreted this observation as evidence for HeIII regions embedded in an otherwise HeII medium, i.e. a detection of the HeII reionization epoch (see also Wadsley et al. 1998). If this interpretation is verified by future data, it would constrain the number of mini–quasars with hard spectra extending to X–rays at high redshifts (Haiman & Loeb 1998b; Miralda-Escudé 1998; Miralda-Escudé & Rees 1994). At present, a plausible alternative interpretation of the HeII observations is that the observed HeII optical depth fluctuations are caused by statistical fluctuations in the IGM density or the ionizing background flux (Miralda-Escudé et al. 1999), rather than by the patchy structure of HeII/HeIII zones. The X–ray background (XRB; Miyaji et al. 1998; Fabian & Barcons 1992) might provide another useful constraint on mini–quasar models with hard spectra (Haiman & Loeb 1998c). These models overpredict the unresolved flux by a factor of $2-7$ in the 0.1–1 keV range. If an even larger fraction of the XRB is resolved into low–redshift AGNs in the future, then the XRB could be used to place more stringent constraints on the X-ray spectrum or the abundance of the mini–quasars. A second distinction between galaxies and quasars can be made based on their absolute luminosity and space density. On average, based on their $z<5$ counterparts, high–redshift mini–quasars are expected to be roughly $100$ times brighter, but $100$ times rarer, than galaxies. At the overlap epoch, HII regions from mini–quasars would therefore be larger but fewer than those from mini–galaxies. The typical size and abundance of HII regions has at least three important implications. First, it affects the CMB anisotropies: the larger the HII regions, the larger the effect on the CMB (Gruzinov & Hu 1998; Knox et al. 1998), as we will see below. Second, large enough HII regions ($\gtrsim 1$ Mpc) would allow gaps to open up in the GP trough; the flux transmitted through such gaps around very bright quasars could be detectable (Miralda-Escudé et al. 1999). Third, if sources form inside the rare, high-$\sigma$ peaks, they would be strongly clustered (Knox et al. 1998). This clustering would increase the effective luminosity of each source, and would enhance both of the effects above. Finally, a constraint on the number of mini–quasars at $z>3.5$ can be derived from the Hubble Deep Field (HDF). Mini–quasars are expected to appear as faint point–sources in the HDF, unless their underlying extended host galaxies are resolved.
The properties of faint extended sources found in the HDF agree with detailed semi–analytic models of galaxy formation (Baugh et al. 1998). On the other hand, the HDF has revealed only a handful of faint unresolved sources, and none with the colors expected for high redshift quasars (Conti et al. 1999). The simplest mini–quasar models predict the existence of $10-15$ B–band "dropouts" in the HDF, inconsistent with the lack of detection of such dropouts up to the $50\%$ completeness limit at $V \approx 29$. To reconcile the models with the data, a mechanism is needed that suppresses the formation of quasars in halos with circular velocities $v_{\mathrm{circ}} \lesssim 50-75\ \mathrm{km\,s^{-1}}$ (Haiman, Madau & Loeb 1999). Such a suppression naturally arises from the photo-ionization heating of the intergalactic gas by the UV background after reionization. Forthcoming data on point–sources from NICMOS observations of the HDF (see Thompson et al. 1998) could further improve these constraints.

### 2.4 When was the Universe Reionized?

As described above, theoretical expectations place the reionization redshift at $8 < z \lesssim 15$. What about observations? At present, the best lower limit on the reionization redshift comes from the detection of high–redshift Ly$\alpha$ emitters (Hu et al. 1998; Weymann et al. 1998). As argued by Miralda-Escudé (1998), the damping wing of Ly$\alpha$ absorption by a neutral IGM has a large residual optical depth that would severely damp any Ly$\alpha$ emission line. Currently the highest redshift at which a Ly$\alpha$ emitter is seen is $z=5.64$; the existence of this object implies that reionization occurred prior to this redshift (see Haiman & Spaans 1998). Note that HII regions around individual sources, excepting only the brightest quasars, would be too small to allow the escape of Ly$\alpha$ photons, because the damping wings of the GP troughs around the HII region would still overlap. The best upper limit on the reionization redshift can be obtained from the CMB anisotropy data. Several experiments have revealed a rise in the power on small angular scales, and a drop at even smaller scales (e.g., Bond et al. 1999), showing evidence for the first Doppler peak expected from acoustic oscillations in the baryon-photon fluid prior to recombination (e.g., Hu & Sugiyama 1994). These observations can also be used to set limits on the electron scattering optical depth (or the corresponding reionization redshift) that would suppress the Doppler peak. From a compilation of all the existing measurements, Griffiths et al. (1998) have derived the stringent (although model-dependent) constraint $z \lesssim 40$ on the reionization redshift: if reionization occurred earlier, the electron scattering optical depth would have reduced the amplitude of the anisotropies below the observed level. Note that another upper limit results from the spectral distortion of the CMB caused by scattering on the reionized IGM, which is constrained by the upper limit on the Compton $y$–parameter measured by COBE. The inferred upper limit on the reionization redshift, however, is much weaker, $z \lesssim 400$ for typical parameters in a low–density universe (Griffiths et al. 1998; see also Stebbins & Silk 1986).
In summary, present observations have narrowed down the possible redshift of the reionization epoch to $6 < z \lesssim 40$, a relatively narrow range that is in good agreement with the theoretical predictions described above.

### 2.5 Future Observational Signatures

Further observational progress in probing the reionization epoch could come either from the Next Generation Space Telescope (NGST), or from more precise measurements of the CMB anisotropies by MAP (http://map.gsfc.nasa.gov) and Planck (http://astro.estec.esa.nl/SA-general/Projects/Planck). NGST, scheduled for launch in 2007, is expected to reach the $\sim 1$ nJy sensitivity (see the NGST Exposure Time Calculator at http://augusta.stsci.edu) required to detect individual sources to $z \approx 10$, and to perform medium–resolution spectroscopy to $z \approx 8$. If reionization occurred close to the low end of the allowed redshift range, $6 < z \lesssim 8$, then it may be possible to infer the redshift directly from the spectra of bright sources. One specific method relies on the spectrum of a bright source just beyond the reionization redshift, so that the individual Ly$\alpha$, Ly$\beta$, and other GP troughs do not overlap in frequency, leaving gaps of transmitted flux (Haiman & Loeb 1998b). The measurement of this transmitted flux would be possible with NGST, despite absorption by the high–redshift Ly$\alpha$ forest, as long as the number density of absorbers does not rise much more steeply with redshift than an extrapolation of the current $z<5$ data would imply (Fardal et al. 1998). If reionization is gradual, rather than abrupt, then this method would measure the redshift at which the GP optical depth drops to near unity, i.e. the final stages of the reionization epoch. An alternative signature to look for would be the precise shape of the damping wing of Ly$\alpha$ absorption from the neutral IGM along the line of sight to the source (Miralda-Escudé 1998). The shape, if measured by high–resolution spectroscopy, could be used to determine the total optical depth of the IGM. The caveat of this method is the inability to distinguish the neutral IGM from a damped Ly$\alpha$ absorber along the line of sight, close to the source. An alternative signature is the background Ly$\alpha$ emission from the reionization epoch. Recombinations are slow both at high redshifts, when the IGM is still neutral, and at low redshifts, when the IGM density is low. As shown by simulations (Gnedin & Ostriker 1997), the global recombination rate has a pronounced peak around the reionization epoch. The resulting Ly$\alpha$ recombination emission has been computed by Baltz, Gnedin & Silk (1998), and a detailed detectability study has shown that this signal could be measured by NGST, or perhaps even by HST (Shaver et al. 1999). Yet another signature could result from the 21 cm hyperfine transition of HI in the IGM before reionization. Prior to reionization, the excitation of the 21 cm line in neutral HI atoms depends on the spin temperature. The coupling of the spin temperature to that of the CMB is determined by the local gas density, temperature, and the radiation background from the first mini–galaxies and mini–quasars. In general, the 21 cm line could be seen either in absorption or emission against the CMB, and could serve as a 'tomographic' tool to diagnose the density and temperature of the high redshift neutral gas (see, e.g., Madau et al.
1997; Scott & Rees 1990). If reionization occurred at $6 < z \lesssim 10$, the redshifted 21 cm signals would be detectable by the Giant Metrewave Radio Telescope; a study of the effect at higher redshifts would be possible with next generation instruments such as the Thousand Element Array, or the Square Kilometer Array (Shaver et al. 1999). The reionization redshift could turn out to be closer to the high end of the allowed range, $10 < z \lesssim 40$, rendering direct detection of emission from the reionization epoch implausible (except perhaps for the 21 cm signal). There is, however, a fortunate "complementarity", since the electron scattering optical depth increases with reionization redshift. Through its effect on the polarization power spectrum on large angular scales, MAP and Planck may be able to discern an electron scattering optical depth as small as a few percent (Zaldarriaga et al. 1997; Eisenstein et al. 1998; Prunet et al. 1998). From temperature anisotropy data alone, they may only be able to determine an optical depth as small as $\sim 20\%$, due to the damping effect mentioned earlier and discussed in more detail below. The reionizing sources may also change the spectral shape of the CMB. The dust that is inevitably produced by the first type II supernovae absorbs the UV emission from early stars and mini–quasars and re-emits this energy at longer wavelengths. Loeb & Haiman (1997) have quantified the resulting spectral distortion in Press–Schechter type models, assuming that each type II supernova in a Scalo IMF yields $0.3\,\mathrm{M}_\odot$ of dust with the wavelength-dependent opacity of Galactic dust, uniformly distributed throughout the intergalactic medium. Under these assumptions, the dust remains cold (close to the CMB temperature), and its emission peaks near the CMB peak. The resulting spectral distortion can be expressed as a Compton $y$–parameter of $\sim 10^{-5}$, near the upper limit derived from measurement of the CMB spectrum by COBE (Fixsen et al. 1996). A substantial fraction ($10$–$50$%) of this total $y$–parameter results simply from the direct far-infrared emission by early mini–quasars and could be present even in the absence of any intergalactic dust. Inhomogeneities in the dust distribution could change these conclusions. Instead of being homogeneously mixed into the IGM, the high–redshift dust may remain concentrated inside or around the galaxies where it is produced. In this case, the average dust particle would see a higher flux than assumed in Loeb & Haiman (1997), so that the dust temperature would be higher, and the dust emission would peak at a shorter wavelength. The magnitude of the spectral distortion would be enhanced, and may account for the recently discovered cosmic infrared background (Puget et al. 1996; Schlegel et al. 1998; Fixsen et al. 1998; Hauser et al. 1998), as shown by the semi–analytic models of Baugh et al. (1998). An angular fluctuation in the magnitude of the distortion would also be expected in this case, reflecting the discrete nature of the sources contributing to the effect. As seen above, reionization has several consequences; in the rest of this review we focus on the connection to the CMB, and on the effects of inhomogeneities.

## 3 Effect on the CMB

In this section, we give an in-depth treatment of the effects on the CMB photons of Thomson scattering off the free electrons produced by reionization.
After defining the optical depth and giving its dependence on the redshift of reionization and the cosmological parameters, we discuss three separate effects: damping, the Doppler effect, and the generation of polarization. We then derive the evolution equations from the Boltzmann equation, argue that the Doppler effect is the most important inhomogeneous effect, and calculate the resulting power spectrum for several simple models of inhomogeneous reionization (IHR). The probability of a photon scattering in the time interval from some initial time $t_i$ to the present, $t_0$, is given by $1-e^{-\tau(t_0)}$ where

$$\tau(t_0) \equiv \int_{t_i}^{t_0}\sigma_T\,n_e(t)\,dt \qquad (1)$$

is the optical depth to Thomson scattering and $\sigma_T$ is the Thomson cross-section. If one assumes a step-function transition from a neutral to an ionized IGM at a redshift of $z_{\mathrm{ion}}$, the mean optical depth is given by (e.g., Griffiths et al. 1998)

$$\tau(z_{\mathrm{ion}}) = \frac{\tau^*}{\Omega_0}\left[\left(1-\Omega_0+\Omega_0\left(1+z_{\mathrm{ion}}\right)^3\right)^{1/2}-1\right] \qquad (2)$$

where

$$\tau^* = \frac{H_0\,\Omega_b\,\sigma_T}{4\pi G m_p}\times\left(1-Y/2\right) \simeq 0.033\,\Omega_b h, \qquad (3)$$

$\Omega_b$ and $\Omega_0$ are the density of baryonic matter and of all matter respectively, in units of the critical density today, $H_0$ is the Hubble constant, $H_0 = 100\,h\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$, $G$ is Newton's constant, $m_p$ is the proton mass and $Y$ is the mass fraction of baryons in helium. The final equality assumes $Y=0.24$. To get equation 2, we assumed that $n_e \propto (1+z)^3$, and adopted the line element, $dz/dt$, of a spatially flat universe with only non-relativistic matter and a cosmological constant. Note that for $z_{\mathrm{ion}}=5.64$ (the observational lower bound), $\Omega_b h^2=0.02$ and $h=0.65$, we get $\tau=0.016$ for $\Omega_0=1$ and $\tau=0.049$ for $\Omega_0=0.3$. For $z_{\mathrm{ion}}$ only slightly larger than this limiting value, it is a good approximation to rewrite equation 2 as

$$\tau(z_{\mathrm{ion}}) = \frac{0.037}{\sqrt{\Omega_0}}\left(\frac{1+z_{\mathrm{ion}}}{11}\right)^{3/2}\left(\frac{\Omega_b h^2}{0.02}\right)\left(\frac{0.65}{h}\right). \qquad (4)$$

Thus we see that the fraction of CMB photons that have been scattered may be very small, but that this fraction grows fairly rapidly with increasing redshift. The increase in optical depth for fixed $z_{\mathrm{ion}}$ with increasing $1-\Omega_0$ is due to the fact that the proper time from the present to $z_{\mathrm{ion}}$ increases as the cosmological constant increases.
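These formulas are straightforward to evaluate numerically. A minimal Python sketch of equations (2)–(4) follows; the function names are ours, and the 0.033 normalization of equation 3 assumes $Y=0.24$:

```python
import numpy as np

def tau_exact(z_ion, Omega_0=1.0, Omega_b_h2=0.02, h=0.65):
    """Thomson optical depth for sudden reionization at z_ion,
    equations (2)-(3); flat universe with matter and Lambda."""
    Omega_b = Omega_b_h2 / h**2
    tau_star = 0.033 * Omega_b * h               # equation (3), Y = 0.24
    return (tau_star / Omega_0) * (
        np.sqrt(1.0 - Omega_0 + Omega_0 * (1.0 + z_ion)**3) - 1.0)

def tau_approx(z_ion, Omega_0=1.0, Omega_b_h2=0.02, h=0.65):
    """Approximate form of equation (4), valid for z_ion near the lower bound."""
    return (0.037 / np.sqrt(Omega_0)) * ((1.0 + z_ion) / 11.0)**1.5 \
           * (Omega_b_h2 / 0.02) * (0.65 / h)

print(tau_exact(5.64, Omega_0=1.0))   # ~0.016, as quoted above
```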
### 3.1 Simple Arguments

Here we give simple arguments about the nature of the damping, Doppler and polarization effects. Before doing so, it is useful to define the two-point correlation function, $C(\theta)$, which is perhaps the most important statistical property of the CMB fluctuations:

$$C(\theta) \equiv \left\langle\Delta(\widehat{\gamma}_1,\mathbf{x},\eta_0)\,\Delta(\widehat{\gamma}_2,\mathbf{x},\eta_0)\right\rangle; \qquad \cos\theta \equiv \widehat{\gamma}_1\cdot\widehat{\gamma}_2 \qquad (5)$$

where $\langle\cdots\rangle$ denotes an ensemble average. The same information is contained in its Legendre transform, the angular power spectrum, $C_{\ell}$:

$$C(\theta) = \sum_{\ell}\frac{2\ell+1}{4\pi}\,C_{\ell}\,P_{\ell}(\cos\theta). \qquad (6)$$

Roughly speaking, $C_l$ is the power in modes with angular wavelength $\pi/l$, so that $l=180$ corresponds to about a degree. If the fluctuations are Gaussian-distributed and statistically isotropic, all other statistics can be derived from $C_l$.

#### 3.1.1 Damping

For an initially homogeneous and isotropic photon field, there is just as much probability to scatter into a line of sight as there is to scatter out of it. Therefore, the optical depth to scattering produces no net effect. If, however, there is initial anisotropy, then this anisotropy is damped. Consider a line of sight along which the temperature differs by $\Delta T$ from the mean $\overline{T}$ in the absence of damping. If damping is present, then the temperature is changed to

$$\overline{T}+\Delta T \;\rightarrow\; \left(\overline{T}+\Delta T\right)-\left(\overline{T}+\Delta T\right)\left(1-e^{-\tau}\right)+\overline{T}\left(1-e^{-\tau}\right) = \overline{T}+\Delta T\,e^{-\tau}. \qquad (7)$$

This equation expresses the fact that the final temperature is given by the initial temperature, reduced by the photons that have been lost, and increased by the photons that have been scattered in from other lines of sight. Since the photons that have been scattered in come from many different lines of sight, their average temperature is very nearly the global average (an assumption made in equation 7). The net result is that $\Delta T \rightarrow \Delta T\,e^{-\tau}$ and therefore $C(\theta) \rightarrow C(\theta)\,e^{-2\tau}$, or $C_l \rightarrow C_l\,e^{-2\tau}$. This simple calculation would imply that the damping is independent of scale, which is not exactly true. Our assumption that the average $T$ along lines of sight that scattered in equals the global average holds only when the distance between the scattering surfaces is much larger than the length scale of interest. At very large angular scales, the damping does not occur, as one would expect from simple causality considerations. More precisely, the damping factor is $l$-dependent and, defined via $C_l = R_l^2\,C_l^{\mathrm{primary}}$, is given by the fitting formula of Hu & White (1997):

$$R_l^2 = \frac{1-\exp\left(-2\tau\right)}{1+c_1x+c_2x^2+c_3x^3+c_4x^4}+\exp\left(-2\tau\right), \qquad (8)$$

with $x=l/(l_r+1)$ and $c_1=0.267$, $c_2=0.581$, $c_3=0.172$ and $c_4=0.0312$. The characteristic angular scale $l_r$ is roughly the angular scale subtended by the horizon at the new last-scattering surface and is given approximately by (Griffiths et al. 1998; Hu & White 1997):

$$l_r = \left(1+z_{\mathrm{ion}}\right)^{1/2}\left(1+0.84\,\mathrm{ln}\,\Omega_0\right)-1. \qquad (9)$$

If not for the $l$-dependence, the damping effect would be completely degenerate with the amplitude of the primary power spectrum; i.e., their effects would be indistinguishable. The similar response to amplitude and $\tau$ over a large range of $l$ makes them approximately degenerate and is the reason why the optical depth can only be determined to about 10% (Zaldarriaga et al. 1997), or possibly worse (Eisenstein et al. 1998), based on temperature anisotropy alone.
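The damping envelope of equations (8)–(9) can be coded in a few lines. The sketch below takes the constant term in the denominator of equation 8 to be unity, as required by $R_l^2 \rightarrow 1$ at small $l$; it is an approximation to the fitting formula, not a substitute for a full Boltzmann calculation:

```python
import numpy as np

def l_r(z_ion, Omega_0=1.0):
    """Characteristic multipole of the new last-scattering surface, eq. (9)."""
    return np.sqrt(1.0 + z_ion) * (1.0 + 0.84 * np.log(Omega_0)) - 1.0

def damping_envelope(l, tau, z_ion, Omega_0=1.0):
    """R_l^2 of equation (8), so that C_l = R_l^2 * C_l^primary."""
    c1, c2, c3, c4 = 0.267, 0.581, 0.172, 0.0312
    x = l / (l_r(z_ion, Omega_0) + 1.0)
    poly = 1.0 + c1 * x + c2 * x**2 + c3 * x**3 + c4 * x**4
    return (1.0 - np.exp(-2.0 * tau)) / poly + np.exp(-2.0 * tau)

l = np.arange(2, 2000)
R2 = damping_envelope(l, tau=0.05, z_ion=10.0)
# R2 ~ 1 at low l (no damping) and tends to exp(-2 tau) at high l.
```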
#### 3.1.2 Polarization

The Thomson scattering differential cross-section is polarization-dependent, and therefore the scattered radiation may be polarized even if the incident radiation is not. However, by symmetry considerations alone one can see that initially isotropic radiation will not become polarized (unless the electron spins are aligned, perhaps by a magnetic field). In fact, it is easy to show that the particular anisotropy required for the creation of polarization is a quadrupole moment. This is essentially due to the $\cos^2\theta$ dependence of the differential cross-section. The Fourier analogue of the angle-distance relation can be written as $\ell \approx (\eta-\eta_{*})\,k$ where $k$ is the comoving wave-number that projects from conformal time $\eta_{*}$ into multipole moment $\ell$ at conformal time $\eta$. Conformal time is related to proper time by $d\eta = dt/a$, where $a$ is the scale factor of the expansion. Applying this relationship twice, and using the fact that polarization is generated by the quadrupole moment, $\ell=2$, we find

$$\ell_p \approx (\eta_0-\eta_{\mathrm{ion}})\left(\frac{2}{\eta_{\mathrm{ion}}-\eta_{\mathrm{LSS}}}\right) \approx 2\left(\sqrt{z_{\mathrm{ion}}+1}-1\right). \qquad (10)$$

For the last equality, we assumed a matter-dominated Universe and that $\eta_{\mathrm{ion}} \gg \eta_{\mathrm{LSS}}$. Thus we expect a peak in the polarization power spectrum near $\ell_p \approx 2\,z_{\mathrm{ion}}^{1/2}$. Determining the location of this peak allows one to determine the epoch of reionization. The amplitude of the peak is proportional to the optical depth, and thus also helps in determining $z_{\mathrm{ion}}$. Unfortunately, the signal is very weak and present only at large angular scales, e.g., $\ell_p \approx 5$ for $z_{\mathrm{ion}}=10$. Since there are only $2l+1$ independent modes from which to determine $C_l$, sample variance is worst at low $l$. Also, the large-scale features of polarization maps reconstructed from time-ordered data are likely to be those most sensitive to systematic errors. Finally, polarized galactic foregrounds (dust and synchrotron) are expected to be most troublesome at small $l$ (Bouchet et al. 1998; Knox 1998). However, a benefit of the low-$\ell$ location of this signature is that if it can be measured, its interpretation will not be complicated by the inhomogeneity of the reionization, which is a much smaller-scale phenomenon. If the systematic errors are negligible, the reionization feature in the polarization power spectrum can be used to discern very small optical depths. Forecasts of parameter determination by Eisenstein et al. (1998), using both the temperature anisotropy and polarization data, are for one-sigma errors on $\tau$ of 0.022 for MAP and $0.004$ for Planck. See also Zaldarriaga et al. (1997) for similar results. Note that these sensitivities, especially in the case of Planck, are high enough to detect the effects of reionization at any epoch, given that we know it happened prior to $z=5.64$.
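The scaling of equation (10) is easily tabulated; this small sketch reproduces the $\ell_p \approx 5$ quoted above for $z_{\mathrm{ion}}=10$:

```python
import numpy as np

def l_polarization_peak(z_ion):
    """Approximate peak of the reionization polarization feature, eq. (10)."""
    return 2.0 * (np.sqrt(z_ion + 1.0) - 1.0)

for z_ion in (5.64, 10.0, 20.0, 40.0):
    print(z_ion, round(l_polarization_peak(z_ion), 1))
# z_ion = 10 gives l_p ~ 4.6, i.e. the quoted l_p ~ 5.
```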
#### 3.1.3 Doppler Effect

The contribution to $\Delta T/T$ at location $\mathbf{x}_0$, in direction $\widehat{\gamma}$, at conformal time $\eta_0$ (today) is given by

$$\frac{\Delta T}{T}(\mathbf{x}_0,\widehat{\gamma},\eta_0) = \sigma_T\int_{\eta_{\mathrm{ion}}}^{\eta_0}d\eta\;n_e\left(\mathbf{x}\right)\,\widehat{\gamma}\cdot\mathbf{v}_e(\mathbf{x})\,a \qquad (11)$$

where $\mathbf{x} = \mathbf{x}_0+\widehat{\gamma}\left(\eta_0-\eta\right)$. As one might expect, it is proportional to the line-of-sight integral of the parallel component of the electron velocity, $\mathbf{v}_e$, times the number density of scatterers, $n_e$. As Sunyaev (1978) and Kaiser (1984) pointed out, in the homogeneous case this contribution to $\Delta T/T$ is suppressed by cancellations due to oscillations in $\mathbf{v}_e$. One way to think of these cancellations is that photons get nearly opposite Doppler shifts on different sides of a density peak, which is a consequence of the potential flows generated by gravitational instability. The cancellations are less complete at large angular scales ($l \lesssim 100$) and there is a measurable effect, which is exactly taken into account in Boltzmann codes such as CMBFAST (http://www.sns.ias.edu/~matiasz/CMBFAST/cmbfast.html). If speed is critical, the fitting formulae of Griffiths et al. (1998) can be employed for an approximate treatment. The cancellations at small scales can be greatly reduced by the modulation of the number density of free electrons, which we can write as $n_e = x_e n_p$, where $x_e$ is the ionization fraction and $n_p$ is the number density of all protons, including those in H and He. This modulation occurs even in the case of homogeneous reionization (spatially constant $x_e$) due to spatial variations in $n_p$. Because $v_e$ is zero to zeroth order, and $n_p$ is uniform to zeroth order, this is a second-order effect. That this second-order effect could dominate the first-order one (which is small due to cancellations) was pointed out by Ostriker and Vishniac and is known as the Ostriker-Vishniac effect (Ostriker & Vishniac 1986; Vishniac 1987). Subsequent systematic studies of all second-order contributions have shown that it is the dominant second-order contribution as well (Hu et al. 1994; Dodelson & Jubas 1995). Another source of modulation of the number density of free electrons is spatial variation in $x_e$, i.e., IHR. As a photon streams towards us from the last-scattering surface, it may pick up a Doppler "up kick" on one side of an overdensity, but avoid the canceling "down kick" on the other because the IGM there is still neutral. How effectively the cancellation is avoided depends on the matching between the typical sizes of the ionized domains and the correlation length of the velocity field, as we will discuss below.

### 3.2 Evolution Equations

We now derive equation 11, as well as a more general expression which includes the contribution from damping as well. More complete treatments of the evolution of a photon distribution function in an expanding, inhomogeneous Universe can be found in, e.g., Ma & Bertschinger (1995). The Boltzmann equation governing the evolution of the photon phase-space distribution function, $f$, is simply $\frac{df}{dt}=C$, where $C$ takes into account the effect of collisions, in this case with electrons via Thomson scattering. The Thomson scattering cross-section is independent of frequency and therefore does not induce spectral distortions; the effect on the distribution function can be fully described by the change in brightness. Thus we use the brightness perturbation variable defined by $\Delta \equiv (\partial f_0/\partial q)^{-1}\,f_1$, where $q$ is the comoving photon momentum, and the phase-space distribution function has been partitioned into a homogeneous and an inhomogeneous part, $f = f_0+f_1$. For a Planckian $f$ with thermal fluctuations, $\Delta T$, about a mean temperature of $T$, $\Delta = 4\Delta T/T$.
Expanding the total derivative of $f$, keeping terms to first order in the perturbation variables and Fourier transforming the result, the Boltzmann equation becomes:

$$\dot{\tilde{\Delta}}(\mathbf{k},\widehat{\gamma},\eta)+ik\mu\,\tilde{\Delta}(\mathbf{k},\widehat{\gamma},\eta)+\frac{2}{3}\dot{h}(\mathbf{k},\eta)+\frac{2}{3}\left(3\dot{h}_{33}(\mathbf{k},\eta)-\dot{h}(\mathbf{k},\eta)\right)P_2(\mu) = \int d^3x\;e^{i\mathbf{k}\cdot\mathbf{x}}\,\dot{\tau}(\mathbf{x},\eta)\,S(\mathbf{x},\widehat{\gamma},\eta) \qquad (12)$$

where $\tilde{\Delta}$ is the Fourier transform of the brightness perturbation, $\widehat{\gamma}$ is the direction of the photon momentum, $\mu \equiv \hat{k}\cdot\widehat{\gamma}$, the dot indicates differentiation with respect to conformal time, and $h_{33}$ and $h$ are the 3,3 component and trace of the synchronous gauge metric perturbation, respectively, in the coordinate system defined by $\hat{x}_3 = \widehat{\gamma}$. On the right-hand side of equation (12) we have explicitly separated out the dependence of the collision term on the differential optical depth, $\dot{\tau} = a\,\sigma_T\,n_e(\mathbf{x},\eta)$. The quantity $S$ is a function of the brightness perturbation and the electron velocity. To first order in the perturbation variables its Fourier transform is (neglecting the polarization dependence of the Thomson cross-section)

$$\tilde{S}(\mathbf{k},\widehat{\gamma},\eta) = -\tilde{\Delta}+\tilde{\Delta}_0+4\,\widehat{\gamma}\cdot\mathbf{v}_e-\frac{1}{2}\tilde{\Delta}_2P_2(\mu) \qquad (13)$$

where $\Delta = \sum_{\ell}(2\ell+1)\Delta_{\ell}P_{\ell}(\mu)$. If we were to allow both $\dot{\tau}$ and $S$ to depend on $\mathbf{x}$, the Fourier modes would no longer evolve independently. To avoid this complication, we use an expansion in small $\dot{\tau}$. Let $\tilde{\Delta}^{(0)}$ be the solution for $\dot{\tau}=0$ and let $\tilde{\Delta}^{(1)} = \tilde{\Delta}-\tilde{\Delta}^{(0)}$. Then to first order in $\dot{\tau}$:

$$\dot{\tilde{\Delta}}^{(1)}(\mathbf{k})+ik\mu\,\tilde{\Delta}^{(1)}(\mathbf{k}) = \int d^3x\;e^{i\mathbf{k}\cdot\mathbf{x}}\,\dot{\tau}(\mathbf{x})\,S^{(0)}(\mathbf{x},\widehat{\gamma}), \qquad (14)$$

which has the solution in real space:

$$\Delta^{(1)}(\widehat{\gamma},\mathbf{x},\eta_0) = \int_{\eta_i}^{\eta_0}d\eta'\,\dot{\tau}(\mathbf{x}')\,S^{(0)}(\mathbf{x}',\widehat{\gamma}) \qquad (15)$$

where $\mathbf{x}' = \mathbf{x}+\widehat{\gamma}(\eta_0-\eta')$. This solution can be derived with a Green function approach, or simply verified by substitution. It can be taken to higher order, if desired, by getting $\Delta^{(2)}$ from $S^{(1)}$, etc. Note that for the Doppler effect equation 15 is exact, because $v_e$ (unlike, e.g., $\Delta$) receives no corrections due to the optical depth. We can now argue that the Doppler effect, due to the $\mathbf{v}_e$ term in equation 15, is the most important inhomogeneous effect. The dominance of this effect is due to the fact that by the time of reionization, velocities have grown substantially as they react to the gravitational field. On the relevant length scales, the rms peculiar velocities are about $10^{-3}$ in units of the speed of light. In contrast, the damping of the anisotropy is due to the $\Delta$ term in $S$, which has not grown from its primordial value of $\sim 10^{-5}$. Similarly, polarization is sourced by the quadrupole moment, which is also still at the $10^{-5}$ level at the epoch of reionization.
Therefore we expect these contributions to be down by two orders of magnitude in amplitude, or four in power. We can break up the correlation function into components according to the expansion in optical depth:

$$C(\theta) = C^{(0)}(\theta)+2C^{(01)}(\theta)+C^{(1)}(\theta) \qquad (16)$$

where $C^{(0)}$, $C^{(1)}$ and $C^{(01)}$ are the correlations between the two zeroth order terms, between the two first order corrections, and between the first order and zeroth order terms, respectively. For the Doppler effect, the (exact) correction to the homogeneous case comes entirely from the $C^{(1)}$ term, which is equal to

$$C^{(1)}(\theta) = \int_{\eta_i}^{\eta_0}d\eta_1\int_{\eta_i}^{\eta_0}d\eta_2\,\left\langle\left(\dot{\tau}\,\widehat{\gamma}_1\cdot\mathbf{v}_e\right)_1\left(\dot{\tau}\,\widehat{\gamma}_2\cdot\mathbf{v}_e\right)_2\right\rangle \qquad (17)$$

where the numeric subscript on the quantities in parentheses means evaluation at, e.g., $(\mathbf{x}+\widehat{\gamma}_1(\eta_0-\eta_1),\eta_1)$. From equation 17 we see that the relevant quantity for the CMB 2-point autocorrelation function is the two-point function of the product $\dot{\tau}\,\widehat{\gamma}\cdot\mathbf{v}_e$, integrated over the lines of sight. As pointed out by Knox et al. (1998), it is this latter 2-point function which must be calculated from either simulations or analytical models of the reionization process.

### 3.3 Results from Simple Models

A very simple toy model was used by Gruzinov & Hu (1998, hereafter GH) and by Knox et al. (1998, hereafter KSD) for investigating the effect of IHR on the power spectrum. In this model, independent sources turn on randomly and instantaneously ionize a sphere with comoving radius $R$, which then remains ionized. The rate of source creation is such that the mean ionization fraction is zero prior to $z_{\mathrm{ion}}+\delta z$ and then rises linearly with decreasing redshift to unity at $z_{\mathrm{ion}}$. GH derived the following approximate expression for the resulting power spectrum:

$$\frac{l^2C_l}{2\pi} = A\,l^2\theta_0^2\,e^{-l^2\theta_0^2/2}, \qquad \theta_0 \equiv \frac{R}{\eta_0-\eta_{\mathrm{ion}}}, \qquad A = \frac{\sqrt{2\pi}}{36}\,\tau_0^2\,\langle v^2\rangle\,\frac{R}{\eta_0}\,\delta z\,\left(1+z_{\mathrm{ion}}\right)^{3/2} \qquad (18)$$

where $\langle v^2\rangle$ is the variance of the peculiar velocities, $\tau_0 \equiv \sigma_T\,\eta_0\,n_0$, and $n_0$ is the number density of free electrons today. The qualitative shape of this power spectrum is simple to understand. On large scales it is the shape of a white noise power spectrum ($C_l =$ constant), due to the lack of correlations between the patches. At small scales there is an exponential cutoff corresponding to the angular extent of the patches themselves, since they have no internal structure. The exact calculations for this model by KSD result in a very similar shape, but with the power reduced by a factor of two and the location of the peak shifted to larger $l$ by 30%.
Thus we can write down very simple, and accurate, equations for the height and location of the maximum, which are those of GH with the prefactors modified to fit the results of KSD:

$$\left(\frac{l^2C_l}{2\pi}\right)_{\mathrm{max}} = \frac{\sqrt{2\pi}}{36\,e}\,\tau_0^2\,\langle v^2\rangle\,\frac{R}{\eta_0}\,\delta z\,\left(1+z_{\mathrm{ion}}\right)^{3/2}, \qquad \mathrm{and} \qquad l_{\mathrm{max}} = \frac{1.8\,\eta_0}{R}\left[1-\left(1+z_{\mathrm{ion}}\right)^{-1/2}\right]. \qquad (19)$$

Note that for sufficiently large $R$, velocities are not coherent within a patch and equations 18 and 19 break down; cancellations once again become important. However, this only happens for $R \gtrsim 10$ Mpc, since the velocity correlation function only becomes negative for separations greater than about $30\,h^{-1}$ Mpc (KSD). As long as $R$ is less than tens of Mpc, the power scales linearly with $R$. Aghanim et al. (1996, hereafter "A96") used a slightly more complicated model in which there was a distribution of patch sizes, due to an assumed distribution of mini–quasar luminosities. The larger patches dominate at low $l$ and the cutoff at higher $l$ is much softer than in the single patch case, due to the existence of the smaller patches. For an order-of-magnitude calculation, their largest patches have $R \approx 10$ Mpc and reionization proceeds from $z_{\mathrm{ion}}+\delta z = 10.6$ to $z_{\mathrm{ion}}=5.6$. For this simplification of the A96 model we find the curve in figure 3, which has $\left(\frac{l^2C_l}{2\pi}\right)_{\mathrm{max}} \approx 1\times 10^{-12}$ and $l_{\mathrm{max}} \approx 1300$. This amplitude is just large enough to possibly be a source of systematic error in the determination of cosmological parameters from MAP data, which has sensitivity out to $l \approx 1000$. To be more precise, MAP can determine the power in a broad band centered on $l=800$ with width $\delta l = 400$ to about 0.5%. Thus, given the primary signal assumed in the figure, we may expect signals on the order of $5\times 10^{-13}$ to be significant contaminants. This IHR signal from a 10 Mpc patch would be an even more noticeable effect in the Planck data, since Planck has sensitivity out to about $l \approx 2000$ and can determine the power in a broad band centered on $l=1500$ with width $\delta l = 1000$ to about 0.2%.
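To make the scalings of equations (18)–(19) concrete, here is a minimal sketch; the value $\eta_0 \sim 10^4$ Mpc used in the example is an assumed round number for the comoving horizon today, not a quantity taken from the text:

```python
import numpy as np

def patchy_spectrum(l, R, eta_0, eta_ion, tau_0, v2, dz, z_ion):
    """GH approximation to l^2 C_l / (2 pi), equation (18)."""
    theta_0 = R / (eta_0 - eta_ion)
    A = (np.sqrt(2.0 * np.pi) / 36.0) * tau_0**2 * v2 \
        * (R / eta_0) * dz * (1.0 + z_ion)**1.5
    return A * (l * theta_0)**2 * np.exp(-(l * theta_0)**2 / 2.0)

def l_max(R, eta_0, z_ion):
    """Peak location with the KSD-corrected prefactor, equation (19)."""
    return (1.8 * eta_0 / R) * (1.0 - (1.0 + z_ion)**-0.5)

# A96-like patches: R = 10 Mpc, z_ion = 5.6, assumed eta_0 ~ 1e4 Mpc.
print(l_max(R=10.0, eta_0=1.0e4, z_ion=5.6))   # O(10^3), cf. l_max ~ 1300
```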
The quasar luminosity function (QLF) is relatively well measured in the optical below redshift $z \approx 5$, with a typical average optical luminosity of $\sim 10^{46}\ \mathrm{erg\,s^{-1}}$. In order to estimate the expected typical HII patch size around the observed quasars, one needs to know their lifetime (or "duty cycle"). Taking the lifetime of quasars to be universal, and near the Eddington time ($\sim 5\times 10^7$ yr), and further ignoring recombinations, one obtains a comoving radius of $\sim$10–20 Mpc. As seen above, this would be large enough for its effects on the CMB to be detectable by Planck and maybe MAP. We emphasize, however, that the typical quasar luminosity at $z=10$ can be much lower than at $z \approx 3$. In hierarchical structure formation models, the nonlinear mass–scale is $\sim 3$ orders of magnitude smaller at $z=10$ than at $z \approx 3$. With a linear scaling between quasar luminosity and halo mass, this would reduce the average patch–size by a factor of $\sim 10$. In addition, the duty cycles of quasars are not well established. In theoretical models where the QLF is derived by associating quasars with collapsed dark halos (in the Press–Schechter formalism), a typical lifetime of $5\times 10^7$ yr would generically lead to an overestimate of the number of observed quasars (e.g. Small & Blandford 1992). To avoid these overpredictions, one needs to assume that either (1) the quasar shines at a small fraction of the Eddington luminosity, or (2) the efficiency of black hole formation, expressed as $M_{\mathrm{bh}}/M_{\mathrm{halo}}$, is $\sim 2$ orders of magnitude lower than suggested by nearby galaxies (Magorrian et al. 1998), or (3) the black hole masses grow in an optically inactive phase (e.g. Haehnelt, Natarajan & Rees 1998). Alternatively, one can fit the observed QLF by assuming that the quasar lifetime is $\sim$100 times shorter than the Eddington time (Haiman & Loeb 1997). In the latter case, the HII patch sizes would be reduced to $R \lesssim 1$ Mpc (i.e. reionization would correspond to a larger number of smaller patches). According to equation 19, patches smaller than a few Mpc result in power spectra with smaller maxima at values of $l$ that are too high, well outside the range of sensitivity of MAP and Planck. Even for patches as large as 1 Mpc, one can see that, at least for reionization at the extreme low end of the redshift range, their effect is negligible (see figure 3). A detection of a peaked IHR signal in the CMB could therefore have important implications for quasar evolution theories, e.g. by ruling out models that lead to short duty cycles and small patch sizes. A detection might also prove that reionization was caused by mini–quasars, rather than mini–galaxies, since for the mini–galaxy models the patch sizes are expected to be only a few hundred kpc. However, in light of the small expected patch sizes, it seems unlikely that MAP or Planck will reach the required angular resolution for such a detection. The above calculations of the IHR power spectrum all assumed that the ionized regions are uncorrelated. However, the pattern of density fluctuations will affect the pattern of ionized regions, and therefore correlations in the density field should give rise to correlations in the ionization fraction field. Determining what these correlations should be is complicated by the fact that the density field affects the ionization field in two different ways: the sources are most likely located in high-density regions, and the recombination rates are highest in the densest regions. To take the former effect into account, KSD gave the ionization field the same correlation structure as that of the "high peaks" of the density field. These high peaks are regions where the fractional density perturbation, $\delta = \frac{\delta\rho}{\rho}$, smoothed on the appropriate scale, exceeds the critical value for spherical collapse. The smoothing scale was taken to be that corresponding to $10^8\,\mathrm{M}_\odot\,[(1+z)/11]^{-3/2}$, since objects below this size will not be able to cool sufficiently to fragment and form stars (Haiman, Rees & Loeb 1997). One further assumption was that the sources ionized a region $E$ times larger than that from which they collapsed, where the efficiency factor is time-dependent and given by (Haiman & Loeb 1997; Haiman & Loeb 1998a) $E = 7\times 10^5\,(\eta/\eta_0)^6$. For this rather high choice of the efficiency, reionization is complete at $z_{\mathrm{ion}}=26$. The resulting spectrum is shown as the dashed line in figure 3.
The correlations between the patches (which are individually only hundreds of kiloparsecs in radius) drastically alter the low-$l$ shape. Even given the high redshift of reionization in this scenario, the corresponding uncorrelated model of equation 18 would predict orders of magnitude less power at $l \approx 10^3$ to $10^4$. If this signal were in the data, but not in one's model of the data, it would lead to a bias in the determination of cosmological parameters. KSD estimated the bias that would occur in a nine-parameter fit, assuming the primary signal was that of standard cold dark matter. For Planck, the systematic error in the estimate of the baryon density was equal to the statistical error. For all the other parameters the systematic error was less than the statistical error.

### 3.4 Discussion

If the mini–quasar scenario is correct, with bright, long-lived quasars, then the Doppler effect will lead to a significant contamination of the Planck data, and possibly a marginally important one for MAP data. For the more likely mini–galaxy scenario, if one ignores the correlations in the ionized patches, the contamination is completely irrelevant for both MAP and Planck. However, correlations may indeed be important, and in one attempt to take them into account, we have seen that for an extreme mini–galaxy scenario the contamination is marginally significant for Planck. Work by Oh (1999) supports the idea that the ionized patches should be correlated. Oh has modeled the ionizing sources as characterized by a mean separation (given by Press-Schechter theory), an isotropic attenuation length which specifies how far the ionizing radiation propagates, and a power-law correlation function (appropriate for these highly-biased objects). Placing sources in a box and working out the resulting flux densities, he can then calculate the correlation function of the flux density. It is indeed correlated, with significant (0.2) correlation at comoving separations of 10 Mpc. Although a clumping factor for the IGM from Gnedin & Ostriker (1997) is used to estimate the attenuation length, all propagation from the sources is taken to be isotropic, and hence this work does not yet include the influence of shielding and shadowing. As one can see in Fig. 3, the contribution from IHR may be sub-dominant to that from the Ostriker-Vishniac (O-V) effect, as it is for all three models of IHR shown. The O-V contribution here is calculated using the formalism of Jaffe & Kamionkowski (1998, hereafter "JK98"). One must keep in mind that, as a second-order effect, it is highly sensitive to the normalization. The curve here is COBE-normalized ($\sigma_8 = 1.2$); a cluster normalization of $\sigma_8 = 0.6$ would reduce the $C_l$ by a factor of 16. It is also calculated for the very early reionization redshift (and resulting high optical depth) of $z_{\mathrm{ion}}=26$. However, the amplitude is not very sensitive to $\tau$, because much of the effect actually comes from more recent times: the dropping number density of electrons must compete with the growth in the density contrasts and peculiar velocities (Hu & White 1995; JK98). Of course, the separation into IHR and O-V is artificial, and efforts to understand the contribution from reionization are surely going to benefit from treating them simultaneously. Indeed, the 2-point function of $\dot{\tau}\,\widehat{\gamma}\cdot\mathbf{v}_e$, measured from a simulation, would include both effects.
Gravity is the only other late-time, frequency-independent influence on the CMB photons, causing lensing and the Rees-Sciama effect (Rees & Sciama 1968). The angular power spectrum from the Rees-Sciama effect has been calculated by Seljak (1995) and is substantially sub-dominant to both the primary spectrum and the O-V effect at all angular scales. Gravitational lensing results in a smearing of the primary CMB angular power spectrum and has been calculated by Seljak (1996). Uncertainties in this calculation may indeed be important for any attempt to recover the contribution from the IHR and Ostriker-Vishniac effects.

## 4 Conclusions

The principal unanswered questions about reionization are: what type of sources caused it, and at what redshift did it occur? These questions have been addressed both theoretically and observationally. From a theoretical point of view, the two leading candidate sources are an early generation of stars ("mini–galaxies"), or massive black holes in small halos ("mini–quasars"). It is possible to estimate the efficiencies with which these objects form in the earliest collapsed halos: from the metals in the Ly$\alpha$ forest, and from the evolution of the quasar luminosity function at $z<5$, respectively. However, the uncertainties in these efficiencies are still too large to allow definite predictions. The expected reionization redshift, depending on the type of source, cosmology, power spectrum, and a combination of the efficiency factors, is $7 < z \lesssim 20$ in homogeneous models. A further complication is the inhomogeneous nature of reionization: the sources of ionizing radiation are likely clustered, and embedded in complex dense regions, such as the filaments and sheets seen in 3D simulations. At the same time, the gas to be ionized likely has significant density fluctuations. Further theoretical progress on the problem will likely come from 3D simulations. Such simulations must be able to follow the propagation of a non-spherical ionization front into a medium that has large density fluctuations, as well as opaque clumps of absorbing material. In a homogeneous medium populated by randomly distributed sources, the background radiation would have negligible fluctuations (e.g. Zuo 1992). However, because the density inhomogeneities absorb the UV flux of the ionizing sources, and re-emit diffuse ionizing photons as they recombine, the ionizing background will likely become significantly inhomogeneous and anisotropic (Norman et al. 1998). The first numerical algorithms to deal in full detail with these problems have been proposed by Abel, Norman & Madau (1998) and Razoumov & Scott (1998) (see also Gnedin 1998). The reionization redshift can also be constrained observationally, with a resulting uncertainty that is comparable to the theoretical range: $6 < z \lesssim 40$. On both ends of this redshift range, observational progress is likely in the near future. If new Ly$\alpha$ emitters are discovered at higher redshifts, this would improve the lower bound. Conversely, as more CMB data are collected, the constraint on the electron scattering optical depth will improve, tightening the upper bound. We have seen how the optical depth to Thomson scattering, and hence the epoch of reionization, can be constrained by determining the amount of damping in the CMB temperature anisotropy, and by detection of the polarization contribution at very large angular scales.
The spatial inhomogeneity of the reionization process gives us a further opportunity to probe the epoch of reionization in more detail. For this probe to become reality, further development of theoretical predictions is required, as well as more sensitive measurements at arcminute and smaller angular scales. Fortunately, the inhomogeneity of reionization is unlikely to spoil our ability to interpret the primary CMB anisotropy, to be measured with exquisite precision over the next decade, although this may not be the case if reionization did indeed occur via mini–quasars. For the mini–galaxy models, even when reionization occurs at the very high end of the allowed redshift range, very little signal is produced at the relevant angular scales. Further progress in these predictions is likely to be motivated by the desire to understand the spectrum at $l \gtrsim 3000$ as a means of probing the end of the dark ages, rather than at $l \lesssim 3000$, as a possible contaminant of the primary signal.

###### Acknowledgements.

We are grateful to our collaborators S. Dodelson, A. Loeb, M. Rees, and R. Scoccimarro for contributing to our understanding of this subject, to A. Jaffe for supplying the "O-V" effect curve in Fig. 3, and to M. Norman for supplying Fig. 2. ZH was supported at Fermilab by the DOE and by NASA grant NAG 5-7092.

References

Abel, T., Norman, M. L., & Madau, P. 1998, ApJL, submitted, preprint astro-ph/9812151
Aghanim, N., Désert, F. X., Puget, J. L., & Gispert, R. 1996, A&A, 311, 1
Arons, J., & Wingert, D. W. 1972, ApJ, 177, 1
Baltz, E. A., Gnedin, N. Y., & Silk, J. 1998, ApJL, 493, 1
Barkana, R., & Loeb, A. 1999, ApJ, submitted, preprint astro-ph/9901114
Baugh, C. M., Cole, S., Frenk, C. S., & Lacey, C. G. 1998, ApJ, 498, 504
Bouchet, F. R., Prunet, S., & Sethi, K. S. 1998, preprint astro-ph/9809353
Carr, B. J., Bond, J. R., & Arnett, W. D. 1986, ApJ, 306, L51
Conti, A., Kennefick, J. D., Martini, P., & Osmer, P. S. 1999, AJ, in press, preprint astro-ph/9808020
Couchman, H. M. P., & Rees, M. J. 1986, MNRAS, 221, 53
Dodelson, S., & Jubas, J. 1995, ApJ, 439, 503
Elvis, M., Wilkes, B. J., McDowell, J. C., Green, R. F., Bechtold, J., Willner, S. P., Oey, M. S., Polomski, E., & Cutri, R. 1994, ApJS, 95, 1
Eisenstein, D. J., Hu, W., & Tegmark, M. 1998, preprint astro-ph/9807130
Fabian, A. C., & Barcons, X. 1992, ARA&A, 30, 429
Fardal, M. A., Giroux, M. L., & Shull, J. M. 1998, AJ, 115, 2206
Fixsen, D. J., Cheng, E. S., Gales, J. M., Mather, J. C., Shafer, R. A., & Wright, E. L. 1996, ApJ, 473, 576
Fixsen, D. J., Dwek, E., Mather, J. C., Bennett, C. L., & Shafer, R. A. 1998, ApJ, in press, preprint astro-ph/9803021
Fukugita, M., & Kawasaki, M. 1994, MNRAS, 269, 563
Gibilisco, M. 1996, Int. J. Mod. Phys., A11, 5541, preprint astro-ph/9611227
Gnedin, N. Y. 1998, in Proc. of the 19th Texas Symposium on Relativistic Astrophysics and Cosmology, Paris, France, Dec. 14-18, 1998, eds. J. Paul, T. Montmerle, & E. Aubourg (CEA Saclay), in press
Gnedin, N. Y., & Ostriker, J. P. 1997, ApJ, 486, 581
Griffiths, L. M., Barbosa, D., & Liddle, A. R. 1998, MNRAS, submitted, preprint astro-ph/9812125
Gruzinov, A., & Hu, W. 1998, ApJ, in press, preprint astro-ph/9803188
Gunn, J. E., & Peterson, B. A. 1965, ApJ, 142, 1633
Haehnelt, M. G., Natarajan, P., & Rees, M. J. 1998, MNRAS, 300, 817
Haiman, Z., Abel, T., & Rees, M. J. 1999, ApJ, to be submitted
Haiman, Z., & Loeb, A. 1997, ApJ, 483, 21
Haiman, Z., & Loeb, A. 1998a, ApJ, 503, 505
Haiman, Z., & Loeb, A. 1998b, ApJ, in press, preprint astro-ph/9807070
Haiman, Z., & Loeb, A. 1998c, in Proc. of the 9th Annual October Astrophysics Conference, After the Dark Ages: When Galaxies Were Young, College Park, MD, October 1998, preprint astro-ph/9811395
Haiman, Z., Madau, P., & Loeb, A. 1999, ApJ, in press, preprint astro-ph/9805258
Haiman, Z., & Spaans, M. 1998, ApJ, in press, preprint astro-ph/9809223
Haiman, Z., Rees, M. J., & Loeb, A. 1997, ApJ, 476, 458
Hauser, M. G. 1998, ApJ, 508, 25
Hu, E. M., Cowie, L. L., & McMahon, R. G. 1998, ApJ, 502, 99
Hu, W., & White, M. 1995, A&A, 315, 33
Hu, W., & White, M. 1997, ApJ, 479, 568
Hu, W., Scott, D., & Silk, J. 1994, Phys. Rev. D, 49, 648
Jaffe, A. H., & Kamionkowski, M. 1998, Phys. Rev. D, 58, 043001
Kaiser, N. 1984, ApJ, 282, 374
Knox, L., Scoccimarro, R., & Dodelson, S. 1998, Phys. Rev. Lett., 81, 2004
Knox, L. 1998, preprint astro-ph/9811358
Loeb, A., & Haiman, Z. 1997, ApJ, 490, 571
Ma, C.-P., & Bertschinger, E. 1995, ApJ, 455, 7
Madau, P., Haardt, F., & Rees, M. J. 1998, ApJ, submitted, preprint astro-ph/9809058
Madau, P., Meiksin, A., & Rees, M. J. 1997, ApJ, 475, 429
Magorrian, J., et al. 1998, AJ, 115, 2285
Meiksin, A., & Madau, P. 1993, ApJ, 412, 34
Miralda-Escudé, J. 1998, ApJ, 501, 15
Miralda-Escudé, J., Haehnelt, M., & Rees, M. J. 1999, ApJ, submitted, preprint astro-ph/9812306
Miralda-Escudé, J., & Rees, M. J. 1994, MNRAS, 266, 343
Nath, B. B., & Biermann, P. L. 1993, MNRAS, 265, 241
Norman, M. L., Paschos, P., & Abel, T. 1998, in Proc. of H$_2$ in the Early Universe, Florence, Italy, eds. E. Corbelli, D. Galli, & F. Palla, Memorie della Società Astronomica Italiana, p. 455
Oh, S. P. 1999, work in progress
Ostriker, J. P., & Vishniac, E. T. 1986, ApJL, 306, 51
Ostriker, J. P., & Gnedin, N. Y. 1996, ApJ, 472, 603
Ostriker, J. P., & Cowie, L. L. 1981, ApJL, 243, 127
Pei, Y. C. 1995, ApJ, 438, 623
Prunet, S., Sethi, K. S., & Bouchet, F. R. 1998, preprint astro-ph/9803160
Press, W. H., & Schechter, P. L. 1974, ApJ, 187, 425
Puget, J.-L., Abergel, A., Bernard, J.-P., Boulanger, F., Burton, W. B., Desert, F.-X., & Hartmann, D. 1996, A&A, 308, L5
Rauch, M., Miralda-Escudé, J., Sargent, W. L. W., Barlow, T. A., Weinberg, D. H., Hernquist, L., Katz, N., Cen, R., & Ostriker, J. P. 1997, ApJ, 489, 7
Razoumov, A., & Scott, D. 1998, MNRAS, submitted, preprint astro-ph/9810425
Rees, M. J., & Sciama, D. W. 1968, Nature, 217, 511
Reimers, D., Köhler, S., Wisotzki, L., Groote, D., Rodriguez-Pascual, P., & Wamsteker, W. 1997, A&A, 326, 489
Reisenegger, A., & Miralda-Escudé, J. 1995, ApJ, 449, 476
Scalo, J. M. 1986, Fundamentals of Cosmic Physics, 11, 1
Sciama, D. W. 1993, Modern Cosmology and the Dark Matter Problem (Cambridge: Cambridge University Press)
Scott, D., & Rees, M. J. 1990, MNRAS, 247, 510
Seljak, U. 1995, preprint astro-ph/9506048
Seljak, U. 1996, ApJ, 463, 1
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Shapiro, P. R., Raga, A. C., & Mellema, G. 1997, in Structure and Evolution of the IGM from QSO Absorption Line Systems, 13th IAP Colloquium, eds. P. Petitjean & S. Charlot (Paris: Editions Frontières), in press, preprint astro-ph/9710210
Shapiro, P. R., & Giroux, M. L. 1987, ApJ, 321, L107
Shapiro, P. R., Giroux, M. L., & Babul, A. 1994, ApJ, 427, 25
Shapiro, P. R., Giroux, M. L., & Kang, H. 1987, in High Redshift and Primeval Galaxies, eds. J. Bergeron, D. Kunth, B. Rocca-Volmerange, & J. Tran Thanh Van (Paris: Editions Frontières), pp. 501-515
Shaver, P., Windhorst, R. A., Madau, P., & de Bruyn, A. G. 1999, A&A, submitted, preprint astro-ph/9901320
Small, T. A., & Blandford, R. D. 1992, MNRAS, 259, 725
Songaila, A., & Cowie, L. L. 1996, AJ, 112, 335
Stebbins, A., & Silk, J. 1986, ApJ, 300, 1
Sunyaev, R. A. 1978, in Large-Scale Structure of the Universe, eds. M. S. Longair & J. Einasto (Dordrecht: Reidel), p. 393
Tegmark, M., Silk, J., & Blanchard, A. 1994, ApJ, 420, 484
Thompson, R., et al. 1998, preprint astro-ph/9810285
Tytler, D., et al. 1995, in QSO Absorption Lines, ESO Astrophysics Symposia, ed. G. Meylan (Heidelberg: Springer), p. 289
Vishniac, E. T. 1987, ApJ, 322, 597
Wadsley, J. W., Hogan, C. J., & Anderson, S. F. 1998, in Proc. of After the Dark Ages: When Galaxies were Young (the Universe at $2<z<5$), 9th Annual October Astrophysics Conference, Maryland, preprint astro-ph/9812239
Weinberg, D. H., Miralda-Escudé, J., Hernquist, L., & Katz, N. 1997, ApJ, 490, 564
Weymann, R. J., Stern, D., Bunker, A., Spinrad, H., Chaffee, F. H., Thompson, R. I., & Storrie-Lombardi, L. J. 1998, ApJL, 505, 95
Zaldarriaga, M., Spergel, D., & Seljak, U. 1997, ApJ, 488, 1
Zhang, Y., Meiksin, A., Anninos, P., & Norman, M. L. 1998, ApJ, 495, 63
Zuo, L. 1992, MNRAS, 258, 36
## 1 Chiral Unitary Approach

Chiral perturbation theory ($\chi PT$) has proved to be a very suitable instrument to implement the basic dynamics and symmetries of the meson meson and meson baryon interaction at low energies. The essence of the perturbative technique, however, precludes the possibility of tackling problems where resonances appear, hence limiting tremendously the realm of applicability. The method that we expose naturally leads to low lying resonances and allows one to face many problems so far intractable within $\chi PT$. The method incorporates new elements: 1) unitarity is implemented exactly; 2) it can deal with the coupled channels allowed with pairs of particles from the octets of stable pseudoscalar mesons and ($\frac{1}{2}^+$) baryons; 3) a chiral expansion in powers of the external four-momentum of the lightest pseudoscalars is done for Re $T^{-1}$, instead of the $T$ matrix itself, which is the case in standard $\chi PT$. We sketch here the steps involved in this expansion for the meson meson interaction. One starts from a $K$ matrix approach in coupled channels, where unitarity is automatically fulfilled, and writes

$$T^{-1} = K^{-1}-i\sigma, \qquad (1)$$

where $T$ is the scattering matrix, $K$ a real matrix in the physical region and $\sigma$ a diagonal matrix which measures the phase space available for the intermediate states,

$$\sigma_{nn}(s) = \frac{k_n}{8\pi\sqrt{s}}\,\theta\left(s-(m_{1n}+m_{2n})^2\right), \qquad (2)$$

where $k_n$ is the on-shell CM momentum of the meson in the intermediate state $n$ and $m_{1n}$, $m_{2n}$ are the masses of the two mesons in the state $n$. The meson meson states considered here are $K\bar{K}$, $\pi\pi$, $\pi\eta$, $\eta\eta$, $\pi K$, $\pi\bar{K}$, $\eta K$, $\eta\bar{K}$. Since $K$ is real, from eq. (1) one sees that $K^{-1} = \mathrm{Re}\,T^{-1}$. In non-relativistic Quantum Mechanics, in the scattering of a particle from a potential, it is possible to expand $K^{-1}$ in powers of the momentum of the particle at low energies as follows (in the s-wave for simplicity)

$$\mathrm{Re}\,T^{-1} \equiv K^{-1} = \sigma\,\mathrm{ctg}\,\delta \simeq -\frac{1}{a}+\frac{1}{2}r_0k^2, \qquad (3)$$

with $k$ the particle momentum, $a$ the scattering length and $r_0$ the effective range. The ordinary $\chi$PT expansion up to $O(p^4)$ is given by

$$T = T_2+T_4, \qquad (4)$$

where $T_2$, which is $O(p^2)$, is obtained from the lowest order chiral Lagrangian, $L^{(2)}$, whereas $T_4$ contains one loop diagrams in the s, t, u channels, constructed from the lowest order Lagrangian, tadpoles, and the finite contribution from the tree level diagrams of the $L^{(4)}$ Lagrangian. This last contribution, after a suitable renormalization, is just a polynomial, $T^{(p)}$. Our $T$ matrix, starting from eq. (1), is given by

$$T = [\mathrm{Re}\,T^{-1}-i\sigma]^{-1} \equiv T_2\,[T_2\,\mathrm{Re}\,T^{-1}\,T_2-iT_2\,\sigma\,T_2]^{-1}\,T_2, \qquad (5)$$

where, in the last step, we have multiplied by $T_2T_2^{-1}$ on the left and $T_2^{-1}T_2$ on the right for technical reasons. Using standard $\chi$PT we obtain the following expansion up to order $O(p^4)$,

$$T_2\,\mathrm{Re}\,T^{-1}\,T_2 = T_2-\mathrm{Re}\,T_4+\dots \qquad (6)$$

and hence, recalling that $\mathrm{Im}\,T_4 = T_2\,\sigma\,T_2$, one obtains

$$T = T_2\,[T_2-T_4]^{-1}\,T_2, \qquad (7)$$

which is the coupled channel generalization of the inverse amplitude method of .
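The algebra behind equation (7) is easy to verify numerically. The following sketch (arbitrary toy matrices, not physical amplitudes) checks that if $T_2$ is real and $\mathrm{Im}\,T_4 = T_2\,\sigma\,T_2$, then $T = T_2[T_2-T_4]^{-1}T_2$ satisfies the unitarity condition $\mathrm{Im}\,T^{-1} = -\sigma$ implied by equation (1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2                                             # two coupled channels
T2 = rng.normal(size=(n, n)); T2 = T2 + T2.T      # real, symmetric "T_2"
reT4 = rng.normal(size=(n, n)); reT4 = reT4 + reT4.T
sigma = np.diag(rng.uniform(0.1, 1.0, size=n))    # diagonal phase-space matrix
T4 = reT4 + 1j * (T2 @ sigma @ T2)                # Im T_4 = T_2 sigma T_2

T = T2 @ np.linalg.inv(T2 - T4) @ T2              # equation (7)
print(np.allclose(np.linalg.inv(T).imag, -sigma)) # True: Im T^{-1} = -sigma
```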
The eight $`L_i`$ coefficients from $`L^{(4)}`$ are then fitted to the existing meson meson data on phase shifts and inelasticities up to 1.2 GeV, where 4 meson states are still unimportant. This procedure has been carried out in . The resulting $`L_i`$ parameters are compatible with those used in $`\chi PT`$. At low energies the $`O(p^4)`$ expansion for $`T`$ of eq. (7) is identical to that in $`\chi PT`$. However, at higher energies the nonperturbative structure of eq. (7), which implements unitarity exactly, allows one to extend the information contained in the chiral Lagrangians to much higher energy than in ordinary $`\chi `$PT, which is up to about $`\sqrt{s}\simeq 400`$ MeV. Indeed it reproduces the resonances present in the L = 0, 1 partial waves. b) A technically simpler and equally successful additional approximation is generated by ignoring the crossed channel loops and tadpoles and reabsorbing them in the $`L_i`$ coefficients, given the weak structure of these terms in the physical region. The fit to the data with the new $`\widehat{L}_i`$ coefficients reproduces the whole meson meson sector, with the position, widths and partial decay widths of the $`f_0(980)`$, $`a_0(980)`$, $`\kappa (900)`$, $`\rho (770)`$, $`K^{*}(900)`$ resonances in good agreement with experiment . A cut off regularization is used in for the loops in the s-channel. By taking the loop function with two intermediate mesons $$G_{nn}(s)=i\int \frac{d^4q}{(2\pi )^4}\frac{1}{q^2-m_{1n}^2+iϵ}\frac{1}{(P-q)^2-m_{2n}^2+iϵ},$$ (8) where $`P`$ is the total meson meson momentum, one immediately notices that $$\text{Im}G_{nn}(s)=-\sigma _{nn}.$$ (9) Hence, we can write $$\text{Re}T_4=T_2\text{Re}GT_2+T_4^{(p)},$$ (10) where $`\text{Re}G`$ depends on the cut off chosen for $`|\vec{q}|`$. This means that the $`\widehat{L}_i`$ coefficients of $`T_4^{(p)}`$ depend on the cut off choice, much as the $`L_i`$ coefficients in $`\chi PT`$ depend upon the regularization scale. c) For the L = 0 sector (also in L = 0, S = $`-1`$ in the meson baryon interaction) a further technical simplification is possible. In these cases it is possible to choose the cut off such that, given the relation between $`\text{Re}G`$ and $`T_4^{(p)}`$, this latter term is very well approximated by $`\text{Re}T_4=T_2\text{Re}GT_2`$. This is possible in those cases because of the predominant role played by the unitarization of the lowest order $`\chi PT`$ amplitude, which by itself leads to the low lying resonances, and because other genuine QCD resonances appear at higher energies. In such a case eq. (5) becomes $$T=T_2[T_2-T_2GT_2]^{-1}T_2=[1-T_2G]^{-1}T_2,$$ (11) or, equivalently, $$T=T_2+T_2GT,$$ (12) which is a Bethe-Salpeter equation with $`T_2`$ and $`T`$ factorized on shell outside the loop integral, with $`T_2`$ playing the role of the potential. This option has proved to be successful in the L = 0 meson meson sector in and in the L = 0, S = $`-1`$ meson baryon sector in . In the meson baryon sector with S = 0, given the disparity of the masses in the coupled channels $`\pi N`$, $`\eta N`$, $`K\mathrm{\Sigma }`$, $`K\mathrm{\Lambda }`$, the simple “one cut off approach” is not possible. In higher order Lagrangians are introduced, while in different subtraction constants (or equivalently different cut offs) in $`G`$ are incorporated in each of the former channels, leading in both cases to acceptable solutions when compared with the data.
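For orientation, a small numerical sketch of the cut-off regularized loop of eq. (8) after performing the $`q^0`$ integration (the standard one-loop reduction to a single three-momentum integral), checking eq. (9); the masses, the cut off and the small numerical width are illustrative choices, not fitted values:

```python
import numpy as np

# Equal-mass pi-pi loop function with a three-momentum cut-off (illustrative).
m, qmax = 139.57, 900.0                      # MeV
def G(s, eps=1e3):                           # eps: small numerical width, MeV^2
    q = np.linspace(1e-3, qmax, 40000)
    w = np.sqrt(m**2 + q**2)                 # on-shell meson energies
    return np.trapz(q**2 / (w * (s - (2*w)**2 + 1j*eps)), q) / (2*np.pi**2)

s = 500.0**2                                 # sqrt(s) = 500 MeV
k = np.sqrt(s/4 - m**2)                      # on-shell CM momentum
print(G(s).imag, -k/(8*np.pi*np.sqrt(s)))    # both ~ -0.0165: Im G = -sigma, eq. (9)
```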
## 2 Applications for processes involving pairs of mesons. Given the shortness of space we shall not show results on the meson meson and meson baryon scattering that can be found in , together with all other technical details. Instead, we make a short summary of applications of these ideas to other processes with results for the latest developments. One of the applications in the meson meson sector is the study of the $`\gamma \gamma \to \pi ^+\pi ^{-}`$, $`\pi ^0\pi ^0`$, $`K^+K^{-}`$, $`K^0\overline{K^0}`$, $`\pi \eta `$ reactions. The $`\gamma \gamma \to \pi ^+\pi ^{-}`$, $`\pi ^0\pi ^0`$ reaction has been one of the standard places to test $`\chi PT`$ , with the obvious limitations to small energies. The new techniques have allowed one to extend the calculations up to about $`\sqrt{s}=1.4`$ GeV and to include the $`K^+K^{-}`$, $`K^0\overline{K^0}`$, $`\pi \eta `$ channels that were not accessible with $`\chi PT`$. Results for all these channels are presented in , where a good agreement with experiment is found in all cases. The decay channels of the $`\varphi (1020)`$ resonance also offer a good testing ground for the chiral unitary theory. In the $`\varphi \to \gamma K^0\overline{K^0}`$ decay channel was studied, providing a calculation of the background in future experiments testing CP violation at $`DA\mathrm{\Phi }NE`$. In a recent work these ideas have been extended to study the $`\rho ^0\to \pi ^+\pi ^{-}\gamma `$, $`\pi ^0\pi ^0\gamma `$ and $`\varphi \to \pi ^+\pi ^{-}\gamma `$, $`\pi ^0\pi ^0\gamma `$ processes . The latter proceeds via $`K^+K^{-}`$ loops as shown in fig. 1. The transition amplitude for this process behaves as $$t^\gamma \propto eg_\varphi \stackrel{~}{G}(M_\mathrm{\Phi },M_I)t_{K^+K^{-}\to \pi ^+\pi ^{-}}(M_I),$$ (13) where $`g_\varphi `$ is the $`\varphi K^+K^{-}`$ decay coupling, $`M_I`$ the invariant mass of the $`\pi ^+\pi ^{-}`$ system and $`\stackrel{~}{G}`$ sums the loop functions of the three diagrams, which using arguments of gauge invariance one can prove to be finite. At the same time $`t_{K^+K^{-}\to \pi ^+\pi ^{-}}(M_I)`$, as shown in , is the on shell strong scattering matrix for $`K^+K^{-}\to \pi ^+\pi ^{-}`$, which is evaluated using the chiral unitary techniques in . The branching ratio $`\mathrm{\Gamma }_{\pi ^+\pi ^{-}\gamma }/\mathrm{\Gamma }_\varphi `$ obtained is $`2.6\times 10^{-4}`$ while the $`f_0`$ peak contributes $`0.76\times 10^{-4}`$. This latter value is compatible and close to the latest boundaries of $`B<17\times 10^{-4}`$ obtained in Novosibirsk . After completion of this work a publication has appeared , where the spectrum for $`\varphi \to \pi ^0\pi ^0\gamma `$ is measured. Taking into account that the rate for this process is one half of the one for $`\varphi \to \pi ^+\pi ^{-}\gamma `$ , the value for the branching ratio which we obtain is $`1.3\times 10^{-4}`$, which compares very well with the experimental value $`(1.14\pm 0.10\pm 0.12)\times 10^{-4}`$. The invariant mass distribution is also in agreement with the experimental one, within statistical and systematic errors, and this latter distribution clearly shows the $`f_0`$ peak predicted by theory. In fig. 2 we show our predicted results compared with the recent measurements of . The agreement found with the whole spectrum is a good support for the chiral unitary theory used. Another recent calculation is the reaction $`\gamma p\to f_0(a_0)p`$ close to threshold , which requires energies of $`E_\gamma \simeq 1.7`$ GeV or larger, and which can be studied in planned experiments at SPring8/RCNP or TJNAF.
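As a quick arithmetic restatement of the $`\varphi \to \pi \pi \gamma `$ comparison above (only the numbers quoted in the text are used):

```python
# phi -> pi0 pi0 gamma comes at half the phi -> pi+ pi- gamma rate (isospin factor 1/2).
br_charged = 2.6e-4                          # calculated branching ratio, see text
br_neutral = br_charged / 2                  # -> 1.3e-4
exp, stat, syst = 1.14e-4, 0.10e-4, 0.12e-4  # Novosibirsk measurement
sigma = (stat**2 + syst**2) ** 0.5           # combined experimental error
print(br_neutral, (br_neutral - exp) / sigma)   # agreement at the ~1 sigma level
```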
The scattering of mesons in a nuclear medium is an interesting problem and predictions of a quasibound $`\pi \pi `$ system just below threshold have been made . Interestingly, an enhancement in the mass distribution of two pions close to threshold is seen in the $`\pi ^{-}A\to \pi ^+\pi ^{-}A^{\prime }`$ reaction which is absent in the $`\pi ^+A\to \pi ^+\pi ^+A^{\prime }`$ one. The calculations done in using the chiral unitary model and medium modifications lead to an appreciable enhancement of the strength around the two pion threshold, although peaks corresponding to the bound state do not appear. The results are remarkably similar to those obtained using a meson exchange picture for the $`\pi \pi `$ interaction, yet imposing minimal chiral constraints , hence stressing the role of chiral symmetry in this process. Another interesting development along these lines is the work of , where, by taking the lowest order chiral Lagrangian and assuming that higher orders of the chiral Lagrangian are generated by the exchange of genuine resonances which survive in the large $`N_c`$ limit , the unitary approach, together with the comparison with data, allows one to distinguish between the genuine resonances corresponding to QCD states, surviving in the large $`N_c`$ limit, and scattering resonances which appear only as a consequence of the nature of the lowest order Lagrangian together with unitarity. These are only examples of problems which can be tackled with the new techniques. Given the broad range of applications of $`\chi PT`$ and the increased range of energies now accessible with the chiral unitary approach, one can envisage a fruitful field of applications of the new approach in strong, weak and electromagnetic processes at intermediate energies. ## 3 Applications in the meson baryon sector As quoted above, a good description of the $`K^{-}p`$ and coupled channel interaction is obtained in terms of the lowest order Lagrangians and the Bethe-Salpeter equation with a single cut off. One of the interesting features of the approach is the dynamical generation of the $`\mathrm{\Lambda }(1405)`$ resonance just below the $`K^{-}p`$ threshold. The threshold behavior of the $`K^{-}p`$ amplitude is thus very much tied to the properties of this resonance. Modifications of these properties in a nuclear medium can substantially alter the $`K^{-}p`$ and $`K^{-}`$ nucleus interaction, and experiments looking for these properties are most welcome. Some electromagnetic reactions appear well suited for these studies. Application of the chiral unitary approach to the $`K^{-}p\to \gamma \mathrm{\Lambda }`$, $`\gamma \mathrm{\Sigma }^0`$ reactions at threshold has been carried out in and a fair agreement with experiment is found. In particular one sees there that the coupled channels are essential to get a good description of the data, increasing the $`K^{-}p\to \gamma \mathrm{\Sigma }^0`$ rate by about a factor 16 with respect to the Born approximation. In a recent paper we propose the $`\gamma p\to K^+\mathrm{\Lambda }(1405)`$ reaction as a means to study the properties of the resonance, together with the $`\gamma A\to K^+\mathrm{\Lambda }(1405)A^{\prime }`$ reaction to see the modification of its properties in nuclei. The resonance $`\mathrm{\Lambda }(1405)`$ is seen in its decay products in the $`\pi \mathrm{\Sigma }`$ channel, but, as shown in , the sum of the cross sections for $`\pi ^0\mathrm{\Sigma }^0`$, $`\pi ^+\mathrm{\Sigma }^{-}`$, $`\pi ^{-}\mathrm{\Sigma }^+`$ production has the shape of the resonance $`\mathrm{\Lambda }(1405)`$ in the I = 0 channel.
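The reconstruction of $`M_I`$ from the $`K^+`$ momentum, exploited in what follows, is plain two-body kinematics. A minimal sketch in the $`\gamma p`$ CM frame (PDG masses in MeV; the lab-to-CM conversion is omitted for brevity):

```python
import numpy as np

# gamma p -> K+ X at E_gamma = 1.7 GeV: invariant mass of X from the K+ energy.
m_p, m_K, E_g = 938.27, 493.68, 1700.0
W = np.sqrt(m_p**2 + 2 * E_g * m_p)      # total CM energy, ~2017 MeV

def M_I(E_K):                            # E_K: K+ energy in the CM frame
    return np.sqrt(W**2 + m_K**2 - 2 * W * E_K)

print(M_I(580.0))                        # E_K ~ 580 MeV  ->  M_I ~ 1405 MeV
```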
The detection of the $`K^+`$ in the elementary reaction, looking at $`d\sigma /dM_I`$ ($`M_I`$, the invariant mass of the meson baryon system, can be induced from the $`K^+`$ momentum), is thus sufficient to get a clear $`\mathrm{\Lambda }(1405)`$ signal. In nuclear targets Fermi motion blurs this simple procedure (just detecting the $`K^+`$), but the resonance properties can be reconstructed by observing the decay products in the $`\pi \mathrm{\Sigma }`$ channel. In fig. 3 we show the cross sections predicted for the $`\gamma p\to K^+\mathrm{\Lambda }(1405)`$ reaction looking at $`K^+\pi ^0\mathrm{\Sigma }^0`$, $`K^+`$ $`all`$ and $`K^+\mathrm{\Lambda }(1405)`$ (alone). All of them have approximately the same shape and strength given the fact that the I = 1 contribution is rather small. The energy chosen for the photon is $`E_\gamma `$ = 1.7 GeV, which makes it suitable for experimentation at SPring8/RCNP, where the experiment is planned , and at TJNAF. One variant of this reaction is its time reversal, $`K^{-}p\to \mathrm{\Lambda }(1405)\gamma `$. This reaction, for a $`K^{-}`$ momentum in the 300 to 500 MeV/c range, clearly shows the $`\mathrm{\Lambda }(1405)`$ resonant production and has the advantage that the analogous reaction in nuclei still allows the observation of the $`\mathrm{\Lambda }(1405)`$ resonance with the mere detection of the photon, the Fermi motion effects being far more moderate than in the case of the $`\gamma A\to K^+\mathrm{\Lambda }(1405)X`$ reaction, which requires larger photon momenta and induces a broad distribution of $`M_I`$ for a given $`K^+`$ momentum. One of the interesting developments along these lines is the interaction of the $`K^{-}`$ with nuclei, with its relationship to problems like $`K^{-}`$ atoms or the possible condensation of $`K^{-}`$ in neutron stars. The problem has been looked at from the chiral perspective by evaluating Pauli blocking effects on the nucleons of the intermediate $`\overline{K}N`$ states . These effects lead to a $`K^{-}`$ self-energy in nuclei which is attractive already at very low densities, as a consequence of pushing the resonance to energies above the $`K^{-}p`$ threshold. However, more recent investigations considering the $`\overline{K}`$ self-energy in a self-consistent way lead to quite different results, since the resonance barely changes its position. Yet one still gets an attractive self-energy, which is demanded by the $`K^{-}`$ atom data . A step forward in this direction is given in , where, in addition to the $`K^{-}`$ self-energy in the medium, one also renormalizes the pions and takes into account the different binding of N, $`\mathrm{\Sigma }`$ and $`\mathrm{\Lambda }`$ in nuclei. Preliminary results from indicate that the $`K^{-}`$ self-energy obtained in can lead to a good microscopical description of present data on $`K^{-}`$ atoms, hence providing an accurate tool to study the properties of $`K^{-}`$ at higher densities and the eventual condensation in neutron stars. ## 4 Summary We have reported on the unitary approach to the meson meson and meson baryon interactions using chiral Lagrangians, which has proved to be an efficient method to extend the information contained in these Lagrangians to higher energies, where $`\chi PT`$ cannot be used. This new approach has opened the doors to the investigation of many problems so far intractable with $`\chi PT`$ and a few examples have been reported here. It is clear that these are only a few of the many and interesting problems which can now be tackled from this perspective.
At the same time we have shown that many interesting predictions can be tested with present machines, and a few experiments in this direction are already planned. Further research, theoretical and experimental, along these lines appears to us a very interesting task to undertake in the near future. ## Acknowledgments. We are thankful to the COE Professorship program of Monbusho which enabled E. O. to stay at RCNP, where part of the work reported here has been done. E. M. and J. C. N. would like to acknowledge the hospitality of the RCNP of Osaka University and support from the Ministerio de Educacion y Cultura. This work is partly supported by DGICYT, contract number PB 96-0753.
no-problem/9902/cond-mat9902301.html
ar5iv
text
# Quantum tunneling across spin domains in a Bose-Einstein condensate \[ ## Abstract Quantum tunneling was observed in the decay of metastable spin domains in gaseous Bose-Einstein condensates. A mean-field description of the tunneling was developed and compared with measurement. The tunneling rates are a sensitive probe of the boundary between spin domains, and indicate a spin structure in the boundary which is prohibited in the bulk fluid. These experiments were performed with optically trapped $`F=1`$ spinor Bose-Einstein condensates of sodium. \] A metastable system trapped in a local minimum of the free energy can decay to lower energy states in two ways. Classically, the system may decay by acquiring thermal energy greater than the depth of the local energy well (the activation energy). Yet, according to quantum mechanics, the system may decay even in the absence of thermal fluctuations by tunneling through the classically forbidden energy barrier. Quantum tunneling describes a variety of physical and chemical phenomena and finds common applications in, for example, scanning tunneling microscopy. In these systems, tunneling dominates over thermal activation because the energy barriers are much larger than the thermal energy. Bose-Einstein condensates of dilute atomic gases offer a new system to study quantum phenomena. Recently, metastable Bose-Einstein condensates were observed in which a configuration of phase-separated component domains persisted for tens of seconds in spite of an external force which favored their rearrangement . The metastability was due both to the restriction of motion to one dimension by the narrow trapping potential and to the repulsive interaction between the domains. Thermal relaxation to the ground state was identified and found to be extremely slow, even at temperatures ($`\sim `$100 nK) much larger than the energy barriers responsible for metastability ($`\sim `$5 nK), due to the scarcity of non-condensed atoms, to which the thermal energy is available. In this article, we examine the decay of metastable spin domains in an $`F=1`$ spinor condensate via quantum tunneling. The tunneling rates provide a sensitive probe of the boundary between spin domains and of the penetration of the condensate wavefunction into the classically forbidden region. Tunneling barriers are formed not by an external potential, but rather by the intrinsic repulsion between two immiscible components of a quantum fluid. These energy barriers are naturally of nanokelvin-scale height and micron-scale width in the presence of weak magnetic field gradients, and are thus a promising tool for future studies of quantum tunneling and Josephson oscillations . We begin by considering the one-dimensional motion of a Bose-Einstein condensate comprised of atoms of mass $`m`$ in two different internal states, $`|A\rangle `$ and $`|B\rangle `$. The condensate is held in a harmonic trapping potential which has the same strength for each component. In a mean-field description, the condensate wavefunction $`\psi _i(z)`$ is determined by two coupled Gross-Pitaevskii equations $$\left(-\frac{\hbar ^2}{2m}\frac{d^2}{dz^2}+V_i(z)+g_in_i(z)+g_{A,B}n_j(z)-\mu _i\right)\psi _i(z)=0,$$ (1) where $`V_i(z)`$ is the trapping potential, $`n_i(z)`$ the density and $`\mu _i`$ the chemical potential of each component ($`i,j=\{A,B\},i\ne j`$).
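As a brief aside, the equal-pressure condition invoked just below follows in one line from the Thomas-Fermi limit of eq. (1): each bulk domain exerts a pressure $`g_in_i^2/2`$, and mechanical equilibrium at the boundary equates the two (a sketch, using the boundary densities $`n_i^b`$ and $`\mu _i=g_in_i^b`$):

$$P_i=\frac{1}{2}g_i\left(n_i^b\right)^2=\frac{\mu _i^2}{2g_i},\qquad P_A=P_B\;\Longrightarrow \;\frac{\mu _A^2}{2g_A}=\frac{\mu _B^2}{2g_B}.$$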
The constants $`g_A`$, $`g_B`$, and $`g_{A,B}`$ (all assumed positive) are given by $`g=4\pi \hbar ^2a/m`$ where $`a`$ is the $`s`$-wave scattering length which describes collisions between atoms in the same ($`a_A`$ and $`a_B`$) or different ($`a_{A,B}`$) internal states. Bulk properties of the condensate are well described by neglecting the kinetic energy (Thomas-Fermi approximation). Under the condition $`g_{A,B}>\sqrt{g_Ag_B}`$, the two components tend to phase-separate (as observed in ). The ground state configuration consists of one domain of each component on opposite sides of the trap (Fig. 1A). The chemical potentials are determined by the densities at the boundary $`n_i^b`$ as $`\mu _i=g_in_i^b`$, and are related to one another by the condition of equal pressure, $`\mu _A^2/2g_A=\mu _B^2/2g_B`$. Within the Thomas-Fermi approximation, the domain boundary is sharp and the two components do not overlap. Yet, the kinetic energy allows each component to penetrate within the domain of the other. The energy barrier for component $`A`$ (similar for $`B`$) is $`\mathrm{\Delta }E_A(z)=V_A(z)+g_{A,B}n_B(z)-\mu _A`$. Neglecting slow variations in $`V_A`$ and $`n_B`$ gives the barrier height $$\mathrm{\Delta }E_A=\mu _A\left(\frac{g_{A,B}}{\sqrt{g_Ag_B}}-1\right).$$ (2) In this work we consider a condensate of atomic sodium in the two hyperfine states $`|A\rangle =|F=1,m_F=0\rangle `$ and $`|B\rangle =|F=1,m_F=1\rangle `$, with scattering lengths of $`a_1=a_{0,1}=2.75`$ nm and $`(a_1-a_0)=0.10`$ nm . The barrier height for atoms in the $`|m_F=0\rangle `$ state is then $`0.018\mu _0`$, a small fraction of the chemical potential (see the numerical check below). Consider that a state-selective force $`F\widehat{z}`$ displaces the trapping potential $`V_B(z)`$ from $`V_A(z)`$ (Fig. 1B). Due to the energy barrier discussed above, the atoms cannot, classically, move to the other end of the trap and thus the condensate is left in a high-energy configuration. This configuration can decay by tunneling. At a domain boundary at the center of the condensate, $`dV_A/dz=g_Bdn_B/dz=F/2`$, and so $`\mathrm{\Delta }E_A(z)=\mathrm{\Delta }E_A-Fz(g_B+g_{A,B})/2g_B`$ (Fig. 1C). The width of the barrier becomes $`z_b=\mathrm{\Delta }E_A/F\times 2g_B/(g_{A,B}+g_B)`$. Tunneling from the metastable spin domains is analogous to the field emission of electrons from cold metals , where the energy barrier height corresponds to the work function of the metal and the force arises from an applied electric field. The tunneling rate $`dN_A/dt`$ of atoms in state $`|A\rangle `$ from the metastable spin domain is then given by the Fowler-Nordheim relation $`{\displaystyle \frac{dN_A}{dt}}`$ $`=`$ $`\gamma \mathrm{exp}\left(-2\sqrt{{\displaystyle \frac{2m}{\hbar ^2}}}{\displaystyle \int _0^{z_b}}\sqrt{\mathrm{\Delta }E_A(z)}𝑑z\right)`$ (3) $`=`$ $`\gamma \mathrm{exp}\left(-{\displaystyle \frac{4}{3}}\sqrt{{\displaystyle \frac{2m}{\hbar ^2}}}{\displaystyle \frac{2g_B}{g_{A,B}+g_B}}{\displaystyle \frac{\mathrm{\Delta }E_A^{3/2}}{F}}\right)`$ (4) where $`\gamma `$ is the total attempt rate for tunneling, and the exponential is the tunneling probability. The rate of quantum tunneling was studied in three steps. First, condensates of sodium in the $`|F=1,m_F=1\rangle `$ hyperfine state were created in a magnetic trap and transferred to a single-beam infrared optical trap with a $`1/e^2`$ beam radius of 12 $`\mu `$m, an aspect ratio (axial / radial length) of about 60, and a depth of 1 – 2 $`\mu `$K.
Chirped radio-frequency pulses were used to create two-component condensates with nearly equal populations in the $`|m_F=0\rangle `$ and $`|m_F=1\rangle `$ states . Shortly afterwards, the two components were separated into two domains by the application of a strong (several G/cm) magnetic field gradient along the axis of the trap in a 15 G bias field. The spin domains were typically 100 – 200 $`\mu `$m long. Second, the condensates were placed in a metastable state by applying a magnetic field gradient $`B^{\prime }`$ in the opposite direction of that used to initially separate the components . This metastable state corresponds to that shown in Fig. 1B, where we identify the states $`|A\rangle =|m_F=0\rangle `$ and $`|B\rangle =|m_F=1\rangle `$. The field gradient exerted a state-selective force $`F=g\mu _Bm_FB^{\prime }`$ where $`g=1/2`$ is the Landé g-factor and $`\mu _B`$ the Bohr magneton. The condensate was then allowed to evolve freely at the gradient $`B^{\prime }`$ and a bias field $`B_0`$ for a variable time $`\tau `$ of up to 12 seconds. Finally, the condensate was probed by time-of-flight absorption imaging combined with a Stern-Gerlach spin separation . The radial expansion of the condensate in time-of-flight allowed for independent measurement of the chemical potentials $`\mu _0`$ and $`\mu _1`$ , while the axial distribution allowed for measurement of the number of atoms in the metastable and ground-state domains of each spin state. The mean-field description of tunneling from the metastable spin domains was tested by measuring the tunneling rate across energy barriers of constant height and variable width. Condensates in a 15 G bias field with a chemical potential $`\mu _0=300`$ nK were probed after 2 seconds of tunneling at a variable field gradient $`B^{\prime }`$ (Fig. 2). Thus, the energy barrier for tunneling had a constant height of 5 nK, and a width between 4 and 20 $`\mu `$m. As the barrier width was shortened by increasing $`B^{\prime }`$, the fraction of atoms in the $`m_F=0`$ metastable spin domains decreased. As expressed in Eq. 4, the number of atoms which tunnel from the metastable to the ground state domains in a time $`\tau `$ should vary as $`\gamma \tau e^{-\alpha /B^{\prime }}`$, where $`\gamma `$ and $`\alpha `$ were determined by fits to the data as $`\gamma =1.5(5)\times 10^7\text{s}^{-1}`$ and $`\alpha =1.5(2)\text{cm}/\text{G}`$. This value of $`\alpha `$ gives a tunneling probability of about $`e^{-4}`$ for $`B^{\prime }=370`$ mG/cm, at which the metastable domains were fully depleted. The tunneling attempt rate $`\gamma `$ can be estimated as the product of two factors. First, a bulk flux can be estimated by considering the pressure $`g_0n_0^2/2`$ to arise from an incoming atomic flux $`n_0v/2`$ which collides elastically at the boundary, imparting an impulse $`2mv`$ per particle. This gives $`\gamma _{bulk}=\langle n_0v_s\rangle _{rad}/\sqrt{2}`$ where $`v_s=(g_0n_0/m)^{1/2}`$ is the Bogoliubov speed of sound and $`\langle \mathrm{}\rangle _{rad}`$ denotes an integral over the radial dimension of the condensate. This bulk flux is reduced by an extinction factor $`f`$ which accounts for the interpolation of the condensate wavefunction between the bulk spin domain and the classically forbidden region. We use the treatment of Dalfovo et al. , by considering the boundary between the two spin domains not as a sharp division (as in Fig. 1C), but rather as a region of width $`d=(\hbar ^2/2m\mathrm{\Delta }E_0)^{1/2}\simeq 1.5\mu `$m wherein the density $`n_1(z)`$ rises linearly between $`0\le z\le d`$.
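Two quick numerical checks of the numbers just quoted (a sketch; the scattering lengths, $`\gamma `$ and $`\alpha `$ are taken from the text, everything else is arithmetic):

```python
import numpy as np

# (i) Barrier height, eq. (2): with g proportional to a, the coupling ratio
# reduces to a ratio of the sodium scattering lengths quoted above.
a1 = a01 = 2.75                        # nm, a_1 = a_{0,1}
a0 = a1 - 0.10                         # nm
print(a01 / np.sqrt(a0 * a1) - 1)      # ~0.019, i.e. the quoted 0.018 mu_0

# (ii) Fowler-Nordheim fit: dN/dt = gamma * exp(-alpha/B').
gamma, alpha, tau = 1.5e7, 1.5, 2.0    # s^-1, cm/G, s
for Bp in (0.10, 0.20, 0.37):          # field gradient in G/cm
    print(Bp, gamma * tau * np.exp(-alpha / Bp))   # atoms tunneled in 2 s
# At B' = 0.37 G/cm the probability is ~e^-4 and ~5e5 atoms escape in 2 s,
# enough to deplete the ~1e5-1e6 atom metastable domains, as observed.
```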
We then find the density of the $`m_F=0`$ component at the edge of the boundary region, $`n_0(d)`$, to be reduced from its bulk value by a factor $`f\simeq 1/10`$. Using $`\mu _0=300`$ nK and a radial trap frequency of 500 Hz gives an estimate of $`\gamma _{bulk}\simeq 10^8\text{s}^{-1}`$ and $`\gamma \simeq 10^7\text{s}^{-1}`$. The measured value of $`\alpha `$ can be compared with the prediction of the Fowler-Nordheim equation (Eq. 4). Using the scattering lengths above and $`\mu _0=300`$ nK gives $`\alpha =1.5(2)\text{cm}/\text{G}`$, in agreement with our measurement (the error reflects a 10% systematic uncertainty in $`\mu _0`$). In addition, $`g_1>g_0`$ implies $`\mu _1>\mu _0`$ and thus the tunneling rate of $`m_F=1`$ atoms across the $`m_F=0`$ domain should be slower than that of the $`m_F=0`$ atoms across the $`m_F=1`$ domain. The data in Fig. 2 show evidence for this behavior. The dependence of the tunneling rate on the energy barrier height was probed by varying the condensate density. For this, the number of trapped atoms was varied between about $`10^5`$ and $`10^6`$ by allowing for a variable duration of trap loss before creating the metastable state. Figure 3 shows data collected at two different settings of the optical trap depth $`U`$ and tunneling time $`\tau `$ (see caption). For each data series, at a given field gradient $`B^{\prime }`$, there was a threshold value of the chemical potential $`\mu _0`$ below which the condensates had relaxed completely to the ground state, and above which they had not. Since the total condensate number and the attempt rate $`\gamma `$ should both scale as $`\mu _0^{5/2}`$ , one expects the threshold chemical potential for complete tunneling to the ground state to vary as $`\mu _0\propto B^{\prime \,2/3}`$. The data shown in Figure 3 suggest a slightly steeper dependence. The chemical potential thresholds were approximately the same for both settings of the optical trap depth. Varying the optical trap depth $`U`$ also varied the temperature ($`T\simeq U/10`$ ) and the trap frequencies ($`\omega \propto U^{1/2}`$). That the threshold is independent of temperature confirms that the decay proceeds by quantum tunneling rather than thermal activation. That the threshold is independent of the trap frequencies confirms that the decay occurs by quantum tunneling of one spin component through the other, rather than by radial motion of one component around the other. Thus, we have shown the decay of the metastable spin domains at high magnetic fields (15 G) to be due to quantum tunneling in a two-component condensate. At lower magnetic fields, a dramatic change in the tunneling behavior was observed. Metastable spin domains of initial chemical potential $`\mu _0=600`$ nK were prepared at a constant field gradient of $`B^{\prime }=130`$ mG/cm and a 15 G bias field. The field was then ramped down to between 0.4 and 2 G within 10 ms, and held at a constant value $`B_0`$. After a variable tunneling time $`\tau `$ of up to 12 seconds, the condensates were probed and evaluated as to whether they had fully decayed to the ground state. During the tunneling time, the chemical potential dropped due to the loss of atoms from the trap. As the field was lowered below about 1 G, relaxation to the ground state occurred at earlier times (Fig. 4A), and thus at higher chemical potentials (Fig. 4B). The increase in the tunneling rates at lower magnetic fields is inconsistent with the dynamics of a two-component condensate.
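Before turning to what this low-field behavior implies, it is worth making the high-field threshold scaling quoted above explicit (a sketch of the reasoning, with $`c`$ collecting the constants of eq. (4) and $`\mathrm{\Delta }E_0\propto \mu _0`$): complete relaxation within the fixed time $`\tau `$ requires the tunneled number to reach the total atom number $`N`$, and since both $`\gamma `$ and $`N`$ scale as $`\mu _0^{5/2}`$, the condition is controlled by the exponent alone,

$$\gamma \tau \,e^{-c\,\mu _0^{3/2}/B^{\prime }}\gtrsim N\;\Longrightarrow \;\frac{\mu _0^{3/2}}{B^{\prime }}\simeq \text{const}\;\Longrightarrow \;\mu _0\propto B^{\prime \,2/3}.$$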
These low-field measurements thus serve as a probe of the spin domain boundary and reveal the presence of the third $`F=1`$ spin component ($`m_F=-1`$). Atoms in the $`|m_F=-1\rangle `$ state are created by spin-relaxation wherein two $`m_F=0`$ atoms collide to produce an $`m_F=1`$ and an $`m_F=-1`$ atom . Due to the quadratic Zeeman effect, the magnetic energy of two $`m_F=0`$ atoms is lower than that of their spin-relaxation product by $`2q=2\times 20\text{nK}\times (B_0/\text{G})^2`$. Interactions give rise to a spin-dependent energy term $`c𝐅^2`$, where $`c=\mathrm{\Delta }gn/2`$, $`\mathrm{\Delta }g=g_1-g_0`$, and $`n`$ is the condensate density. Neglecting the kinetic energy, atoms in the $`|m_F=-1\rangle `$ state are excluded from the domain boundary when $`q>c/2`$, i.e. at fields $`B_0\gtrsim 250`$ mG for typical conditions ($`n\simeq 3\times 10^{14}\text{cm}^{-3}`$) . However, when the kinetic energy is considered, atoms in the $`|m_F=-1\rangle `$ state are found to populate the boundary even at fields $`B_0>250`$ mG. Within the boundary region, the average magnetization $`\langle F_z\rangle `$ must vary smoothly. Minimizing the energy functional $`qF_z^2+c𝐅^2`$ at constant magnetization $`\langle F_z\rangle `$ indicates that, for $`q/2c\gtrsim 1`$, the fraction of atoms in the $`|m_F=-1\rangle `$ state scales roughly as $`B_0^{-2}`$ . At a field of 1 G, the fraction of atoms in the domain boundary in the $`|m_F=-1\rangle `$ state is at most $`2`$% (about 300 atoms). The presence of the $`m_F=-1`$ atoms in the barrier weakens the effective repulsion between the spin domains. Consider a two-component system as before where $`|A\rangle =|m_F=0\rangle `$ and $`|B\rangle =\mathrm{cos}\theta |m_F=1\rangle -\mathrm{sin}\theta |m_F=-1\rangle `$ where $`0\le \theta \le \pi /2`$. Evaluating the spin dependent interaction energy one finds $`g_B=g_0+\mathrm{\Delta }g\mathrm{cos}^22\theta `$ and $`g_{0,B}=g_0+\mathrm{\Delta }g(1-\mathrm{sin}2\theta )`$. Thus, as the fraction of atoms in the $`|m_F=-1\rangle `$ state is increased, the repulsion of the $`m_F=0`$ atoms at the domain walls is weakened, increasing the tunneling rate. In conclusion, we have identified and studied quantum tunneling across phase-separated spin domains in a Bose-Einstein condensate. The energy barriers due to the interatomic repulsion are a small fraction of the chemical potential, and their width is simply varied by the application of a weak force. The tunneling rates at high field ($`B_0>1`$ G) were described by a mean-field model and an application of the Fowler-Nordheim equation, while the tunneling at lower fields reveals a change in the spin-state composition of the domain boundaries. Future studies using metastable spin domains as tunneling barriers may focus on the roles of coherence and damping in quantum tunneling. In the current setup, rapid Josephson oscillations might be expected at frequencies ($`\sim 1`$ kHz) given by the energy difference between the metastable and ground state spin domains. Over long time scales such oscillations are presumably damped as the system evolves toward the ground state. While no evidence for oscillatory behavior was found in the present work, the use of smaller spin domains and better time resolution is warranted. This work was supported by the Office of Naval Research, NSF, Joint Services Electronics Program (ARO), NASA, and the David and Lucile Packard Foundation. A.P.C. acknowledges additional support from the NSF, D.M.S.-K. from JSEP, and J.S. from the Alexander von Humboldt-Foundation.
no-problem/9902/nucl-th9902023.html
ar5iv
text
# Weak decays of medium and heavy 𝚲-hypernuclei ## I Introduction A hypernucleus is a bound system made of neutrons, protons and one or more hyperons. Among these strange nuclei, those which contain one $`\mathrm{\Lambda }`$ hyperon are the most stable with respect to the strong interaction and they are the subject of this paper. The study of hypernuclear physics may help in understanding some present problems related, for instance, to some aspects of weak interactions in nuclei, or to the origin of the spin-orbit interaction in nuclei. Besides, it is a good instrument to study the role of quark degrees of freedom in the hadron-hadron interactions at short distances and the renormalization properties of pions in the nuclear medium. Nowadays we know some important features of the $`YN`$ interaction . For example, at intermediate distances the strong $`\mathrm{\Lambda }N`$ interaction is weaker than the $`NN`$ one, and its spin-orbit term is very small. Moreover, the former has a smaller range than the $`NN`$ one. From the study of mesonic decays of light hypernuclei we have evidence for strongly repulsive cores in the $`\mathrm{\Lambda }N`$ interaction at short distances , which automatically appear in quark based models . These characteristics of the $`\mathrm{\Lambda }N`$ interaction are important, as we will see, for the evaluation of the decay rates of $`\mathrm{\Lambda }`$-hypernuclei. The most interesting hypernuclear decays are those involving weak processes, which directly concern the hyperon. The weak decay of hypernuclei occurs via two channels: the so called mesonic channel ($`\mathrm{\Lambda }\to \pi N`$) and the non-mesonic one, in which the pion emitted from the weak hadronic vertex is absorbed by one or more nucleons in the medium ($`\mathrm{\Lambda }N\to NN`$, $`\mathrm{\Lambda }NN\to NNN`$, etc. ). Obviously, the non-mesonic processes can also be mediated by the exchange of mesons more massive than the pion. The non-mesonic decay is only possible in nuclei and, nowadays, the study of the hypernuclear decay is the only practical way to get information on the weak process $`\mathrm{\Lambda }N\to NN`$, especially on its parity conserving part. In fact, there are no experimental observations of this interaction using lambda beams. The inverse reaction $`pn\to \mathrm{\Lambda }p`$ is, however, under study at COSY and RCNP . The free $`\mathrm{\Lambda }`$ decay is compatible with the $`\mathrm{\Delta }I=1/2`$ isospin rule, which is also valid for the decay of other hyperons and for kaons (namely in non-leptonic strangeness changing processes). This rule is based on the experimental observation that the $`\mathrm{\Lambda }\to \pi ^{-}p`$ decay rate is twice the $`\mathrm{\Lambda }\to \pi ^0n`$ one, but it is not yet understood on theoretical grounds. From theoretical calculations like the one in ref. and from experimental measurements there is some evidence that the $`\mathrm{\Delta }I=1/2`$ rule is broken in nuclear mesonic decay. However, this is essentially due to shell effects and might not be directly related to the weak process. A recent estimate of $`\mathrm{\Delta }I=3/2`$ contributions to the $`\mathrm{\Lambda }N\to NN`$ reaction found moderate effects on the hypernuclear decay rates. In the present calculation of the decay rates in nuclei we will assume this rule to be valid. The momentum of the final nucleon in $`\mathrm{\Lambda }\to \pi N`$ is about $`100`$ MeV for a $`\mathrm{\Lambda }`$ at rest, so this process is suppressed by the Pauli principle in nuclei (particularly in heavy systems).
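A one-line check of the $``$100 MeV figure, using standard two-body decay kinematics for the free $`\mathrm{\Lambda }\to \pi ^{-}p`$ channel (PDG masses in MeV; purely a numerical illustration):

```python
import numpy as np

# CM momentum of the decay products in Lambda -> pi- p, Lambda at rest.
M, m_pi, m_p = 1115.68, 139.57, 938.27
p = np.sqrt((M**2 - (m_pi + m_p)**2) * (M**2 - (m_pi - m_p)**2)) / (2 * M)
print(p)   # ~101 MeV/c, far below k_F^0 ~ 270 MeV/c in nuclear matter
```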
This mesonic decay is strictly forbidden in infinite nuclear matter (where $`k_F^0\simeq 270`$ MeV), but in finite nuclei it can occur because of three important effects: 1) in nuclei the hyperon has a momentum distribution that allows larger momenta for the final nucleon, 2) the final pion feels an attraction by the medium such that for fixed momentum it has a smaller energy than the free one and consequently, due to energy conservation, the final nucleon again has more chance to come out above the Fermi surface, and 3) on the nuclear surface the local Fermi momentum is smaller than $`k_F^0`$ and favours the decay. Nevertheless, the mesonic width decreases rapidly as the mass number $`A`$ of the hypernucleus increases . From the study of the mesonic channel it could be possible to extract important information on the pion-nucleus optical potential, which we do not know today in a complete form. In fact, the mesonic rate is very sensitive to the pion self-energy in the medium . The final nucleons in the non-mesonic process $`\mathrm{\Lambda }N\to NN`$ emerge with large momenta ($`\simeq 420`$ MeV), so this decay is not forbidden by the Pauli principle. On the contrary, apart from very light hypernuclei (the $`s`$-shell ones), it dominates over the mesonic decay. The non-mesonic channel is characterized by large momentum transfers, so that the details of the nuclear structure do not have a substantial influence, while the $`NN`$ and $`\mathrm{\Lambda }N`$ Short Range Correlations (SRC) turn out to be very important. There is an anticorrelation between mesonic and non-mesonic decay modes such that the total lifetime is quite stable from light to heavy hypernuclei : $`\tau _{exp}=(0.5÷1)\tau _{free}`$. Nowadays, the main problem concerning the weak decay rates is to reproduce the experimental values for the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ between the neutron and the proton induced widths $`\mathrm{\Lambda }n\to nn`$ and $`\mathrm{\Lambda }p\to np`$. The theoretical calculations underestimate the experimental data for all the considered hypernuclei : $$\left\{\frac{\mathrm{\Gamma }_n}{\mathrm{\Gamma }_p}\right\}^{Th}\ll \left\{\frac{\mathrm{\Gamma }_n}{\mathrm{\Gamma }_p}\right\}^{Exp},\qquad 0.5\lesssim \left\{\frac{\mathrm{\Gamma }_n}{\mathrm{\Gamma }_p}\right\}^{Exp}\lesssim 2.$$ (1) In the One Pion Exchange (OPE) approximation the values for this ratio are $`0.1÷0.2`$. On the other hand, the OPE model has been able to reproduce the 1-body stimulated non-mesonic rates $`\mathrm{\Gamma }_1=\mathrm{\Gamma }_n+\mathrm{\Gamma }_p`$ for light and medium hypernuclei . In order to solve this problem many attempts have been made up to now, but without success. Among these we recall the inclusion in the $`\mathrm{\Lambda }N\to NN`$ transition potential of mesons heavier than the pion , the inclusion of interaction terms that violate the $`\mathrm{\Delta }I=1/2`$ rule and the description of the short range baryon-baryon interaction in terms of quark degrees of freedom . This last calculation is the only one that has found a consistent (but not sufficient) increase of the neutron to proton ratio with respect to the OPE one. However, this calculation is only made for $`s`$-shell hypernuclei and its effective quark lagrangian does not reproduce the experimental ratio between the $`\mathrm{\Delta }I=1/2`$ and $`\mathrm{\Delta }I=3/2`$ transition amplitudes for the free $`\mathrm{\Lambda }`$ decay. The analysis of the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ is influenced by the 2-nucleon induced process $`\mathrm{\Lambda }NN\to NNN`$.
By assuming that the meson produced in the weak vertex is mainly absorbed by a neutron-proton strongly correlated pair, the 3-body process turns out to be $`\mathrm{\Lambda }np\to nnp`$, so that a considerable fraction of the measured neutrons could come from this channel and not only from the $`\mathrm{\Lambda }n\to nn`$ and $`\mathrm{\Lambda }p\to np`$ ones. In this way it might be possible to explain the large experimental $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ ratios, which were originally analyzed without taking into account the 2-body stimulated process. Nevertheless, the situation is far from being clear and simple. The new non-mesonic mode was introduced in ref. and its calculation was improved in ref. , where the authors found that the inclusion of the new channel would lead to larger values of the $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ ratios extracted from the experiment, somewhat more in disagreement with theoretical estimates. However, in the hypothesis that only two nucleons from the 3-body decay are detected, the reanalysis of the experimental data would lead to smaller ratios . These observations show that $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ is sensitive to the energy spectra of the emitted nucleons, whose calculation also requires a careful treatment of the nucleon Final State Interaction. In ref. the energy distributions were calculated using a Monte Carlo simulation to describe the final state interactions. A direct comparison of those spectra with the experimental ones favours $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ values around $`2÷3`$ (or higher), in disagreement with the OPE predictions. However, the convenience of measuring the number of protons per decay event was also pointed out. This observable, which can be measured from delayed fission events in the decay of heavy hypernuclei, gives a more reliable ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ and is less sensitive to details of the Monte Carlo simulation determining the final shape of the spectra. In this paper we present a new evaluation of the decay rates for medium to heavy hypernuclei based on the Propagator Method of ref. , which allows a unified treatment of all the decay channels. The parameters of the model are adjusted to reproduce the non-mesonic width of $`{}_\mathrm{\Lambda }^{12}`$C and the decay rates of heavier hypernuclei are predicted. We also discuss how the new model affects the energy spectrum of the emitted nucleons, in the hope of obtaining a ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ more in agreement with the experimental observations. The paper is organized as follows. In Sec. II we present the model used for the calculation of the decay rates. Our results are presented and discussed in Sec. III. We first study the sensitivity of the decay rates to the parameters defining the $`NN`$ and $`\mathrm{\Lambda }N`$ short range correlations as well as to the nuclear density and $`\mathrm{\Lambda }`$ wave functions. We then obtain the widths for various hypernuclei and discuss the energy distribution of the nucleons from the weak decays. Our conclusions are given in Sec. IV. ## II Propagator Method The $`\mathrm{\Lambda }`$ decay in nuclear systems can be studied in Random Phase Approximation (RPA) using the Propagator Method . This technique provides a unified picture of the different decay channels and is equivalent to the standard Wave Function Method (WFM) , used by other authors in refs. .
The calculation of the widths is usually performed in nuclear matter, and then extended to finite nuclei via the Local Density Approximation (LDA). For the calculation of the mesonic rates the WFM is more reliable than the propagator method in LDA since this channel is rather sensitive to the shell structure of the hypernucleus, given the small energies involved. Moreover, it is advisable to avoid the use of the LDA for very light systems and we will make the calculation starting from $`{}_\mathrm{\Lambda }^{12}`$C. On the other hand, the propagator method in LDA offers the possibility of calculations over a broad range of mass numbers. The method was introduced in ref. and we briefly summarize it here for clarity. The $`\mathrm{\Lambda }\pi N`$ effective lagrangian is: $$\mathcal{L}_{\mathrm{\Lambda }\pi N}=Gm_\pi ^2\overline{\psi }_N(A+B\gamma _5)𝝉\mathit{\varphi }_\pi \psi _\mathrm{\Lambda }+h.c.,$$ (2) where the values of the weak coupling constants $`G\simeq 2.211\times 10^{-7}/m_\pi ^2`$, $`A=1.06`$, $`B=-7.10`$ are fixed from the free $`\mathrm{\Lambda }`$ decay. The constants $`A`$ and $`B`$ determine the strengths of the parity violating and parity conserving $`\mathrm{\Lambda }\pi N`$ amplitudes, respectively. In order to enforce the $`\mathrm{\Delta }I=1/2`$ rule, in eq. (2) the hyperon is assumed to be an isospin spurion with $`I_z=-1/2`$. To calculate the $`\mathrm{\Lambda }`$ width in nuclear matter we start with the imaginary part of the $`\mathrm{\Lambda }`$ self-energy: $$\mathrm{\Gamma }_\mathrm{\Lambda }=-2\mathrm{Im}\mathrm{\Sigma }_\mathrm{\Lambda }.$$ (3) By the use of Feynman rules, from fig. 1 it is easy to obtain the $`\mathrm{\Lambda }`$ self-energy in the following form: $$\mathrm{\Sigma }_\mathrm{\Lambda }(k)=3i(Gm_\pi ^2)^2\int \frac{d^4q}{(2\pi )^4}\left\{S^2+\frac{P^2}{m_\pi ^2}𝒒^2\right\}F_\pi ^2(q)G_N(k-q)G_\pi (q).$$ (4) Here, $`S=A`$, $`P=m_\pi B/2m_N`$, while the nucleon and pion propagators in nuclear matter are respectively: $$G_N(p)=\frac{\theta (|𝒑|-k_F)}{p_0-E_N(𝒑)-V_N+iϵ}+\frac{\theta (k_F-|𝒑|)}{p_0-E_N(𝒑)-V_N-iϵ},$$ (5) and: $$G_\pi (q)=\frac{1}{q_0^2-𝒒^2-m_\pi ^2-\mathrm{\Sigma }_\pi ^{*}(q)}.$$ (6) In the above, $`p=(p_0,𝒑)`$ and $`q=(q_0,𝒒)`$ denote four-vectors, $`k_F`$ is the Fermi momentum, $`E_N`$ is the nucleon total free energy, $`V_N`$ is the nucleon binding energy, and $`\mathrm{\Sigma }_\pi ^{*}`$ is the pion proper self-energy in nuclear matter. Moreover, in eq. (4) we have included a monopole form factor for the $`\pi \mathrm{\Lambda }N`$ vertex: $$F_\pi (q)=\frac{\mathrm{\Lambda }_\pi ^2-m_\pi ^2}{\mathrm{\Lambda }_\pi ^2-q_0^2+𝒒^2}$$ (7) (the same is used for the $`\pi NN`$ strong vertex), with cut-off $`\mathrm{\Lambda }_\pi =1.2`$ GeV. In fig. 2 we show the lowest order Feynman graphs for the $`\mathrm{\Lambda }`$ self-energy in nuclear matter. Diagram (a) represents the bare self-energy term, including the effects of Pauli principle and of binding on the intermediate nucleon. In (b) and (c) the pion couples to a particle-hole (p-h) and a $`\mathrm{\Delta }`$-h pairs, respectively. Diagram (d) is an insertion of $`s`$-wave pion self-energy at lowest order. In diagram (e) we show a 2p-2h excitation coupled to the pion through $`s`$-wave $`\pi N`$ interactions. Other 2p-2h excitations, coupled in $`p`$-wave, are shown in (f), while (g) is a RPA iteration of diagram (b). It is possible to evaluate the integral over $`q_0`$ in (4), and the $`\mathrm{\Lambda }`$ self-energy (eq. (3)) in nuclear matter becomes:
$$\mathrm{\Gamma }_\mathrm{\Lambda }(𝒌,\rho )=-6(Gm_\pi ^2)^2\int \frac{d𝒒}{(2\pi )^3}\theta (|𝒌-𝒒|-k_F)\theta (k_0-E_N(𝒌-𝒒)-V_N)\times \mathrm{Im}\alpha (q)|_{q_0=k_0-E_N(𝒌-𝒒)-V_N},$$ (9) where $$\alpha (q)=\left\{S^2+\frac{P^2}{m_\pi ^2}𝒒^2\right\}F_\pi ^2(q)G_\pi ^0(q)+\frac{\stackrel{~}{S}^2(q)U(q)}{1-V_L(q)U(q)}+\frac{\stackrel{~}{P}_L^2(q)U(q)}{1-V_L(q)U(q)}+\frac{\stackrel{~}{P}_T^2(q)U(q)}{1-V_T(q)U(q)}.$$ (11) In eq. (9) the first $`\theta `$ function forbids intermediate nucleon momenta (see fig. 1) smaller than the Fermi momentum and the second one requires the pion energy $`q_0`$ to be positive. Moreover, the $`\mathrm{\Lambda }`$ energy, $`k_0=E_\mathrm{\Lambda }(𝒌)+V_\mathrm{\Lambda }`$, contains a binding term. The pion lines of fig. 2 have been replaced in eq. (11) by the interactions $`\stackrel{~}{S}`$, $`\stackrel{~}{P}_L`$, $`\stackrel{~}{P}_T`$, $`V_L`$, $`V_T`$, which include $`\pi `$ and $`\rho `$ exchange modulated by the effect of short range correlations and whose expressions are given in the Appendix. The functions $`V_L`$ and $`V_T`$ represent the (strong) p-h interaction, including a short range Landau parameter $`g^{\prime }`$, while $`\stackrel{~}{S}`$, $`\stackrel{~}{P}_L`$ and $`\stackrel{~}{P}_T`$ correspond to the lines connecting weak and strong hadronic vertices and contain another short range Landau parameter $`g_\mathrm{\Lambda }^{\prime }`$. Furthermore, in eq. (11): $$G_\pi ^0(q)=\frac{1}{q_0^2-𝒒^2-m_\pi ^2},$$ (12) is the free pion propagator, while $`U(q)`$ contains the Lindhard functions for p-h and $`\mathrm{\Delta }`$-h excitations and also accounts for 2p-2h excitations: $$U(q)=U_{ph}(q)+U_{\mathrm{\Delta }h}(q)+U_{2p2h}(q).$$ (13) It appears in eq. (11) within the standard RPA expression. Eq. (9) depends explicitly and through $`U(q)`$ on the nuclear matter density $`\rho =2k_F^3/3\pi ^2`$. The Lindhard functions $`U_{ph}`$, $`U_{\mathrm{\Delta }h}`$ are normalized as in ref. and $`U_{2p2h}`$ is evaluated as in , that is, by calculating the available phase space for 2p-2h excitations and by taking into account the experimental data on pionic atoms. $`U(q)`$ is related to the pion proper self-energy through: $$\mathrm{\Sigma }_\pi ^{*}(q)=\mathrm{\Sigma }_\pi ^{(p)}(q)+\mathrm{\Sigma }_\pi ^{(s)}(q),\qquad \mathrm{\Sigma }_\pi ^{(p)}(q)=\frac{{\displaystyle \frac{f_\pi ^2}{m_\pi ^2}}𝒒^2F_\pi ^2(q)U(q)}{1-{\displaystyle \frac{f_\pi ^2}{m_\pi ^2}}g_L(q)U(q)},$$ (14) where the Landau function $`g_L(q)`$ is given in the Appendix \[see eq. (36)\], and $`\mathrm{\Sigma }_\pi ^{(s)}`$ is the $`s`$-wave part of the self-energy. We will use the parametrization of ref. : $`\mathrm{\Sigma }_\pi ^{(s)}(q)=-4\pi (1+m_\pi /m_N)b_0\rho `$, with $`b_0=-0.0285/m_\pi `$. The function $`\mathrm{\Sigma }_\pi ^{(s)}`$ is real (constant and positive), therefore it contributes only to the mesonic decay (diagram (d) in fig. 2 is the corresponding lowest order). On the contrary, the $`p`$-wave self-energy $`\mathrm{\Sigma }_\pi ^{(p)}`$ is complex and attractive (that is, $`\mathrm{Re}\mathrm{\Sigma }_\pi ^{(p)}(q)<0`$). The decay widths in finite nuclei are obtained in LDA.
In this approximation, the Fermi momentum becomes $`r`$-dependent (that is, a local Fermi sea of nucleons is introduced) and related again to the nuclear density by: $$k_F(𝒓)=\left\{\frac{3}{2}\pi ^2\rho (𝒓)\right\}^{1/3}.$$ (15) Besides, the nucleon binding potential $`V_N`$ also becomes $`r`$-dependent in LDA. In Thomas-Fermi approximation we assume: $$ϵ_F(𝒓)+V_N(𝒓)\equiv \frac{k_F^2(𝒓)}{2m_N}+V_N(𝒓)=0.$$ (16) For the $`\mathrm{\Lambda }`$ binding energy we use instead the experimental value . With these prescriptions we can then evaluate the decay width in finite nuclei through the relation: $$\mathrm{\Gamma }_\mathrm{\Lambda }(𝒌)=\int d𝒓|\psi _\mathrm{\Lambda }(𝒓)|^2\mathrm{\Gamma }_\mathrm{\Lambda }[𝒌,\rho (𝒓)],$$ (17) where $`\psi _\mathrm{\Lambda }`$ is the $`\mathrm{\Lambda }`$ wave function and $`\mathrm{\Gamma }_\mathrm{\Lambda }[𝒌,\rho (𝒓)]`$ is given by eqs. (9), (11). This decay rate is valid for fixed $`\mathrm{\Lambda }`$ momentum $`𝒌`$. A further average over the $`\mathrm{\Lambda }`$ momentum distribution gives the total width: $$\mathrm{\Gamma }_\mathrm{\Lambda }=\int d𝒌|\stackrel{~}{\psi }_\mathrm{\Lambda }(𝒌)|^2\mathrm{\Gamma }_\mathrm{\Lambda }(𝒌),$$ (18) which can be compared with experimental results. The propagator method provides a unified picture of the decay widths. The imaginary part of a self-energy diagram requires placing simultaneously on-shell the particles of the considered intermediate state. For instance, diagram (b) in fig. 2 has two sources of imaginary part. One comes from cut 1, where the nucleon and the pion are placed on-shell. This term contributes to the mesonic channel: the final pion interacts with the medium through a p-h excitation and then escapes from the nucleus. Diagram (b) and further iterations lead to a renormalization of the pion in the medium which increases the mesonic rate by about two orders of magnitude in heavy nuclei . The cut 2 in fig. 2(b) places a nucleon and a p-h pair on shell, so it is the lowest order contribution to the physical process $`\mathrm{\Lambda }N\to NN`$. The mesonic width $`\mathrm{\Gamma }_M`$ is calculated from $$\alpha (q)=\left\{S^2+\frac{P^2}{m_\pi ^2}𝒒^2\right\}G_\pi (q),$$ (19) by omitting $`\mathrm{Im}\mathrm{\Sigma }_\pi ^{*}`$ in $`G_\pi `$, namely by replacing: $$\mathrm{Im}G_\pi (q)\to -\pi \delta (q_0^2-𝒒^2-m_\pi ^2-\mathrm{Re}\mathrm{\Sigma }_\pi ^{*}(q)).$$ (20) The one-body induced non-mesonic decay rate $`\mathrm{\Gamma }_1`$ is obtained by substituting in eqs. (9), (11): $$\mathrm{Im}\frac{U(q)}{1-V_{L,T}(q)U(q)}\to \frac{\mathrm{Im}U_{ph}(q)}{|1-V_{L,T}(q)U(q)|^2},$$ (21) that is by omitting the imaginary parts of $`U_{\mathrm{\Delta }h}`$ and $`U_{2p2h}`$ in the numerator. Indeed $`\mathrm{Im}U_{\mathrm{\Delta }h}`$ accounts for the $`\mathrm{\Delta }\to \pi N`$ decay width, thus representing a contribution to the mesonic decay. There is no overlap between $`\mathrm{Im}U_{ph}(q)`$ and the pole $`q_0=\omega (𝒒)`$ in eq. (20), so the separation of the mesonic and 2-body non-mesonic channels is unambiguous. The renormalized pion pole in eq. (19) is given by the dispersion relation: $$\omega ^2(𝒒)-𝒒^2-m_\pi ^2-\mathrm{Re}\mathrm{\Sigma }_\pi ^{*}[\omega (𝒒),𝒒]=0,$$ (22) with the constraint: $$q_0=k_0-E_N(𝒌-𝒒)-V_N.$$ (23) At the pion pole $`\mathrm{Im}U_{2p2h}\ne 0`$, thus the 2-body induced non-mesonic width $`\mathrm{\Gamma }_2`$ cannot be calculated using the prescription (21) with $`U_{2p2h}`$ instead of $`U_{ph}`$ in the numerator of the r. h. s.
Part of the decay rate calculated in this way is due to excitations of the renormalized pion and contributes to $`\mathrm{\Gamma }_M`$. The 3-body non-mesonic rate is then calculated by subtracting $`\mathrm{\Gamma }_M`$ and $`\mathrm{\Gamma }_1`$ from the total rate $`\mathrm{\Gamma }_{TOT}`$, which we get via the full expression for $`\alpha `$ \[Eq. (11)\]. ## III Results and discussion Let us now discuss the numerical results one can obtain from the above illustrated formalism. We shall first study the influence of short range correlations and the $`\mathrm{\Lambda }`$ wave function on the decay width of $`{}_\mathrm{\Lambda }^{12}`$C, which will be used as a testing ground for the theoretical framework in order to fix the parameters of our model. We will then obtain the decay widths of heavier hypernuclei and we will explore whether the refined model influences the energy distribution of the emitted particles, following the Monte Carlo procedure of ref. . In order to evaluate the width (17) in LDA one needs to specify the nuclear density and the wave function for the $`\mathrm{\Lambda }`$. The former is assumed to be a Fermi distribution: $$\rho _A(r)=\frac{\rho _0(A)}{1+e^{[r-R(A)]/a}}\qquad \left[\rho _0(A)=\frac{A}{\frac{4}{3}\pi R^3(A)\{1+[\frac{\pi a}{R(A)}]^2\}}\right],$$ (24) with radius $`R(A)=1.12A^{1/3}-0.86A^{-1/3}`$ fm and thickness $`a=0.52`$ fm. The $`\mathrm{\Lambda }`$ wave function is obtained from a Wood-Saxon (W-S) well which exactly reproduces the first two single particle eigenvalues ($`s`$ and $`p`$ levels) measured in $`\mathrm{\Lambda }`$-hypernuclei. ### A Short range correlations and $`\mathrm{\Lambda }`$ wave function A crucial ingredient in the calculation of the decay widths is the short range part of the $`NN`$ and $`\mathrm{\Lambda }N`$ interactions. They are expressed by the functions $`g_{L,T}(q)`$ and $`g_{L,T}^\mathrm{\Lambda }(q)`$, which are reported in the Appendix and contain the Landau parameters $`g^{\prime }`$ and $`g_\mathrm{\Lambda }^{\prime }`$. No experimental information is available on $`g_\mathrm{\Lambda }^{\prime }`$, while many constraints have been set on $`g^{\prime }`$, for example by the well known quenching of the Gamow-Teller resonance. Realistic values of $`g^{\prime }`$, within the framework of the ring approximation, are in the range $`0.6÷0.7`$ . However, in the present context $`g^{\prime }`$ correlates not only p-h pairs but also p-h with 2p-2h states. In order to fix these parameters we shall compare our calculations with the experimental non-mesonic width of $`{}_\mathrm{\Lambda }^{12}`$C. In fig. 3 we see how the total non-mesonic width for carbon depends on the Landau parameters. The rate decreases as $`g^{\prime }`$ increases. This characteristic is well established in RPA. Moreover, fixing $`g^{\prime }`$, there is a minimum for $`g_\mathrm{\Lambda }^{\prime }\simeq 0.4`$ (almost independent of the value of $`g^{\prime }`$). This is due to the fact that for $`g_\mathrm{\Lambda }^{\prime }\lesssim 0.4`$ the longitudinal $`p`$-wave contribution in eq. (11) dominates over the transverse one and the opposite occurs for $`g_\mathrm{\Lambda }^{\prime }\gtrsim 0.4`$. We also remind the reader that the $`s`$-wave interactions are independent of $`g_\mathrm{\Lambda }^{\prime }`$ \[eq. (34)\]. Moreover, the longitudinal $`p`$-wave $`\mathrm{\Lambda }N\to NN`$ interaction \[eq. (32)\] contains the pion exchange plus SRC, while the transverse $`p`$-wave $`\mathrm{\Lambda }N\to NN`$ interaction \[eq. (33)\] only contains repulsive correlations, so with increasing $`g_\mathrm{\Lambda }^{\prime }`$ the $`p`$-wave longitudinal contribution to the width decreases, while the $`p`$-wave transverse part increases.
From fig. 3 we see that there is a broad range of choices of $`g^{\prime }`$ and $`g_\mathrm{\Lambda }^{\prime }`$ values which fit the experimental band. The latter represents the non-mesonic decay width which is compatible with both the BNL and KEK experiments. One should notice that the theoretical curves reported in fig. 3 contain the contribution of the 3-body process; should the latter be neglected (ring approximation), then one could get equivalent results with $`g^{\prime }`$ values smaller than the ones reported in the figure (typically $`\mathrm{\Delta }g^{\prime }\simeq 0.1`$). The phenomenology of the $`(e,e^{\prime })`$ quasi-elastic scattering suggests, in ring approximation, $`g^{\prime }`$ values of the order of 0.7. Here, by taking into account also $`2p2h`$ contributions, we shall use the “equivalent” value $`g^{\prime }=0.8`$, together with $`g_\mathrm{\Lambda }^{\prime }=0.4`$. We note that the values used in ref. , namely $`g^{\prime }=0.615`$ and $`g_\mathrm{\Lambda }^{\prime }=0.62`$, would yield $`\mathrm{\Gamma }_1=1.26`$ and $`\mathrm{\Gamma }_2=0.25`$, adding to a non-mesonic width $`\mathrm{\Gamma }_{NM}=1.51`$, which is 50% larger than the experimental one. Thus the analysis performed here shows that the present data for $`{}_\mathrm{\Lambda }^{12}`$C favour a somewhat different but still reasonable $`g^{\prime }`$ value. We shall now illustrate the sensitivity of our calculation to the $`\mathrm{\Lambda }`$ wave function in $`{}_\mathrm{\Lambda }^{12}`$C. In addition to the W-S wave function that reproduces the $`s`$ and $`p`$ levels, we also use a harmonic oscillator wave function with an “empirical” frequency $`\omega `$ , again obtained from the $`sp`$ energy shift, the W-S wave function of Dover et al. and the microscopic wave function calculated from a non-local self-energy using a realistic $`YN`$ interaction in ref. . The results are shown in table I, where they are compared with the experimental data from BNL and KEK . By construction, the chosen $`g^{\prime }`$ and $`g_\mathrm{\Lambda }^{\prime }`$ reproduce the experimental decay widths using the W-S wave function which gives the right $`s`$ and $`p`$ levels in $`{}_\mathrm{\Lambda }^{12}`$C. We note that it is possible to generate the microscopic wave function of ref. for carbon via a local hyperon-nucleus W-S potential with radius $`2.92`$ fm and depth $`23`$ MeV. Although this potential reproduces fairly well the experimental $`s`$-level for the $`\mathrm{\Lambda }`$ in $`{}_\mathrm{\Lambda }^{12}`$C, it does not reproduce the $`p`$-level. In this work we prefer to use a completely phenomenological $`\mathrm{\Lambda }`$-nucleus potential that can easily be extended to heavier nuclei and reproduces the experimental $`\mathrm{\Lambda }`$ single particle levels as well as possible. Except for $`s`$-shell hypernuclei, where experimental data require $`\mathrm{\Lambda }`$-nucleus potentials with a repulsive core at short distances , the $`\mathrm{\Lambda }`$ binding energies have been well reproduced by W-S potentials. We thus use a W-S potential with fixed diffuseness ($`a=0.6`$ fm) and adjust the radius and depth to reproduce the $`s`$ and $`p`$ $`\mathrm{\Lambda }`$-levels. The parameters of the potential for carbon are $`R=2.27`$ fm and $`V_0=32`$ MeV.
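To make the LDA ingredients concrete, here is a minimal numerical sketch combining the density of eq. (24), the local Fermi momentum of eq. (15), and an eq. (17)-style average; the Gaussian $`\mathrm{\Lambda }`$ wave function and the $`\mathrm{\Gamma }\rho `$ local width are toy placeholders, not the model of Sec. II:

```python
import numpy as np

A = 12
R = 1.12 * A**(1/3) - 0.86 * A**(-1/3)           # fm, radius in eq. (24)
a = 0.52                                          # fm, surface thickness
r = np.linspace(0.0, 10.0, 2000)                  # fm
rho = 1.0 / (1.0 + np.exp((r - R) / a))
rho *= A / np.trapz(4 * np.pi * r**2 * rho, r)    # normalize to A nucleons
kF = (1.5 * np.pi**2 * rho) ** (1/3)              # eq. (15), in fm^-1
print(197.3 * kF[0])                              # ~270 MeV at the center

# Toy eq. (17): fold a placeholder local width (Gamma proportional to rho)
# with a Gaussian Lambda density; b is an illustrative size parameter.
b = 1.8                                           # fm
psi2 = np.exp(-(r / b)**2)
psi2 /= np.trapz(4 * np.pi * r**2 * psi2, r)      # normalize |psi|^2
print(np.trapz(4 * np.pi * r**2 * psi2 * rho / rho[0], r))   # averaged rate
```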
To analyse the results of table I, we note that the microscopic wave function is substantially more extended than all the other wave functions used in the present study. The Dover parameters , namely $`R=2.71`$ fm, $`V_0=28`$ MeV, give rise to a $`\mathrm{\Lambda }`$ wave function that is somewhat more extended than the new W-S one but very similar to that obtained from a harmonic oscillator with a frequency of $`10.9`$ MeV, adjusted to the $`s-p`$ energy shift in carbon. Consequently, the non-mesonic width from Dover’s wave function is very similar to the one obtained from the harmonic oscillator and slightly smaller than the new W-S one. The microscopic wave function predicts the smallest non-mesonic widths due to the more extended $`\mathrm{\Lambda }`$ wave function, which explores regions of lower density and thus has a smaller probability of interacting with one or more nucleons. From table I we also see that, contrary to intuition, the mesonic width is quite insensitive to the $`\mathrm{\Lambda }`$ wave function. On this point we recall that, for fixed $`\mathrm{\Lambda }`$ momentum, the more extended the wave function in $`r`$-space, the larger the mesonic width, since the Pauli blocking effects on the emitted nucleon are reduced. However, when the integral over the $`\mathrm{\Lambda }`$ momenta is performed in LDA \[eq. (18)\], more extended wave functions in $`r`$-space correspond to less extended momentum distributions, which tend to decrease the mesonic width. The two effects tend to cancel each other and $`\mathrm{\Gamma }_M`$ is insensitive to the different wave functions used in the calculation. In summary, different (but realistic) $`\mathrm{\Lambda }`$ wave functions give rise to total decay widths which differ by at most 15%. ### B Decay widths of medium-heavy hypernuclei Using the new W-S wave functions and the Landau parameters $`g^{\prime}=0.8`$, $`g_\mathrm{\Lambda }^{\prime}=0.4`$ we have extended the calculation to heavier hypernuclei. We note that, in order to reproduce the experimental $`s`$ and $`p`$ levels of the hyperon, we must use potentials with nearly constant depth, around $`28÷32`$ MeV, from medium to heavy hypernuclei (the radii and depths of the W-S potentials used are quoted in table II). Our results are shown in table III. We observe that the mesonic rate rapidly vanishes as the mass number $`A`$ increases. This is well known and is related to the decreasing phase space allowed for the mesonic channel, and to the smaller overlap between the $`\mathrm{\Lambda }`$ wave function $`\psi _\mathrm{\Lambda }`$ and the nuclear surface, as $`A`$ increases. The 2-body induced decay is rather independent of the hypernuclear size and amounts to about 15% of the total width. Previous works gave more emphasis to this new channel, without, however, reproducing the experimental results. The total width is also nearly constant with $`A`$, as is already known from experiment. In fig. 4 we compare the results from table III with recent (after 1990) experimental data for non-mesonic decay . Note, however, that the data for nuclei from $`{}_{\mathrm{\Lambda }}^{28}`$Si onwards refer to the total width. As can be seen from table III, $`\mathrm{\Gamma }_M`$($`{}_{\mathrm{\Lambda }}^{28}`$Si)/$`\mathrm{\Gamma }_{NM}`$($`{}_{\mathrm{\Lambda }}^{28}`$Si) $`\simeq 6\times 10^{-2}`$ and this ratio rapidly decreases with $`A`$.
The theoretical results are in good agreement with the data (which, on the other hand, have large error bars) over the whole hypernuclear mass range explored. Moreover, we also see that the saturation of the $`\mathrm{\Lambda }N\rightarrow NN`$ interaction in nuclei is well reproduced. One of the open problems in the study of weak hypernuclear decays is to understand the large experimental value of the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$, which most of the present theories fail to reproduce. Only the quark model of ref. predicts an enhanced ratio, although it cannot describe both mesonic and non-mesonic decays from the same basic quark hamiltonian. However, we must recall that the data for $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ have a large uncertainty and have been analyzed without taking into account the 3-body decay mechanism. The study of ref. showed that, even if the three-body reaction is only about 15% of the total decay rate, this mechanism influences the analysis of the data determining the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$. The energy spectra of neutrons and protons from the non-mesonic decay mechanisms were calculated in ref. . The momentum distributions of the primary nucleons were determined from the Propagator Method and a subsequent Monte Carlo simulation was used to account for the final state interactions. It was shown that the shape of the proton spectrum was sensitive to the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$. In fact, the protons from the three-nucleon mechanism appeared mainly at low energies, while those from the two-body process peaked around 75 MeV. Since the experimental spectra show a fair amount of protons in the low energy region, they would favour a relatively larger three-body decay rate or, conversely, a reduced number of protons from the two-body process. Consequently, the experimental spectra are compatible with values of $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ around $`2÷3`$, in strong contradiction with the present theories. The excellent agreement with the experimental decay rates of medium to heavy hypernuclei obtained here from the Propagator Method with modified parameters makes it worthwhile to explore the predictions for the nucleon spectra. The question is whether this modified model affects the momentum distribution of the primary emitted nucleons strongly enough that good agreement with the experimental proton spectra is obtained without the need for very large values of $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$. We have thus generated the nucleon spectra from the decay of several hypernuclei using the Monte Carlo simulation of ref. , but with our modified $`g^{\prime}`$, $`g_\mathrm{\Lambda }^{\prime}`$ parameters and our more realistic nuclear density and $`\mathrm{\Lambda }`$ wave functions. The spectra obtained for various values of $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$, used as a free parameter in the approach of ref. , are compared with the BNL experimental data in fig. 5. We observe that, although the non-mesonic widths are smaller by about 35% than those of refs. , the resulting nucleon spectra, once normalized to the same total width, are practically identical. The reason is that the ratio $`\mathrm{\Gamma }_2/\mathrm{\Gamma }_1`$ of two-body induced versus one-body induced decay rates is essentially the same in both models (between 0.2 and 0.15 from medium to heavy hypernuclei), and the momentum distributions of the primary emitted protons are also very similar.
As a consequence, the conclusions drawn in ref. still hold and the present calculation would also favour very large values of $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$. Therefore, the origin of the discrepancy between theory and experiment for the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ still needs to be resolved. From the theoretical side, there is still room for improving the numerical simulation of final state interactions. In particular, Coulomb distortions and evaporation processes need to be incorporated. We think that the evaporation process is an important ingredient which increases the nucleon spectra at low energies. This contribution might even be important enough that there is no need for high $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ values. On the experimental side, although new spectra are now available , they have not been corrected for energy losses inside the target or detector, so a direct comparison with the theoretical predictions is not yet possible. Attempts to incorporate these corrections by combining a theoretical model for the nucleon rescattering in the nucleus with a simulation of the energy losses in the experimental set-up are now being pursued . These efforts call for newer, improved theoretical models that incorporate those final state interaction effects missing in ref. . On the other hand, a step forward towards a clean extraction of the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ would be made if the nucleons from the different non-mesonic processes, $`\mathrm{\Lambda }N\rightarrow NN`$ and $`\mathrm{\Lambda }NN\rightarrow NNN`$, were disentangled. Through the measurement of coincidence spectra of the outgoing nucleons it could be possible, in the near future, to split the non-mesonic decay width into its two components $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ and obtain a cleaner measurement of the ratio $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$. ## IV Conclusions Using the Propagator Method in the Local Density Approximation, in this paper we have made a new evaluation of the $`\mathrm{\Lambda }`$ decay widths in nuclei. Special attention has been devoted to the study of the $`NN`$ and $`\mathrm{\Lambda }N`$ short range interactions, and realistic nuclear densities and $`\mathrm{\Lambda }`$ wave functions were used. We have adjusted the parameters that control the short range correlations to reproduce the experimental decay widths of $`{}_{\mathrm{\Lambda }}^{12}`$C. Then the calculation has been extended to heavier hypernuclei, up to $`{}_{\mathrm{\Lambda }}^{208}`$Pb. We reproduce for the first time the experimental non-mesonic decay widths from medium to heavy $`\mathrm{\Lambda }`$-hypernuclei, and the saturation of the $`\mathrm{\Lambda }N\rightarrow NN`$ interaction is observed. The energy spectra of emitted nucleons, calculated using the Propagator Method with modified parameters (describing the energy distributions of primary nucleons) and the Monte Carlo simulation (accounting for the final state interactions), do not change appreciably with respect to those calculated in ref. . The reason is that, in spite of the fact that the non-mesonic decay widths $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ are sizably reduced (by about 35%) with respect to those of ref. , the ratio $`\mathrm{\Gamma }_1/\mathrm{\Gamma }_2`$ is not altered, and the momentum distributions of primary nucleons are very similar to the previous calculation. So, the conclusion drawn in ref.
still holds: a comparison of the calculated spectra with the experimental ones favours $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ ratios around $`2÷3`$ (or higher), in disagreement with the OPE predictions. On the other hand, we have to recall that for a clean experimental extraction of the $`\mathrm{\Gamma }_n/\mathrm{\Gamma }_p`$ ratio it is very important to identify the nucleons which come out of the different non-mesonic processes . ###### Acknowledgements. We would like to thank H. Noumi and H. Outa for discussions and for giving us detailed information about the experiments. We acknowledge financial support from the EU contract CHRX-CT 93-0323. This work is also supported by the MURST (Italy) and the DGICYT contract number PB95-1249 (Spain). ## Spin-isospin $`NN`$ and $`\mathrm{\Lambda }N\rightarrow NN`$ interactions In this appendix we show how the repulsive $`NN`$ and $`\mathrm{\Lambda }N`$ Short Range Correlations (SRC) are implemented in the $`NN\rightarrow NN`$ and $`\mathrm{\Lambda }N\rightarrow NN`$ interactions. The process $`NN\rightarrow NN`$ can be described through an effective potential given by: $$G(r)=g(r)V(r).$$ (25) Here $`g(r)`$ is a 2-body correlation function, which vanishes as $`r\rightarrow 0`$ and goes to 1 as $`r\rightarrow \infty `$, while $`V(r)`$ is a meson exchange potential which in our case contains $`\pi `$ and $`\rho `$ exchange: $`V=V_\pi +V_\rho `$. A practical form for $`g(r)`$ is: $$g(r)=1-j_0(q_cr).$$ (26) With $`q_c\simeq 780`$ MeV one gets a good reproduction of realistic $`NN`$ correlation functions obtained from $`G`$-matrix calculations. The inverse of $`q_c`$ is indicative of the hard core radius of the interaction. Since there are no experimental indications, we use the same correlation momentum for the $`\mathrm{\Lambda }N`$ interaction. On the other hand, we note that $`q_c`$ is not necessarily the same in the two cases, given the different nature of the repulsive forces. Using the correlation function (26) it is easy to obtain the effective interaction, eq. (25), in momentum space. It reads: $$G_{NN}(q)=V_\pi (q)+V_\rho (q)+\frac{f_\pi ^2}{m_\pi ^2}\left\{g_L(q)\widehat{q}_i\widehat{q}_j+g_T(q)(\delta _{ij}-\widehat{q}_i\widehat{q}_j)\right\}\sigma _i\sigma _j\,\boldsymbol{\tau }\cdot \boldsymbol{\tau },$$ (27) where the SRC are embodied in the correlation functions $`g_L`$ and $`g_T`$. The spin-isospin $`NN\rightarrow NN`$ interaction can be separated into spin-longitudinal and spin-transverse parts, as follows: $$G_{NN}(q)=\left\{V_L(q)\widehat{q}_i\widehat{q}_j+V_T(q)(\delta _{ij}-\widehat{q}_i\widehat{q}_j)\right\}\sigma _i\sigma _j\,\boldsymbol{\tau }\cdot \boldsymbol{\tau }\qquad (\widehat{q}_i=q_i/|\boldsymbol{q}|),$$ (28) where $$V_L(q)=\frac{f_\pi ^2}{m_\pi ^2}\left\{\boldsymbol{q}^2F_\pi ^2(q)G_\pi ^0(q)+g_L(q)\right\},$$ (29) $$V_T(q)=\frac{f_\pi ^2}{m_\pi ^2}\left\{\boldsymbol{q}^2C_\rho F_\rho ^2(q)G_\rho ^0(q)+g_T(q)\right\}.$$ (30) In the above, $`F_\rho `$ is the $`\rho NN`$ form factor (eq. (7) with cut-off $`\mathrm{\Lambda }_\rho =2.5`$ GeV), and $`G_\rho ^0=1/(q_0^2-\boldsymbol{q}^2-m_\rho ^2)`$ is the free $`\rho `$ propagator.
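To give a quantitative feeling for the correlation function of Eq. (26), the short sketch below evaluates $`g(r)`$. The only input is $`q_c`$, converted to fm<sup>-1</sup> with ħc = 197.327 MeV fm, and the printout verifies that $`g`$ vanishes at the origin and tends to 1 at large $`r`$, with $`1/q_c\simeq 0.25`$ fm setting the hard-core scale.

```python
import numpy as np
from scipy.special import spherical_jn

HBARC = 197.327                  # MeV fm
qc = 780.0 / HBARC               # correlation momentum, fm^-1

def g(r):
    """Two-body correlation function of Eq. (26): g(r) = 1 - j0(qc*r)."""
    return 1.0 - spherical_jn(0, qc * r)

for r in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(f"g({r:4.2f} fm) = {g(r):+.3f}")
print(f"hard-core scale 1/qc = {1.0 / qc:.2f} fm")
```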
The $`\mathrm{\Lambda }N\rightarrow NN`$ effective interaction splits into a $`p`$-wave (again longitudinal and transverse) part: $$G_{\mathrm{\Lambda }N\rightarrow NN}(q)=\left\{\stackrel{~}{P}_L(q)\widehat{q}_i\widehat{q}_j+\stackrel{~}{P}_T(q)(\delta _{ij}-\widehat{q}_i\widehat{q}_j)\right\}\sigma _i\sigma _j\,\boldsymbol{\tau }\cdot \boldsymbol{\tau },$$ (31) with: $$\stackrel{~}{P}_L(q)=\frac{f_\pi }{m_\pi }\frac{P}{m_\pi }\left\{\boldsymbol{q}^2F_\pi ^2(q)G_\pi ^0(q)+g_L^\mathrm{\Lambda }(q)\right\},$$ (32) $$\stackrel{~}{P}_T(q)=\frac{f_\pi }{m_\pi }\frac{P}{m_\pi }g_T^\mathrm{\Lambda }(q),$$ (33) and an $`s`$-wave part: $$\stackrel{~}{S}(q)=\frac{f_\pi }{m_\pi }S\left\{F_\pi ^2(q)G_\pi ^0(q)-\stackrel{~}{F}_\pi ^2(q)\stackrel{~}{G}_\pi ^0(q)\right\}\boldsymbol{q}.$$ (34) Form factors and propagators with a tilde are calculated with the replacement $`\boldsymbol{q}^2\rightarrow \boldsymbol{q}^2+q_c^2`$. $`C_\rho `$ is given by the expression: $$C_\rho =\frac{f_\rho ^2}{m_\rho ^2}\left[\frac{f_\pi ^2}{m_\pi ^2}\right]^{-1}.$$ (35) The expressions for the correlation functions are the following: $$g_L(q)=-\left\{\boldsymbol{q}^2+\frac{1}{3}q_c^2\right\}\stackrel{~}{F}_\pi ^2(q)\stackrel{~}{G}_\pi ^0(q)-\frac{2}{3}q_c^2C_\rho \stackrel{~}{F}_\rho ^2(q)\stackrel{~}{G}_\rho ^0(q),$$ (36) $$g_T(q)=-\frac{1}{3}q_c^2\stackrel{~}{F}_\pi ^2(q)\stackrel{~}{G}_\pi ^0(q)-\left\{\boldsymbol{q}^2+\frac{2}{3}q_c^2\right\}C_\rho \stackrel{~}{F}_\rho ^2(q)\stackrel{~}{G}_\rho ^0(q),$$ (37) $$g_L^\mathrm{\Lambda }(q)=-\left\{\boldsymbol{q}^2+\frac{1}{3}q_c^2\right\}\stackrel{~}{F}_\pi ^2(q)\stackrel{~}{G}_\pi ^0(q),$$ (38) $$g_T^\mathrm{\Lambda }(q)=-\frac{1}{3}q_c^2\stackrel{~}{F}_\pi ^2(q)\stackrel{~}{G}_\pi ^0(q).$$ (39) Using the set of parameters: $$q_c=780\,\mathrm{MeV},\quad \mathrm{\Lambda }_\pi =1.2\,\mathrm{GeV},\quad \mathrm{\Lambda }_\rho =2.5\,\mathrm{GeV},\quad f_\pi ^2/4\pi =0.08,\quad C_\rho =2,$$ (40) at zero energy and momentum we have: $$g_L(0)=g_T(0)=0.615,\quad g_L^\mathrm{\Lambda }(0)=g_T^\mathrm{\Lambda }(0)=0.155.$$ (41) However, we wish to keep the zero energy and momentum limits of $`g_{L,T}`$ and $`g_{L,T}^\mathrm{\Lambda }`$ as free parameters; thus we replace the previous functions by: $$g_{L,T}(q)\rightarrow g^{\prime}\frac{g_{L,T}(q)}{g_{L,T}(0)},\quad g_{L,T}^\mathrm{\Lambda }(q)\rightarrow g_\mathrm{\Lambda }^{\prime}\frac{g_{L,T}^\mathrm{\Lambda }(q)}{g_{L,T}^\mathrm{\Lambda }(0)}.$$ (42)
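As a numerical cross-check of Eqs. (36)-(41), the sketch below evaluates the four correlation functions at zero energy and momentum with the parameter set (40). The monopole form factor $`F(q)=(\mathrm{\Lambda }^2-m^2)/(\mathrm{\Lambda }^2+\boldsymbol{q}^2)`$ at $`q_0=0`$ and the meson masses $`m_\pi =138`$ MeV, $`m_\rho =770`$ MeV are our assumptions (eq. (7) is not reproduced in this appendix); with these choices the quoted limits 0.615 and 0.155 are indeed recovered. The last lines implement the rescaling of Eq. (42) with the values adopted in the text.

```python
import numpy as np

# parameter set of Eq. (40); energies and momenta in MeV
qc, m_pi, m_rho = 780.0, 138.0, 770.0        # meson masses assumed
L_pi, L_rho, C_rho = 1200.0, 2500.0, 2.0

def F2G(q2, m, L):
    """Tilded combination [F~(q)]^2 G~0(q) at q0 = 0, i.e. with the
    replacement q^2 -> q^2 + qc^2; monopole form factor assumed."""
    q2t = q2 + qc**2
    F = (L**2 - m**2) / (L**2 + q2t)
    G0 = -1.0 / (q2t + m**2)                 # free propagator at q0 = 0
    return F**2 * G0

def g_L(q2):      # Eq. (36)
    return (-(q2 + qc**2 / 3.0) * F2G(q2, m_pi, L_pi)
            - (2.0 / 3.0) * qc**2 * C_rho * F2G(q2, m_rho, L_rho))

def g_T(q2):      # Eq. (37)
    return (-(qc**2 / 3.0) * F2G(q2, m_pi, L_pi)
            - (q2 + 2.0 * qc**2 / 3.0) * C_rho * F2G(q2, m_rho, L_rho))

def g_L_lam(q2):  # Eq. (38)
    return -(q2 + qc**2 / 3.0) * F2G(q2, m_pi, L_pi)

def g_T_lam(q2):  # Eq. (39)
    return -(qc**2 / 3.0) * F2G(q2, m_pi, L_pi)

print(f"g_L(0) = {g_L(0.0):.3f}   g_T(0) = {g_T(0.0):.3f}")              # ~0.615
print(f"g_L^L(0) = {g_L_lam(0.0):.3f}   g_T^L(0) = {g_T_lam(0.0):.3f}")  # ~0.155

# rescaling of Eq. (42) with g' = 0.8 and g'_Lambda = 0.4
g_prime, g_prime_lam = 0.8, 0.4
g_L_eff = lambda q2: g_prime * g_L(q2) / g_L(0.0)
g_L_lam_eff = lambda q2: g_prime_lam * g_L_lam(q2) / g_L_lam(0.0)
```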
# Statistical properties of genealogical trees ## Acknowledgements Interesting discussions with Ugo Bastolla are gratefully acknowledged. SCM acknowledges support from the Alexander von Humboldt Foundation (Germany) and from Fundación Antorchas (Argentina). ## Figure captions 1. Probability of ancestor repetitions in the genealogical tree of King Edward III . The continuous and dashed lines represent the results of simulations of $`F(r)`$ for our model in a closed population with $`2^{11}`$ and $`2^{12}`$ individuals, respectively. Averages have been performed over the first 10 generations of $`10^3`$ independent trees. 2. Distribution $`H(r,n_g)`$ of $`r`$ repetitions after $`n_g`$ generations ($`H(0,n_g)`$ is not shown). The distribution changes, after roughly $`\mathrm{log}N`$ generations, from a decreasing function of $`r`$ to a distribution with a maximum. The generations shown are $`n_g=9,\mathrm{\hspace{0.33em}13},\mathrm{\hspace{0.33em}15},\mathrm{\hspace{0.33em}17},\mathrm{\hspace{0.33em}19},\mathrm{\hspace{0.33em}21},`$ and $`23`$ for a population with $`N=2^{15}`$. We have averaged over 100 independent runs. 3. Data collapse for the rescaled distribution of repetitions $`P(w)`$ after the transient period. Averages have been performed over $`10^3`$ independent trees for a population size $`N=2^{20}`$. 4. Dependence of $`S(n_g)`$ on the generation $`n_g`$ for a population with $`N=2^{15}`$. The numerical asymptotic value is $`S(n_g\rightarrow \infty )\simeq 0.2031`$. The bold dotted line is the predicted theoretical value $`S=g(\infty )=0.20318787\mathrm{\dots }`$. In the inset, we represent the first ten moments $`\langle w^n\rangle `$ of the distribution $`P(w)`$. The continuous line corresponds to numerical results, while solid circles stand for the theoretical predictions.
# Measuring Galactic Extinction: A Test ## 1 Introduction Extinction and reddening caused by interstellar dust affect the detected emission from most astronomical sources in the sky. In most galactic and extragalactic studies, especially when studying so-called “standard candles,” the effects of dust on the source’s detected brightness and color need to be taken into account. Hence it is very desirable to know the extinction (and reddening) anywhere on the sky. Recently Schlegel, Finkbeiner and Davis (1998; hereafter SFD) published an all-sky reddening map, based on satellite observations of far-infrared emission (at 100 and 240 µm) from dust. This new reddening map will surely be used by many researchers seeking to apply reddening “corrections” in their work, so it is very important to verify its accuracy and reliability. The SFD map uses data obtained by the DIRBE (Diffuse InfraRed Background Experiment) on board COBE combined with ISSA (IRAS Sky Survey Atlas) images. The COBE/DIRBE experiment had better control of absolute calibration than did IRAS, but a larger beam (0.7°, as compared with $`\sim `$ 5′ for IRAS). SFD use the DIRBE data to calibrate the IRAS/ISSA images, and, after sophisticated processing, they obtain a full-sky map of the 100 µm emission from dust, with point sources and the zodiacal light foreground removed (see SFD for a full description of their elaborate foreground zodiacal light subtraction technique, point source extraction and overall data reduction). Their resulting reddening map is more accurate and has better resolution ($`\sim `$ 6.1′) than the previously existing all-sky reddening map of Burstein and Heiles (1978, 1982). In order to test the extinction map derived from the SFD analysis, we compare it with Arce & Goodman’s (1999; hereafter AG) recent extinction study of a region of the Taurus dark cloud complex. The AG study uses four different techniques to measure extinction along two constant-Right Ascension “cuts” several degrees in length. The four techniques in AG utilize: 1) the color excess of background stars for which spectral types are known; 2) the ISSA 60 and 100 µm images; 3) star counts; and 4) an optical ($`V`$ and $`R`$) version of the average color excess method used by Lada et al. (1994). The study finds that all four methods give generally similar results, and concludes that all four techniques are reliable ways to measure extinction in regions where $`A_V\lesssim 4`$ mag. In this Letter we compare the extinction map derived from SFD to the AG extinction map based on the ISSA 60 and 100 µm images in the Taurus region. This comparison provides a test of the reliability of the SFD reddening map in regions of extinction ($`A_V`$) higher than 0.5 mag.
When AG compare $`A_{V_{ISSA}}`$ with the extinction measured with star counting techniques and the average color excess method (techniques 3 and 4 above), they do so by plotting extinction versus declination. The extinction values shown in Figures 3 and 5 of AG are actually traces of the extinction averaged over the $`\sim `$10′ span (in the R.A. direction) of their CCD fields. For consistency we consider the same kind of average, and we obtain the SFD reddening ($`E(B-V)_{SFD}`$) for the two 10′ wide constant-R.A. cuts of AG and average $`E(B-V)_{SFD}`$ over Right Ascension. In order to convert from the color excess value given by the SFD maps to visual extinction, we use the equation $`A_V=R_VE(B-V)`$, where $`R_V`$, the ratio of total-to-selective extinction, is equal to 3.1 (SFD; Kenyon et al. 1994; Vrba & Rydgren 1985). In Figure 1 we plot extinction versus declination for the two constant-R.A. cuts in the Taurus dark cloud complex region. In addition to $`A_{V_{SFD}}`$ and $`A_{V_{ISSA}}`$, we show the traces of the extinction obtained with the average color excess and star count methods of AG, for reference. We will concentrate our discussion on the ISSA (AG) and SFD methods. It can be seen that these two methods trace similar structure in both cuts. The extinction peaks associated with the IRAS core Tau M1 (Wood et al. 1994), the dark cloud L1506, the dark cloud B216-217, and the IRAS cores Tau B5 and Tau B11 (Wood et al. 1994) are easily detected. The 1 $`\sigma `$ error (not including systematics) of $`A_{V_{ISSA}}`$ is 0.12 mag (AG). SFD quote an uncertainty of 16% in their reddening estimates. For the most part, in regions unassociated with a pronounced peak in extinction, $`A_{V_{SFD}}`$ is a factor of about 1.3 to 1.5 larger than $`A_{V_{ISSA}}`$. It is also important to note that in regions where there is a peak in the extinction, the value of $`A_{V_{ISSA}}`$ is closer to that of $`A_{V_{SFD}}`$ than in the rest of the trace, and at some of the peaks $`A_{V_{ISSA}}`$ exceeds $`A_{V_{SFD}}`$. There are two places where the two traces appear to show very different extinction structures. In both of these places $`A_{V_{ISSA}}`$ shows a steep dip: one is near the peak associated with B216-217 in cut 2 (around declination 26.9°), and the other is near the end of the cut, around declination 28.3° (see Figure 1). These two regions coincide with the positions of IRAS point sources, and will be discussed in more detail further on. ## 3 Comparison between the two methods SFD use data from the IRAS and DIRBE experiments to construct a full-sky map of the Galactic dust based on its far-infrared emission. The IRAS data are used as a source of 100 µm flux images, and the DIRBE data are used for absolute calibration and as a source of 240 µm data. SFD derive the dust color temperature using the ratio of 100 to 240 µm emission from DIRBE (note that deriving the dust temperature from a long-wavelength color ratio, such as 100/240 µm, is superior to using the 60/100 µm ratio, because the effects of both point sources and transient heating of small grains are minimized), and then use this dust color temperature to convert their ISSA-based maps of 100 µm emission to maps of dust column density. As a result of this procedure, the SFD ISSA-based 100 µm emission map has a spatial resolution of 6.1′, but their dust color temperature map has a resolution of $`\sim `$ 1.4° (Schlegel 1998).
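The logic of this two-step conversion (a color temperature from a flux ratio, then a column from the 100 µm intensity) can be sketched in a few lines. The snippet below is schematic rather than the actual SFD pipeline: it assumes a single-temperature modified blackbody with emissivity proportional to $`\nu ^\beta `$, with $`\beta =2`$ chosen here purely for illustration.

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23      # SI constants
NU100, NU240 = C / 100e-6, C / 240e-6         # Hz

def planck(nu, T):
    """Planck function B_nu(T), W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def color_temperature(r100_240, beta=2.0):
    """Dust color temperature from the 100/240 um intensity ratio,
    assuming I_nu proportional to nu^beta * B_nu(T)."""
    f = lambda T: ((NU100 / NU240)**beta * planck(NU100, T)
                   / planck(NU240, T) - r100_240)
    return brentq(f, 5.0, 100.0)              # K, bracketed root

def column_proxy(i100, T):
    """Quantity proportional to the dust column: tau_100 ~ I_100 / B_nu(T)."""
    return i100 / planck(NU100, T)

T = color_temperature(1.2)                    # example ratio, arbitrary
print(f"T_dust = {T:.1f} K, column proxy = {column_proxy(1e-7, T):.3e}")
```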
To transform from the map proportional to dust column (hereafter $`\tau _{SFD}`$) to a reddening ($`E(B-V)`$) map, SFD use the correlation between the intrinsic B-V color of elliptical galaxies and the Mg<sub>2</sub> line strength. The Mg<sub>2</sub> line strength of an elliptical galaxy correlates well with its intrinsic B-V color, so the Mg<sub>2</sub> line index can be used along with photometric measurements of the galaxy in order to obtain a reasonably accurate measurement of its reddening (Faber et al. 1989). SFD use a procedure in which the residual of the B-V color versus the Mg<sub>2</sub> line index for 389 elliptical galaxies is correlated with the estimated reddening from their maps, using a Spearman rank method. The resulting fit is then used by SFD to convert from $`\tau _{SFD}`$ to reddening ($`E(B-V)_{SFD}`$) at each pixel. We then convert reddening to extinction, using the relation $`A_{V_{SFD}}=R_VE(B-V)_{SFD}`$, with $`R_V=3.1`$ (SFD). In order to obtain their ISSA-based extinction, $`A_{V_{ISSA}}`$, AG begin by using the ISSA 60 and 100 µm images to obtain a dust color temperature map from the flux ratio at each pixel. This color temperature is then used to convert the observed 100 µm flux to a 100 µm optical depth, $`\tau _{100}`$. As in SFD, the next conversion, from dust opacity to extinction, is tied to a separate technique of obtaining extinction. AG chose to use a method similar to that described in Wood et al. (1994), which itself is ultimately based on work by Jarrett et al. (1989). Jarrett et al. (1989) correlate the 60 µm optical depth ($`\tau _{60}`$) with the extinction ($`A_V`$) obtained from star counts, and Wood et al. (1994) multiply the Jarrett et al. $`\tau _{60}`$ values by 100/60 to derive a conversion from $`\tau _{100}`$ to $`A_V`$, assuming low optical depth. Thus, using Wood et al.’s technique, AG’s conversion of $`\tau _{100}`$ to $`A_V`$ ultimately rests on Jarrett et al.’s correlation of 60 µm optical depth with star counts. As described in AG, this $`\tau _{60}`$–$`A_V`$ correlation is very tight for $`A_V<4`$ mag, so we ascribe very little of the uncertainty in $`A_{V_{ISSA}}`$ to this conversion. ## 4 The cause of the discrepancy Arce & Goodman (1999) obtain the extinction toward the Taurus region using four different techniques. All four give similar results in terms of the absolute value and overall structure of the extinction. Most importantly, AG find that $`A_{V_{ISSA}}`$ agrees well with extinction values obtained by measuring the color excess of background stars for which spectral types are known, which is the most direct and accurate way to measure reddening (see Figure 2 in AG). So, the question now is: why does the SFD method give extinctions which differ from the four determinations of extinction made by AG? Absolute values. Although the data reduction and the zero point determination of the emission are very elaborate and accurate in the SFD method, the normalization of the reddening per unit flux density (the conversion between $`\tau _{SFD}`$ and $`E(B-V)_{SFD}`$) seems to overestimate the reddening in regions of high dust opacity. As explained above, SFD use the correlation between intrinsic B-V color and the Mg<sub>2</sub> line index in elliptical galaxies to convert from dust column density to reddening. It can be seen in Figure 6 of SFD that 90% of the 389 elliptical galaxies they use for the B-V vs. Mg<sub>2</sub> regression have $`E(B-V)_{SFD}`$ values of less than 0.1 mag.
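To make the calibration step concrete, a schematic version of it, with hypothetical arrays standing in for the galaxy measurements, might look as follows. The combination of a linear Mg<sub>2</sub>-color fit, a Spearman rank test and a least-squares slope is only our reading of the procedure described above, not SFD’s actual code.

```python
import numpy as np
from scipy.stats import spearmanr

def calibrate(bv, mg2, tau_map):
    """Tie a dust-column map to E(B-V) with elliptical-galaxy colors.
    bv, mg2, tau_map are hypothetical arrays: observed B-V colors,
    Mg2 line indices, and the map's column value at each galaxy."""
    # intrinsic color predicted from the Mg2 index (linear fit)
    p = np.polyfit(mg2, bv, 1)
    residual = bv - np.polyval(p, mg2)        # color-excess estimate
    # significance of the residual vs. map-column correlation
    rho, pval = spearmanr(residual, tau_map)
    # conversion factor E(B-V) per unit map column (least squares)
    slope = np.dot(residual, tau_map) / np.dot(tau_map, tau_map)
    return slope, rho, pval

# exercise the function on fabricated numbers (for illustration only)
rng = np.random.default_rng(0)
mg2 = rng.uniform(0.20, 0.35, 389)
tau = rng.exponential(0.05, 389)
bv = 1.1 * mg2 + 0.9 + 1.0 * tau + rng.normal(0.0, 0.02, 389)
print(calibrate(bv, mg2, tau))
```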
Moreover, it is clear from their Figure 6 that the fit is excellent for values of $`E(B-V)_{SFD}`$ less than 0.15 mag, but for $`E(B-V)_{SFD}>0.15`$ mag the fit starts to diverge for the few points that exist. Galaxies with $`E(B-V)_{SFD}>0.15`$ mag ($`A_V>0.5`$ mag) seem to follow a trend in which the reddening values in regions of high dust opacity are overestimated. SFD state that the slight trend in the residual is not statistically significant, but that may be due to the fact that there are so few points with $`E(B-V)_{SFD}>0.15`$ mag. Figure 6 in SFD shows only two points (galaxies) that have $`E(B-V)_{SFD}>0.3`$ mag. We use the point with $`E(B-V)_{SFD}=0.32`$ mag to assess the amount of overestimation by the SFD method in high reddening regions. Using Figure 6 in SFD and $`A_V=R_VE(B-V)`$, with $`R_V=3.1`$, we find that for an extinction of around 1 mag the SFD method would overestimate the extinction, giving 1.31 magnitudes. In fact, by comparing $`A_{V_{SFD}}`$ to $`A_{V_{ISSA}}`$, we find the overestimation by SFD can be even more than that: when $`A_{V_{ISSA}}`$ is 1, $`A_{V_{SFD}}`$ is typically 1.5. Thus we are convinced that SFD overestimate the reddening value along lines of sight where the extinction is more than 0.5 mag. Such overestimation is due to the fact that in the sample of 389 galaxies most have $`E(B-V)<0.1`$ and very few have $`E(B-V)>0.15`$. This results in an accurate conversion between 100 µm emission and reddening for regions with very low extinction ($`E(B-V)<0.1`$), but a less accurate conversion where $`E(B-V)>0.15`$ ($`A_V>0.5`$ mag). One could argue that the SFD method overestimates the extinction because we overestimate the value of $`R_V`$ needed to convert $`E(B-V)`$ to $`A_V`$. We believe that is not so, since several independent studies indicate that the value of $`R_V`$ in the Taurus dark cloud complex is around 3.1 and that it stays constant throughout the region for lines of sight in which $`A_V<3`$ mag (Kenyon et al. 1994; Vrba & Rydgren 1985). Furthermore, $`R_V=3.1`$ gives very good agreement between $`A_{V_{ISSA}}`$ and the extinction obtained using the color excess of background stars for which spectral types are known (AG). Structure. In Section 2, we noted that there are certain places where the traces of $`A_{V_{ISSA}}`$ and $`A_{V_{SFD}}`$ differ not only in value but also in structure. The most dramatic “structural” differences correspond to places where the cuts in Figure 1 intercept IRAS point sources. SFD remove point sources from the ISSA images before using them, while AG do not. Both AG and SFD assume a single dust temperature for each line of sight. In AG’s determination of $`A_{V_{ISSA}}`$, the high 60 µm flux from IRAS point sources translates into dust with a high color temperature, which in turn causes an artificial dip in the extinction (see Wood et al. 1994 and references therein for more on this effect). We can see that SFD have done a good job at removing point sources by noticing that where $`A_{V_{ISSA}}`$ has artificial dips, $`A_{V_{SFD}}`$ traces the real structure (also traced by the other methods in AG). Thus $`A_{V_{SFD}}`$, unlike $`A_{V_{ISSA}}`$, is free of any unphysical extinction features due to IRAS point sources. Less dramatic, but more curious, structural discrepancies between the two methods are seen near the extinction peaks in Figure 1.
Although $`A_{V_{SFD}}`$ is in general larger than $`A_{V_{ISSA}}`$ by a factor of 1.3 to 1.5, it is important to realize that at some peaks $`A_{V_{SFD}}`$ is larger by a factor of less than 1.3 and at others it is smaller than $`A_{V_{ISSA}}`$. In addition, the Gaussian-looking extinction peaks traced by $`A_{V_{SFD}}`$ are wider than those traced by $`A_{V_{ISSA}}`$. Gaussian fits to the peaks associated with Tau M1 and L1506 for both traces in cut 1 show that the peaks traced by $`A_{V_{SFD}}`$ have FWHMs wider than those of the peaks traced by $`A_{V_{ISSA}}`$ by a factor of 1.3. This suggests that in regions of steep extinction gradients the reddening obtained from the SFD map has a lower resolution than the quoted 6.1′. We believe that this “smearing” in the SFD maps is caused by the difference in resolution between the ISSA flux maps (6.1′) and the COBE-limited color temperature maps (1.4°) employed by SFD. In Figure 2 we plot the 100 µm emission ($`I_{100}`$) versus declination for a section of cut 1, where the rises in $`I_{100}`$ associated with Tau M1 and L1506 are clearly seen. The plot shows the traces of $`I_{100}`$ obtained by SFD ($`I_{100_{SFD}}`$) and by AG ($`I_{100_{ISSA}}`$) averaged over the $`\sim `$ 10′ width (in R.A.) of the cut. It can clearly be seen that both traces are essentially identical, and that the peaks have the same width and height. Thus it is only after the dust color temperature and $`I_{100}`$ are combined to obtain the extinction that the SFD method loses spatial resolution. Regions of steep extinction gradients are likely to have temperature gradients, but if the spatial resolution of the dust color temperature map is 1.4° then it is unlikely that these gradients would be detected. If the temperature gradients are not accounted for when calculating reddening, the result is areas with lower effective resolution. Therefore, the fact that SFD use a dust color temperature map with a spatial resolution a factor of 14 coarser than that of the $`I_{100}`$ map results in a reddening map which does not have a constant spatial resolution of 6.1′. Unfortunately the DIRBE instrument has a limiting resolution of $`\sim `$ 1°, so it is not possible to do better with data taken with that instrument. ## 5 Conclusion We test the COBE/IRAS all-sky reddening map of Schlegel, Finkbeiner & Davis (1998) using the extinction study of Arce & Goodman (1999). In their study, AG use four different techniques to study the extinction in a region of the Taurus dark cloud complex. All four techniques give very similar results in terms of the value of $`A_V`$ and the structure in the extinction. Thus the results of AG can be considered a faithful representation of the extinction in this region, and can be used to test the reliability of the SFD reddening map. We compare the extinction obtained by AG using the ISSA 60 and 100 µm images with the SFD reddening map. Our results show that in general the SFD method gives extinction values a factor of 1.3 to 1.5 larger than the extinction obtained by AG. We conclude that SFD overestimate the reddening value along lines of sight where $`A_V>0.5`$ mag. We expect this overestimation is caused by the fact that in the sample of 389 galaxies used to calculate a conversion from dust column density to $`E(B-V)`$, 90% of the galaxies have low reddening values ($`E(B-V)<0.1`$) and very few ($`\sim `$4%) have high reddening ($`E(B-V)>0.15`$) values.
The lack of galaxies in high reddening regions results in an inaccurate conversion between dust column and reddening for lines of sight with $`E(B-V)>0.15`$. This bias should be studied in other regions of the sky in order to see how general this trend is. For now we advise that caution be taken when using the SFD all-sky reddening map to obtain reddening (and extinction) values for regions with $`E(B-V)>0.15`$ ($`A_V>0.5`$ mag), as the value given by SFD could be larger than the real value. The behavior of $`A_{V_{SFD}}`$ near steep gradients in extinction seems to be different from that in the overall “smoother” extinction regions. The difference between $`A_{V_{SFD}}`$ and $`A_{V_{ISSA}}`$ decreases near extinction peaks, and at some peaks $`A_{V_{SFD}}`$ even becomes smaller than $`A_{V_{ISSA}}`$. In addition, the peaks in $`A_{V_{SFD}}`$ are wider than the $`A_{V_{ISSA}}`$ peaks. We attribute this behavior to the fact that SFD use a dust temperature map with a spatial resolution a factor of 14 coarser than that of the 100 µm intensity map used to obtain the reddening map. The poor resolution of the temperature map results in a reddening map with reduced spatial resolution near extinction peaks. We thank David Schlegel, Douglas Finkbeiner and Marc Davis for their very useful comments on this work, and Krzysztof Stanek for pointing out the SFD paper to us and for sharing his work on this topic. We also thank the referee for prompt and very helpful comments.
# A METHOD TO DETERMINE THE IN-AIR SPATIAL SPREAD OF CLINICAL ELECTRON BEAMS ## I INTRODUCTION Nowadays, electron beams have become a widely used tool in the radiation treatment of cancer. However, one of the major difficulties in the daily clinical procedure is the accurate calculation of the electron dose distribution. At present, most treatment planning determinations are based on the pencil beam model, which assumes that broad beams are composed of an infinite number of pencil beams, each one spreading as predicted by the Fermi-Eyges multiple scattering theory. In this approach the pencil beams are assumed to show Gaussian distribution profiles in both the spatial and the angular coordinates at any point of their trajectory, and the corresponding spatial and angular spread parameters are therefore basic ingredients for characterizing the beams. The purpose of this work is to determine the in-air spatial spread parameter, on the beam axis and for various distances to the source. The usual techniques to obtain the spatial and angular spreads use different relations between the spatial spread and the penumbra width (see e.g. Refs. ). However, these procedures present a major problem because of the different definitions of the penumbra width which can be found in the literature. In Refs. , it is considered to be the average spatial separation between the 10 and 90% isodose levels. The ICRU recommended the same definition but for the 20 and 80% isodose levels. Finally, in Refs. the penumbra width is obtained as the distance between the intersections of the tangent at the 50% point with the 0% and 100% dose levels. In all cases the measurements refer to a normalized beam profile. The main problem is that the values obtained using the different procedures described above can differ by more than 30%, and therefore the concept of penumbra width is itself misleading. Besides, the procedure followed in these works presents additional error sources which are neither considered nor discussed. The errors in the measured dose profiles, in the determination of the 50% dose point and in the calculation of the corresponding tangent are usually forgotten. All these errors add up, increasing the uncertainty of the penumbra width, with the consequent ambiguities in the quantities calculated from it . A different approach to the problem is followed by McKenzie and Werner, Khan, and Deibel . In their work these authors measure the electron dose distribution behind the edge of a lead block covering a flat homogeneous phantom in the half plane $`x<0`$, as in Ref. , but they do not determine the penumbra width. Instead, they obtain the spatial spread from Gaussian functions fitted to the corresponding strip beam profiles. These are calculated by differentiating the measured dose profiles by means of a two-point difference formula. Unfortunately, each step in this method (the measurement of the dose profiles, the method used to obtain the strip profiles and the fitting procedure) introduces an error in the results which is not considered at all. All these errors propagate to the final results and it is not possible for the reader to know their accuracy. In this work we present a new procedure to obtain the spatial spread which, as in Refs. , is based on measurements performed in the central area of the beam. However, our method uses a simple analytical formalism, is easily reproducible and has controlled sources of uncertainty. The organization of the paper is as follows. In Sect.
II we describe the theory underlying the method. Sect. III is devoted to the materials and methods used in the experimental part. In Sect. IV we discuss the results. Finally, we give our conclusions in Sect. V. ## II THEORY As mentioned above, our procedure to determine the in-air spatial spread is based on the measurement of the dose profiles generated by an electron beam below a lead block partially collimating it. In this section we justify our methodology. Let us consider the $`xyz`$-coordinate system depicted in Fig. 1, to which the measurements will be referred. Let us suppose an infinitely broad beam parallel to the $`z`$-axis and traveling in the direction of increasing $`z`$. To calculate the dose deposited by the beam at a given point we use the pencil beam model. In this approach, it is assumed that broad beams are formed as the sum of an infinite number of parallel pencil beams, each of them producing dose profiles of Gaussian type at each $`z`$. Thus, in our case, the ray centered at the point $`(x^{\prime},y^{\prime})`$ in Fig. 1 gives rise to a profile: $$D_{(x^{\prime},y^{\prime})}(z,x,y)=D_{\infty }\frac{1}{\sqrt{2\pi }\sigma _x(z)}\mathrm{exp}\left[-\frac{(x-x^{\prime})^2}{2\sigma _x^2(z)}\right]\frac{1}{\sqrt{2\pi }\sigma _y(z)}\mathrm{exp}\left[-\frac{(y-y^{\prime})^2}{2\sigma _y^2(z)}\right],$$ (1) where $`D_{\infty }`$ is a normalization constant which actually gives the broad beam electron dose. The parameters $`\sigma _x(z)`$ and $`\sigma _y(z)`$ denote the spatial spreads in the $`x`$ and $`y`$ directions, respectively, at a given $`z`$. Let us now assume a semi-infinite lead block partially collimating the beam, as shown in Fig. 1. We call $`X_\mathrm{e}`$ the $`x`$-coordinate of the edge of the lead block in our reference system and we fix the $`xy`$-plane at $`z=0`$, coinciding with the lower edge of the collimator. We can calculate the total dose distribution in the $`xy`$-plane at a given position $`z>0`$ as: $$D(z,x,y)=\int _{X_\mathrm{e}}^{\infty }dx^{\prime}\int _{-\infty }^{\infty }dy^{\prime}\,D_{(x^{\prime},y^{\prime})}(z,x,y)$$ (2) $$=D_{\infty }\int _{X_\mathrm{e}}^{\infty }dx^{\prime}\,\frac{1}{\sqrt{2\pi }\sigma _x(z)}\mathrm{exp}\left[-\frac{(x-x^{\prime})^2}{2\sigma _x^2(z)}\right]$$ (3) $$=D_{\infty }\int _{-\infty }^{x}dt\,\frac{1}{\sqrt{2\pi }\sigma _x(z)}\mathrm{exp}\left[-\frac{(t-X_\mathrm{e})^2}{2\sigma _x^2(z)}\right]$$ (4) $$=D_{\infty }\,P(x;X_\mathrm{e},\sigma _x(z)),$$ (5) where: $$P(x;m,\sigma )=\int _{-\infty }^{x}dt\,\frac{1}{\sqrt{2\pi }\sigma }\mathrm{exp}\left[-\frac{(t-m)^2}{2\sigma ^2}\right]=\frac{1}{2}\mathrm{erfc}\left(\frac{m-x}{\sqrt{2}\sigma }\right)$$ (6) is the cumulative distribution function corresponding to a normal distribution centered at $`m`$ and with standard deviation $`\sigma `$, and erfc stands for the complementary error function . It is important to note that the dose distributions given by Eq. (5) have the same centroid $`X_\mathrm{e}`$, independently of the $`z`$-value at which they are measured. This is so because the beam we have considered up to now is parallel to the $`z`$-axis of our reference system. Eq. (5) is valid only when the measurement plane is irradiated by a uniform, semi-infinite broad beam. In actual experiments this is not the case and it is necessary to consider the corresponding corrections. First of all, it is obvious that the actual beam is finite and, as a consequence, it exhibits a physical end in the open (not collimated) area.
This produces a certain distortion in the dose profiles, which will differ from the Gaussian shape expected for infinitely broad beams. In order to minimize these differences, we focus our attention on the data acquired below the lead collimator. There, we expect to be sufficiently far away from this physical end of the beam, and we can assume that the Gaussian approach is reasonable enough to describe the profiles. On the other hand, the finite dimensions of the source, together with the fact that the distance from the source to the measuring plane is not infinite, make the actual beam diverge. This gives rise (see e.g. Ref. ) to two effects. The first one is that the constant $`D_{\infty }`$ in Eq. (5) depends on the $`z`$-value. Then the dose at a given point is: $$D(z,x,y)=D_{\infty }(z)P(x;X_\mathrm{e},\sigma _x(z)).$$ (7) For electron beams generated in linear accelerators (LINAC), it is possible, in practice, to define a point virtual source. As a consequence, the dependence of $`D_{\infty }`$ on $`z`$ must satisfy the well-known inverse square law. We will check this point a posteriori directly on the measured profiles (see Subsect. IV B). A second effect of the divergence of the beam in the actual experiment is that, in general, the centroid of the dose distributions will vary with the $`z`$-coordinate of the measuring plane . Then, the dose profiles will be given by: $$D(z,x,y)=D_{\infty }(z)P(x;x_{\mathrm{cent}}(z),\sigma _x(z)),$$ (8) where $`x_{\mathrm{cent}}(z)`$ represents, for each $`z`$ value, the $`x`$ position of the centroid of the distribution. It is worth pointing out that only if the $`x`$-position of the point virtual source of the beam is exactly $`X_\mathrm{e}`$ would the centroids $`x_{\mathrm{cent}}(z)`$ be independent of $`z`$. In such a case these centroids would equal $`X_\mathrm{e}`$. It is evident that this situation cannot be ensured in actual experiments; nevertheless, this particular case is included in the general equation (8). Geometrically, the centroid $`x_{\mathrm{cent}}(z)`$ can be understood as the radiological projection of the edge of the collimator produced by the beam, and it is expected to behave linearly with $`z`$ (see Ref. ). An immediate result is that the equation: $$x_{\mathrm{cent}}(z=0)=X_\mathrm{e}$$ (9) must be verified. In our experimental procedure (see Subsect. III A) we measure these positions $`x_{\mathrm{cent}}(z)`$ and check that they show the required linear behavior and that Eq. (9) is satisfied. Finally, it is worth noting that, in actual experiments, measurements are performed by means of an ionization chamber sited in an electromechanical device which permits the positioning of the chamber. Then, the source-plus-collimator system will not, in general, be perfectly aligned with the coordinate system in which the measurements are done. One can expect this system to be both shifted and rotated with respect to the measurement system, and this must be taken into account. However, these effects can be minimized by controlling, with the standard (optical, mechanical, etc.) procedures, the positioning of the gantry head of the accelerator, and we will assume they are incorporated in the values of $`x_{\mathrm{cent}}(z)`$ we determine experimentally.
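A compact numerical transcription of Eqs. (6) and (8) is the following sketch, with arbitrary test values for the parameters; it also verifies the erfc closed form of Eq. (6) against a direct numerical integration of the Gaussian.

```python
import numpy as np
from scipy.special import erfc
from scipy.integrate import quad

def P(x, m, sigma):
    """Cumulative normal distribution, Eq. (6)."""
    return 0.5 * erfc((m - x) / (np.sqrt(2.0) * sigma))

def dose_profile(x, d_inf, x_cent, sigma_x):
    """Half-blocked broad-beam dose profile, Eq. (8)."""
    return d_inf * P(x, x_cent, sigma_x)

m, s = 0.13, 0.45                              # arbitrary test values (cm)
gauss = lambda t: np.exp(-(t - m)**2 / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)
for x in (-1.0, 0.0, 1.0):
    num, _ = quad(gauss, -10.0, x)
    print(f"x = {x:+.1f}: erfc form {P(x, m, s):.6f}, integral {num:.6f}")
print(f"P at the centroid: {P(m, m, s):.3f} (should be 0.5)")
```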
In view of the previous discussion, the fitting function we adopt to analyze the experimental dose distributions is the following: $$D_{\mathrm{fit}}(z,x)=D_{\infty }(z)P(x;x_{\mathrm{cent}}(z),\sigma _x(z))+B(z,x).$$ (10) We have added the “background” function $`B(z,x)`$ in order to take care of different contributions to the dose which the Gaussian model does not account for. Thus, part of the dose due to bremsstrahlung and the dose due to electrons scattered at the gantry head, at the measurement device and its surroundings, as well as those scattered in air with large angles, are assumed to be described by this function. The particular functional dependence of $`B(z,x)`$ on $`x`$ will be considered in Subsect. IV A and its role in the model we propose will be discussed in Subsect. IV C. As we show below, the contribution of the background is not at all negligible and its consideration in this simple way permits very good fits of the measured profiles. ## III EXPERIMENTAL SETUP As mentioned above, our interest is to determine the in-air spatial spread of clinical electron beams produced by linear accelerators. Also, we want to investigate its dependence on the distance to the lower edge of a lead block (actually, the inner jaw) collimating the beam. To do so, we have measured the corresponding relative dose profiles in air by using a Wellhöffer WP-700 system, which incorporates a Wellhöffer WP-5007 electrometer and a Wellhöffer IC-10 ionization chamber of 0.12 cm<sup>3</sup> sited in an electromechanical device which permits the three-dimensional positioning of the chamber with a theoretical precision of 0.1 mm. The dose profiles have been acquired in continuous mode, by displacing the chamber in the $`x`$-axis direction (at $`y`$=0) and for $`z`$ values ranging between 50 and 80 cm. Beams with nominal energies varying from 6 to 18 MeV generated by a Siemens Mevatron KDS have been considered. This ensures the generality of the results for similar machines. All the profiles obtained have been treated by means of the software package WP-700 Version 3.20.02 accompanying the measurement system. In our reference system ($`z=0`$ at the bottom edge of the jaws), the isocenter lies at $`z=73`$ cm and the scattering foils are sited at $`z=-27`$ cm. The reproduction of the conditions established in the theoretical hypothesis was achieved by collimating the beam to the central axis using the adjustable collimators of the LINAC in an asymmetric configuration. This point deserves a comment. Electron dose computation programs require as input data the spatial and/or angular spreads of the beam. The usual practice is to determine these parameters below the applicator at treatment distances. In this way, the obtained values are considered to be clinically relevant because the possible modifications of the parameters introduced by the presence of the applicator are taken into account. However, our interest lies in a more basic question. As mentioned in the Introduction, what we want to do is to determine the spatial spread of the beam in order to have more information to characterize it. In this sense we are interested in eliminating all the possible distortions induced by the use of the different clinical devices. Besides, in this paper we try to show the possibilities of the analysis technique we propose and to study its feasibility. We think that the experimental setup we consider provides the cleanest experimental situation for our purposes.
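As an illustration of how Eq. (10) can be handled in practice, the sketch below generates a noisy synthetic profile on the shadow side and recovers $`(D_{\infty },x_{\mathrm{cent}},\sigma _x,B)`$ with a Levenberg-Marquardt least-squares fit. All numbers are invented for the example; this is not the WP-700 analysis chain used with the measurements described here.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def fit_model(x, d_inf, x_cent, sigma_x, b):
    """Eq. (10) with a constant background B(z)."""
    p = 0.5 * erfc((x_cent - x) / (np.sqrt(2.0) * sigma_x))
    return d_inf * p + b

# synthetic data standing in for a measured profile below the block
x = np.linspace(-4.0, 0.0, 81)                    # cm, shadow side only
true = (100.0, -0.05, 0.6, 2.0)                   # invented parameters
rng = np.random.default_rng(1)
sig = np.full_like(x, 0.5)                        # error bars Sigma_D(x)
d = fit_model(x, *true) + rng.normal(0.0, sig)

popt, pcov = curve_fit(fit_model, x, d, p0=(80.0, 0.0, 1.0, 0.0),
                       sigma=sig, absolute_sigma=True, method="lm")
for name, v, e in zip(("D_inf", "x_cent", "sigma_x", "B"),
                      popt, np.sqrt(np.diag(pcov))):
    print(f"{name:8s} = {v:8.3f} +/- {e:.3f}")
```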
We leave for a forthcoming work the investigation of the behavior of the parameters of interest when applicator and phantom are introduced . ### A Determination of the position of the centroid, $`x_{cent}`$ As we have previously discussed, the position of the centroid of the dose distribution shifts in the transverse direction as the beam goes downstream. The procedure we have followed to determine the values $`x_{\mathrm{cent}}(z)`$ at each $`z`$ is based on a series of measurements done with the following experimental setups: Setup 1. First, we have moved the right collimator to the center (position $`X_\mathrm{e}`$), keeping the left one apart. In this situation we have obtained four dose profiles, without the reference chamber, for each energy and for each of the four values of $`z`$ selected: 50, 60, 70 and 80 cm. Each profile has been taken crossplane (with $`y=0`$), varying $`x`$ from -2 to 2 cm. Setup 2. Next, we have moved the left collimator to the center. In order to check that the beam is completely collimated we have measured four dose profiles for the largest energy (18 MeV) at $`z=50`$ cm. We have checked that the dose level in this situation is zero. This ensures that the left collimator is also at $`X_\mathrm{e}`$. Setup 3. Finally, we have moved the right collimator apart. With this setup we have obtained four dose profiles, again without the reference chamber, for each energy and for the same values of $`z`$ as in Setup 1. The role of the reference chamber in our procedure deserves some additional explanation. It is obvious that the way we determine the $`x_{\mathrm{cent}}`$ values imposes the necessity of guaranteeing an equal normalization for the profiles obtained in the two asymmetrical configurations corresponding to Setups 1 and 3. However, it has not been possible to place the reference chamber in such a way that it is irradiated identically in both setups. This is the reason why we switched it off when the profiles used to measure $`x_{\mathrm{cent}}`$ were taken. The experimental procedure followed (which ensures the absence of radiation when the two collimators are closed, as in Setup 2) allows us to obtain $`x_{\mathrm{cent}}`$ by finding the cross point between couples of the symmetric dose profiles measured with Setups 1 and 3. To determine the cross point of a given pair we have first calculated the difference between the two profiles of the couple, then found the zero of the resulting function and, finally, checked that the two profiles are symmetric with respect to the cross point. The manipulation of the profiles as described here has been done with the software package WP-700 of the measurement system. The four profiles measured with each of Setups 1 and 3 provide us with 16 experimental values of $`x_{\mathrm{cent}}`$ (for each energy and $`z`$). These values give sufficient statistics and permit us to calculate the corresponding mean, $`\overline{x}_{\mathrm{cent}}`$, and standard deviation, $`\sigma _{x_{\mathrm{cent}}}`$, values. The results obtained in this way are shown in Fig. 2, as a function of $`z`$ (in cm), for the five energies considered (throughout this work uncertainties are given with a coverage factor $`k=1`$). Here, a particular detail needs a comment.
It is obvious that with the two asymmetric setups we use to determine the centroids, the shadow is cast by a different part of the collimator: when Setup 1 is used, the top of the jaw forms the shadow, while it is the bottom of it which casts the shadow in Setup 3. However, the description of how the shadow is formed is not as easy as this picture implies. In fact, it has been shown that the shadow is cast by the full collimator and that only when the angles subtended by the source are bigger than 2° is the effect of the edges of the collimator important. Taking into account that our jaws are 6 cm thick, and the positions derived for the virtual sources, a simple geometrical calculation tells us that, in our case, these angles are of a few tenths of a degree at most. Then the effects due to the differences in the casting of the beam in the two asymmetrical setups are smaller than the spread we obtain for the centroid in our statistics and can be neglected. Once the transverse shifts of the beam edge offset have been measured, we want to check whether their variation with $`z`$ is linear or not. In Table I we give the values of the parameters of the linear regression performed on the experimental centroids. The fits obtained are also shown in Fig. 2 (straight lines). As we can see, the correlation coefficients indicate that the way we have introduced the beam divergence in our model is in agreement with the experimental findings, at least in what refers to this aspect of the centroid shift. Also, it is worth pointing out two details concerning the results of this part of the experiment. First, it is satisfying that the parameter $`b`$ appears to be constant with the energy. This parameter measures (see Eq. (9)) the offset at the collimating plane ($`z=0`$), and it is obvious that it should be the same in all cases, because the position of the collimator does not change. This shows again that the analysis procedure we are carrying out is robust and correct. Second, the fact that $`a`$ varies with the energy is due to the differences in the focal spots at the primary foils for each energy. The variation of these focal spots found using the regression we have obtained is of 1.12 mm, which is compatible with the technical specifications of our accelerator. ### B Determination of the dose profile data and their errors The second relevant point in the experimental part concerns the way we have obtained the profiles to be fitted and how the corresponding errors have been estimated. It is obvious that, due to the character of the process itself, there exists a statistical uncertainty in the measured dose profiles. In order to take care of this point we have measured five new profiles with Setup 3, now with the reference chamber, for each energy and for seven values of $`z`$: 50, 55, 60, 65, 70, 75 and 80 cm. These measurements have been performed together with those used to determine the positions $`x_{\mathrm{cent}}`$, which ensures the validity of the results obtained in the previous subsection for the analysis of these new profiles. The reason to take these new profiles with the reference chamber (contrary to what we did for the profiles used to determine the centroid positions) is that it permits better statistics and a quicker procedure, with the obvious saving of beam time. The five dose distributions corresponding to each energy and $`z`$ are then processed in order to generate the data. First, we have sampled them at different $`x`$ positions.
In this respect it is worth noting that, as mentioned in Sect. II, we have considered the data obtained below the collimator in order to fulfill the requirement of being at a sufficient distance from the physical edge of the beam to avoid possible distortions. Thus, only the values of the dose up to $`x=0`$ are taken into account. In principle we should have taken the doses up to $`x_{\mathrm{cent}}`$ to ensure that we use only data in the shadow of the collimator. However, when the profiles were taken the positions of the centroids for the different energies and $`z`$ values were not yet known, and we sampled up to $`x=0`$. The difference amounts to only a few millimeters around the actual values of $`x_{\mathrm{cent}}`$, and we assume this does not invalidate our assumptions. The data obtained in the sampling are averaged to obtain the $`\overline{D}(x)`$ values at each $`x`$. The second step is to estimate the errors accompanying these dose data. To do so we have followed the prescriptions of Ref. . In our case we have two error sources. First, there is the obvious uncertainty associated with the statistical behavior just considered. This kind of error can be evaluated by means of the standard deviation, $`\sigma _D(x)`$, obtained together with the mean values $`\overline{D}(x)`$. This is so because measuring in continuous mode, as we have done, implies the simultaneous consideration of the statistical variation of both the dose and the positioning of the measuring chamber. The corresponding uncertainty is treated as of type A (that is, not assuming an a priori distribution) in the nomenclature of Ref. , and is therefore evaluated by considering directly the observed distribution. In our case the values found for the relative error corresponding to this statistical spread are below 1% in all cases. (Here the % refers to the relative units in which the profiles are measured.) The second source of error is that linked to the precision of the data acquisition system. It is basically different from the first one and, following Ref. , is classified as of type B. To evaluate it, we have taken it to be that of a digital measurement device, which is: $$\mathrm{\Delta }_D=\frac{1}{\sqrt{3}}\delta _D=0.029\%,$$ (11) where in our case $`\delta _D=0.05\%`$ is half of the nominal precision of the measurement device. The total error is then calculated as the quadratic sum of both errors: $$\mathrm{\Sigma }_D(x)=\sqrt{\mathrm{\Delta }_D^2+\sigma _D^2(x)}.$$ (12) Our data are the relative doses for each energy, $`z`$ and position $`x`$, given by $`\overline{D}(x)\pm \mathrm{\Sigma }_D(x)`$. These values have been fitted as described below. ## IV Results In what follows we discuss the results obtained in the analysis of the dose profiles measured as described above. ### A Fitting procedure The dose profiles obtained from our measurements have been fitted with the fitting function of Eq. (10) by means of the Levenberg-Marquardt method (a minimal sketch of this fitting step is given below). As indicated in Sect. II, in order to complete the model we must define the functional dependence of the background function $`B(z,x)`$ on the position $`x`$. We have assumed the simplest possibility, namely that $`B(z,x)`$ is constant at each $`z`$, $`B(z,x)\equiv B(z)`$. In any case, we have checked that the use of other functional dependences (that is, $`B(z,x)`$ linear or quadratic in $`x`$) does not change the conclusions obtained with respect to the spatial spread.
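A minimal sketch of this fitting step follows. Since Eq. (10) is defined earlier in the paper, we represent it here, for illustration only, by the error-function edge that a Gaussian lateral spread produces under a half-beam block (with the shadow at $`x<x_{\mathrm{cent}}`$); the actual Eq. (10) should be substituted, and all starting values are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def dose_model(x, D_inf, sigma_x, B, x_cent):
    # Stand-in for Eq. (10): Gaussian lateral spread under a half-beam
    # block gives an error-function edge on top of a constant background.
    return 0.5 * D_inf * erfc((x_cent - x) / (np.sqrt(2.0) * sigma_x)) + B

def fit_profile(x, dose, err, x_cent):
    """Levenberg-Marquardt fit of one profile; x_cent is held fixed."""
    model = lambda x, D_inf, sigma_x, B: dose_model(x, D_inf, sigma_x, B, x_cent)
    p0 = (2.0 * dose.max(), 0.5, dose.min())      # rough starting values
    popt, pcov = curve_fit(model, x, dose, p0=p0, sigma=err,
                           absolute_sigma=True, method="lm")
    return popt, pcov                             # (D_inf, sigma_x, B)
```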
With our choice for the background function, the fit provides the quantities $`D_{\mathrm{\infty }}(z)`$, $`\sigma _x(z)`$ and $`B(z)`$ once an input value of $`x_{\mathrm{cent}}`$ is given. In our experiment, $`x_{\mathrm{cent}}`$ was not measured for the intermediate $`z`$ values 55, 65 and 75 cm. Therefore, in order to be fully consistent in the fitting procedure, we have used the values of $`x_{\mathrm{cent}}`$ obtained from the linear regression quoted in Table I and shown in Fig. 2. In any case, we have checked that the same results are obtained in the fits when the experimental $`x_{\mathrm{cent}}`$ values are used in those cases where they are available. The fact that $`D_{\mathrm{fit}}`$ as given by Eq. (10) is not linear does not allow a simple analytical estimate of the propagation of the error of $`x_{\mathrm{cent}}`$. Thus, to evaluate the errors of the three quantities defining our fitting function, we have developed a Monte Carlo procedure based on the following steps:
1. From each original profile we have generated a set of data, built from random values normally distributed around the original ones, with standard deviations equal to the associated errors.
2. We have also generated a random value of $`x_{\mathrm{cent}}`$ from the normal distribution obtained in the linear regression of the experimental values quoted in Subsect. III A.
3. We have fitted the resulting profile with the function (10), again by means of the Levenberg-Marquardt method.
Repeating this three-step procedure provides a set of values for each of $`D_{\mathrm{\infty }}(z)`$, $`\sigma _x(z)`$ and $`B(z)`$, from which the corresponding standard deviations of these three quantities can be evaluated. The procedure is repeated until convergence of the standard deviation values is achieved. We have checked that this happens for a number of random generations between 5000 and 10000. All the results discussed in the following have been obtained with 10000 generations. The final values found for the standard deviations are taken as the errors of the three parameters resulting from the fit. In what follows we discuss the results obtained in this fitting procedure for the relative dose profiles. ### B Analysis of $`D_{\mathrm{\infty }}(z)`$ One of the consequences of including the beam divergence in our model is that the dependence of $`D_{\mathrm{\infty }}(z)`$ on $`z`$ should follow the inverse square law. In order to check whether this is the case, we show in Table II the parameters $`a`$ and $`b`$ of the linear regressions $`[D_{\mathrm{\infty }}(z)]^{-1/2}=az+b`$ obtained directly from the values of $`D_{\mathrm{\infty }}(z)`$ found in the fitting procedure (see the sketch below). As we can see, the data behave as expected (see the correlation factors quoted in the table), which ensures the consistency of the method we have followed. The important point here is that we have not measured $`D_{\mathrm{\infty }}(z)`$ directly, but have used the values obtained from the fit of the tails of the dose profiles below the lead block. One additional point of interest is that these linear regressions allow us to calculate the position of the virtual point source, $`z_{\mathrm{vir}}=-b/a`$. The values obtained are also included in Table II and, as we can see, the $`z_{\mathrm{vir}}`$ lie, for the five energies considered, within a range smaller than 2 cm.
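A sketch of this regression, with synthetic $`D_{\mathrm{\infty }}`$ values obeying the inverse square law for a virtual source placed (arbitrarily) at $`z_{\mathrm{vir}}=-1`$ cm, could read:

```python
import numpy as np

z = np.array([50.0, 55.0, 60.0, 65.0, 70.0, 75.0, 80.0])   # cm
# D_inf(z) as returned by the profile fits; synthetic values here,
# obeying the inverse square law with z_vir = -1 cm.
D_inf = 1.0e4 / (z + 1.0) ** 2

# Inverse square law implies [D_inf(z)]^(-1/2) = a z + b, linear in z
a, b = np.polyfit(z, D_inf ** -0.5, 1)
z_vir = -b / a                                   # virtual point source
r = np.corrcoef(z, D_inf ** -0.5)[0, 1]          # correlation coefficient
print(z_vir, r)                                  # -> -1.0, 1.0 (up to rounding)
```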
The case of 6 MeV deserves a particular comment. The result obtained for this energy shows the largest deviation and, in fact, differs by roughly 1 cm from the mean of the values of the remaining energies. This is due to the fact that, contrary to what happens for the other energies, no primary foil is present in the case of 6 MeV beams. Thus, a shift in $`z_{\mathrm{vir}}`$ of the order of the width of the foil carrier (which is around 1 cm) is to be expected in this case. Our method is therefore able to reflect in a direct way the architecture of the accelerator head, which again demonstrates its precision. In what follows we normalize, for each energy, the dose values to the values $`D_{\mathrm{\infty }}(z=0)`$ obtained from the regressions for $`[D_{\mathrm{\infty }}(z)]^{-1/2}`$ given in Table II. This affects the experimental dose data and the values of the parameters $`D_{\mathrm{\infty }}`$ and $`B(z)`$ shown below. Obviously, the values of $`\sigma _x(z)`$ are not modified by this renormalization. ### C Analysis of $`B(z)`$ The second aspect of the results of the profile fits we are interested in discussing is the role of the background function $`B(z)`$. As previously indicated, the reason for including it in our model is to take into account the various scattering processes (the part of the dose due to bremsstrahlung and the dose due to electrons scattered at the gantry head, at the measurement device and its surroundings, as well as those scattered in air at large angles) which are not described by pure Gaussian profiles and which, nevertheless, are important for achieving a good description of the experimental data. In Fig. 3 we show the variation with $`z`$ of the values of $`B(z)`$ (normalized as indicated above) obtained from the fit, for the five energies we are considering. As we can see, the general trend is, in all cases, a decrease in strength with increasing $`z`$. This indicates that the contribution of electrons scattered in the measuring device is almost negligible, because it should become more relevant at large distances, where the bottom of the electromechanical device for the positioning of the ionization chamber is closest to it. In any case, establishing which scattering mechanism is mainly responsible for this background function is not an easy task, chiefly because one can expect that, as occurs for the Gaussian part of the profiles, several processes contribute to this background dose. Despite the fact that the values of $`B(z)`$ are small, it is worth pointing out that its inclusion in the model is crucial for a good description of the data. This can be seen in Fig. 4, where we show the results obtained for the two extreme energies we consider (6 and 18 MeV) and for the smallest and largest $`z`$ positions. Therein, solid curves correspond to the fits performed with our model, while dashed curves represent the best fits of the data obtained with a pure Gaussian profile (that is, with $`B(z)=0`$ in our model). In both cases the values of $`x_{\mathrm{cent}}`$ obtained experimentally as discussed above have been used. Besides, both calculations, as well as the data, have been normalized to the values $`D_{\mathrm{\infty }}(z=0)`$ obtained within our model. In Table III we give the values found for the parameters in these fits. The first point to note is the goodness of the fits produced by our method.
On the other hand, it is worth pointing out that the values obtained for the pure Gaussian model differ clearly from those obtained within our approach (even including the corresponding uncertainties). ### D Analysis of the spatial spread, $`\sigma _x(z)`$ Our main interest in this work is the determination of the spatial spread of clinical electron beams in order to characterize them. In our model, the spatial spread determined in the fitting procedure corresponds to the Gaussian part of the measured dose profiles. According to Fermi-Eyges theory, this spatial spread must show a quadratic-cubic dependence on $`z`$. In particular, this dependence is given by (see e.g. Ref. ): $$\sigma _x^2(z)=\sigma _{\theta ,\mathrm{i}}^2z^2+\frac{1}{6}T(E)z^3,$$ (13) where $`\sigma _{\theta ,\mathrm{i}}^2`$ is the initial quadratic angular spread and $`T(E)`$ is the air linear scattering power corresponding to the energy $`E`$, which we assume to be independent of $`z`$. Both parameters can be found by fitting Eq. (13) to the $`\sigma _x(z)`$ values obtained from the fits of the dose profiles. Instead of doing this directly, and in order to simplify the analysis, we have performed a linear regression of the quantity $`\sigma _x^2(z)/z^2`$ as a function of $`z`$ (a numerical sketch is given at the end of this subsection). The values obtained are given in Table IV. The regression lines found in this procedure are plotted in Fig. 5 with full lines. The good agreement with the experimental data is apparent, which supports the soundness of the previous assumption concerning the significance of the spatial spread we have determined. Scattering powers are important because they allow one to find the angular spread at any $`z`$. In fact, $`\sigma _\theta ^2(z)`$ behaves linearly with $`z`$ and the slope is precisely $`T(E)`$. It is therefore relevant to compare the values obtained in our approach with those given in other references. To do so, the first point to settle is the beam energy at which the scattering power is determined. The beam energies quoted up to now are actually the nominal ones. To calculate $`T(E)`$ we have used, instead, the mean beam energies at the isocenter. Following the prescription of Ref. , these energies have been obtained from the equation $`\overline{E}_0=C_6R_{50}`$, where $`C_6=2.33`$ MeV/cm. The parameter $`R_{50}`$ is the half-value depth, determined by range measurements in water at SSD=100 cm with a broad beam. The values obtained, $`\overline{E}_0`$, are also given in Table IV. With these values of the mean energies of our beams, we have calculated the values of $`T(E)`$ by interpolating those quoted in Table 2.6 of Ref. and correcting them for Møller scattering. As one can see, these values differ from those obtained with our model by a factor of $`2`$ or even more. In principle, this means that the linear scattering powers we have found suggest beam energies considerably larger than the mean energies determined experimentally as mentioned above. However, one should be careful with the interpretation of these results, because the approach used in Ref. to obtain the $`T(E)`$ values is quite different from ours. In this sense, a more detailed investigation in this direction is needed before establishing definitive conclusions. The possibilities opened by the results we have obtained with our method are especially interesting in this respect, owing to its simplicity and accuracy.
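The regression step can be sketched as follows; the $`\sigma _x(z)`$ values are synthetic, generated from Eq. (13) with illustrative parameters:

```python
import numpy as np

z = np.array([50.0, 55.0, 60.0, 65.0, 70.0, 75.0, 80.0])     # cm
# sigma_x(z) as returned by the profile fits; synthetic values here,
# generated from Eq. (13) with arbitrary illustrative parameters.
sig_theta2, T_air = 2.0e-4, 1.0e-5     # sigma_theta,i^2 and T(E)
sigma_x = np.sqrt(sig_theta2 * z ** 2 + T_air * z ** 3 / 6.0)

# Eq. (13) divided by z^2 is linear in z:
#   sigma_x^2(z)/z^2 = sigma_theta,i^2 + (T(E)/6) z
slope, intercept = np.polyfit(z, (sigma_x / z) ** 2, 1)
print(intercept, 6.0 * slope)          # recovers sigma_theta,i^2 and T(E)
```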
## V Summary and conclusions In this work we have proposed, developed and tested a new procedure to determine the spatial spread of clinical electron beams. The method has two main advantages. First, we have assumed that the dose profiles can be described by means of a function which includes both a Gaussian part and a background function, the latter taking into account those processes not described by the former. Second, the new method is based on the direct fit of the dose profiles measured at the center of the beam and below a lead block covering half of the beam. The beam divergence is incorporated into the model in a very simple way, and its consequences (the inverse square law and the linear shift of the centroid) are verified to hold with high accuracy. Besides, the fitting procedure provides the spatial spread in a straightforward way, and a great part of the ambiguities and errors of the usual methods based on penumbra measurements is eliminated. The data obtained in this way for the spatial spread have been fitted to a quadratic-cubic function of the distance $`z`$. The good fits obtained confirm the behavior expected from Fermi-Eyges theory for the Gaussian part of our model and open the possibility of using our approach to measure the scattering power in air. We think that the model proposed in this work deserves a more detailed study because of its possible application to the description of the behavior of clinical electron beams. Obviously, it is necessary to apply it in a variety of different measuring circumstances in order to establish its validity for this purpose. Thus, it is mandatory to proceed with the applicator in place, which will provide clinically relevant information. Also, the results obtained with phantoms of, e.g., water will allow us to assess the accuracy of our procedure in more practical situations. Work in these directions is in progress. ## ACKNOWLEDGMENTS We gratefully acknowledge useful comments and discussions with Dr. P. Galán. We are also indebted to the direction of the Hospital Universitario “San Cecilio” for allowing us to use the accelerator to carry out the measurements. This work has been supported in part by the Junta de Andalucía (Spain).
# Dielectric susceptibility of the Coulomb glass ## I Introduction The subject of this paper is the calculation of the dielectric susceptibility in Coulomb glasses, a term used for Anderson insulators with Coulomb interactions between the localized electrons. We consider situations deep in the insulating phase, when the quantum energies $`t`$ arising from tunneling are much smaller than the other important energies in the problem, i.e., the Coulomb interactions and the random energy fluctuations. The model also applies to systems in the quantum Hall regime far away from the peaks, for which the conductivity is exponentially small compared to $`e^2/h`$ and the conduction mechanism is variable range hopping between localized states. The model can easily be extended to granular metals in the insulating regime. Previous calculations of the dielectric susceptibility of the Coulomb glass have used an expression obtained directly from the analogy between Coulomb and spin glasses . However, the non-local character of the processes involved in Coulomb glasses makes this expression of the dielectric susceptibility inappropriate. Furthermore, it does not correspond to the standard definition of the dielectric susceptibility. The first aim of this work is to present a microscopic expression of the dielectric susceptibility $`\chi `$ valid for the Coulomb glass, and to apply the fluctuation–dissipation theorem to this expression. We also want to calculate $`\chi `$ at very low temperatures. To this end we have to take into account that interactions, and especially those of long–range character, drastically change the properties of systems with localized states. Most properties of these systems are affected by electron correlations, and such effects cannot be described by one–particle densities of states or excitations. To deal with complex excitations, methods were developed to obtain the low lying states and energies of Coulomb glasses. The paper is organized as follows: Section II introduces the Coulomb glass models used for the numerical calculations. Section III presents the derivation of a microscopic expression for the dielectric susceptibility of Coulomb glasses. Section IV describes how the low-energy many-particle states are obtained numerically and gives a method to calculate the dependence of the dielectric susceptibility on frequency. In Section V we present the results obtained for the dielectric susceptibility of Coulomb glasses at very low temperatures and its dependence on temperature and frequency. Finally, in Section VI we draw some conclusions. ## II Models The results below apply to any model of a Coulomb glass, but to be definite we performed numerical simulations for the two most common models: the standard model with a uniform random potential distribution and the classical impurity band (CIB) model. Efros and Shklovskii proposed a practical model to represent Coulomb glass problems with localized electronic states, which has been widely used and extended . This model is represented by the standard tight-binding Hamiltonian: $$H=\sum _i\varphi _in_i+\sum _{i<j}\frac{e^2}{4\pi ϵ_0}\frac{(n_i-K)(n_j-K)}{r_{ij}}$$ (1) where $`n_i\in \{0,1\}`$ denotes the occupation number of site $`i`$. We use rationalized units, unlike most works on the Coulomb glass, because for this problem they constitute the most convenient choice.
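A minimal sketch of how the energy of an occupation configuration is evaluated for the Hamiltonian (1) might look as follows; it assumes units with $`e^2/4\pi ϵ_0=1`$ and, for brevity, ignores the periodic images and the nearest-neighbor distance constraint used in the actual simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, W, K = 256, 2.0, 0.5                  # sites, disorder width, compensation
L = N ** (1.0 / 3.0)                     # box size for density rho = 1
pos = rng.random((N, 3)) * L             # random site positions
phi = rng.uniform(-W / 2, W / 2, N)      # random on-site potentials
n = rng.permutation(np.arange(N) < N // 2).astype(float)   # half filling

def energy(n, pos, phi, K):
    """Total energy of a configuration for the standard model, Eq. (1)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    q = n - K                            # site charge relative to background
    i, j = np.triu_indices(len(n), k=1)  # each pair counted once
    return phi @ n + np.sum(q[i] * q[j] / d[i, j])
```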
We consider sites at random positions, with a density $`\rho `$ equal to 1, and simulate the disorder by a random potential $`\varphi _i`$ at each site, uniformly distributed between $`-W/2`$ and $`W/2`$. $`r_{ij}`$ is the distance between sites $`i`$ and $`j`$, computed with periodic boundary conditions in the directions perpendicular to the applied electric field. Charge neutrality is achieved by a background compensation charge $`K`$ at each site. The classical impurity band (CIB) model is a realistic representation of a lightly doped semiconductor in which the random potential arises from the minority impurities . Here we consider an n-type, partially compensated semiconductor with donor concentration $`N_\mathrm{D}`$ and acceptor concentration $`N_\mathrm{A}=KN_\mathrm{D}`$. The Hamiltonian is given by: $$H=\frac{e^2}{4\pi ϵ_0}\left(\frac{1}{2}\sum _{i\ne j}\frac{(1-n_i)(1-n_j)}{r_{ij}}-\sum _{i\nu }\frac{(1-n_i)}{r_{i\nu }}\right)$$ (2) where the donor occupation number $`n_i`$ equals 1 for occupied donors and 0 for ionized donors. The index $`\nu `$ runs over the acceptors and $`r_{i\nu }=|𝐫_i-𝐫_\nu |`$, with $`𝐫_i`$ the donor coordinates and $`𝐫_\nu `$ the acceptor coordinates. We chose the donor density $`\rho `$ equal to 1 and again imposed periodic boundary conditions. For numerical reasons we constrained the nearest-neighbor distance to be larger than 0.5. We used $`K=0.5`$ because the interaction effects are largest there. ## III Susceptibility of the Coulomb glass We now proceed to derive a proper microscopic expression for the dielectric susceptibility of Coulomb glasses. It is applicable to an arbitrary three–dimensional model of Coulomb glass as long as the interaction between charges goes as $`1/r`$. A certain analogy between the spin glass and the Coulomb glass has led to an incorrect expression for the dielectric susceptibility of the Coulomb glass. In some sense the local spin $`s_i`$ in the spin glass is analogous to the site occupation $`n_i`$ in the Coulomb glass. If only $`n_i=0`$ and $`n_i=1`$ are allowed due to strong on-site interaction, then the analogy is with spins $`s_i=1/2`$. The magnetic field in the spin glass is then analogous to the chemical potential in the Coulomb glass. However, this analogy should not be pushed too far and does not apply to polarizabilities. The magnetic polarizability is the change of total spin induced by the magnetic field, but the electric polarizability is the change in the electric polarization induced by an electric field, not the change in the total occupation number induced by a change of a global potential, as the direct analogy would have it. The basic difference between the two susceptibilities can be understood more clearly by realizing that the magnetic polarization $`\langle s_i\rangle _T`$ comes from field-induced flips of spatially fixed spins, where $`\langle \mathrm{}\rangle _T`$ refers to a thermal average. An analogous electric polarization can come from flips of local dipoles, but in many systems it involves a field-induced displacement of charges (there is no magnetic equivalent to this because there are no magnetic charges). Such a polarizability is thus not represented by $`\langle n_i\rangle _T`$ responding to a potential (as in and ) but by $`\langle \sum _ix_in_i\rangle _T`$ responding to a field (as in ). The dielectric susceptibility $`\chi `$ is then: $$\chi =\frac{1}{ϵ_0}\frac{\partial P}{\partial E}$$ (3) where $`E`$ is the total electric field.
Our first aim is to obtain a microscopic expression, valid for Coulomb glasses, of this classical definition of the dielectric susceptibility. Let us assume that we apply an electric displacement $`D`$. It will induce a polarization $`P`$ equal to $$P=\frac{e}{N}\sum _ix_i\langle \mathrm{\Delta }n_i\rangle _T$$ (4) where $`x_i`$ is the position of site $`i`$, $`N`$ is the number of sites, and $`\langle \mathrm{\Delta }n_i\rangle _T`$ is the change in the average occupation of site $`i`$ due to the applied electric displacement. In the linear approximation, the change in the average occupation of site $`i`$ due to a general change in the potential is given by: $$\langle \mathrm{\Delta }n_i\rangle _T=\sum _j\frac{\partial \langle n_i\rangle _T}{\partial \varphi _j}\mathrm{\Delta }\varphi _j$$ (5) where $`\mathrm{\Delta }\varphi _j`$ is the change in potential at site $`j`$. The partial derivative appearing in this expression is proportional to the local susceptibility $`\chi _{ij}`$: $$\frac{\partial \langle n_i\rangle _T}{\partial \varphi _j}=\frac{eϵ_0}{T}\chi _{ij}$$ (6) where $`T`$ is the temperature; the Boltzmann constant $`k_B`$ is taken to be 1 throughout the paper. The change in potential corresponding to a uniform electric displacement is $`\mathrm{\Delta }\varphi _i=Dx_i/ϵ_0`$, so the ratio between $`P`$ and $`D`$ is: $$\frac{P}{D}=\frac{e^2}{TN}\sum _{ij}x_i\chi _{ij}x_j\equiv \chi _0.$$ (7) To calculate the dielectric susceptibility numerically it is convenient to apply the fluctuation-dissipation theorem to $`\chi _0`$ in order to rewrite it in terms of thermal fluctuations of the dipole moment. Taking into account the expression for the thermal average of the site occupation, $`\langle n_i\rangle _T`$, it is easy to show that its derivative with respect to the potential at $`j`$, i.e., the local susceptibility $`\chi _{ij}`$, is equal to the correlated fluctuation of the electron occupancies of the two sites involved: $$\chi _{ij}=\langle n_in_j\rangle _T-\langle n_i\rangle _T\langle n_j\rangle _T$$ (8) Using this equation in expression (7) for $`\chi _0`$, we arrive at $`\chi _0`$ $`=`$ $`{\displaystyle \frac{e^2}{TN}}{\displaystyle \sum _{ij}}x_i(\langle n_in_j\rangle _T-\langle n_i\rangle _T\langle n_j\rangle _T)x_j`$ (9) $`=`$ $`{\displaystyle \frac{1}{TN}}(\langle d^2\rangle _T-\langle d\rangle _T^2)`$ (10) where $`d=e\sum _ix_in_i`$ is the dipole moment of the sample. The dielectric susceptibility is thus a function of the thermal fluctuation of the dipole moment. Our computer simulations can perforce involve only systems of mesoscopic size. For the macroscopic susceptibility we can imagine building a macroscopic system out of many mesoscopic cubes of linear size $`L`$ arranged to fill space. Each of these samples corresponds to a particular realization of the random positions and energies of the sites involved. The total electric field that a given mesoscopic sample feels is the sum of the applied field, $`D`$, and the induced field. If the applied field is uniform, and if the polarizabilities of all the samples were the same, the polarization would also be uniform and the induced field would come only from the boundary of the sample. We then have (in our units): $$ϵ_0E=D-P$$ (11) and get from Eqs. (10) and (11): $$\chi =\frac{\chi _0}{1-\chi _0}.$$ (12) At very low frequencies most samples are conducting and we have an effectively uniform distribution of $`\chi _0`$ over the computer “samples”. At higher frequencies there is no mechanism mitigating the effect of the broad distribution of $`\chi _0`$; the argument leading to Eq. (12) then fails and the problem becomes Clausius-Mossotti-like. A general approach to this problem for random media has been given in .
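In terms of the stored low-energy configurations (Sect. IV), Eq. (10) can be evaluated as sketched below; the canonical weighting over those configurations is our assumption, consistent with the procedure described in Sect. IV:

```python
import numpy as np

def chi0(energies, dipoles, T, N):
    """chi_0 from dipole fluctuations of the low-energy states, Eq. (10).

    energies : E_I of the stored many-particle configurations
    dipoles  : d_I = e * sum_i x_i n_i for the same configurations
    T, N     : temperature (k_B = 1) and number of sites
    """
    w = np.exp(-(energies - energies.min()) / T)   # canonical weights
    w /= w.sum()
    d_avg = np.dot(w, dipoles)
    d2_avg = np.dot(w, dipoles ** 2)
    return (d2_avg - d_avg ** 2) / (T * N)

# The macroscopic susceptibility then follows from Eq. (12):
#   chi = chi0 / (1 - chi0), with chi0 averaged over disorder realizations.
```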
Here we shall avoid the inherent complications of such a computation and assume that Eq. (12) is approximately valid even at higher frequencies if we use for $`\chi _0`$ a value averaged over many computer realizations. The relation $`\sigma =i\omega ϵ`$ means that a finite dc conductivity implies an infinite dc dielectric susceptibility. A proper calculation of the dc conductivity can be done by percolation in configuration space, but it is a difficult problem requiring huge numerical efforts, so that for three-dimensional systems we could only consider very small samples. Our approximate calculation of the divergence of the susceptibility allows us to estimate the variation of the dc conductivity with temperature, as we will see. ## IV Numerical Procedure ### A Low-energy configurations We calculate the dielectric susceptibility at very low temperatures making use of the ground state and the very low-energy configurations of the systems. With the procedure briefly discussed below we obtain the first 5000 many-particle configurations and calculate their dipole fluctuations, Eq. (10). We find the low-energy many-particle configurations by means of a three-step algorithm . It comprises local search , thermal cycling , and the construction of “neighboring” states by local rearrangements of the charges . The efficiency of this algorithm is illustrated in Ref. . In the first step we create an initial set $`𝒮`$ of metastable states. We start from states chosen at random and relax them by a local search algorithm which ensures stability with respect to excitations involving from one up to four sites. In the second step this set $`𝒮`$ is improved by means of the thermal cycling method, which combines the Metropolis and local search algorithms. The third step completes the set $`𝒮`$ by systematically investigating the surroundings of the states previously found. At the end we keep only configurations with a fixed number of electrons, so we work with canonical ensembles. ### B Frequency dependence At finite frequencies only transitions with characteristic times $`\tau _{IJ}`$ shorter than the inverse of the frequency contribute to the susceptibility. Thus, for a given frequency, we consider two configurations as connected if their $`\tau _{IJ}`$ is shorter than the inverse of the frequency, and we group the configurations into clusters according to these connections. The characteristic transition time between configurations $`I`$ and $`J`$ is , $$\tau _{IJ}=\omega _0^{-1}\mathrm{exp}\left(2\sum r_{ij}/a\right)\mathrm{exp}\left(E_{IJ}/T\right)/Z$$ (13) In this equation, the quantity $`\omega _0`$ is a constant of the order of the phonon frequency, $`\omega _0\sim 10^{13}\mathrm{s}^{-1}`$. The sum, which is minimized, runs over all hopping distances between sites which change their occupation in the transition $`I\rightarrow J`$. $`a`$ denotes the localization radius, $`E_{IJ}=\mathrm{max}(E_I,E_J)`$, where $`E_I`$ is the energy of state $`I`$, and $`Z`$ is the partition function. We calculate the susceptibility of each cluster through Eq. (10), assuming thermal equilibrium within the cluster. The glassy nature of our systems is responsible for the existence of the clusters, which indicate the non-ergodicity of the systems for times shorter than the critical time connecting all the configurations into a single cluster. Each realization of the system will be in a given cluster and will not see the other clusters. The probability of being in a given cluster depends on the history of the system and is very difficult to estimate.
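The grouping into clusters can be sketched with a simple union-find pass over the transition-time matrix; the matrix itself, built from Eq. (13), is assumed given:

```python
import numpy as np
from itertools import combinations

def clusters(tau, omega):
    """Group configurations connected by transitions faster than 1/omega.

    tau : matrix of characteristic transition times tau_IJ from Eq. (13)
    """
    n = len(tau)
    parent = list(range(n))

    def find(i):                         # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if tau[i][j] < 1.0 / omega:      # connected at this frequency
            parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(n)])
    return [np.flatnonzero(labels == r) for r in np.unique(labels)]

# chi_0(omega) then follows by applying Eq. (10) within each cluster and
# weighting the clusters by their partial partition functions (Sect. V).
```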
In order to obtain averages of the susceptibility, we assume that the weight of each cluster is proportional to its partial partition function, which constitutes the simplest possible assumption. The results are finally averaged over many different disorder realizations. ## V Results If we take into account all types of transitions, including the slowest ones, the Coulomb glass behaves like a conductor and is able to screen fully, as its susceptibility diverges. But small samples may not possess excitations which carry electrons across the entire sample and produce nearly equipotential surfaces at the two opposite edges. So we must consider samples above a certain critical size, at which the steady-state susceptibility $`\chi `$ diverges. We found this critical size to be about 200 sites for both models (Eqs. (1) and (2)). Above this size the results are practically independent of size. Fig. 1 shows the average value of $`\chi _0`$ as a function of frequency for several temperatures. The plots are for the standard model of size $`N=256`$. The localization radius is $`a=0.2`$, which is maintained throughout the paper. At low frequencies $`\chi _0`$ increases with $`T`$, while at high frequencies it decreases with $`T`$. The reason is that at small $`\omega `$ hopping extends over many configurations, and the main effect of $`T`$ is to enhance the transition rates. At large $`\omega `$ hopping takes place between the two configurations which are optimal for that frequency (or even between two sites), and the main effect of $`T`$ is to equalize the occupation probabilities. This bears an analogy with uncorrelated hopping conductivity, which increases strongly with $`T`$ as $`\omega \to 0`$ and behaves as $`1/T`$ at high frequency. The results for the CIB model are very similar to those for the standard model, so we do not show them explicitly; they roughly correspond to those for the standard model with an effective disorder energy of approximately 3. As already mentioned, an accurate calculation of the frequency-dependent macroscopic susceptibility requires the distribution $`f(\chi _0)`$, not only the average value of $`\chi _0`$. Fig. 2 shows the integrated distribution $`P(\chi _0)=\int _0^{\chi _0}f(\chi _0^{\prime })d\chi _0^{\prime }`$ for $`N=256`$ at $`T=0.01`$ for different values of the frequency: $`\omega \to 0`$ (dotted curve), $`\omega =10^3\mathrm{s}^{-1}`$ (dashed curve), $`\omega =10^7\mathrm{s}^{-1}`$ (long-dashed curve). The solid curve is a plot of the function $`1-\mathrm{exp}\{-(x/\lambda )^\alpha \}`$ with the parameters $`\lambda =1.5`$ and $`\alpha =0.6`$. In the range examined this form of the integrated distribution fits our data fairly well (with varying $`\lambda `$ and $`\alpha `$) for all $`T`$ and $`\omega `$. The broad character of the distribution indicates large mesoscopic fluctuations and shows the need to examine in the future the accuracy of our approximation by taking proper account of the distribution of $`\chi _0`$. We define a critical time for saturation, $`\tau _\mathrm{c}`$, as the inverse of the frequency at which the value of the susceptibility reaches 95% of its asymptotic value for extremely long times. We studied the $`T`$ dependence of this critical time $`\tau _\mathrm{c}`$. For all temperatures considered the values of $`\tau _\mathrm{c}`$ are extremely large, which is a sign of the glassy nature of our systems. Since $`\tau _\mathrm{c}`$ is long and close to the saturation of $`\chi _0`$, we expect our results to be rather accurate for this study.
Fig. 3 plots the logarithm of $`\tau _\mathrm{c}`$ vs. $`T^{-1/2}`$ for four sizes of the standard model and for two sizes of the CIB model. The data are fitted quite well by straight lines, indicating that a mechanism similar to the one which gives rise to the $`T^{-1/2}`$ conductivity law is also effective in the dielectric susceptibility. This should of course be expected because of the close connection between the two properties. The square of the slope of each straight line yields a characteristic temperature $`T_0`$, which has often been associated with variable range hopping theory in a Coulomb gap. In that theory $`T_0`$ is given by: $$T_0=\beta \frac{e^2}{4\pi ϵ_0k_Ba}$$ (14) where $`\beta =2.8`$ in three dimensions . Such a theory does not take correlations into account, so comparing our results with it can assess the importance of many-body effects. The values of $`\beta `$ obtained from Fig. 3 are $`0.9\pm 0.2`$ for the standard model and $`0.9\pm 0.1`$ for the CIB model, i.e., about three times smaller than in the one-particle theory. This indicates the importance of correlations. The results follow the same trend we found for two-dimensional systems in a numerical simulation study of the conductivity , where we also found $`\beta `$ systematically smaller than predicted by the one-particle theory, and were able to identify specific many-body processes. To evaluate, at least approximately, the frequency behavior of the macroscopic susceptibility we calculated $`\chi (\omega )`$ using Eqs. (10) and (12). The results are exhibited in Fig. 4, where $`\chi `$ is plotted vs. frequency for several temperatures. The data correspond to the data for $`\chi _0`$ of Fig. 1. Note that the susceptibility diverges in the static limit at all temperatures examined, but at a different rate for each temperature. Coulomb glasses would screen like a metal were it not for their glassy nature, which produces typical times for the divergence of the dielectric function much larger than measurement times. ## VI Conclusions We derived a microscopic expression for the dielectric susceptibility $`\chi `$ applicable to hopping systems, including systems where interactions are important. The expression is particularly suitable for low frequencies and corresponds to the expression used in classical electrodynamics. Si and Varma have recently studied the same problem in the metallic limit of the metal-insulator transition for two-dimensional systems, and obtained that the static compressibility vanishes at the transition. Some previous works used an expression based on an analogy between spin and Coulomb glasses. We argued that this analogy cannot be extended to the susceptibility. The fundamental reason is that, unlike in spin glasses, the susceptibility in hopping systems arises from non-local processes. The fluctuation–dissipation theorem tells us that the dielectric susceptibility is a function of the thermal fluctuations of the dipole moment of the system, instead of the fluctuations of the charge density, which is the result one obtains when using the analogy between Coulomb and spin glasses. We calculated $`\chi `$ numerically for three–dimensional Coulomb glass systems as a function of temperature and frequency. We found that $`\chi `$ diverges as the frequency tends to zero. One has to consider sizes larger than a critical one, of approximately $`N=200`$, for the CIB model and for the standard model with $`W=3`$.
The logarithm of the critical time for saturation varies proportionally to $`T^{-1/2}`$, the same dependence as in variable range hopping. The characteristic temperature of this dependence corresponds to $`\beta `$ approximately equal to 0.9, a factor of three smaller than the theoretical prediction for the equivalent constant appearing in variable range hopping. ###### Acknowledgements. We acknowledge financial support from the DGES project number PB96-1118, the SMWK, and the DFG (SFB 393). A great part of this work was performed during A. D.-S.’s visit to the IFW Dresden; A. D.-S. thanks the IFW for its hospitality.
# Non-linear coupling of spiral waves in disk galaxies: a numerical study ## 1 Introduction The linear theory of spiral density waves and modes has been very successful at explaining many features of the dynamics of observed or numerically generated galactic disks. Non-linear effects (other than the generation of shocks in the gaseous component) are most often discounted because of the small relative amplitude of the waves (i.e. the ratio of the perturbed to the unperturbed density or potential): it is usually found to be in the range 0.1–0.3, so that the quadratic terms from which non-linearities arise are of order 0.1–0.01, negligible compared to the linear ones (see e.g. Strom et al. 1976). If spiral waves could be described only by means of linear processes, each wave or mode present in the disk would behave independently; they would be subject to the direct consequences of their linear behavior, and in particular (see Binney & Tremaine 1987, and references therein):
* they would be amplified by the Swing mechanism at their corotation radius.
* they would conserve energy and angular momentum during their propagation, except at their Lindblad resonances where they exchange them with the stars by Landau effect.
* their radial propagation can result in transient behaviors, i.e. waves emerging from the thermal noise or due to the tidal excitation of a companion, amplified once as they are reflected from their corotation radius, and traveling back to their Lindblad resonance where they are damped.
* they can also appear as exponentially growing normal modes: these result from the fact that, if a spiral wave can propagate to the galactic center, it is reflected towards its corotation radius. There it is reflected back towards the center, and at the same time Swing-amplified. Thus the center and corotation define a “cavity” within which the wave travels back and forth. Classically, an integral phase condition then defines a discrete set of frequencies, such that after one propagation cycle the wave returns with its initial phase; since the wave is amplified at each cycle by the same factor (say a factor $`\mathrm{\Gamma }`$ every cycle time $`\tau _c`$), this defines an exponential growth rate $`\gamma =\mathrm{ln}\mathrm{\Gamma }/\tau _c`$.
* Swing amplification results from the property that waves inside corotation (i.e. inside the cavity) have negative energy, associated with the fact that their azimuthal phase velocity (pattern speed) is lower than the rotation frequency of the stars. On the other hand, as a wave is reflected inward, it excites beyond corotation a wave at the same frequency which has positive energy, because there it rotates faster than the stars, and which travels outward to the Outer Lindblad Resonance. Conservation rules then imply that the negative energy of the waves inside corotation must grow at the rate at which positive energy is emitted beyond corotation.
* in a stellar disk, spiral waves propagate only between their Lindblad resonances, where they are damped by Landau effect.
The latter effect was not a strong constraint in early numerical studies of spiral waves, which for numerical stability reasons were done in models with very weakly peaked rotation curves at the galactic center.
As a consequence it was easy to find normal modes with a frequency high enough to avoid having an Inner Lindblad Resonance (ILR) near the center, yet low enough to meet their Outer Lindblad Resonance (OLR) only at a large radius, so that they could essentially extend throughout the disk. This situation changed with the introduction of faster computers and optimized codes, which made it possible to deal with realistically peaked rotation profiles at the center. In that case waves can avoid crossing an ILR only if they have a high frequency, but then their OLR occurs at a rather small radius: thus any single wave or mode cannot efficiently carry energy and angular momentum over a large radial span. In that situation Sellwood (1985), who was the first to run such realistic simulations, obtained unexpected results, first mentioned in his study of the spiral structure of the Galaxy; the simulation showed two different $`m=2`$ (i.e. two-armed) long-lived patterns, with well-defined frequencies (typical of linear normal modes), which however did not obey the usual rules associated with linear theory: the inner one, a bar, had a frequency high enough to avoid an ILR, but did not extend beyond its corotation radius, located at about mid-radius of the disk, implying that it should not have been amplified to reach its large amplitude. The outer pattern, on the other hand, had a lower frequency, so that it did have an ILR: Landau damping should thus have forbidden it to survive for a long time in the disk. Thus the simulation showed waves which followed in many respects the behavior expected for normal modes of the linear theory, yet should have been forbidden by that theory; Tagger et al. (1987) and Sygnet et al. (1988) presented an explanation based on a remark which was later confirmed in all similar simulations: the two patterns overlapped over a very narrow radial range, which coincided both with the corotation of the inner one and the ILR of the outer one. They showed (this is in fact a generic property of non-linear wave coupling, also observed in other fields of physics, see Tagger & Pellat 1982) that this coincidence of resonances makes the non-linear interaction between the two patterns much more efficient than the crude order-of-magnitude estimate mentioned previously. A detailed kinetic description of this interaction allowed them to find that the two patterns should exchange energy and angular momentum between them and with their beat waves, namely an $`m=4`$ and an $`m=0`$ wave (where $`m`$ is the azimuthal wavenumber), with respectively the sum and the difference of their frequencies. They found that these beat waves also have a Lindblad resonance at the interaction radius, and that the coincidence of these resonances does make it realistic to have a strong exchange of energy and angular momentum, i.e. a strong non-linear effect, even at relatively small amplitudes. This made it possible to consider a scenario where the inner mode is amplified by the linear Swing mechanism, and stabilized non-linearly at a finite amplitude by transferring energy and momentum both to the outer mode and to the coupled ones, i.e. their beat waves (although the non-linear interaction could also result in non-time-steady, or even chaotic, behaviors). Thus the various waves involved conspire to carry the energy and angular momentum, extracted by the first mode from the inner parts of the disk, much farther out than it alone could.
Unfortunately the absence, in Sellwood’s published runs, of diagnostics for the $`m=4`$ and $`m=0`$ modes did not allow further checks of this model. In a series of papers Sellwood and collaborators gave a better characterization of the behavior associated with the two $`m=2`$ patterns, and presented alternative explanations. Sellwood & Sparke (1988), based on simulations by Sparke & Sellwood (1987), presented more detailed results and suggested that this behavior might be quite common in barred spiral galaxies, where it might help solve the long-standing conflict between the assumed locations of the corotations of the bar and of the spiral. They also pointed to an interesting feature which might make the relation with observations quite difficult: since the bar and the spiral rotate at different speeds, they should at any given time be observed at random relative azimuthal positions, i.e. with the tip of the bar and the inner tip of the spiral at the same radius, but not at the same position angle (in fact Sygnet et al. 1988 noted that precisely this was suggested by Sandage (1961) for SB(r) galaxies, though Buta (1987) does not confirm this behavior). Indeed Sellwood & Sparke (1988) plot isocontours of the perturbed potential showing this property: the bar and the spiral do form clearly detached patterns; on the other hand the isodensity contours are much more fuzzy, and hardly ever detached: this can be attributed to the fact that the density responds to the perturbed potentials in a complex manner, and manages to establish material bridges between the two patterns. This difference might explain the differing conclusions of Sandage and Buta, since observations deal mainly with the density contrast (through the complex process of star formation), and thus could not easily show the difference between the two patterns. Sellwood & Kahn (1991) considered an alternative explanation for the behavior observed in these simulations, based on grooves and ridges in the surface density or angular momentum distribution. They find numerically and analytically an instability, which they call the “groove mode”, and which is in fact essentially the “negative-mass instability” already found by Lovelace & Hohlfeld (1978), due to either a ridge or a groove in the density profile of the disk. It is composed of waves emitted on both sides of the groove or ridge radius, with their corotation close to that radius. Sellwood & Lin (1989) presented a recurrent spiral instability cycle based on this mechanism. Their simulations (made under special numerical conditions which we discuss below) show that a transient spiral instability can extract energy and angular momentum from its corotation region and transfer them to stars close to its Lindblad resonance. This creates a narrow feature in phase space, i.e. a groove or ridge in density space, which can give rise to a new instability with its corotation at this radius, close to the Lindblad resonance of the original wave. This new instability will in turn deposit the energy and momentum close to its own Lindblad resonance, where a third one can then develop, and so on. In this manner Sellwood & Lin obtain a whole “staircase” of spiral patterns conspiring to carry the angular momentum outward, each having its corotation close to the OLR of the previous one. They rule out mode coupling as an explanation by interrupting the run at a given time, shuffling the particles azimuthally to erase any trace of a non-axisymmetric wave, and starting the run again.
The “scrambled” simulation generates precisely the same wave as the original one, proving that in these simulations coherent coupling between the waves is not relevant. Two main arguments make us believe that this recurrent cycle is not at the origin of the behavior discussed in what we would call the generic case, i.e. Sellwood’s (1985) work, the present one, as well as much numerical work done without the ad hoc numerical restrictions used by Sellwood & Lin (1989). The first argument is that in their cyclic mechanism the secondary wave is found to have its corotation at the Outer Lindblad Resonance of the primary; in the generic case, on the other hand, the secondary wave is generated with its ILR at the corotation of the primary. Power density contours obtained in our simulations, as will be shown below, allow one to discriminate clearly between these two possibilities, and rule out the groove mechanism as a source for the secondary in the generic case. Furthermore, Sellwood & Lin use very artificial conditions to exhibit more clearly the physics they want to discuss, so that it does not get blurred by all the complex physics, numerical or real, involved in a full simulation. In particular they make use of the fact that their code is written in polar coordinates to artificially eliminate any non-axisymmetric feature other than $`m=4`$, the wavenumber of the waves considered in their work. Thus non-linear coupling with the $`m=4+4=8`$ is ruled out, making any comparison with the generic case dubious. Coupling with the $`m=4-4=0`$, i.e. the axisymmetric component, is still possible, but it is then not eliminated by the scrambling: indeed the distribution function after scrambling has every reason not to be an equilibrium one (i.e. a function only of the constants of motion, i.e. constant along epicyclic orbits) in the region of the ILR of the wave that was dominant before scrambling; thus $`m=0`$ oscillations at the epicyclic frequency in the rotating frame, i.e. precisely at the frequency of the secondary wave found in the simulations, must be generated by the scrambling, casting at least some doubt on the exact physics of the subsequent evolution. Thus non-linear coupling remains our preferred explanation for the behavior of “generic” simulations, and in this paper we present numerical work that substantiates this explanation. An additional incentive to do this is that in a more recent paper (Masset & Tagger, 1996b) we presented analytical work showing that non-linear coupling is a very tempting explanation for the generation of galactic warps: this is a long-standing problem, since warps (which are bending waves of the disk, very similar in their physics to spiral waves) are not linearly unstable (or only extremely weakly so, see Masset & Tagger 1996a); thus, contrary to spiral waves, there is no simple way to explain their nearly universal observation in edge-on galaxies, even isolated ones (see Masset & Tagger, 1996b, for a discussion of alternative explanations that have been considered). In our mechanism the spiral, as it reaches its OLR, can transfer to a pair of warps the energy and angular momentum it has extracted from the inner parts of the galactic disk. The first warp would be the strong bending of the gaseous disk beyond the Holmberg radius, while the second would be the short-wavelength corrugation observed within the Holmberg radius in many galaxies, including ours.
In order to study this mechanism numerically, we have written a 3-D particle-mesh code, whose results will be presented elsewhere. As a first step, we have used a 2-D version of this code for the present work, to give a more detailed analysis (which we consider as a numerical confirmation) of non-linear coupling between spiral waves and modes. Let us note that preliminary 3-D runs already give evidence of non-linear coupling between spiral waves, with an even stronger efficiency than the example presented here. This paper is organized as follows: in a first part we present general background on mode coupling. We present the selection rules and justify the high efficiency of coupling when the frequencies of the coupled waves are such that their resonances coincide. Since the physics of coupling can be understood without heavy mathematical derivations, we have avoided lengthy and intricate details of the derivation of the coupling efficiency, which can be found in the references given in Sygnet et al. 1988, or in Masset & Tagger 1996b. In a second part we present the characteristics of the code, and in a third part we present the results of a run which shows the unambiguous signature of non-linear coupling between the bar and $`m=0`$, $`m=2`$ and $`m=4`$ spiral waves. An additional run without the central bar is also presented, in order to show that non-linear coupling is indeed responsible for the behavior observed in the external parts of the galaxy (i.e. the excitation of a slower spiral whose ILR coincides with the corotation of the bar). ## 2 General notions on non-linear coupling ### 2.1 Notations We denote by $`m`$ the azimuthal wavenumber of a wave, an integer corresponding to the number of arms of this wave. We denote by $`\omega `$ the frequency of a wave in the galactocentric frame, with $`\mathrm{\Omega }_p=\omega /m`$ the pattern frequency (in the following, including the plots, we will primarily label waves by $`\omega `$ rather than $`\mathrm{\Omega }_p`$). Finally we denote by $`\mathrm{\Omega }(r)`$ the angular rotation velocity of the stars, $`\kappa (r)`$ the epicyclic frequency, and $`\sigma _r`$ and $`\sigma _\theta `$ the radial and azimuthal velocity dispersions. ### 2.2 Non-linear coupling and selection rules Mode coupling is a very specific non-linear mechanism (see e.g. Laval & Pellat 1972 and Davidson 1972 for a general discussion). It contrasts with the usual picture of strong turbulence, where a large number of modes interact, forming in the asymptotic limit a turbulent cascade (e.g. the Kolmogorov cascade in incompressible hydrodynamics). This asymptotic limit is reached when a whole spectrum of waves or modes is excited, with a very small correlation time, so that each mode exists only for a very short time before it loses its energy to others (e.g. in the Kolmogorov spectrum the correlation time is of the order of the eddy turnover time). Mode coupling, on the other hand, occurs in situations where only a small number of waves or modes can exist, so that each interacts non-linearly with only a few others, ideally only two.
In particular we will see in our numerical results that if the two $`m=2`$ patterns interact non-linearly with an $`m=4`$, the $`m=6`$ that results from the coupling of one $`m=2`$ with the $`m=4`$ can be clearly identified, but remains so weak that its influence can be neglected (technically, since the $`m=4`$ results from the non-linear interaction of two waves, it is associated with quadratic terms in the hydrodynamical equations; the $`m=6`$, resulting from the coupling of the $`m=4`$ with one $`m=2`$, is then associated with third-order terms, which remain small). This small number of active modes translates into long correlation times, i.e. the quasi-stationary structure found in the simulations. In a linear analysis all the waves present in the disk behave independently and do not interact. If one retains higher-order terms of the hydrodynamical or kinetic equations, this is no longer true and waves can exchange energy and angular momentum, provided that they fulfill a number of selection rules. Let us consider two spiral density waves 1 and 2, with azimuthal wavenumbers $`m_1`$ and $`m_2`$ and frequencies in the galactocentric frame $`\omega _1`$ and $`\omega _2`$. The perturbed quantities relative to each wave will be of the form: $$\xi _{1,2}\propto e^{i(\omega _{1,2}t-m_{1,2}\theta )}$$ where $`\xi `$ stands for one of the perturbed quantities (a velocity component, the density or the potential). In the hydrodynamical or kinetic equations, one finds cross products of the form $`\xi _1\xi _2`$ and $`\xi _1\xi _2^{}`$ (where the $``$ denotes the complex conjugate), arising from non-linear terms (e.g. the $`v\cdot \nabla v`$ or the $`\rho \nabla \mathrm{\Phi }`$ terms of the Euler equation). These terms correspond to the beat waves associated with the perturbed quantity: $$\xi _B\propto e^{i[(\omega _1\pm \omega _2)t-(m_1\pm m_2)\theta ]}$$ Hence the beat waves will have the wavenumber and frequency: $$m_B=m_1\pm m_2$$ (1) and $$\omega _B=\omega _1\pm \omega _2$$ (2) Thus when one performs a Fourier analysis of the Euler equation over time and azimuthal angle, at the frequency $`\omega _B`$ and the wavenumber $`m_B`$ one finds both linear terms, directly proportional to the amplitude of the beat wave, and quadratic ones, proportional to the product of the amplitudes of waves 1 and 2. These terms act as a sink or source of energy for the beat wave. Two cases can then occur: the frequency and wavenumber of the beat wave may correspond to a perturbation which cannot propagate in the system; this perturbation is then simply forced by the “parent” waves. But they may also correspond to a wave which can propagate (i.e. they obey the linear dispersion relation); this means that the system can spontaneously oscillate at the frequency and wavenumber excited by the parents and will thus respond strongly to the excitation: just like a resonantly driven oscillator, the beat wave can reach a large amplitude<sup>1</sup><sup>1</sup>1A small point must be made about the vocabulary: because of this analogy with a resonant oscillator, this process is often called resonant mode coupling in the relevant literature. This term is thus independent of the additional fact, discussed below, that in our case the coupling occurs at a radius where the waves involved have a resonance with the particles.. We will see below that this is the case we study; the selection rules and the associated coincidence of resonances are illustrated by the short numerical check that follows.
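As a concrete check of these rules, consider a toy flat rotation curve (an illustration only, not the rotation curve of our simulations): an inner $`m=2`$ with corotation at $`r_c`$ and an outer $`m=2`$ with its ILR at the same radius. The beat waves defined by Eqs. (1)-(2) then also resonate there, as the following sketch verifies:

```python
import numpy as np

# Toy flat rotation curve: Omega(r) = v0/r, kappa(r) = sqrt(2) * Omega(r).
v0, r_c = 1.0, 1.0
Omega = lambda r: v0 / r
kappa = lambda r: np.sqrt(2.0) * v0 / r

m1, w1 = 2, 2 * Omega(r_c)                  # inner m=2: corotation at r_c
m2, w2 = 2, 2 * Omega(r_c) - kappa(r_c)     # outer m=2: ILR at r_c

# Beat waves from the selection rules (1)-(2)
mB_sum, wB_sum = m1 + m2, w1 + w2           # the m=4 beat wave
mB_dif, wB_dif = m1 - m2, w1 - w2           # the m=0 beat wave

# The m=4 beat wave has its ILR exactly at r_c ...
print(np.isclose(wB_sum - mB_sum * Omega(r_c), -kappa(r_c)))    # True
# ... and the m=0 beat wave oscillates at the local epicyclic frequency.
print(np.isclose(wB_dif, kappa(r_c)))                           # True
```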
In that situation, since the three waves obey the linear dispersion relation and can reach large amplitudes, and since each corresponds to the beating of the other two, they can no longer be distinguished as “parent” and “beat” waves: one has a system where the three waves play similar roles and can strongly interact, exchanging their energy and angular momentum. In the homogeneous (and thus much simpler) physical systems where this has been well studied, it has been found to result either in stationary behaviors (e.g. one wave is linearly unstable and extracts free energy from the system; the other two are linearly damped and can saturate the growth of the first one at a finite amplitude, by dumping the energy in a different form), or in cyclic (the classical Manley-Rowe cycles) or even chaotic behaviors (see e.g. Laval & Pellat 1972, Davidson 1972, Meunier et al. 1982). For a set of three waves such that their frequencies and azimuthal wavenumbers fulfill the relations (1-2), which are the selection rules, one finds linearly that the energy and angular momentum fluxes that each wave carries are constant; non-linearly, one finds that the time derivative of the energy density of each wave is proportional to the product of the amplitudes of the other two, i.e. these waves exchange energy and angular momentum. The derivation of the exchange rate can be done using the formalism of quadratic variational forms, and is beyond the scope of this paper, devoted mainly to numerical results. The derivation of the efficiency of the non-linear coupling of bar and spiral waves is given in kinetic formalism by Tagger et al. 1987 and Sygnet et al. 1988, and in hydrodynamical formalism for the coupling of spiral and warp waves by Masset & Tagger, 1996b. These papers also explain why we have not introduced in the above discussion the radial and vertical wavenumbers of the waves, which should a priori play the same role as the azimuthal wavenumbers or the frequencies in the selection rules. The reason is that, in these directions, the system is inhomogeneous, so that the Fourier decomposition of linear waves is irrelevant. In the vertical direction, the waves have a standing structure and one finds that the coupling coefficient is simply proportional to a scalar product of the vertical structure functions (the Fourier integrals performed in the azimuthal direction are just a particular case of this scalar product). In the radial direction, we will find that the coupling occurs over a very narrow annulus, so that the waves essentially “ignore” their radial wavelengths (which might be derived in a WKB approximation). The next section explains why we expect coupling to occur efficiently over a small radial extent. Thus, for a wave which for instance receives energy by non-linear coupling, one can consider that it is excited at that very precise radius, with a frequency and azimuthal number given by the selection rules. It will then propagate radially with a radial wavenumber given by its local dispersion relation, independent of the radial wavenumbers of the other waves.

### 2.3 Localization of the coupling

As mentioned in the preceding section, we can write the time derivative of the energy density of each wave as a coupling term involving the product of amplitudes of the other two. This time derivative is performed following the wave propagation, i.e.: $$dE/dt\equiv \partial E/\partial t+(1/r)\partial (rc_gE)/\partial r=\text{Coupling Term}$$ where $`c_g`$ is the group velocity, i.e.
the velocity at which energy is advected radially, and the right-hand side vanishes in linear theory. If we assume for simplicity a stationary state (we will see from our numerical simulations that this assumption is not too far from reality), we can write: $$\partial (c_gE)/\partial r=\text{Coupling Term}$$ where we have neglected the effect of cylindrical geometry, since we expect the coupling to be very localized radially. (This assumption will be checked a posteriori on the numerical results in section 4.3; we will see that even when the coupling partners coexist on a wide range of radii, the coupling efficiency, i.e. the energy and angular momentum exchange between these modes, is strongly peaked on a narrow annulus which will be identified as the corotation of the bar.) This shows in particular that wherever $`c_g`$ vanishes, the variations of $`E`$ can be strong even if the coupling term is not large. In fact $`c_g`$ vanishes at Lindblad resonances and at the edge of the forbidden band around corotation (Toomre 1969), as can be seen from Fig. 1, so that we can expect non-linear coupling to be highly efficient when the waves are close to these radii. The physical meaning of this enhanced efficiency is simply that the waves stay in these regions for a long time, so that they can be efficiently driven, and exchange a sizable fraction of their energy, even at low energy transfer rates. Let us note finally that if one of the waves is at its Lindblad resonance and a second one at its corotation, their beat waves, because of the selection rule on frequencies, are also at a Lindblad resonance, further improving the efficiency of non-linear coupling. This appears in a different form in the kinetic description used by Tagger et al. 1987 and Sygnet et al. 1988: there the coupling coefficients appear as integrals over phase space of the stellar distribution, containing two resonant denominators of the form $`\omega -m\mathrm{\Omega }`$ and $`\omega -m\mathrm{\Omega }\pm \kappa `$ (instead of the single one found classically in the linear terms leading to Landau damping), thus making the coupling very efficient when the resonances of the waves coincide.

## 3 Numerical implementation

### 3.1 Algorithms

We have written a classical Particle-Mesh (PM) two-dimensional code simulating the stellar component of disk galaxies, with special emphasis on diagnostics adapted to the physics we describe. We have not taken into account the gaseous component, which is not expected to modify the coupling mechanism dramatically. A more detailed discussion of the role a dissipative component could play will be given in section 4.4.2. The density is tabulated on a Cartesian grid using the bilinear Cloud-in-Cell (CIC) interpolation. The potential is computed using an FFT algorithm with a doubling-up of the grid size in order to suppress tidal effects of aliases (see Hockney & Eastwood 1981). We use a softened gravity kernel ($`G/(r^2+ϵ^2)^{1/2}`$) for the computation of the potential, with a softening parameter $`ϵ`$ chosen so as to mimic the effect of the disk thickness. Only the stars which are in the largest disk included in the grid are taken into account for the evaluation of the potential, in order not to artificially trigger $`m=4`$ perturbations. The force on each star is computed from the potential using a CIC scheme, so that the stars undergo no self-forces. The positions and velocities are advanced using a time-centered leap-frog algorithm.
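The algorithms of this section are compact enough to sketch. The fragment below is a minimal 2-D Python illustration, not the code used for the runs: the function names, units and array layout are ours, and bookkeeping such as restricting the potential evaluation to the largest inscribed disk is omitted. It shows the CIC deposit and gather, the doubled-grid FFT convolution with the softened kernel, and a time-centered (kick-drift-kick) leap-frog step.

```python
import numpy as np

def cic_weights(x, y, cell):
    """Common CIC bookkeeping: cell indices and the four bilinear weights."""
    gx, gy = x / cell, y / cell
    i, j = np.floor(gx).astype(int), np.floor(gy).astype(int)
    fx, fy = gx - i, gy - j
    return i, j, ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                  (0, 1, (1 - fx) * fy),       (1, 1, fx * fy))

def cic_deposit(x, y, n, cell):
    """Bilinear (Cloud-in-Cell) assignment of unit-mass particles to the mesh."""
    i, j, w4 = cic_weights(x, y, cell)
    rho = np.zeros((n, n))
    for di, dj, w in w4:
        np.add.at(rho, ((i + di) % n, (j + dj) % n), w)
    return rho

def cic_gather(field, x, y, cell):
    """The same bilinear kernel reads a mesh field back at the particles,
    so that a star exerts no net force on itself."""
    n = field.shape[0]
    i, j, w4 = cic_weights(x, y, cell)
    out = np.zeros_like(x)
    for di, dj, w in w4:
        out += w * field[(i + di) % n, (j + dj) % n]
    return out

def potential(rho, cell, eps, G=1.0):
    """FFT convolution with the softened kernel -G/(r^2 + eps^2)^(1/2); the
    grid is doubled in each direction (zero padding) so that the periodic
    images of the FFT exert no tidal force on the physical region
    (Hockney & Eastwood 1981)."""
    n = rho.shape[0]
    big = np.zeros((2 * n, 2 * n)); big[:n, :n] = rho
    d = np.arange(2 * n); d = np.minimum(d, 2 * n - d) * cell
    kernel = -G / np.sqrt(d[:, None] ** 2 + d[None, :] ** 2 + eps ** 2)
    phi = np.fft.irfft2(np.fft.rfft2(big) * np.fft.rfft2(kernel), s=(2 * n, 2 * n))
    return phi[:n, :n]

def accelerations(x, y, n, cell, eps):
    """Mesh force = centered-difference gradient of phi, gathered with CIC."""
    phi = potential(cic_deposit(x, y, n, cell), cell, eps)
    gx = -(np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * cell)
    gy = -(np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * cell)
    return cic_gather(gx, x, y, cell), cic_gather(gy, x, y, cell)

def leapfrog_step(x, y, vx, vy, dt, n, cell, eps):
    """One time-centered leap-frog (kick-drift-kick) step."""
    ax, ay = accelerations(x, y, n, cell, eps)
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay
    x, y = x + dt * vx, y + dt * vy
    ax, ay = accelerations(x, y, n, cell, eps)
    vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay
    return x, y, vx, vy
```

A production code would evaluate the force only once per step, reusing the closing half-kick of the previous step; the version above recomputes it so as to keep the time-centering explicit.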
Finally we have added three unresponsive static components: a central mass, a bulge and a halo. The analytical force law corresponding to each of these components is given in table 1.

### 3.2 Initialization

At the simulation startup, particles are placed at random radii with a probability law resulting in an exponential surface density profile: $`\mathrm{\Sigma }(r)=\mathrm{\Sigma }_0e^{-r/R_d}`$, where $`\mathrm{\Sigma }_0`$ is the central surface density and $`R_d`$ is the length-scale of the galactic disk. All the particles have the same mass. The velocities are randomly assigned using the epicyclic approximation so that:

* The velocity distribution be a local anisotropic Maxwellian.
* The ratio of the radial to azimuthal dispersions, $`\sigma _r/\sigma _\theta `$, be $`2\mathrm{\Omega }/\kappa `$, with $`\mathrm{\Omega }`$ and $`\kappa `$ computed consistently from the static and stellar potentials.
* The average azimuthal velocity be the rotational velocity corrected by the Jeans drift arising from the gradient of surface density.
* The Toomre $`Q`$ parameter be constant over the whole disk.

Since the epicyclic approximation fails close to the galactic center, these prescriptions do not result in an exact equilibrium distribution in the central regions; we thus see in the first dynamical times of our runs relaxation oscillations, leading to a slight radial redistribution of matter in the vicinity of the galactic center. The truncation radius of the disk corresponds to the edge of the active grid. This grid is chosen large enough so as to ensure that the runs are not perturbed by edge modes. For instance in the runs we present below the average number of particles per cell in the most external cells of the active part of the grid is $`6.5\times 10^{-3}`$, i.e. totally negligible.

### 3.3 Tests

The behavior of the code has been tested so as to ensure that:

* a single massive particle follows Newton’s first law, i.e. is not subject to self-force;
* two particles obey the two-body laws within errors arising from finite cell size and finite timestep.

Furthermore, during a run, we check that the total angular momentum is exactly conserved (to within numerical precision), and that the total energy is properly conserved (within 10%) over the whole duration of the run; we monitor the fraction of stars ejected out of the active grid, so as to be sure that it remains sufficiently small.

### 3.4 Spectral analysis

The purpose of our numerical simulations is to check for the presence of non-linear coupling between waves and modes (simply defined here as quasi-stationary structures: they can thus be either linear eigenmodes, if they stay at low amplitude and unaffected by non-linear effects, or more complex non-linear entities as will be found below). We do this by identifying the features arising during the run, and checking the frequency and wavenumber relations between them. For this we plot spectral density contours, in the same manner as Sellwood 1985: at each output time we perform a Fourier transform of the perturbed density in the azimuthal direction, resulting for each value of the azimuthal wavenumber $`m`$ in spectra depending on time and radius. In order to avoid grid artifacts, we compute the spectra directly from the coordinates of each particle rather than by a Fourier Transform of the interpolated grid.
Hence for each value of $`m`$, we compute: $$W_C^{(m)}(r,t)=\sum _{i=1}^{N}\mathrm{cos}(m\theta _i)b(r,r_i)$$ and $$W_S^{(m)}(r,t)=\sum _{i=1}^{N}\mathrm{sin}(m\theta _i)b(r,r_i)$$ where $`(r_i,\theta _i)`$ are the polar coordinates of the $`i^{th}`$ particle, $`N`$ is the total number of particles, and, for a given $`r_i`$, $`b(r,r_i)`$ is a “bin” function which linearly interpolates the value on a one-dimensional radial grid. The temporal spectrum is obtained by taking the Fourier Transform $`\stackrel{~}{W}^{(m)}(r,\omega )`$ with respect to time of the complex function $`W_C^{(m)}(r,t)+iW_S^{(m)}(r,t)`$. We then plot either the amplitude or the power spectrum (i.e. either $`|\stackrel{~}{W}^{(m)}(r,\omega )|`$ or $`|\stackrel{~}{W}^{(m)}(r,\omega )|^2`$), properly normalized so as to represent either the relative perturbed density in the case of the amplitude spectrum, or the energy density in the case of the power spectrum. Unlike Sellwood (1985), we eliminate the first timesteps when computing the temporal Fourier transform. Indeed these first timesteps correspond to a transient regime, and taking them into account degrades the spectrum and makes its interpretation more complex. The choice of the first timestep used to compute the Fourier transform is made by looking at the $`W_{C,S}^{(m)}(r,t)`$ plots. We start the Fourier transform when the features observed on these plots appear to have settled to the quasi-periodic behavior always obtained after a few rotation periods.

## 4 Results

We present two complementary runs. The first one exhibits a simple, typical example where a strong bar develops, together with a slower outer spiral; we confirm that their frequencies are such that the corotation of the bar coincides with the ILR of the spiral. The second run is performed with the same parameters, except that we inhibit the bar by changing the rotation profile at the center, so that the bar gets damped at its ILR. The rotation profile in the outer parts is not changed, and we check that the outer spiral obtained in the first run does not develop in this second one: this proves that its formation was not due to local conditions, but indeed to the non-linear excitation by the bar.

### 4.1 Run 1

In this run we use a $`128\times 128`$ active mesh with $`600`$ pc wide cells. The galaxy is an exponential disk of 80,000 particles with total mass $`6.1\times 10^{10}`$ $`M_{\odot }`$ and a characteristic length $`R_d=3.5`$ kpc. The softening length $`ϵ`$ is $`300`$ pc. The Toomre $`Q`$ parameter is initially constrained to be $`1.3`$ over the whole disk. The disk is embedded in a static halo and a static bulge. The halo has a core radius $`r_c=2`$ kpc and an asymptotic speed $`v_{\mathrm{\infty }}=120`$ km/s, so that the mass inside the smallest sphere containing the whole disk is $`1.2`$ times that of the disk. The bulge has a radius $`b=2`$ kpc and its mass is $`M_b=5\times 10^{10}`$ $`M_{\odot }`$. There is no central point-like mass, i.e. $`M_c=0`$. We use a timestep of $`0.75`$ Myr, we perform the simulation over 16,000 timesteps, and we output the grid density, $`W_{C,S}^{(0,1,2,3,4)}(r,t)`$ and some other quantities (velocity dispersion, energy, etc.) every 20 timesteps. The chosen grid is oversized for the study of the disk, in order to avoid edge effects, which appeared to strongly modify the behavior of previous runs. Thus in all the plots and spectra we present in this paper, we have eliminated the outer parts of the active grid, where results are quite noisy due to the rarefaction of stars.
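As a concrete reference for the spectra discussed next, the diagnostic of Sect. 3.4 reduces to a few lines of code. The sketch below (same hypothetical Python setting as the earlier fragment; a uniform radial grid of `nr` bins starting at `r0` is assumed) shows both the particle binning and the temporal transform.

```python
import numpy as np

def w_cs(r, theta, m, r0, dr, nr):
    """W_C^(m)(r,t) and W_S^(m)(r,t): azimuthal Fourier sums computed directly
    from the particle coordinates; b(r, r_i) shares each particle linearly
    between its two neighboring radial bins."""
    g = np.clip((r - r0) / dr, 0.0, nr - 1.0 - 1e-9)
    k = g.astype(int)          # lower bin index
    f = g - k                  # fraction assigned to bin k+1
    c, s = np.cos(m * theta), np.sin(m * theta)
    wc, ws = np.zeros(nr), np.zeros(nr)
    np.add.at(wc, k, (1 - f) * c); np.add.at(wc, k + 1, f * c)
    np.add.at(ws, k, (1 - f) * s); np.add.at(ws, k + 1, f * s)
    return wc, ws

def spectral_density(wc_t, ws_t, nskip):
    """Temporal Fourier transform of W_C + i W_S, dropping the first nskip
    (transient) outputs; rows are successive outputs, columns radial bins.
    Plot |result| for the amplitude spectrum, |result|**2 for the power."""
    z = wc_t[nskip:] + 1j * ws_t[nskip:]
    return np.fft.fftshift(np.fft.fft(z, axis=0), axes=0)
```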
In particular, all the spectra are presented over the range 0-27 kpc. The galaxy develops a strong bar which appears slightly before 1 Gyr, and triggers a strong spiral wave outside corotation. The bar and the spiral heat the disk, so that the spiral arms weaken due to the decreasing efficiency of the Swing mechanism with increasing $`Q`$. Fig. 2 shows two plots of the particle density, the first one when the bar appears, and the second one close to the end of the run, when the disk is quite hot. No striking spiral feature appears on this last plot in the outer part of the galaxy. The simulation could be made more realistic if dissipation were included through the presence of a gaseous component, and by taking into account star formation, which would continuously cool the stellar population and maintain the efficiency of the Swing mechanism, so that we would still have a noticeable spiral structure even at late times. Also, since the response of the gas is strongly non-linear, we expect that it would make the outer spiral structure more prominent. But here we focus on the non-linear coupling, so that the absence of dissipation is not critical for our purpose. We will discuss later the influence it should have on the coupling. Let us now turn to the power spectra of the perturbed density. In Fig. 3 we have plotted isocontours of the function $`W_C^{(2)}(r,t)`$, i.e. the cosine contribution of $`m=2`$ density perturbations as a function of radius (abscissa) and time (ordinate). This can be seen as the point of view of an observer located at radius $`r`$ and $`\theta =0`$, seeing the bar and spiral arms sweep by with time. The lower plot represents the beginning of the simulation, from the initial timestep to $`2000`$ Myr, while the upper one shows the very end of the simulation, with time varying from $`10`$ Gyr to $`12`$ Gyr. On the lower (earlier) plot we first see transient features which propagate outward, as shown by their obliquity. Between 1 Gyr and 1.5 Gyr we see the bar form, resulting in purely horizontal (i.e. standing) features. The plot clearly shows that the bar forms a quasi-stationary structure ending about 10 kpc from the galactic center, i.e. in the region of its corotation. Its corotation will slowly move outward as the bar adiabatically slows down at later times, as often seen in this type of simulation (Pfenniger & Friedli 1991, Little & Carlberg 1991). As expected from linear mode theory, the bar extends beyond corotation as a spiral wave propagating outward. The upper (later) plot shows how robust the bar is, since it has lasted for about 10 Gyr with a frequency that has slowly decreased. The right part of the diagram shows that there are still features propagating outside corotation, but that they are much less regular than the spiral waves formed from the “young” bar (in the upper part of the lower plot). This behavior, which might be thought chaotic, is in fact easily understood from the time Fourier Transform of this diagram (also taking into account $`W_S^{(2)}(r,t)`$ as the imaginary part), i.e. the $`m=2`$ spectral density, shown in Fig. 4. The Fourier transform is performed over 512 outputs (from 288 to 799, i.e. for time ranging from $`4320`$ Myr to $`11985`$ Myr). The thinness of the features obtained shows that they correspond to quasi-stationary structures (bar or spiral) in the disk. In the inner part we see the strong contribution of the bar, which stops at corotation.
Outside corotation we have two structures at different frequencies, explaining the apparent lack of periodicity in this region in Fig. 3 (there is also an intermediate, weaker structure at $`\omega \simeq 22`$ km/s/kpc, for which a tentative explanation will be given later). The faster wave has the same frequency as the bar and corresponds to the spiral wave excited by the bar through the Swing mechanism. The slower one appears to have its ILR at approximately the same radius as the corotation of the bar, as expected from the works of Tagger et al. 1987 and Sygnet et al. 1988. In order to check that this second mode is fed by non-linear coupling, we have to check for the presence of a beat wave near the bar corotation radius. Let us call $`\omega _B`$ the bar frequency and $`\omega _S`$ the frequency of the second (slower) spiral. We measure the frequencies from the maxima on the isocontours, with a typical accuracy of $`\pm 0.5`$ km/s/kpc. We find $`\omega _B=31.8`$ km/s/kpc and $`\omega _S=13.9`$ km/s/kpc. Hence according to equations (1) and (2) we have to check on the $`m=2+2`$ spectrum for the presence of a mode at frequency $`\omega _B+\omega _S=45.7`$ km/s/kpc, and on the $`m=2-2`$ spectrum (i.e. an $`m=0`$ mode, which would appear as a ring in the structure of the galaxy) for the presence of a mode at frequency $`\omega _B-\omega _S=17.9`$ km/s/kpc. These spectra are presented respectively in Figs. 5 and 6, and the expected frequencies are indicated by a dashed line. We have chosen to represent all the spectra of the run with the same coordinate scale. This enables the reader to check graphically for the selection rules by superimposing copies of the spectra. On the $`m=4`$ spectrum we see a major contribution corresponding to the first harmonic of the bar at $`2\omega _B`$, and a weaker one which is at the expected frequency, and which begins close to its ILR, i.e. also close to the expected coupling region. Similarly on the $`m=0`$ spectrum we see that a major contribution comes from the expected beat wave, which is once again located close to the corotation of the bar. More precisely, the measured frequencies are $`\omega _4=44.7`$ km/s/kpc and $`\omega _0=18.3`$ km/s/kpc, in excellent agreement with the expected ones (within 2% for both waves). However, the discussion above is not sufficient to ensure that the slower spiral is non-linearly excited at the bar corotation. Two questions still remain:

* Is the slower spiral really triggered by the bar, or does it exist independently of it? The presence of $`m=0`$ and $`m=4`$ beat waves would still be expected in such a case, since we are dealing with finite amplitude waves, but only as “passive” features proportional to the product of the amplitudes of the parent $`m=2`$ waves.
* Is the coupling localized in the bar corotation/slower spiral ILR region, as expected from the theoretical work of Tagger et al. 1987 and Sygnet et al. 1988?

In order to answer the first question, we have performed a second run, almost identical to the first one, but where we have inhibited the bar formation.

### 4.2 Run 2

In this run we have taken a bulge mass of $`M_b=4.3\times 10^{10}`$ $`M_{\odot }`$ and a central point-like mass of $`M_c=7\times 10^{9}`$ $`M_{\odot }`$. All the other parameters are the same as in run 1. The sum of the bulge mass and the central mass equals the bulge mass of run 1, so that the rotation curve (and thus all the characteristic frequencies) coincides with that of run 1 at radii larger than the bulge radius.
The only difference is that in this new run the ILR curve has no maximum, and thus prevents the formation of a stellar bar (or, to put it differently, the central mass is 11.4% of that of the stellar disk, far above the critical value of about 5% which is thought to be sufficient to destroy the bar, see e.g. Norman et al. 1996, or even lower, 2–3%, see Friedli 1994). The $`m=2`$ amplitude spectrum, computed in exactly the same conditions as in run 1, is presented in Fig. 7. Some faint standing modes (i.e. thin features) appear on this figure, but they are all far fainter than the bar and spirals of run 1 (except maybe at the outer edge, where particle noise may become large). In particular one should note that the first contour level of Fig. 4 was $`4\times 10^{-2}`$, whereas it is $`10^{-2}`$ in Fig. 7. Obviously, in the absence of the central bar, there is no standing mode in this run 2 at the frequency of the slower spiral of run 1, and the mode which appears close to this frequency is far fainter than the slow spiral of run 1. The comparison of the stellar velocity dispersion between run 1 and run 2 is presented in Fig. 8. In the radial range where the slower spiral existed in run 1, we see that the disk temperature is the same for run 1 and run 2. Hence the disks of runs 1 and 2 are very similar (orbital and epicyclic frequencies, temperature, density) in the region over which the slower spiral of run 1 extended, and nevertheless no such mode appears in run 2. This is a strong point in favor of our interpretation of this slow spiral as non-linearly triggered by the bar.

### 4.3 Radial behavior of wave amplitudes in Run 1

The second question we have noted at the end of section 4.1 is whether, as expected from the theoretical works, the coupling efficiency is strongly peaked at the bar corotation. In order to answer this question, we plot in Fig. 9 the amplitudes of the modes involved in the coupling (the slow and fast $`m=2`$, the $`m=0`$ and the $`m=4`$). The amplitudes are computed by integrating, for each value of $`r`$, the amplitude spectrum over a 1.6 km/s/kpc bandwidth centered on the peak frequency of the mode. On this figure we clearly see from the solid line that, in agreement with linear theory, the separation between the bar and the Swing-triggered spiral occurs at 13 kpc, i.e. almost exactly where the corotation appears to be located according to Fig. 4. For $`r>13`$ kpc the Swing-triggered spiral and the slower, non-linearly excited one grow very similarly up to 18 kpc. One can note that the amplitudes of the $`m=0`$ and $`m=4`$ waves are clearly not proportional to the product of the amplitudes of the fast and slow $`m=2`$ spirals: in that case (where these waves could be simply understood as ordinary beat waves, rather than partners in a non-linear mechanism), the $`m=0`$ and $`m=4`$ curves would peak around 18 kpc, and they would have a parabolic shape between 13 and 18 kpc, since both $`m=2`$ curves are linear over this range. Instead of this behavior, both curves rise sharply around 14 kpc and peak around 16 kpc. This is a strong indication that the coupling is very localized; indeed, just like the slow $`m=2`$, the $`m=0`$ and $`m=4`$ are generated at this coupling radius, and then propagate freely in the disk. Hence the coupling partners can coexist over a wide range of radii, whereas the non-linear coupling mechanism which makes them exchange energy and angular momentum takes place at a very well defined radius. Around 18-19 kpc, the $`m=4`$ is attenuated.
This is reasonable since it there reaches its corotation, and thus the “forbidden band” where it does not propagate (just like the fast $`m=2`$ at 14 kpc); on the other hand the $`m=0`$ is also attenuated, something we could not expect from its linear properties. This can be understood by returning to the energy spectrum of the $`m=4`$, Fig. 5: one sees, at $`\omega \simeq 30`$ km/s/kpc, another quite strong $`m=4`$ feature with its ILR at the corotation of the previous one; this is quite close to the $`\simeq 26`$ km/s/kpc expected for the beat wave of the previous $`m=4`$ and of the $`m=0`$; we believe that non-linear coupling is at work also here, allowing the $`m=0`$ and the fast $`m=4`$ to transfer their energy to a slower one. This would be an illustration of the “staircase” of modes often observed in these simulations. Finally, one could wonder why the fast $`m=2`$ seems to extend (and even to be peaked) beyond its OLR. Actually this behavior disappears if we consider the energy density rather than the amplitude, as depicted in Fig. 10. On this figure we see that the fast $`m=2`$ spiral peaks around 20 kpc, where its OLR is expected. There is still a noticeable energy density up to 2 kpc farther. This is in agreement with the epicyclic radius (which is the natural width of the resonance) of about $`\sigma _r/\kappa \simeq 3`$ kpc. One can note once again on this plot that the $`m=4`$ extends only to its corotation at $`r\simeq 18`$ kpc, and that the $`m=0`$ also ends in this region, while the slow $`m=2`$ extends to $`21`$ kpc.

### 4.4 Varying parameters in Run 1

In order to check the influence of the parameters of run 1 on the spectra and on the disk temperature profile (which might be strongly influenced by two-body relaxation in a 2D simulation), we have repeated run 1 with twice as many particles, or with the same number of particles but with a timestep of $`0.25`$ Myr, three times smaller than the initial one. When we perform run 1 with twice the number of particles, the same features remain, i.e. the bar, the Swing-triggered spiral and the slower spiral, at frequencies which have varied by no more than 1 km/s/kpc. The coupling partners also remain, with the same intensities. As expected, the outer part of the spectra is less noisy since the particle noise has decreased. Furthermore, the intermediate faint $`m=2`$ standing mode of Fig. 4 at $`\simeq 22`$ km/s/kpc almost disappears. This is understandable since a good explanation for this mode is the subharmonic excitation of the $`m=4`$ coupling partner, and subharmonic excitation requires initial noise to start. Since the noise is decreased in this new run, the corresponding mode is accordingly fainter. The same kind of argument holds for the alternative mechanism one could consider to justify this mode, which is the groove or ridge mechanism of Sellwood & Lin 1989. Whatever the correct explanation for this mode may be, the non-linear coupling mechanism which excites a slower spiral at its ILR and at the bar corotation is far more relevant to account for the dynamics of the external part of the galaxy. Let us mention that with twice as many particles, the temperature profile obtained in the disk coincides with the one observed with the initial number of particles (i.e. $`80,000`$). This shows that the heating in the simulations is due to the presence and growth of the spiral and the bar, and not to two-body relaxation. When we perform run 1 with a smaller timestep ($`0.25`$ Myr instead of $`0.75`$ Myr), we obtain the same results (i.e.
the same features on all the spectra, with frequencies that have varied by no more than $`0.5`$ km/s/kpc), showing that the timestep chosen for run 1 was sufficiently small.

### 4.5 Discussion about Run 1

#### 4.5.1 Non-linear coupling versus grooves

Run 1 confirms the coincidence of the corotation of the inner (fast) mode and the ILR of the outer (slow) one, as noticed in the simulations of Sellwood 1985 by Tagger et al. 1987 and Sygnet et al. 1988. This differs from the mechanism of Sellwood & Lin 1989, from which one would expect to have the corotation of the outer mode at the OLR of the inner one: Fig. 4 allows us to clearly rule out this mechanism here, since the OLR of the fast mode is at about 20 kpc, whereas the corotation of the slow one is at about 25 kpc. The run also confirms the presence of coupled $`m=0`$ and $`m=4`$ waves at significant amplitudes. Since the slower spiral is long-lived (as shown by the thinness of the isocontours in Fig. 4) although it has an ILR where it should be damped, it must be continuously fed. We attribute this, following the analysis of Tagger et al. 1987 and Sygnet et al. 1988, to non-linear coupling between the two $`m=2`$ modes and the $`m=0`$ and $`m=4`$ beat waves. We also wish to mention the intermediate faint spiral mode at $`\omega =22`$ km/s/kpc, observed in Fig. 4, whose corotation roughly coincides with the OLR of the fast spiral. This mode could then correspond to a groove- or ridge-excited mode, following the mechanism of Sellwood & Lin 1989. However the energy flux it carries is far lower than the one transported by the fast and slow spirals. The excitation of this mode, which appears nearly halfway in frequency between the fast and slow spirals, could have other explanations:

* it could correspond to the subharmonic of the $`m=4`$ beat wave, since it has half its wavenumber and nearly half its frequency. It would be excited at its corotation, where both waves are resonant.
* its OLR coincides with the corotation of the slow spiral, so that it could be non-linearly fed by this slow wave. In favor of this explanation, one can notice that the low-frequency outer feature observed on the $`m=0`$ power spectrum (slightly below $`10`$ km/s/kpc) would be an obvious partner in this coupling.

A combination of these three mechanisms is probably at work in the generation of this intermediate $`m=2`$ feature, and much more detailed simulations, with some additional theoretical work, would be needed to understand their interplay. This might be taken as an illustration of the fact that, at lower amplitudes than the dominant features we have analyzed, one enters the regime of multiple non-linear interactions between numerous partners, which in other conditions could lead to a turbulent cascade, as discussed in the introduction.

#### 4.5.2 Comparison of Swing and non-linear coupling

An important difference between run 1 and the simulation of Sellwood 1985 (and many of our own runs) is that in the latter case the fast mode did not extend beyond its corotation, whereas here the Swing-triggered spiral (at the frequency of the bar) is nearly as strong as the slower one up to 18 kpc, which corresponds to the corotation of the $`m=4`$ coupled wave, and then even dominates it. The reason is that for this first run, in order to simplify the physics involved, we have chosen parameters which optimize the initial efficiency of the Swing, resulting in a very strong bar.
Also, since Sellwood’s work clearly documents a case where the bar does not extend beyond corotation, we have chosen this run here so as to show that both behaviors are possible. We also note, in reference to run 1, that the coupling efficiency, which increases when the group velocities decrease as explained in section 2.3, is expected to vary as $`\sigma _r^{-3/2}`$, where $`\sigma _r`$ is the radial velocity dispersion (see Masset & Tagger, 1996). Since a realistic disk (i.e. with a dissipative component) would remain much colder than the disk of run 1, we expect non-linear coupling to be far more efficient in a realistic disk. Fig. 11 shows how the disk has been heated by the bar. We see that around the corotation of the bar, where coupling occurs, the Toomre $`Q`$ parameter reaches values of 4 to 5 as soon as the bar forms. For a colder disk, with $`Q\simeq 1.5`$, non-linear coupling could be up to 6 times more efficient. On the other hand, the Swing mechanism also becomes less efficient as $`Q`$ increases. Thus a realistic simulation including a gaseous component and star formation would be necessary to check how the energetics of the Swing-triggered spiral and that of the slower one compare in a realistic barred galaxy, as a function of local or global parameters of the disk.

## 5 Conclusion

We have presented a typical numerical simulation of a barred spiral galaxy, which shows a strong and unambiguous signature of non-linear coupling between bar and spiral waves. This non-linear coupling is responsible for the excitation by the central bar of a slower, outer $`m=2`$ spiral wave which can efficiently carry away the energy and angular momentum extracted from the central regions by the bar. This confirms, with more detailed diagnostics, the simulations of Sellwood 1985 and the theoretical explanation of Tagger et al. 1987 and Sygnet et al. 1988. This behavior is in fact routinely observed in numerical simulations, as soon as one introduces realistically peaked rotation profiles at the center, so that Lindblad resonances prevent any single spiral mode from extending radially over most of the galactic disk. This leads us to believe that non-linear coupling can also be frequent in real galaxies, under different forms, which may be difficult to analyze because of the complex density patterns (Sellwood & Sparke 1988), as discussed in the introduction:

* As mentioned by Sygnet et al. 1988, and in our introduction, SB(r) galaxies seem to be very good candidates, because of the mismatch between the position angles of the bar and spiral found by Sandage (1961). The disagreement of Buta (1987) on this observation might be attributed to the above-mentioned complex density patterns, so that this clearly deserves further investigation.
* As discussed by Sellwood & Sparke 1988, this mechanism might help solve the long-standing difficulties found when one tries to locate the corotation in many barred spirals.
* Recent work based both on observations and more complex simulations (Friedli & Martinet, 1993; Friedli et al., 1996) has pointed to the frequent observation of “bars within bars” in the central regions of galaxies. They have shown that the inner bar is most frequently misaligned with the outer one, ruling out a purely dynamical origin (such as stars aligned on $`x_2`$ orbits, perpendicular to the main bar).
Non-linear coupling, shown here to occur at larger radii, appears as a very good tentative explanation for this behavior, which should play a major role in the fueling of the inner parts of the galaxy.

* M51 might provide a clue to the same mechanism in tidally-driven spirals. Elmegreen et al. (1987), in detailed modeling of the spiral structure of three “classical” galaxies, found that the structure of M51 could be explained only by the presence of two distinct spirals, and noticed (before they knew about our work) that the corotation of the inner one would coincide with the ILR of the outer one, the signature of our mechanism. One must naturally be cautious with this result, since the modeling of M51 has proven to be a very challenging task. However, precisely because there remains much to be done in this respect, and because new observations become available, we believe that this might very well prove to be an important element in the complex physics of M51.
* There has been in recent years a renewed interest in $`m=1`$ spiral structures. The linear theory of the $`m=1`$ mode in galaxies, which differs markedly from that of $`m>1`$ modes, remains to be done, but it is generally believed that this mode is not unstable, or only weakly so. Work done in the context of accretion disks (Adams et al., 1989) would not really apply here, because self-gravity is strong and the boundary conditions used are different (see also Noh et al., 1991). Tagger & Athanassoula (1990) discussed non-linear coupling of an $`m=2`$ with two $`m=1`$ modes, as a possible explanation for the structure of lopsided galaxies. These appear as the strong version of a more frequent observation, that of off-centered nuclei (i.e. nuclei affected by an $`m=1`$ displacement) in galaxies, including our own. Miller & Smith (1992) for stellar disks, and Laughlin & Korchagin (1996) for gaseous ones, have found such persistent motions in numerical simulations, a behavior we also obtain: although our Cartesian grid gives insufficient resolution at the center to give this result strong confidence without additional work, we do find $`m=1`$ spirals at the center, non-linearly coupled to unstable $`m=2`$ and $`m=3`$ ones.
* As mentioned in the introduction, we (Masset & Tagger, 1996b) have shown from analytical work that non-linear coupling with spiral waves is also a very tempting explanation for the generation of warps in disk galaxies. Work is in progress to give numerical evidence of this mechanism.

## 6 Acknowledgments

We wish to thank F. Combes and M. Morris for rich and helpful discussions in the course of this work. We also thank A. Hetem for his close assistance in software development, which has considerably increased its efficiency. Finally we thank our referee, D. Friedli, whose remarks and suggestions have considerably improved the final version of this paper.
## Acknowledgement

We thank Richard E. Lenski for permission to reproduce the original figure in Refs. and . One of the authors (I.C.) is supported by the Council of Scientific and Industrial Research, India, under Sanction No. 9/15(173)/96-EMR-I.

## Figure Captions

Fig. 1: Relative fitness versus time in experiment and simulation. A global average is taken over 2000 sites of the lattice to obtain the data points in simulation.

Fig. 2: Relative fitness versus time in experiment and simulation. A local average is taken over 40 sites in simulation. The experimental data points are taken every 100 generations.

Fig. 3: Average cell size versus time in experiment and simulation (global average).

Fig. 4: Average cell size versus time in experiment and simulation (local average).

Fig. 5: Average fitness versus average cell size in experiment and simulation.
# Exact exponents for the spin quantum Hall transition

## Abstract

We consider the spin quantum Hall transition which may occur in disordered superconductors with unbroken SU$`(2)`$ spin-rotation symmetry but broken time-reversal symmetry. Using supersymmetry, we map a model for this transition onto the two-dimensional percolation problem. The anisotropic limit is an sl$`(2|1)`$ supersymmetric spin chain. The mapping gives exact values for critical exponents associated with disorder-averages of several observables, in good agreement with recent numerical results.

Noninteracting electrons with disorder, and the ensuing metal-insulator transitions, have been studied for several decades, and are usually divided into just three classes by symmetry considerations. Recently, the ideas have been extended to quasiparticles in disordered superconductors, for which the particle number is not conserved at the mean field level. Several more symmetry classes have been found. One of these, denoted class C in Ref. , is of particular interest. This is the case in which time-reversal symmetry is broken but global SU(2) spin-rotation symmetry is not, and spin transport can be studied. In two dimensions (2D) it can occur in $`d`$-wave superconductors. Within class C, a delocalization transition is possible in which the quantized Hall conductivity for spin changes by two units, resembling the usual quantum Hall (QH) transition but in a different universality class. When a Zeeman term is introduced which breaks the SU(2) symmetry down to U(1), the transition splits into two that are each in the usual QH universality class. In this paper we present exact results for a recent model for the spin QH transition, in class C, in a system of noninteracting quasiparticles in 2D. We use a supersymmetry (SUSY) representation of such models, considered previously, to obtain a mapping onto the 2D classical bond percolation transition, from which we obtain three independent critical exponents, and universal ratios, exactly. An anisotropic version of the model is also mapped onto an antiferromagnetic sl$`(2|1)`$ SUSY quantum spin chain. The results are in very good agreement with recent numerical simulations. We study the spin QH transition in an alternative description that is obtained from the superconductor after a particle-hole transformation on the down-spin particles, which interchanges the roles of particle number and $`z`$-component of spin, so that particle number is conserved rather than spin. This makes it possible to use a single-particle description, at the cost of obscuring the SU(2) symmetry. The single-particle energy ($`E`$) spectrum has a particle-hole symmetry under which $`E\to -E`$, so when states are filled up to $`E=0`$, the positive-energy particle and hole excitations become doublets of the global SU(2) symmetry. In this picture, a (nonrandom) Zeeman term $`H`$ for the quasiparticles maps onto a simple shift in the Fermi energy to $`E=H`$, splitting the degeneracy. The model is a network (generalizing Ref. ), in which a particle of either spin and with $`E=0`$, represented by a doublet of complex fluxes, can propagate in one direction along each link (Fig. 1). The propagation on each link is described by a random SU(2) scattering matrix (the black dot), with a uniform distribution over the SU(2) group; the absence of an additional random U(1) phase here is crucial and implies that the global SU(2) spin-rotation (or particle-hole) symmetry is unbroken. As in Ref.
, there are two sublattices, A and B, on which the nodes are related by a 90° rotation. Scattering of the fluxes at the nodes (black squares) is described by orthogonal matrices diagonal in spin indices, $`𝒮_S=𝒮_{S\uparrow }\oplus 𝒮_{S\downarrow }`$, $$𝒮_{S\sigma }=\left(\begin{array}{cc}(1-t_{S\sigma }^2)^{1/2}& t_{S\sigma }\\ -t_{S\sigma }& (1-t_{S\sigma }^2)^{1/2}\end{array}\right),$$ where $`S=A`$, $`B`$ labels the sublattice and $`\sigma =\uparrow ,\downarrow `$ labels the spin direction. The network is spatially isotropic when the scattering amplitudes on the two sublattices are related by $`t_{A\sigma }^2+t_{B\sigma }^2=1.`$ The network has a multicritical point at $`t_{A\sigma }=t_{B\sigma }=2^{-1/2}`$ (for the isotropic case). Taking $`t_{A\sigma }\ne t_{B\sigma }`$ (but keeping $`t_{S\uparrow }=t_{S\downarrow }`$) drives the system through a QH transition between an insulator and a QH state, and the Hall conductance (now for charge) jumps from zero to 2. Making $`t_{S\uparrow }\ne t_{S\downarrow }`$ breaks the global SU(2) symmetry and splits the transition into two ordinary QH transitions, each in the unitary class. As we will argue later, this perturbation is different from the uniform Zeeman term. We briefly describe, for the present case, the main steps of the SUSY method for the network models. Transport and other properties of the network, such as its conductance, may be expressed in terms of sums over paths on the network. Such a sum may be written in second-quantized language as a correlation function of the form $`\mathrm{STr}(𝒯\cdots U)`$, where the supertrace contains an evolution operator $`U`$ of an associated quantum 1D problem, $`𝒯`$ is a time-ordering symbol, and $`\cdots `$ stands for operators that represent the ends of paths and correspond physically to density, current, etc. In this form, the average can be taken to obtain moments of physical quantities, and we leave this implicit in later notation. In this 1D problem vertical rows of links of the network become sites, and the vertical direction becomes (imaginary) time (we assume for the present periodic boundary conditions in both directions). The evolution operator $`U`$, composed of transfer matrices for links and nodes, acts in a tensor product of Fock spaces of bosons and fermions on each site. The presence of a fermion or boson on a link (i.e. on a site at an instant of discrete time) represents an element of a path traversing that link. Both bosons and fermions are needed to ensure the cancellation of contributions from closed loops. Usually one needs two types of bosons and fermions, retarded and advanced, to be able to obtain two-particle properties. However, the particle-hole symmetry relates retarded and advanced Green’s functions. Hence, for the study of mean values of simple observables, we need only one fermion and one boson per spin direction per site. (To study fluctuations and other observables, $`N`$ types of fermion and boson are needed, and the SUSY below becomes osp($`2N|2N`$).) We denote them by $`f_\sigma `$, $`b_\sigma `$ for the sites related to the links going up (up-sites), and $`\overline{f}_\sigma `$, $`\overline{b}_\sigma `$ for the down-sites. On the up-sites, $`f_\sigma `$, $`b_\sigma `$ are canonical, but to ensure the cancellation of closed loops we must either take the fermions on the down-sites to satisfy $`\{\overline{f}_\sigma ,\overline{f}_{\sigma ^{\prime }}^{\dagger }\}=-\delta _{\sigma \sigma ^{\prime }}`$, or do similarly for the bosons. To begin, we consider the spin-rotation invariant case with $`t_{S\uparrow }=t_{S\downarrow }=t_S`$.
In this case, for any realization of the disorder in the scattering matrices, the transfer matrices commute with the sum over sites of the eight generators (superspin operators) of the superalgebra sl$`(2|1)\simeq `$ osp$`(2|2)`$, similarly to Ref. . The generators for each site are constructed as all bilinears in the fermions and bosons and their adjoints, which are singlets under the random SU(2). These are denoted by $`B`$, $`Q_3`$, $`Q_\pm `$, $`V_\pm `$, $`W_\pm `$, and have similar expressions for the two types of sites. Cancellation of closed loops would only require invariance under the gl$`(1|1)`$ subalgebra generated by $`B`$, $`Q_3`$, $`V_{-}`$, and $`W_+`$. The larger SUSY that exists when $`t_{S\uparrow }=t_{S\downarrow }`$ is a manifestation of the global SU(2) symmetry. The transfer matrix describing the evolution on a link, after averaging over the random SU(2) matrices, projects the states on the corresponding site to a three-dimensional subspace of singlets of the random SU(2). On the up-sites these form the fundamental representation $`\mathrm{𝟑}`$ of sl$`(2|1)`$, and we denote them as $`|m\rangle `$, $`m=0`$, $`1`$, $`2`$. Similarly, on the down-sites the three singlet states form the representation $`\overline{\mathrm{𝟑}}`$, dual to $`\mathrm{𝟑}`$, and we call them $`|\overline{m}\rangle `$; $`m`$ is the number of fermions on a site of either type. We find that $`|\overline{1}\rangle `$ has negative squared norm, $`\langle \overline{1}|\overline{1}\rangle =-1`$, while the others are positive. Thus, after averaging, we have a horizontal chain of sites with alternating dual representations on the two sublattices and a discrete-time evolution along the vertical direction given by the transfer matrices at the nodes, which will be specified below. We now consider in detail the node transfer matrix $`T_S`$ on a single node on sublattice $`S`$. After the averaging, it acts in the tensor product $`\mathrm{𝟑}\otimes \overline{\mathrm{𝟑}}`$ for the two sites. Because of the sl$`(2|1)`$ SUSY, we find that it takes the form $$T_S=t_S^2P_\mathrm{𝟏}+(1-t_S^2)I\otimes \overline{I}.$$ Here the first term contains the projection operator $`P_\mathrm{𝟏}=|s\rangle \langle s|`$ onto the normalized singlet state $`|s\rangle =\sum _m|m\rangle |\overline{m}\rangle `$, while in the second term $`I`$, $`\overline{I}`$ are the identity operators on the two sites (note that $`\overline{I}=|\overline{0}\rangle \langle \overline{0}|-|\overline{1}\rangle \langle \overline{1}|+|\overline{2}\rangle \langle \overline{2}|`$). The two terms in $`T_S`$ represent the two ways to sl$`(2|1)`$-invariantly couple the in- and out-going states at the node, such that the incoming state (in the fundamental representation $`\mathrm{𝟑}`$) flows out unchanged, turning either to the right or the left. They can be represented graphically as shown at the top in Fig. 2. When we multiply the transfer matrices together and take the supertrace in the tensor product of all sites to calculate the partition function $`Z=\mathrm{STr}U`$, the result is given by the sum of all contributions of closed loops that fill the links of the network, weighted by factors of either $`t_S^2`$ or $`(1-t_S^2)`$ for each node. Each loop contributes a factor coming from the sum over the three states that can propagate around the loop, the supertrace $`\mathrm{str}\,1=1`$ taken in the fundamental $`\mathrm{𝟑}`$. It is also clear that $`Z`$ is equal to 1, as it is also before averaging. The sum over loops on the links of the network is equivalent to the bond percolation problem on the square lattice, as follows. In Fig. 2, we shade one-half of the plaquettes of the network in checkerboard fashion.
The two terms in $`T_S`$ possible at each node either do or do not connect the shaded plaquettes, as indicated by the thick undirected line segments. At each $`A`$- (respectively, $`B`$-) node we have a horizontal (vertical) line with probability $`p_A=t_A^2`$ ($`p_B=1-t_B^2`$). Then on the square lattice formed by the shaded plaquettes we have the classical bond percolation problem, and the loops are the boundaries (or “hulls”) of the percolation clusters. This SUSY representation of percolation easily generalizes to sl$`(n+1|n)`$ SUSY, $`n\ge 1`$, using the $`2n+1`$-dimensional fundamental representation and its dual. Many critical exponents for 2D percolation are known exactly. First, there is the correlation length exponent, which immediately gives the localization length for the spin QH transition, $$\xi \sim |p_S-p_{Sc}|^{-\nu },$$ (3) with $`\nu =4/3`$; the critical values are $`p_{Ac}=p_{Bc}=1/2`$ in the isotropic case. Then, because the basic operators of our system are the superspins, which act on the states that live on the hulls of the percolation clusters, we should consider the exponents associated with these hulls. These include an infinite set of scaling dimensions for the so-called $`n`$-hull operators, $$x_n=(4n^2-1)/12.$$ (4) The exponent $`x_n`$ describes the spatial decay at criticality, $`\sim |r_1-r_2|^{-2x_n}`$, of the probability that $`n`$ distinct hulls each pass close to each of the two points $`r_1`$ and $`r_2`$. There is also a set of analogous exponents for the same correlators near a boundary, $$\stackrel{~}{x}_n=n(2n-1)/3.$$ (5) We will now relate further physical quantities within our model to percolation exponents, through the SUSY mapping. We write the superspins as a single 8-component object $`J`$ for either up- or down-sites. These can be inserted in any links of the network, to obtain a correlator such as $`\langle Q_3(r_1)Q_3(r_2)\rangle `$, where $`r_1`$ and $`r_2`$ represent links of the network. Then using the same graphical expansion, we obtain a sum over loop configurations, now with the positions of the insertions marked on the loops, and for loops with insertions, the factor 1 is replaced by a supertrace (in the fundamental) of the product of matrices that represent the $`J`$’s inserted. We then require only the total probabilities that loops pass through the marked points in various ways. The simplest example is a two-point function of $`J`$’s, which is nonvanishing only if the $`J`$’s are on the same loop, because $`\mathrm{str}J=0`$ for all components of $`J`$. The leading term in the probability that the two points are on the same loop (hull) is governed by the leading $`1`$-hull operator in the continuum theory, giving $$(-1)^{i_1+i_2}\langle J(r_1)J(r_2)\rangle \sim |r_1-r_2|^{-2x_1},$$ (6) at the transition, where $`x_1=1/4`$ as specified above. The reason for the staggering factors $`(-1)^{i_1}`$, where $`i_1`$ is the site corresponding to $`r_1`$, will become clear momentarily. It is useful to consider the anisotropic limit of the model. This is defined by $`t_A`$, $`t_B\to 0`$ with a fixed ratio $`t_A/t_B=ϵ`$. Then the transfer matrices $`T_S`$ may be expanded in $`t_S`$ and recombined in the exponential.
The evolution operator has the form $`U\propto \mathrm{exp}(-2t_At_B\int 𝑑\tau \,\mathcal{H}_{\text{1D}})`$, where the effective Hamiltonian $`\mathcal{H}_{1\mathrm{D}}`$ describes a 1D superspin chain, with alternating $`\mathrm{𝟑}`$ and $`\overline{\mathrm{𝟑}}`$ representations, and continuous imaginary time $`\tau `$: $$\mathcal{H}_{1\mathrm{D}}=\sum _i\left(ϵ\,J_{2i-1}\cdot J_{2i}+ϵ^{-1}J_{2i}\cdot J_{2i+1}\right).$$ Here $`J\cdot J`$ denotes the sl$`(2|1)`$-invariant product. The transition point, where $`\mathcal{H}_{1\mathrm{D}}`$ for an infinitely-long chain is gapless, is now at $`ϵ=1`$. The two-site version of $`\mathcal{H}_{1\mathrm{D}}`$ appeared in Ref. . The sum $`\sum _iJ_i`$ is the generator of global SUSY transformations, and so $`J_i`$, viewed as a function of $`i`$, is the superspin density on the lattice, which gives a subleading contribution $`\sim r^{-2}`$ to the $`J`$-$`J`$ correlation at criticality. The 1-hull operator must therefore be the staggered part, $`(-1)^iJ_i`$. The 1-hull operators represented by $`(-1)^iJ_i`$ have several physical applications. Components such as $`Q_+=f_{\uparrow }^{\dagger }f_{\downarrow }^{\dagger }`$ on the up-sites create fermions, so produce ends for the quasiparticle paths. The sum of all such paths between $`r_1`$ and $`r_2`$ represents the quasiparticle Green’s function, $`G`$. To obtain nonzero results on averaging, we must multiply the retarded and advanced Green’s functions before averaging, but this can be replaced by a spin-singlet combination of our fermions or bosons. The staggered part of this averaged correlator represents the average zero-frequency density-density (“diffusion”) propagator $`\overline{|G|^2}`$ (and also the average conductance between two point-contacts), which therefore falls as $`|r_1-r_2|^{-1/2}`$ at the transition. Moreover, the local density of states $`\rho (r,E)`$ is represented by another component of the 1-hull operator, because both it and the density operator contain wavefunctions squared, $`|\psi |^2`$, in the original problem and so scale in the same way. The energy $`E`$ itself (set to zero hitherto) has scaling dimension $`y_1=2-x_1`$, because an imaginary part $`\eta `$ of $`E`$ induces a staggered “magnetic field” term $`\sum _i\eta \rho _i`$ ($`\propto i\eta `$) in $`\mathcal{H}_{1\mathrm{D}}`$. Hence for the average we have at criticality $$\overline{\rho (r,E)}\sim |E|^{x_1/y_1}=|E|^{1/7}.$$ (8) Also, since a uniform Zeeman term $`H`$ causes a shift in the Fermi energy, it induces a correlation length $`\xi _H\sim |H|^{-\nu _1}`$, where $`\nu _1=1/y_1=4/7`$. We have already identified the value $`\nu =4/3`$ of the localization length exponent $`\nu `$ with that in percolation. In terms of $`\mathcal{H}_{1\mathrm{D}}`$, the effect of a small deviation $`\delta =ϵ-1`$ is to add the perturbation $`\delta \sum _i(-1)^iJ_i\cdot J_{i+1}`$ to the critical $`\mathcal{H}_{1\mathrm{D}}`$. This term contains the dimer operator $`D_i=(-1)^iJ_i\cdot J_{i+1}`$, which is odd under reflection through any lattice site (parity). The scaling dimension $`x_2`$ of the 2-hull operator is the same as that of this “thermal” perturbation for the transition, that is $`\nu =\nu _2=1/y_2`$, $`y_2=2-x_2`$. We therefore expect that the 2-hull operator is part of a multiplet of staggered two-superspin operators, that are similar to $`D_i`$, but are not all sl$`(2|1)`$ singlets. As a final perturbation of the critical Hamiltonian, we consider the effect of $`t_{S\uparrow }\ne t_{S\downarrow }`$. This breaks the global SU(2) symmetry, and breaks the SUSY to gl$`(1|1)`$.
Taking $`t_{A\sigma }=t_{B\sigma }`$, we find that the effect is to add to $`\mathcal{H}_{1\mathrm{D}}`$ a term proportional to $`(t_{\uparrow }-t_{\downarrow })^2\sum _i\widehat{J}_i\cdot \widehat{J}_{i+1}`$, where $`\widehat{J}_i`$ is the 4-component set of generators of gl$`(1|1)`$, and the product is invariant under this algebra. This term is an anisotropy in superspin space. The two QH transitions it produces cannot be seen in our formulation without explicitly introducing both retarded and advanced fermions and bosons, and we will see only exponentially decaying correlations. The correlation length $`\xi _\mathrm{\Delta }`$ induced by $`\mathrm{\Delta }=t_{\uparrow }-t_{\downarrow }`$ scales as $$\xi _\mathrm{\Delta }\sim |\mathrm{\Delta }|^{-\mu },$$ (9) in the notation of Ref. , for small $`\mathrm{\Delta }`$. If the spin anisotropy $`\widehat{J}_i\cdot \widehat{J}_{i+1}`$ has dimension $`x^{\prime }`$, then we will have $`\mu =2/(2-x^{\prime })`$. The operator does not appear to be the 1-hull operator, and has the opposite parity to the 2-hull. However, the operator product of two 1-hull operators has the correct parity and might contain this operator. In conformal field theory, the 1-hull operator can be represented by $`\varphi _{2,2}`$ in the Kac classification of $`c=0`$ Virasoro representations. The fusion rules for this primary field with itself contain the leading nontrivial operator $`\varphi _{1,3}`$, which we view as a subleading 1-hull operator, with scaling dimension $`\widehat{x}_1=2h_{1,3}=2/3`$. We suggest that $`x^{\prime }=\widehat{x}_1=2/3`$, which yields $`\mu =3/2`$. We further suggest that this operator describes a random Zeeman term (with zero mean). Finally, we note that the average two-probe conductance of our system with open ends, and with $`t_{S\uparrow }=t_{S\downarrow }`$, can be related to the number $`n`$ of hulls that connect one end to the other (and back). Each such configuration of loops contributes $`n`$ to the conductance, times $`2`$ for spin, so the mean conductance has the scaling form $$\overline{g}=2\sum _{n=1}^{\mathrm{\infty }}nP(n,L/W,L/\xi ),$$ (10) where $`P(n,L/W,L/\xi )`$ is the probability that exactly $`n`$ hulls run from one end to the other and back, for a system of size $`L`$ by $`W`$. This can be considered both for periodic and reflecting transverse boundary conditions. At the transition, $`\xi =\mathrm{\infty }`$, and for large $`L/W`$, it is known that $`P(n,L/W,0)\sim e^{-2\pi x_nL/W}`$ for periodic, and $`\sim e^{-\pi \stackrel{~}{x}_nL/W}`$ for reflecting boundaries. The sum for $`\overline{g}`$ is dominated by the $`n=1`$ term in this limit, so it has the form $`\overline{g}\sim e^{-L/\xi _{1\mathrm{D}}}`$, giving the behavior of the localization length $`\xi _{1\mathrm{D}}`$, the only parameter that enters in the complete distribution of conductance in this limit. As $`L/W\to 0`$, we expect that $`\overline{g}\propto W/L`$, implying that there is a nonzero critical conductivity. We may now compare our results with those of recent numerical work. In Ref. , the results obtained were $`\nu \simeq 1.12`$ and $`\mu \simeq 1.45`$. These are in fair agreement with our predictions, especially for $`\mu `$, where our theoretical argument is less well established. The authors of Ref. study the SUSY spin chain numerically, and find critical exponents $`x_1=0.26\pm 0.02`$ and $`x_2=1.24\pm 0.01`$, in excellent agreement with our predictions.
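The comparison with the numerics can be reproduced directly from the exact formulas above; the following trivial check (a Python sketch using exact rational arithmetic, entirely redundant with the text) collects the predicted values:

```python
from fractions import Fraction as F

x = lambda n: F(4 * n * n - 1, 12)   # n-hull dimensions x_n, Eq. (4)
x1, x2 = x(1), x(2)
print(x1, x2)                        # 1/4, 5/4   vs numerics 0.26(2), 1.24(1)
print(1 / (2 - x2))                  # nu = 1/y_2 = 4/3          vs ~1.12
print(x1 / (2 - x1))                 # density-of-states exponent x1/y1 = 1/7, Eq. (8)
print(F(2) / (2 - F(2, 3)))          # mu = 2/(2 - x') = 3/2 with x' = 2/3, vs ~1.45
```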
To conclude, we have used SUSY methods to find a remarkable equivalence of a quasiparticle localization problem, the spin quantum Hall transition, to 2D percolation, resulting in the exact values of three exponents, and the universal ratios for the localization length in the 1D limit. We are grateful to T. Senthil, J. B. Marston, and M. P. A. Fisher for useful discussions, and for sharing numerical results before publication. We also thank J. Chalker, V. Gurarie and M. Zirnbauer for helpful discussions. This work was supported by the NSF under grants No. PHY94-07194, DMR-9528578 (IAG), DMR-9157484 (NR), and in part by the A. P. Sloan Foundation (AWWL).
# Periodic Orbits in Magnetic Fields in Dimensions Greater Than Two ## 1. Introduction The motion of a charge in a magnetic field on a manifold is given by the Hamiltonian vector field of the standard metric Hamiltonian on the cotangent bundle with respect to a non-standard symplectic structure. This so-called twisted symplectic structure is the sum of the standard one and the pull-back of the magnetic field two–form. One of the objectives of the present paper is to prove the existence of periodic orbits on low energy levels for such Hamiltonian systems, provided that the metric and the magnetic field two–form satisfy a certain condition. The condition is that the magnetic field two–form is symplectic and compatible with the metric. For example, this requirement is met when the manifold, with these structures, is Kähler. More specifically, we show (Theorem 2.3) that under this condition the number of periodic orbits on every low energy level, when the orbits are non-degenerate, is no less than the sum of Betti numbers of the manifold. In the degenerate case, a similar lower bound is obtained in \[Ke\] in terms of the minimal number of critical points of a function on the manifold. The problem of existence of periodic orbits for symplectic magnetic fields can be generalized as follows. Consider a proper function on a symplectic manifold and assume that this function has a Morse-Bott non-degenerate minimum along a symplectic submanifold. Then the problem is to find a lower bound for the number of periodic orbits of the Hamiltonian flow on the levels near the minimum. For example, when the submanifold is just a point the answer is given by Weinstein’s theorem, \[We1\]. We provide a lower bound (Theorem 2.1) when the orbits are non-degenerate and again a certain compatibility condition is satisfied. The case of degenerate orbits is treated in \[Ke\] using a method relying on Moser’s proof, \[Mo, Bo\], of Weinstein’s theorem. Both of these problems are similar to the Weinstein conjecture, \[We2\], on the existence of periodic orbits on contact type hypersurfaces. (See, e.g., \[FHV, HV, HZ, MS, Vi1, Vi2\] for more information and further references.) However, the essential difference is that the energy levels in question may fail to have contact type. For example, in the case of the magnetic field, the twisted symplectic form is never exact on the energy level when the magnetic field is not exact and the dimension of the base is greater than two. This fact makes the problem difficult to solve by standard symplectic topology techniques. The problem of existence of periodic orbits on almost all levels is accessible by making use of symplectic capacities. For example, if one can show that bounded sets in the ambient manifold have finite Hofer–Zehnder capacity, it follows that a proper Hamiltonian has periodic orbits on almost all energy levels, \[HZ, Section 4.2\]. In Section 3, we prove that the capacity of bounded sets is finite for a twisted symplectic form on the cotangent bundle to a torus for any closed magnetic field two–form (Theorem 3.1). This implies the “almost existence” of periodic orbits for magnetic fields on tori (Corollary 3.3). This paper is one of very few (see also \[Bi, BT, Ke, Lu, Po\]) focusing on magnetic fields on manifolds of dimension greater than two. Much more is known about the existence of periodic orbits for magnetic fields on surfaces. The reader interested in the review of these results from the symplectic topology perspective should consult \[Gi2\]. 
The necessary symplectic geometry material can be found in \[HZ, MS\]. ## 2. Periodic orbits on low energy levels ### 2.1. Periodic orbits near a minimum. Let $`(W,\omega )`$ be a symplectic manifold and let $`H:W\to \mathbb{R}`$ be a smooth function. Assume that $`H`$ has a Bott non-degenerate minimum along a compact symplectic submanifold $`M\subset W`$. The normal bundle $`\nu `$ to $`M`$ in $`W`$ is a symplectic vector bundle of dimension $`2m=\mathrm{codim}M`$. The Hessian $`d^2H:\nu \to \mathbb{R}`$ is a positive–definite fiberwise quadratic form on $`\nu `$. Thus at every point $`x\in M`$ we have $`m`$ canonically defined eigenvalues of the Hessian $`d_x^2H`$ on the fiber of $`\nu `$ over $`x`$ with respect to the linear symplectic form $`\omega _x^{}`$ on this fiber. The eigenvalues of $`d_x^2H`$ are equal at every point $`x\in M`$ if and only if the linear Hamiltonian flow of $`d_x^2H`$ with respect to $`\omega _x^{}`$ is periodic with all its orbits having the same period. In this case, the flow gives rise to a free $`S^1`$-action on $`S\nu =\{d^2H=1\}`$ and hence to the principal $`S^1`$-bundle $`\mathrm{pr}:S\nu \to S\nu /S^1`$. Recall that a closed integral curve of a vector field (or a line field) is said to be non-degenerate if its Poincaré return map does not have unit as an eigenvalue. Denote by $`\mathrm{SB}(N)`$ the sum of Betti numbers of a manifold $`N`$. ###### Theorem 2.1. Assume that for every $`x\in M`$ all eigenvalues of $`d_x^2H`$ with respect to $`\omega _x^{}`$ are equal. Then for a sufficiently small $`ϵ>0`$, the number of periodic orbits of the Hamiltonian flow of $`H`$ on the level $`\{H=ϵ\}`$ is at least $`\mathrm{SB}(S\nu /S^1)`$ if the $`S^1`$-bundle $`\mathrm{pr}`$ is trivial and at least $`\mathrm{SB}(S\nu )/2+1`$ otherwise, provided that all of the periodic orbits are non-degenerate. The theorem will be proved in Section 2.3. Note that the bundle $`\mathrm{pr}`$ is always non-trivial when $`m>1`$. The reason is that its restriction to a fiber $`\mathbb{C}P^{m-1}`$ of the bundle $`S\nu /S^1\to M`$ is the Hopf fibration. ###### Remark 2.2. The fiberwise Hamiltonian flow of $`d^2H`$ can alternatively be described as the Hamiltonian flow of $`d^2H`$ with respect to the Poisson structure on the total space of $`\nu `$ given by the family of fiberwise symplectic forms $`\omega _x^{}`$, $`x\in M`$. As follows from the results of \[Ke\], when the periodic orbits are not required to be non-degenerate, the number of periodic orbits is still greater than or equal to $`\mathrm{CL}(S\nu /S^1)=\mathrm{CL}(M)+m`$. Here $`\mathrm{CL}`$ stands for the cup–length and the action of $`S^1`$ is given, after suitable rescaling, by the linear Hamiltonian flow of $`d^2H`$, described above. For $`W=\mathbb{R}^{2n}`$ and $`M`$ a point, this is a particular case of Weinstein’s theorem \[We1\]; see also \[Mo\]. It seems likely that the bound of Theorem 2.1 can be significantly improved. (For example, as stated, this bound is not necessarily higher than the bound from below for the degenerate case mentioned above, \[Ke\].) We conjecture that under the hypothesis of the theorem, the number of periodic orbits is at least $`\mathrm{SB}(S\nu /S^1)=m\mathrm{SB}(M)`$. (The equality follows from \[Hu, Theorem 2.5, p. 233\].) ### 2.2. Periodic trajectories of a charge in a magnetic field. Let $`M`$ be a Riemannian manifold and let $`\sigma `$ be a symplectic form on $`M`$ (a magnetic field). Then $`\omega =d\lambda +\pi ^{*}\sigma `$ is symplectic on $`W=T^{*}M`$.
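Before continuing, note that the equal-eigenvalue hypothesis of Theorem 2.1 is straightforward to test for a concrete Hessian: by Williamson's normal form, the eigenvalues of a positive quadratic form relative to a symplectic form are the moduli of the purely imaginary eigenvalues of the form's matrix inverted against the Hessian. A small numerical sketch follows; the matrices in it are illustrative choices of ours, not taken from the text.

```python
import numpy as np

def symplectic_eigenvalues(hess, omega):
    """Eigenvalues of the quadratic form `hess` relative to the nondegenerate
    antisymmetric form `omega`: the spectrum of omega^(-1) hess consists of
    pairs +/- i*lambda_k; the lambda_k > 0 are returned, one per pair."""
    ev = np.linalg.eigvals(np.linalg.solve(omega, hess))
    lam = np.sort(np.abs(ev.imag))      # each lambda_k appears twice
    return lam[::2]

m = 2                                    # codim M = 2m = 4 in this toy example
omega = np.kron(np.array([[0.0, 1.0], [-1.0, 0.0]]), np.eye(m))
hess = np.diag([1.0, 3.0, 1.0, 3.0])     # positive-definite toy Hessian d^2H
print(symplectic_eigenvalues(hess, omega))  # [1, 3]: unequal eigenvalues
```

Equal entries in the output correspond exactly to the periodic linear flow with a common period required in Theorem 2.1; the toy Hessian above fails the test.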
Here $`d\lambda `$ is the standard symplectic form on $`T^{*}M`$ and the map $`\pi :T^{*}M\to M`$ is the natural projection. Take $`H:T^{*}M\to \mathbb{R}`$ to be the standard metric Hamiltonian. More explicitly, let us identify $`TM`$ and $`T^{*}M`$ by means of the Riemannian metric on $`M`$. Then $`H(X)=g(X,X)/2`$, where $`g`$ is the metric. The Hamiltonian flow of $`H`$ with respect to $`\omega `$ describes the motion of a charge on $`M`$ in the magnetic field $`\sigma `$. (The reader interested in more details should consult, e.g., \[Gi2\].) We say that a symplectic form $`\sigma `$ and a metric $`g`$ are compatible if there exists an almost complex structure $`J`$ such that $`g(X,Y)=\sigma (X,JY)`$ for all $`X`$ and $`Y`$ and $`J`$ is $`g`$-orthogonal (cf., \[MS, Section 4.1\]). This condition is equivalent to requiring that at every point all eigenvalues of $`g`$ with respect to $`\sigma `$ are equal. In Section 2.3 we will prove the following ###### Theorem 2.3. Assume that $`\sigma `$ is compatible with the Riemannian metric on $`M`$, e.g., $`M`$ is Kähler with these structures. For a sufficiently small $`ϵ>0`$, the number of periodic orbits on the energy level $`\{H=ϵ\}`$ is at least $`\mathrm{SB}(M)`$, provided that all of the orbits are non-degenerate. ###### Remark 2.4. Similarly to Theorem 2.1, the lower bound given by Theorem 2.3 is probably far from sharp. In the non-degenerate case there should be conjecturally at least $`\mathrm{SB}(ST^{*}M/S^1)=m\mathrm{SB}(M)`$ periodic orbits on every low energy level, where $`2m=\mathrm{dim}M`$. When $`M`$ is a surface, this is exactly the statement of Theorem 2.3, which in this case was originally proved in \[Gi1\]. Note also that under the hypothesis of Theorem 2.3, there are at least $`\mathrm{SB}(M)+1`$ periodic orbits, when $`\chi (M)=0`$ but $`M\ne 𝕋^2`$. This can be seen easily from the proof of the theorem. If the periodic orbits are not assumed to be non-degenerate, the lower bound of Theorem 2.3 should be replaced by $`\mathrm{CL}(M)+m`$, \[Ke\]. When the energy value $`ϵ>0`$ is not small, the level $`\{H=ϵ\}`$ may fail to carry a periodic orbit. See \[Gi3\] for $`m=1`$ and \[Gi4, Example 4.2\] for $`m\ge 2`$. ### 2.3. The proofs of theorems 2.1 and 2.3 The theorems will follow from a more general result, proved in \[Gi1\], which we now state. Let $`E`$ be a compact odd–dimensional manifold with a free circle action. Denote the quotient $`E/S^1`$ by $`B`$ and the natural projection $`E\to B`$ by $`\mathrm{pr}`$. Recall that a two–form $`\eta `$ on $`E`$ is said to be maximally non-degenerate if it has a one–dimensional null–space, denoted in what follows by $`\mathrm{ker}\eta _x`$, at every point $`x\in E`$. Equivalently, this means that $`\eta `$ has maximal possible rank (equal to $`\mathrm{dim}E-1`$) at every point of $`E`$. Recall also that a characteristic of $`\eta `$ is an integral curve of the line field $`\mathrm{ker}\eta `$. ###### Theorem 2.5. Let $`\eta `$ be a closed maximally non-degenerate two–form on $`E`$ such that the cohomology class $`[\eta ]`$ lies in the image $`\mathrm{pr}^{*}(H^2(B))`$. Assume that the field of directions $`\mathrm{ker}\eta `$ is $`C^1`$-close to the fibers of $`\mathrm{pr}`$ and that all of the closed characteristics of $`\eta `$ are non-degenerate. Then $`\eta `$ has at least $`\mathrm{SB}(B)`$ closed characteristics if the principal $`S^1`$-bundle $`\mathrm{pr}`$ is trivial and at least $`\mathrm{SB}(E)/2+1`$ closed characteristics otherwise. ###### Remark 2.6.
We conjecture that in the non-degenerate case, regardless of whether $`\mathrm{pr}`$ is trivial or not, the number of closed characteristics is greater than or equal to $`\mathrm{SB}(B)`$, which would give a much higher lower bound than that of Theorem 2.5. Note that this conjecture would imply the hypothetical lower bound $`m\mathrm{SB}(M)`$ mentioned in Remarks 2.2 and 2.4. The conjecture is proved in \[Gi1\] for a surface $`B`$ and in \[Gi3\] for the case where $`\eta `$ is a $`C^0`$-small perturbation of the pull–back of a symplectic form on $`B`$. The latter result indicates that the requirement that $`\mathrm{ker}\eta `$ is $`C^1`$-close to the fibers can perhaps be replaced by $`C^0`$-closeness. Note in this connection that the above conjecture is erroneously claimed to be proved by the first of the authors in \[Gi3\] (Corollary 3.8). In fact, Corollary 3.8 does not follow from Theorem 2.7 of that paper for the reasons outlined in Remark 2.7 below. At present, the conjecture (and hence Corollary 3.8 of \[Gi3\]) appears to be an open problem. ###### Remark 2.7. Under the hypothesis of Theorem 2.5, the “horizontal component” of the average of $`\eta `$ is the pull-back of a symplectic form on $`B`$. If $`\eta `$ were close to this pull-back, Theorem 2.5, with an improved lower bound, would follow from \[Gi3, Theorem 2.7\]. However, the form $`\eta `$ may fail to be close to this pull-back and as a consequence \[Gi3, Theorem 2.7\] does not apply. In fact, the main point of Theorem 2.5 is that the form $`\eta `$ is not assumed to be close to the pull–back of any two-form on $`B`$. In other words, one may think of $`\eta `$ as a “Hamiltonian perturbation” of a non-Hamiltonian vector field generating the $`S^1`$-action. This is exactly what makes Theorem 2.5, in contrast with the results of \[Gi3\], applicable to magnetic flows in higher dimensions. ###### Remark 2.8. It is easy to see that the cohomology condition on $`\eta `$ cannot be omitted – without this condition $`\eta `$ may fail to have closed characteristics. (See \[Gi1\] for a more detailed discussion.) If the closed characteristics of $`\eta `$ are not required to be non-degenerate, the lower bound of Theorem 2.5 should be replaced by the minimal number of critical points of a smooth function on $`B`$, \[Ke\]. In both of these lower bounds only the integral curves close to fibers and winding only once along the fibers are counted. In contrast with the lower bound of Theorem 2.5, the lower bound of \[Ke\] is apparently sharp for the number of periodic orbits in the class specified. ###### Proof of Theorem 2.5. When $`B`$ is a surface, the theorem is proved in \[Gi1\]. Hence, in what follows we will assume that $`2k+1=\mathrm{dim}E\ge 5`$. Furthermore, as is shown in \[Gi1\], the number of closed characteristics is at least $`\mathrm{SB}(E)/2`$. Since $`\mathrm{SB}(E)/2=\mathrm{SB}(B)`$ when the bundle $`\mathrm{pr}`$ is trivial, we will assume from now on that $`\mathrm{pr}`$ is not trivial. We need to recall some details of the argument from \[Gi1\]. Consider closed characteristics of $`\eta `$ which are close to the orbits of the $`S^1`$-action and wind only once along the orbits. There exists a Morse–Bott function $`f:E\to \mathbb{R}`$ such that the closed characteristics of $`\eta `$ of this type comprise exactly the critical manifolds of $`f`$. Hence it is sufficient to prove that $`f`$ has at least $`\mathrm{SB}(E)/2+1`$ critical manifolds.
Denote by $`b_i`$ the Betti numbers of $`E`$ and by $`\mu _i`$ the number of critical manifolds of $`f`$ of index $`i`$. All (co)homology groups will be taken with real coefficients and we will use the convention that $`\mu _i=0`$ whenever $`i`$ is negative. Since the critical manifolds of $`f`$ are circles, the Morse–Bott inequalities for $`f`$ turn into (1) $$\mu _i+\mu _{i-1}\ge b_i\text{ for }i=0,\mathrm{},2k+1.$$ The inequalities (1) can be refined when $`i=1`$ and $`i=2k`$. Namely, we claim that (2) $$\mu _1\ge b_1\text{ and }\mu _{2k-1}\ge b_{2k}.$$ Let us prove the first of the inequalities (2) (cf., \[Gi1\]). Denote by $`L\subset H_1(E)`$ the subspace generated by the critical manifolds of index zero and by $`L^{\prime }\subset H_1(E)`$ a complementary subspace to $`L`$, i.e., $`L\oplus L^{\prime }=H_1(E)`$. The dimension of this space does not exceed the number of critical circles of index one: (3) $$\mu _1\ge \mathrm{dim}L^{\prime }.$$ The critical manifolds of $`f`$ are close to the orbits of the $`S^1`$-action. As a consequence, the projections of critical manifolds to $`B`$ are contained in small balls. In particular, these projections are contractible and $`\mathrm{pr}_{*}(L)=0`$ in $`H_1(B)`$. Thus (3) implies that $`\mu _1\ge \mathrm{dim}L^{\prime }\ge \mathrm{dim}\mathrm{pr}_{*}(H_1(E))`$. However, $`\mathrm{pr}_{*}(H_1(E))=H_1(B)`$ and so $$\mu _1\ge \mathrm{dim}H_1(B).$$ Finally, since $`\mathrm{pr}`$ is a non-trivial bundle, $`b_1=\mathrm{dim}H_1(B)`$. Hence $`\mu _1\ge b_1`$ which proves the first inequality in (2). The second inequality follows from the first one with $`f`$ replaced by $`-f`$. Adding up the Morse–Bott inequalities (1) for all $`i=0,\mathrm{},2k+1`$ except $`i=1`$ and $`i=2k`$ and the refined Morse–Bott inequalities (2), we obtain $$2\sum _{i=0}^{2k}\mu _i-(\mu _0+\mu _{2k})\ge \sum _{i=0}^{2k+1}b_i.$$ As a consequence, $$\sum _i\mu _i\ge \mathrm{SB}(E)/2+\frac{1}{2}(\mu _0+\mu _{2k}).$$ Since $`f`$ must have a local minimum and a local maximum, the second term in the latter formula is greater than or equal to one. Therefore, $`\sum _i\mu _i\ge \mathrm{SB}(E)/2+1`$ which completes the proof. ∎ ###### Proof of Theorem 2.1. Throughout the proof we will keep the notations and conventions of Section 2.1. Using the symplectic neighborhood theorem, let us identify a neighborhood of $`M`$ in $`W`$ with a neighborhood $`U`$ of the zero section in the total space of the normal bundle $`\nu `$ so that the symplectic structure on $`U`$ is linear and equal to $`\omega ^{}`$ on the fibers of $`\nu `$. Without loss of generality we may assume that $`U`$ contains the level $`E=\{d^2H=1\}`$. Denote by $`\phi _ϵ`$ the fiberwise dilation $`y\mapsto ϵy`$ in the fibers of $`\nu `$. Let $`E_ϵ`$ be the level $`\phi _ϵ^{*}H/ϵ^2=1`$ and $`\psi _ϵ:E\to E_ϵ`$ be the fiberwise central projection. Consider the vector field $`X_ϵ`$ on $`E`$ obtained by pushing forward the Hamiltonian vector field of $`H/ϵ^2`$ on the level $`\{H=ϵ^2\}`$ to $`E_ϵ`$ by means of $`(\psi _ϵ\circ \phi _ϵ)^{-1}`$. Clearly, the assertion of the theorem is equivalent to $`X_ϵ`$ having the required number of periodic orbits. It is not hard to show (see Remark 2.9 below) that $`X_ϵ\stackrel{C^1}{\to }X_0`$ as $`ϵ\to 0`$, where $`X_0`$ is the fiberwise Hamiltonian vector field of $`d^2H`$ with respect to $`\omega ^{}`$. In other words, $`X_0`$ is the Hamiltonian vector field of $`d^2H`$ for the Poisson structure which has the fibers of $`\nu `$ as its leaves and is given by $`\omega ^{}`$. Furthermore, the vector field $`X_ϵ`$ generates the null-space line field of the two–form $`\eta _ϵ=(\psi _ϵ\circ \phi _ϵ)^{*}\omega `$.
By the hypothesis on the eigenvalues of $`d^2H`$, the flow of $`X_0`$ gives rise to a free $`S^1`$-action on $`E`$. The form $`\eta _ϵ`$ satisfies the cohomology condition of Theorem 2.5. Thus, by Theorem 2.5, when $`ϵ>0`$ is small enough the form $`\eta _ϵ`$ has at least $`\mathrm{SB}(E/S^1)`$ or $`\mathrm{SB}(E)/2+1`$ closed characteristics, depending on whether $`\mathrm{pr}:E\to E/S^1`$ is trivial or not and provided that the closed characteristics are non-degenerate. This completes the proof of Theorem 2.1. ∎ ###### Remark 2.9. Set $`H_ϵ=\phi _ϵ^{*}H/ϵ^2`$. Then $`H_ϵ\stackrel{C^k}{\to }d^2H`$ for any $`k\ge 0`$ as $`ϵ\to 0`$ on a small neighborhood $`U`$ of the zero section. Furthermore, consider the family of symplectic forms $`\omega _ϵ=\phi _ϵ^{*}\omega `$. This family of forms does not converge as $`ϵ\to 0`$, but the corresponding Poisson structures do. To be more specific, denote by $`\omega _ϵ^{-1}`$ the Poisson structure on $`U`$ corresponding to the symplectic structure $`\omega _ϵ`$ and by $`\omega _0^{-1}`$ the Poisson structure on $`U`$ whose leaves are the fibers of $`\nu `$ and which is equal to $`\omega ^{}`$ on the fibers. One can verify (see \[Ke\]) that $`\omega _ϵ^{-1}\stackrel{C^k}{\to }\omega _0^{-1}`$ on $`U`$ as $`ϵ\to 0`$ for any $`k\ge 0`$. Therefore, the Hamiltonian vector field of $`H_ϵ`$ with respect to $`\omega _ϵ`$ $`C^k`$-converges to the Hamiltonian vector field of $`d^2H`$ with respect to $`\omega _0^{-1}`$. ###### Proof of Theorem 2.3. The normal bundle $`\nu `$ to $`M`$ is canonically isomorphic to $`T^{*}M`$. Furthermore, let us identify the tangent and cotangent bundles to $`M`$ by means of $`\sigma `$. Then $`\omega _x^{}=\sigma _x`$ for all $`x\in M`$. The compatibility condition of the corollary is equivalent to requiring that for every $`x\in M`$ the eigenvalues of the metric on $`T_xM`$ with respect to $`\sigma _x`$ are equal. Thus the hypothesis of Theorem 2.1 holds under the compatibility condition. As a consequence, Theorem 2.1 gives a lower bound for the number of periodic orbits on the energy level $`\{H=ϵ\}`$ in terms of the sum of Betti numbers of $`E=ST^{*}M`$ or $`E/S^1`$. The $`S^1`$-bundle $`\mathrm{pr}:E\to E/S^1`$ is trivial only when $`M=𝕋^2`$. In this case $`\mathrm{SB}(E/S^1)=\mathrm{SB}(M)`$. Thus let us assume that $`\mathrm{pr}`$ is non-trivial. If $`\chi (M)=0`$, we have $`\mathrm{SB}(E)/2=\mathrm{SB}(M)`$ and Theorem 2.3 follows from Theorem 2.1 with an even higher lower bound. Assume that $`\chi (M)\ne 0`$. We claim that (4) $$\mathrm{SB}(E)/2+1\ge \mathrm{SB}(M).$$ Denote by $`b_i`$ the Betti numbers of $`E`$ and by $`\beta _i`$ the Betti numbers of $`M`$. The condition $`\chi (M)\ne 0`$ implies that $$b_i=\{\begin{array}{cc}\beta _i\text{ if }0\le i\le n-1\hfill & \\ \beta _{i-n+1}\text{ if }n\le i\le 2n-1,\hfill & \end{array}$$ where $`n=\mathrm{dim}M`$. Adding up these equalities for all $`i=0,\mathrm{},2n-1`$, we obtain (5) $$\mathrm{SB}(E)=2\mathrm{SB}(M)-(\beta _0+\beta _n).$$ Without loss of generality we can assume that $`M`$ is connected. Then $`\beta _0=\beta _n=1`$ and (4) follows from (5). This completes the proof of Theorem 2.3. ∎ ## 3. Magnetic fields on tori In this section we give a simple proof of the fact that for a charge on a torus there are periodic orbits on almost all energy levels. Let $`\sigma `$ be a closed two-form on a torus $`𝕋^n`$. (Note that $`n`$ can be odd.) As in Section 2.2, consider the twisted symplectic structure $`\omega =d\lambda +\pi ^{*}\sigma `$ on $`W=T^{*}𝕋^n`$. ###### Theorem 3.1. Every bounded set in $`W`$ has finite Hofer–Zehnder capacity. ###### Remark 3.2.
The reader interested in the definition and properties of the Hofer–Zehnder capacity should consult \[HZ\]. Theorem 3.1 is not new. In fact, the theorem has been known to experts for quite some time. When the cohomology class $`[\sigma ]`$ is rational, the theorem follows from \[Ji2, Theorem 1.2\]. For $`\sigma `$ symplectic, the theorem is proved in \[Gi2, Lemma 5.3\]. When $`\sigma `$ is the pull-back of a two-form under a projection $`𝕋^n\to 𝕋^{n-1}`$, the theorem becomes a particular case of a result of G. Lu, \[Lu, Theorem E\]. Finally, for $`[\sigma ]\ne 0`$, Theorem 3.1 follows from \[Lu, Theorem C\]. ###### Proof. Note first that to prove the theorem for $`\sigma `$ it suffices to prove the theorem for any form in the cohomology class $`[\sigma ]`$. Indeed, the fiberwise shift by a one-form $`\alpha `$ sends bounded sets to bounded sets and transforms the twisted symplectic form $`\omega `$ into the form $`\omega -\pi ^{*}d\alpha `$. Hence without loss of generality, we may assume that $`\sigma `$ is a translation–invariant form on $`𝕋^n`$. In other words, in some coordinates $`x_1,\mathrm{},x_n`$ on $`𝕋^n`$, we have (6) $$\sigma =\sum a_{ij}dx_i\wedge dx_j,$$ for some constants $`a_{ij}`$. For an exact form $`\sigma `$, i.e., when $`a_{ij}=0`$, the theorem is proved in \[Ji1\]. This fact is very easy to see, \[HZ, Proposition 4, p. 136\]: A bounded set in $`T^{*}S^1`$ is contained in an annulus and the latter can be embedded into a disc. By taking the product, we conclude that a bounded set in $`T^{*}𝕋^n`$ can be symplectically embedded into a polydisc. A polydisc has finite capacity and, as a consequence of monotonicity, a bounded set in $`T^{*}𝕋^n`$ has finite capacity. Thus we may assume that $`\sigma `$ given by (6) is non-zero. We claim that $`(W,\omega )`$ is symplectomorphic to the product $`\mathbb{R}^{2k}\times W_1`$ with $`k\ge 1`$, where $`\mathbb{R}^{2k}`$ is equipped with the standard symplectic form and $`W_1=\mathbb{R}^{n-2k}\times 𝕋^n`$ is given a translation–invariant symplectic form. Let us prove the claim. Consider the universal covering $`\stackrel{~}{W}=\mathbb{R}^{2n}`$ of $`W`$ with the pull-back linear symplectic form $`\stackrel{~}{\omega }`$. Let $`L`$ be the inverse image of $`𝕋^n`$ in $`\stackrel{~}{W}`$ and let $`L^{\perp }`$ be the symplectic orthogonal complement to $`L`$ with respect to $`\stackrel{~}{\omega }`$. Pick a linear subspace $`E`$ in $`L^{\perp }`$ which is transversal to $`L^{\perp }\cap L`$. The space $`E`$ is symplectic because the null–space of $`\stackrel{~}{\omega }|_{L^{\perp }}`$ is exactly $`L^{\perp }\cap L`$. Moreover, $`\mathrm{dim}E>0`$. This follows from the fact that $`\mathrm{rk}\stackrel{~}{\omega }|_L>0`$, and so $`L`$ is not Lagrangian and $`L^{\perp }\ne L`$. Fix a symplectic subspace $`\stackrel{~}{W}_1`$ in $`\stackrel{~}{W}`$ which contains $`L`$ and is transversal to $`E`$. Thus $`\stackrel{~}{W}`$ is symplectomorphic to $`E\times \stackrel{~}{W_1}`$. The decomposition $`\stackrel{~}{W}=E\times \stackrel{~}{W_1}`$ induces the required direct product decomposition of $`W`$. To see this, note that $`W=\stackrel{~}{W}/\mathrm{\Gamma }`$, where $`\mathrm{\Gamma }`$ is a discrete subgroup in $`L`$. Thus $`W=E\times W_1`$, where $`W_1=\stackrel{~}{W}_1/\mathrm{\Gamma }`$. It is clear that $`E`$ is symplectomorphic to $`(\mathbb{R}^{2k},\omega _0)`$ with $`2k>0`$ and that the resulting symplectic structure on $`W_1`$ is translation–invariant. Observe now that every bounded subset of $`W_1`$ can be symplectically embedded into $`𝕋^{2(n-k)}`$ with some translation–invariant symplectic structure.
As a result, every bounded open set in $`W`$ is symplectomorphic to a bounded open set in $`\mathbb{R}^{2k}\times 𝕋^{2(n-k)}`$, where $`2k>0`$. The latter open sets have finite capacity as proved in \[FHV\] and \[Ma\]; see also \[Lu\] for further generalizations. (It is essential that $`2k>0`$ and hence the space $`E`$ is non-trivial: the torus $`𝕋^{2(n-k)}`$ may have infinite capacity, \[HZ, Section 4.5\].) ∎ As in Section 2.2, let $`H:T^{*}𝕋^n\to \mathbb{R}`$ be a metric Hamiltonian. Finiteness of capacity implies (see \[HZ, Section 4.2, Theorem 4\]) “almost existence” of periodic orbits: ###### Corollary 3.3. Almost all, in the sense of measure theory, levels of $`H`$ carry a periodic orbit. To put this general benchmark result in perspective, let us state some recently proved more subtle theorems concerning the existence of periodic orbits in the magnetic problem on higher–dimensional tori. (See, e.g., \[Gi2\] for a review of results for surfaces.) According to the first result, due to Polterovich, \[Po\], *when $`\sigma \ne 0`$, there exists a sequence of positive energy values $`c_k\to 0`$ such that on every level $`\{H=c_k\}`$ there exists a contractible periodic orbit.* This is a rather deep theorem: Corollary 3.3 guarantees the existence of periodic orbits on almost all levels of $`H`$, but does not guarantee that these orbits are contractible. For instance, if $`\sigma =0`$ periodic orbits are just closed geodesics and a metric on $`𝕋^n`$ can easily fail to have contractible closed geodesics (e.g., a flat metric). This result also holds for any compact manifold $`M`$ with $`\chi (M)=0`$ in place of $`𝕋^n`$. Note also that as follows from a theorem of G. Lu, \[Lu, Theorem C\], almost all levels of $`H`$ carry a contractible periodic orbit when $`[\sigma ]\ne 0`$. The second theorem, a version of Hopf’s rigidity, is due to Bialy, \[Bi\]. By Bialy’s theorem, *every energy level of $`H`$ carries an orbit with conjugate points, provided that the metric is conformally flat and again $`\sigma \ne 0`$.* This fact is related to the question of existence of contractible periodic orbits because every such orbit (with non-zero Maslov index) would have conjugate points. Thus Bialy’s theorem serves as indirect evidence in favor of the affirmative answer to the existence question. In conclusion note that when $`\sigma `$ is exact, all high energy levels have contact type and thus carry periodic orbits, \[HV\]. (See \[Gi2, Remark 2.3\] for more details.)
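In the proof of Theorem 3.1 above, the dimension of the $`\mathbb{R}^{2k}`$ factor equals the rank of the constant form $`\sigma `$, since $`E`$ is a complement of the null directions of $`\stackrel{~}{\omega }|_L`$ inside $`L^{\perp }`$. That bookkeeping can be sketched in a few lines; the coefficient matrix below is a hypothetical example of ours, not taken from the paper.

```python
import numpy as np

def split_dimensions(a):
    """For the constant form sigma = sum a_ij dx_i ^ dx_j on T^n, return
    (2k, dim W_1) in the splitting W = R^{2k} x W_1 used in the proof."""
    a = np.asarray(a, dtype=float)
    assert np.allclose(a, -a.T), "sigma must be antisymmetric"
    two_k = np.linalg.matrix_rank(a)   # rank of an antisymmetric form is even
    return two_k, 2 * a.shape[0] - two_k

a = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])   # sigma = dx1 ^ dx2 on T^3
print(split_dimensions(a))                          # (2, 4): W = R^2 x W_1
```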
# Aperiodicity-induced effects on the transmission resonances in multibarrier systems ## 1 Introduction Tunneling of an electron through a potential barrier is one of the fundamental phenomena in quantum mechanics and plays a key role in the physics of electronic and optoelectronic devices . Stimulated by the advancement in modern material fabrication technology such as molecular-beam epitaxy and metal-organic chemical vapor deposition, there has been a lot of work on the problem of electronic resonant tunneling in semiconductor superlattices \[2-15\]. Among the interesting features that have emerged from these studies, the best known are the resonance splitting effects and the energy band effects; for a finite superlattice composed of $`N`$ identical potential barriers with arbitrary profiles at zero bias, there occurs $`(N-1)`$-fold resonance splitting, and the split resonant energies approach the band structure for large $`N`$ \[1, 12-14\]. Very recently, Guo et al. studied the resonance splitting effects in superlattices which are periodically juxtaposed with two different building barriers to demonstrate that the resonance splitting is determined not only by the structure but also by the parameters of the building barriers. In a different context, there has been much interest in the electronic properties of deterministic aperiodic systems \[16-19\] which are known to have more complex geometrical structures than the periodic system. Studies on these systems have revealed a variety of exotic electronic properties such as the singular continuity of the energy spectrum, the self-similarity of the electronic wave functions, the power-law behavior of the resistance, and so forth. However, most of the work has focused on the electronic properties in the infinite limit of the system size \[16-20\], and the study of the transport properties of finite-size systems, particularly from a tunneling point of view, has received less attention. Recently, Singh et al. studied the electronic transport properties of the Fibonacci and Thue-Morse (TM) superlattices to compare the results with those of the periodic system. Besides, Liu et al. calculated the electronic transmission spectra of the Cantor fractal multibarrier systems to show that the tunneling spectrum is more complex than that of the periodic system. In this paper, we study the aperiodicity-induced effects on the transmission resonances in a few types of binary aperiodic multibarrier systems. To do this, taking into account four kinds of binary multibarrier systems whose geometrical structures are determined by Eq. (11), we calculate the transmission coefficients of an electron through these multibarrier systems. The dependence of the transmission resonances on the system parameters such as the kind of aperiodicity, the generation number, and the widths of the wells and barriers is investigated. From this, we first illustrate the characteristics of the transmission resonances exhibited in the binary periodic (BP) system, and then make a comparison of the resonances between the BP and aperiodic systems; similarities and differences between them are presented in detail. In doing this, a comparison of the resonance splitting exhibited in the common-well (CW) structure with that exhibited in the common-barrier (CB) structure of the systems is also presented. ## 2 Method We shall now derive an expression for the transmission coefficient of a multibarrier system using the transfer matrix formalism.
To do this, we consider an electron with a longitudinal energy $`E`$ incident from the left of the system. Assuming that the phonon scattering can be neglected and no bias is applied across the system, the effective-mass approximation leads to the continuous Schrödinger equation $$\left[-\frac{\mathrm{\hbar }^2}{2m_j^{*}}\frac{d^2}{dx^2}+V(x)\right]\mathrm{\Psi }_j(x)=E\mathrm{\Psi }_j(x),$$ (1) where $`x`$ is the longitudinal direction of the system, $`V(x)`$ the minimum energy of the conduction band, and $`m_j^{*}=m_{w(b)}^{*}`$ the effective mass of the electron in the well (barrier) of the $`j`$th cell. Figure 1 shows the schematic configuration of a part of the system. For convenience of calculation, we set the conduction band minimum to be zero and the potential barrier to be rectangular, i.e., $$V(x)=\{\begin{array}{ccc}0\hfill & \mathrm{for}& x_j<x<y_j\hfill \\ V\hfill & \mathrm{for}& y_j<x<x_{j+1}\hfill \end{array},$$ (2) where $`x_j`$ $`(y_j)`$ is the starting position of the $`j`$th well (barrier). Then, the wave function associated with the electron in the $`j`$th cell can be written as $$\psi _j(x)=A_je^{ik(x-x_j)}+B_je^{-ik(x-x_j)}$$ (3) in the well and $$\varphi _j(x)=C_je^{\kappa (x-y_j)}+D_je^{-\kappa (x-y_j)}$$ (4) in the barrier. Here, $$k=\sqrt{\frac{2m_w^{*}E}{\mathrm{\hbar }^2}},\kappa =\sqrt{\frac{2m_b^{*}(V-E)}{\mathrm{\hbar }^2}}$$ (5) are the wave numbers in the wells and barriers, respectively. By applying Bastard’s matching conditions on the wave function and its derivative at the discontinuities of $`V(x)`$, we can write the relation of the coefficients between the $`j`$th and $`(j+1)`$th wells as $$\left(\begin{array}{c}A_{j+1}\\ B_{j+1}\end{array}\right)=T_j\left(\begin{array}{c}A_j\\ B_j\end{array}\right)=\left(\begin{array}{cc}\alpha _j& \beta _j\\ \beta _j^{*}& \alpha _j^{*}\end{array}\right)\left(\begin{array}{c}A_j\\ B_j\end{array}\right),$$ (6) where $`T_j`$ is the unimodular transfer matrix, and $`\alpha _j`$ and $`\beta _j`$ are given by $`\alpha _j`$ $`=`$ $`\left[\mathrm{cosh}(\kappa b_j)+{\displaystyle \frac{i}{2}}\left({\displaystyle \frac{m_b^{*}k}{m_w^{*}\kappa }}-{\displaystyle \frac{m_w^{*}\kappa }{m_b^{*}k}}\right)\mathrm{sinh}(\kappa b_j)\right]e^{ikw_j},`$ $`\beta _j`$ $`=`$ $`-{\displaystyle \frac{i}{2}}\left({\displaystyle \frac{m_b^{*}k}{m_w^{*}\kappa }}+{\displaystyle \frac{m_w^{*}\kappa }{m_b^{*}k}}\right)\mathrm{sinh}(\kappa b_j)e^{-ikw_j}.`$ (7) Here, $`w_j(=y_j-x_j)`$ and $`b_j(=x_{j+1}-y_j)`$ are the widths of the $`j`$th well and barrier. Multiplying $`T_j`$ successively, we can write the relation of $`A`$’s and $`B`$’s between the first and the $`(N+1)`$th region as $$\left(\begin{array}{c}A_{N+1}\\ B_{N+1}\end{array}\right)=M_N\left(\begin{array}{c}A_1\\ B_1\end{array}\right),$$ (8) where $`M_N`$ is the total transfer matrix given by $$M_N=T_NT_{N-1}\mathrm{}T_2T_1=\left(\begin{array}{cc}a_N& b_N\\ b_N^{*}& a_N^{*}\end{array}\right).$$ (9) Since there will be a reflected wave in the first region but only a transmitted wave in the $`(N+1)`$th region (i.e., $`B_{N+1}=0`$), we can write the transmission coefficient as $$T=\frac{1}{|a_N|^2}.$$ (10) Generally, it requires extensive matrix manipulation to calculate $`T`$. However, for the multibarrier systems considered in this paper, $`T`$ can be easily calculated in terms of the deterministic substitution rules given below.
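The algebra above translates directly into a short numerical routine. The sketch below builds $`T_j`$ from eqn (7) and evaluates eqn (10); it assumes equal effective masses in wells and barriers, uses the material parameters quoted later in the text ($`V=0.4`$ eV, $`m^{*}=0.3m_e`$) as defaults, and relies on the standard approximate value $`\mathrm{\hbar }^2/2m_e\approx 3.81`$ eV Å².

```python
import numpy as np

HBAR2_OVER_2ME = 3.81           # hbar^2 / (2 m_e) in eV * Angstrom^2 (approx.)

def cell_matrix(E, w, b, V=0.4, m_rel=0.3):
    """Transfer matrix T_j of eqns (6)-(7) for one well (width w, Angstrom)
    followed by one barrier (width b, height V in eV); requires 0 < E < V
    and equal effective masses m_w* = m_b* = m_rel * m_e."""
    c = HBAR2_OVER_2ME / m_rel
    k, kap = np.sqrt(E / c), np.sqrt((V - E) / c)   # wave numbers, eqn (5)
    r = k / kap                  # the ratio m_b* k / (m_w* kap) for equal masses
    alpha = (np.cosh(kap * b) + 0.5j * (r - 1 / r) * np.sinh(kap * b)) * np.exp(1j * k * w)
    beta = -0.5j * (r + 1 / r) * np.sinh(kap * b) * np.exp(-1j * k * w)
    return np.array([[alpha, beta], [beta.conjugate(), alpha.conjugate()]])

def transmission(E, cells):
    """T = 1/|a_N|^2 from the ordered product M_N = T_N ... T_1, eqns (9)-(10);
    `cells` lists the (w_j, b_j) pairs from left to right."""
    M = np.eye(2, dtype=complex)
    for w, b in cells:
        M = cell_matrix(E, w, b) @ M
    return 1.0 / abs(M[0, 0]) ** 2
```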
We now introduce four kinds of deterministic sequences – the BP, the TM , the period-doubling (PD) , and the copper-mean (CM) sequences – which are generated by the substitution rules $$\begin{array}{ccc}BP:\hfill & S_{l+1}=S_l^2,\hfill & S_1=AB\hfill \\ TM:\hfill & S_{l+1}=S_l\overline{S}_l,\hfill & (\overline{S}_0,S_0)=(B,A)\hfill \\ PD:\hfill & S_{l+1}=S_lS_{l-1}^2,\hfill & (S_0,S_1)=(A,AB)\hfill \\ CM:\hfill & S_{l+1}=S_lS_{l-1}^2,\hfill & (S_{-1},S_0)=(B,A),\hfill \end{array}$$ (11) where $`l`$ is the generation number (i.e., $`N=2^l`$ for the first three sequences and $`N=[2^{l+2}-(-1)^l]/3`$ for the last sequence) and $`\overline{S}_l`$ is the complement of $`S_l`$ which is obtained by exchanging the letters $`A`$ and $`B`$. Here, $`A`$ and $`B`$ represent two kinds of unit cells of the multibarrier system. The unit cell $`A`$ ($`B`$) consists of a well with the width $`w_A`$ ($`w_B`$) and a barrier with the width $`b_A`$ ($`b_B`$) and the height $`V`$. We refer to the case of $`w_A=w_B`$ with $`b_A\ne b_B`$ as the CW model and the case of $`b_A=b_B`$ with $`w_A\ne w_B`$ as the CB model, respectively. By means of Eq. (11), we can write the recursion relations of the total transfer matrices between different generations as $$\begin{array}{ccc}BP:\hfill & M_{l+1}=M_l^2,\hfill & M_1=T_BT_A\hfill \\ TM:\hfill & M_{l+1}=\overline{M}_lM_l,\hfill & (\overline{M}_0,M_0)=(T_B,T_A)\hfill \\ PD:\hfill & M_{l+1}=M_{l-1}^2M_l,\hfill & (M_0,M_1)=(T_A,T_BT_A)\hfill \\ CM:\hfill & M_{l+1}=M_{l-1}^2M_l,\hfill & (M_{-1},M_0)=(T_B,T_A).\hfill \end{array}$$ (12) Using the relations in Eq. (12), we can easily derive the recursion relations of $`a`$’s and $`b`$’s between different generations as follows: $$\{\begin{array}{c}a_{l+1}=a_l^2+b_l^{*}b_l\hfill \\ b_{l+1}=b_l(a_l+a_l^{*})\hfill \end{array}$$ (13) with $`a_1=\alpha _A\alpha _B+\beta _A^{*}\beta _B`$ and $`b_1=\alpha _B\beta _A+\alpha _A^{*}\beta _B`$ for the BP sequence, $$\{\begin{array}{cc}a_{l+1}=\overline{a}_la_l+\overline{b}_lb_l^{*},\hfill & b_{l+1}=\overline{a}_lb_l+a_l^{*}\overline{b}_l\hfill \\ \overline{a}_{l+1}=a_l\overline{a}_l+b_l\overline{b}_l^{*},\hfill & \overline{b}_{l+1}=a_l\overline{b}_l+\overline{a}_l^{*}b_l\hfill \end{array}$$ (14) with $`a_0=\alpha _A`$, $`b_0=\beta _A`$, $`\overline{a}_0=\alpha _B`$, and $`\overline{b}_0=\beta _B`$ for the TM sequence, and $$\{\begin{array}{c}a_{l+1}=a_l(a_{l-1}^2+|b_{l-1}|^2)+b_l^{*}b_{l-1}(a_{l-1}+a_{l-1}^{*})\hfill \\ b_{l+1}=b_l(a_{l-1}^2+|b_{l-1}|^2)+a_l^{*}b_{l-1}(a_{l-1}+a_{l-1}^{*})\hfill \end{array}$$ (15) with $`a_0=\alpha _A`$, $`b_0=\beta _A`$, $`a_1=\alpha _A\alpha _B+\beta _A^{*}\beta _B`$, and $`b_1=\alpha _B\beta _A+\alpha _A^{*}\beta _B`$ for the PD sequence. Recursion relations for the CM sequence are exactly the same as Eq. (15) with $`a_{-1}=\alpha _B`$, $`b_{-1}=\beta _B`$, $`a_0=\alpha _A`$, and $`b_0=\beta _A`$. ## 3 Numerical results and discussion As a sample material for calculation, we choose the $`\mu `$c-Si:H/$`a`$-Si:H superlattice , where $`\mu `$c-Si:H acts as the well and $`a`$-Si:H the barrier with $`V=0.4`$ eV. The effective mass in the wells and barriers is taken to be $`m_w^{*}=m_b^{*}=0.3m_e`$ , where $`m_e`$ is the free electron mass.
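The substitution rules of eqn (11) are equally easy to script, and they provide a brute-force cross-check of the matrix recursions of eqn (12): multiplying the cell matrices in the order dictated by the letter sequence must agree with the recursion. A sketch follows, reusing `cell_matrix` from the previous snippet and the CW-model widths quoted below ($`w_A=w_B=20`$ Å, $`b_A=2b_B=8`$ Å); the check shown is the BP case of eqn (12).

```python
import numpy as np

def sequence(kind, l):
    """Letter sequence S_l of eqn (11) as a string over {'A', 'B'}, l >= 1."""
    if kind == "BP":
        return "AB" * 2 ** (l - 1)
    if kind == "TM":
        s = "A"
        for _ in range(l):
            s += s.translate(str.maketrans("AB", "BA"))   # S_{l+1} = S_l S-bar_l
        return s
    if kind in ("PD", "CM"):
        prev, cur = ("A", "AB") if kind == "PD" else ("B", "A")
        for _ in range(1 if kind == "PD" else 0, l):
            prev, cur = cur, cur + prev * 2               # S_{l+1} = S_l S_{l-1}^2
        return cur
    raise ValueError(kind)

GEOM = {"A": (20.0, 8.0), "B": (20.0, 4.0)}               # CW-model (w, b) in Angstrom

def M_total(E, kind, l):
    M = np.eye(2, dtype=complex)
    for letter in sequence(kind, l):                      # M_N = T_N ... T_1
        M = cell_matrix(E, *GEOM[letter]) @ M
    return M

assert sequence("CM", 3) == "ABBAAABBABB"                 # N = [2^5 + 1]/3 = 11
E = 0.15
M1 = M_total(E, "BP", 1)
assert np.allclose(M_total(E, "BP", 2), M1 @ M1)          # eqn (12), BP case
print(1.0 / abs(M_total(E, "PD", 4)[0, 0]) ** 2)          # T for a PD lattice
```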
In calculating the transmission coefficients of an electron through the multibarrier systems, we treat two cases separately: one is to set the widths of the wells equal while arranging the widths of the barriers according to the given substitution rule (the CW model), and the other is to set the widths of the barriers equal while arranging the widths of the wells according to the given substitution rule (the CB model). Some examples of the results are plotted in Figures 2 and 3. In plotting, the mesh of $`E`$ is taken to be $`\mathrm{\Delta }E=1.0\times 10^{-5}`$ eV, and the system parameters are set to be $`w_A=w_B=20`$ Å and $`b_A=2b_B=8`$ Å in the CW model, and $`b_A=b_B=7`$ Å and $`w_A=2w_B=40`$ Å in the CB model, respectively. Figure 2 shows the results obtained from the CW model for the energy range near the lowest domain of resonances, and Figure 3 shows the results obtained from the CB model for the energy range near the first two lowest domains of resonances. Here, the ‘domain of resonances’ means the energy range that contains resonant peaks in the finite-size system and would approach the allowed energy band in the infinite limit of the system size. We first discuss the features of the transmission resonances exhibited in the BP system with $`l=2`$. In this case, three resonant peaks in each domain of resonances are expected due to the overlap of quasi-bound states in the three well regions, and the result obtained from the CW model \[Figure 2a\] agrees well with the expectation. An interesting feature to be noted in Figure 2a is that the first and the third peaks of the three resonant peaks are complete (i.e., $`T=1`$) while the second peak is incomplete (i.e., $`T<1`$). We will see that the two complete peaks lie in the middle of the subdomains while the incomplete peak disappears for large $`l`$ \[see Figure 2e\]. As for the result obtained from the CB model \[Figure 3a\], there are two features distinct from the result obtained from the CW model. The first is that there occurs a suppression of resonant peaks. We can see in Figure 3a that there exists a single complete peak in the first lowest domain and two complete peaks in the second lowest domain, which implies that two peaks in the first lowest domain and one peak in the second lowest domain are suppressed. The second is that the resonance widths exhibited in the CB model are much narrower and sharper than those in the CW model. We now discuss the features of the transmission resonances in the BP system with large $`l`$. As $`l`$ increases, successive resonance splitting effects and energy band effects on the transmission properties are expected, and we confirm them. Figure 2e shows the transmission coefficients obtained from the CW model with $`l=5`$. Here two features distinct from the case of the periodic system \[12-14\] emerge. The first is that the main domain splits into two subdomains, the centers of which correspond to the first and the third resonant peaks in Figure 2a. We argue that the occurrence of this kind of resonance splitting is attributable to the binary periodicity of the system. The second is that the total number of resonant peaks is not exactly the same as the number of the wells. In Figure 2e, each subdomain contains 15 peaks and the total number of peaks is 30 while there are 31 wells in the system with $`l=5`$. The result obtained from the CB model with $`l=5`$ is plotted in Figure 3e.
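With these parameters, the peak structure described above can be located by a direct scan over the energy mesh. The snippet below reuses the helpers defined earlier; the simple local-maximum peak finder and the coarsened mesh (chosen for speed) are our own choices, not the authors' procedure.

```python
import numpy as np

cells = [GEOM[letter] for letter in sequence("BP", 2)]    # CW model, l = 2
E_mesh = np.arange(0.02, 0.38, 1.0e-4)                    # coarser than 1e-5 eV
T = np.array([transmission(E, cells) for E in E_mesh])
peaks = np.flatnonzero((T[1:-1] > T[:-2]) & (T[1:-1] > T[2:])) + 1
print([(round(E_mesh[i], 4), round(T[i], 3)) for i in peaks])
```

Entries with $`T`$ near 1 are the complete resonances; values visibly below 1 flag the incomplete peaks discussed above.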
Similar features to those that appeared in the CW model are also exhibited in the second lowest main domain; the domain splits into two subdomains and the total number of resonant peaks is less than the number of the wells. However, the transmission coefficients in the lowest main domain display somewhat different behavior; no splitting into subdomains occurs, which indicates that the binary periodicity of the system does not affect the splitting of this domain. We also study the features of the transmission resonances while varying the widths of the wells and barriers, and observe that the number of resonance domains increases with increasing width of the well, which can be easily understood by noting that the energies of the quasi-bound states in a well are approximately proportional to the inverse square of the width of the well . We also observe that the widths of resonance domains decrease and the peak-to-valley ratios increase with increasing width of the barrier. Having seen the features of the transmission resonances in the BP system, we now discuss the features of the transmission resonances exhibited in the aperiodic TM, PD, CM systems with $`l=2`$. In this case, three, three, and four resonant peaks are expected to exist in each domain of resonances due to the overlap of quasi-bound states in the three, three, and four wells of the TM, PD, CM systems. The results obtained from the CW model for these systems are plotted in Figures 2b–2d, where it can be seen that the number of resonant peaks fits with the expectation. However, the results obtained from the CB model \[Figures 3b–3d\] do not always fit with the expectation: The transmission coefficients for the second lowest domain agree with the expectation, while the number of resonant peaks in the first lowest domain is less than the expected number due to the suppression of the peaks. The number of suppressed peaks is two, one, and one in the TM, PD, CM system, respectively. As for the effects of aperiodicity, we would like to mention two points. The first is that the lowest resonant peaks shift towards the lower energy region, compared with that of the BP system. The second is that complete and incomplete resonant peaks coexist. For the peaks exhibited in the CW model, all the peaks in the TM system and the second peak in the CM system are complete, while all the peaks in the PD system and all the peaks except the second peak in the CM system are incomplete. Meanwhile, for the peaks exhibited in the CB model, all the peaks are incomplete, which implies that it is more difficult for an electron to tunnel through the CB structure than through the CW structure. Here it should be noted that the incomplete resonant peak that appeared in the BP system \[Figure 2a\] and the incomplete peaks that appeared in the aperiodic systems show different behavior as $`l`$ increases; the former fails to lie in the middle of the subbands and disappears, while the latter survive, lying in the middle of the subdomains. We now discuss the features of the transmission resonances in the aperiodic systems with large $`l`$. Figures 2f–2h show the results obtained from the CW model of the TM, PD, and CM systems with $`l=5`$, which reveal two distinctive features to be mentioned. The first is that, even though the aperiodicity of each system is binary as in the BP system, the resonance splitting pattern is much more complex than that of the BP system, such that a variety of peak-to-valley ratios appear.
We argue that the level of hierarchy in the resonance splitting will become deeper with increasing $`l`$ and that the split resonant peaks will eventually form a Cantor-like fractal structure in the infinite limit of the system size. The second is that, despite the existence of aperiodicity, the transmission resonances for some energy ranges resemble those exhibited in the BP system. For example, from $`0.101`$ eV to $`0.128`$ eV and from $`0.184`$ eV to $`0.188`$ eV in Figure 2f, the transmission coefficients exhibit ‘resonance plateaus’ as in the case of the BP system. We also find that, for higher energy ranges, the effect of aperiodicity weakens such that the number and the widths of resonance plateaus increase, which resembles the behavior of the periodic system. The results obtained from the CB model of the TM, PD, and CM systems \[Figures 3f–3h\] reveal features similar to those exhibited in the CW model; complex resonance splitting effects and the existence of the resonance plateaus are clearly seen. An example of the resonance plateaus can be seen in Figure 3g in the energy range from $`0.178`$ eV to $`0.197`$ eV. ## 4 Summary In summary, we studied the effects of aperiodicity on the transmission resonances of an electron through a few types of binary aperiodic multibarrier systems of finite system size. The dependence of the transmission resonances on the system parameters was investigated, from which the similarities and differences of the resonances between the binary periodic and aperiodic systems were presented. In doing this, a comparison of the resonance splitting exhibited in the common-well structure with that exhibited in the common-barrier structure of the multibarrier systems was also made in detail. It was found that complex resonance splitting and a variety of peak-to-valley ratios which are not exhibited in the periodic system emerge as a result of introducing aperiodicity. It was also found that the transmission resonances for some energy ranges in the aperiodic systems resemble those in the periodic system, despite the existence of aperiodicity. We hope that the exotic resonant tunneling properties of the binary aperiodic multibarrier systems, such as the complex resonance splitting effects, deep levels of hierarchy in the peak-to-valley ratios, and the existence of tunneling plateaus, can be applied in designing a new type of electronic device.
# Nonlinear Quantum Capacitance ## Figure Caption * Operation of QSCM: comparison of “measured” LPDOS to the exact one. Solid lines: exact; solid circles: fitted LPDOS using 10 voltages; solid triangles: fitted LPDOS using 3 voltages. Upper set of curves are for $`b_2=0.002`$, lower set $`b_2=0.001`$. The QSCM tip has been fixed with $`b_1=0.003`$. The energy unit is the Fermi energy $`E_F^1`$ of the tip. Inset: the electrochemical capacitance versus voltage curve for the two sets of $`b_2`$: upper curve is for $`b_2=0.002`$. The unit of $`V`$ is $`E_F^1/e`$.
## 1 Introduction Many recent efforts to elucidate the mechanism of confinement in QCD and non–Abelian gauge theories have focused on isolating a reduced set of variables that are responsible for the confining behaviour. In the dual superconducting vacuum hypothesis the crucial degrees of freedom are the magnetic monopoles revealed after Abelian projection. In the maximally Abelian gauge one finds that the extracted U(1) fields possess a string tension that approximately equals the original SU(2) string tension (‘Abelian dominance’) , and that this is almost entirely due to monopole currents in these Abelian fields (‘monopole dominance’) . The magnetic currents observed in the maximally Abelian gauge are found to have non-trivial correlations with gauge-invariant quantities such as the action and topological charge densities (see for example and references therein) and this invites the hypothesis that the structures formed by the magnetic monopoles correspond to similar objects in the SU(2) vacuum, seen after gauge fixing and Abelian projection. If the magnetic monopoles truly reflect the otherwise unknown infrared physics of the SU(2) vacuum, analysis of these structures may provide important information about the confinement mechanism. The main purpose of this paper is to extend our previous study of monopole currents to lattices that are larger in physical units at the smallest lattice spacings. As reviewed in Sec. 2, we obtained a strikingly simple monopole picture at $`\beta =2.3`$, 2.4. When the magnetic monopole currents are organised into separate clusters, one finds in each field configuration one and only one cluster which is very much larger than the rest and which percolates throughout the entire lattice volume. Moreover this largest cluster is alone responsible for infrared physics such as the string tension. The remaining clusters are compact objects with radii varying with length roughly as $`r\propto \sqrt{l}`$ and with a population that follows a power law as a function of length. We found the exponent of this power law to be consistent with a universal value of 3. This simple pattern became more confused at $`\beta =2.5`$. The scaling relations for cluster size that we established suggested that our $`L=16`$ lattice at $`\beta =2.5`$ was simply too small. There was of course an alternative possibility: that the simple picture we found at lower $`\beta `$ failed as one approached the continuum limit. Clearly it is important to distinguish between these two possibilities, and this is what we propose to do in this paper. The cluster size scaling relations referred to above imply that an $`L=32`$ lattice at $`\beta =2.5115`$ should have a large enough volume. Such gauge fixed lattice fields were made available to us by G. Bali and we have used them, supplemented by calculations on an intermediate $`L=20`$ volume at $`\beta =2.5`$, to obtain evidence, as described in Secs. 2 and 3, that the monopole picture we found previously is in fact valid at these lattice spacings and that the deviations we found previously were due to too small a lattice size. The fact that one has to go to space-time volumes that are ever larger, in physical units, as the lattice spacing decreases, hints at some kind of breakdown of ‘monopole dominance’ in the continuum limit. We finish Sec. 2 with a discussion of the form that this breakdown might take. An attractive alternative to the dual superconducting vacuum as a mechanism for confinement is vortex condensation .
Here the confining degrees of freedom are the vortices created by the ’t Hooft dual disorder loops and the confining disorder is located in the centre Z(N) of the SU(N) gauge group. When such a vortex intertwines a Wilson loop, the fields along the loop undergo a gauge transformation that varies from unity to a non-trivial element of the centre as one goes once around the Wilson loop. For SU(2) this means that the Wilson loop acquires a factor of $`-1`$. A condensate of such vortices will therefore completely disorder the Wilson loop and will lead to linear confinement. At the centre of the vortex, which will be a line in $`D=2+1`$ and a sheet in $`D=3+1`$, the fields are clearly singular (multivalued) if we demand that the vortex correspond to a gauge transformation almost everywhere. In a properly regularised and renormalised theory, this singularity will be smoothed out into a core of finite size in which there is a non-trivial but finite action density, and whose size will be $`O(1)`$ in units of the physical length scale of the theory. One can either try to study these vortices directly in the SU(2) gauge fields or one can go to the centre gauge , where one makes the gauge links as close to $`+1`$ or $`-1`$ as possible, and construct the corresponding fields where the link matrices take values in Z(2) (‘centre projection’) and where the only nontrivial fluctuations are singular Z(2) vortices. Just as a ’t Hooft–Polyakov monopole will appear as a singular Dirac monopole in the Abelian fields that one obtains after Abelian projection, one would expect the presence of a vortex in the SU(2) fields to appear as a singular Z(2) vortex after centre projection. This picture has received increasing attention recently and has, for example, proved successful in reproducing the static quark potential (‘centre dominance’). Our ability, in this paper, to address the question of how important such vortices are is constrained by the fact that we only work with Abelian projected SU(2) fields. So first we need to clarify how such vortices might be encoded in these Abelian fields and only then can we perform numerical tests to see whether there is any sign of their presence. This is the content of Sec. 3. Finally there is a summary of the results in Sec. 4. ## 2 Monopole cluster structure ### 2.1 Background Fixing to the maximally Abelian gauge of SU(2) amounts to maximising with respect to gauge transformations the Morse functional $$R=\sum _{n,\mu }\text{Tr}\left(U_\mu (n)\,i\sigma _3\,U_\mu ^{\dagger }(n)\,(i\sigma _3)^{\dagger }\right).$$ (1) It is easy to see that this maximises the matrix elements $`|[U_\mu (n)]_{11}|^2`$ summed over all links. That is to say, it is the gauge in which the SU(2) link matrices are made to look as diagonal, and as Abelian, as possible — hence the name. Having fixed to this gauge, the link matrices are then written in a factored form and the U(1) link angle (just the phase of $`[U_\mu (n)]_{11}`$) is extracted. The U(1) field contains integer valued monopole currents , $`\{j_\mu (n)\}`$, which satisfy a continuity relation, $`\mathrm{\Delta }_\mu j_\mu (n)=0`$, and may be unambiguously assigned to one of a set of mutually disconnected closed networks, or ‘clusters.’ In our previous work we found that the clusters may be divided into two classes on the basis of their length, where the length is obtained by simply summing the current in the cluster $$l=\sum _{n,\mu \in \text{cl.}}\left|j_\mu (n)\right|.$$ (2) The first class comprises the largest cluster, which is physically the most interesting.
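Given a current field $`\{j_\mu (n)\}`$, the decomposition into clusters behind eqn (2) is a connected-components computation: occupied links are joined whenever they share a lattice site, and current conservation then guarantees that each component is a closed network. A minimal union–find sketch follows; the array layout and helper names are our own conventions, not from the paper.

```python
import numpy as np
from collections import defaultdict

def cluster_lengths(j):
    """Cluster lengths, eqn (2), for integer currents j[mu, x, y, z, t]
    on a periodic L^4 lattice; returns the lengths sorted largest first."""
    L = j.shape[1]
    links = list(zip(*np.nonzero(j)))          # occupied links (mu, x, y, z, t)
    parent = {ln: ln for ln in links}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]      # path halving
            a = parent[a]
        return a

    at_site = defaultdict(list)
    for ln in links:
        mu, site = ln[0], ln[1:]
        fwd = tuple((site[d] + (d == mu)) % L for d in range(4))
        at_site[site].append(ln)               # the link touches site ...
        at_site[fwd].append(ln)                # ... and site + mu-hat
    for group in at_site.values():             # union all links meeting at a site
        for ln in group[1:]:
            parent[find(ln)] = find(group[0])

    length = defaultdict(int)
    for ln in links:
        length[find(ln)] += abs(int(j[ln]))
    return sorted(length.values(), reverse=True)
```

The first and second entries of the returned list give the largest and second largest cluster lengths used in what follows.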
It percolates the whole lattice volume and its length $`l_{\mathrm{max}}`$ is simply proportional to the volume $`L^4`$ (at least in the interval $`2.3\le \beta \le 2.5`$) when these are re-expressed in physical units, i.e. $`l_{\mathrm{max}}\sqrt{K}\propto (L\sqrt{K})^4`$, where $`K`$ is the SU(2) lattice string tension in lattice units and we use $`1/\sqrt{K}`$ to set our physical length scale. We remark that over this range in $`\beta `$ there is a factor 2 change in the lattice spacing, and so one might have expected that the extra ultraviolet fluctuations on the finer lattice would lead to significant violations of the naïve scaling relation. That is to say, one might expect to need to coarse grain the currents at larger $`\beta `$ to obtain reasonable scaling. That this is not required is perhaps surprising. The remaining clusters were found to be much shorter. Their population as a function of length (the ‘length spectrum’) is described by a power law $$N(l)=\frac{c_l(\beta )}{l^\gamma },$$ (3) where $`\gamma \approx 3`$ for all lattice spacings and sizes tested and the coefficient $`c_l(\beta )`$ is proportional to the lattice volume, $`L^4`$, and depends weakly on $`\beta `$. The radius of gyration of these clusters is small and approximately proportional to the square root of the cluster length, just like a random walk. When folded with the length spectrum, this suggests that the ‘radius spectrum’ should also be described by a power law $$N(r)=\frac{c_r(\beta )}{r^\eta },$$ (4) with $`\eta \approx 5`$ and $`c_r(\beta )`$ weakly dependent on $`\beta `$. Such a spectrum is close to the scale invariant spectrum of 4–dimensional balls of radius $`\rho `$, $`N(\rho )d\rho \propto d\rho /\rho \times 1/\rho ^4`$, and so one might try and relate these clusters to the SU(2) instantons in the theory, which classically also possess a scale-invariant spectrum. It is well known, however, that the inclusion of quantum corrections renders the spectrum of the latter far from scale invariant, at least for the small instantons where perturbation theory can be trusted, and so such a connexion does not seem to be possible . On sufficiently large volumes the difference in length between the largest and second largest cluster is very marked, and where this gulf is clear one finds that the long range physics such as the monopole string tension arises solely from the largest cluster. This is the case at $`\beta =2.3`$, $`L\ge 10`$ and at $`\beta =2.4`$, $`L\ge 16`$. On moving to a finer $`L=16`$ lattice at $`\beta =2.5`$ the gulf was found to disappear and the origin of the long range physics was no longer so clear cut. This could be a mere finite volume effect, or, much more seriously, it might signal the breakdown of this monopole picture in the weak coupling, continuum limit. Clearly this needs to be resolved and the only unambiguous way to do so is by performing the calculations on large enough lattices. ### 2.2 This calculation The direct way to estimate the lattice size necessary at $`\beta =2.5`$ to restore (if that is possible) our picture is as follows. Suppose that the average size of the second largest cluster scales approximately as $$l_{\text{2nd}}\propto L^\alpha (\sqrt{K})^\delta .$$ (5) We know that $`l_{\mathrm{max}}\propto L^4(\sqrt{K})^3`$ to a good approximation for the largest cluster.
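The exponent $`\gamma `$ in eqn (3) can be estimated from a sample of secondary-cluster lengths with a least-squares fit on logarithmic bins. This quick estimator (a maximum-likelihood fit would be the more careful alternative) is our own illustration, exercised here on synthetic data with a known tail exponent of 3.

```python
import numpy as np

def powerlaw_exponent(lengths, nbins=20):
    """Fit N(l) ~ c / l^gamma on log-spaced bins; returns the estimate of gamma."""
    lengths = np.asarray(lengths, dtype=float)
    edges = np.logspace(np.log10(lengths.min()), np.log10(lengths.max()), nbins)
    hist, _ = np.histogram(lengths, bins=edges)
    density = hist / np.diff(edges)                 # counts per unit length
    mid = np.sqrt(edges[1:] * edges[:-1])           # geometric bin centres
    keep = density > 0
    slope, _ = np.polyfit(np.log(mid[keep]), np.log(density[keep]), 1)
    return -slope

rng = np.random.default_rng(1)
sample = 4.0 * (1.0 + rng.pareto(2.0, 20000))       # synthetic N(l) ~ 1/l^3
print(powerlaw_exponent(sample))                    # roughly 3
```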
So we will maintain the same ratio of lengths $`l_{\text{2nd}}/l_{\mathrm{max}}`$, and a gulf between these, if
$$\frac{L_1}{L_2}=\left(\frac{\sqrt{K_1}}{\sqrt{K_2}}\right)^{-\left(\frac{3-\delta }{4-\alpha }\right)}.$$ (6)
If we take our directly calculated values of $`l_{\text{2nd}}`$, they seem to give roughly $`\alpha \simeq 1`$ and $`\delta \simeq -2`$. This suggests that we need to scale our lattice size with $`\beta `$ so as to keep $`L(\sqrt{K})^{5/3}`$ constant. This estimate is not entirely reliable because, on smaller lattices, the distributions of the ‘largest’ and ‘second largest’ clusters overlap so that they exchange rôles. An alternative estimate can be obtained from the tail of the distribution in eqn. 3 that integrates to unity. Doing so one obtains $`\alpha \simeq 2`$ and $`0<\delta <0.25`$. This suggests that we scale our lattice size so as to keep $`L(\sqrt{K})^{1.4\text{–}1.5}`$ constant. This estimate is also not very reliable, since it assumes that the distribution of secondary cluster sizes on different field configurations fluctuates no more than mildly about the average distribution given in eqn. 3. In fact the fluctuations are very large. \[As we can see immediately when we try to calculate $`\langle l^2\rangle `$ in order to obtain a standard fluctuation — it diverges for a length spectrum with $`N(l)dl\propto dl/l^3`$.\] Nonetheless, the two very different estimates we have given above produce a very similar final criterion: to maintain the same gap between the largest and second largest clusters as $`\beta `$ is varied, one should choose $`L`$ so as to keep $`L(\sqrt{K})^{1.5}`$ constant. So if we wish to match the clear picture on an $`L=10`$ lattice at $`\beta =2.3`$ (where $`K=0.136`$ (2)) we should work on a lattice that is roughly $`L=28`$ at $`\beta =2.5`$ (where $`K=0.0346`$ (8)). (This arithmetic is checked in the short sketch below.) In particular we note that an $`L=32`$ lattice at $`\beta =2.5115`$ (where $`K=0.0324`$ (10)) is more than large enough and an ensemble of 100 such configurations, already gauge fixed, has been made available to us by the authors. The gauge fixing procedure used in obtaining these is somewhat different from the one we have used in our previous calculations (in its treatment of the Gribov copies — see below) and although this is not expected to affect the qualitative features that are our primary interest here, it will have some effect on detailed questions of scaling etc. We have therefore also performed a calculation on an ensemble of 100 gauge fixed $`L=20`$ field configurations at $`\beta =2.5`$. While the latter volume is not expected to be large enough to recreate a clear gulf between the largest and remaining clusters, we would expect to find smaller finite size corrections than with the $`L=16`$ lattice we used previously. In gauge fixing a configuration we select a local maximum of the Morse functional, $`R`$, of which there are typically a very large number on lattices large enough to support non-perturbative physics. These correspond to the (lattice) Gribov copies. Gauge dependent quantities appear to vary by $`\mathcal{O}(10\%)`$ depending upon the Gribov copy chosen; this is true not only of local quantities such as the magnetic current density but also of supposedly long range, physical numbers such as the Abelian and monopole string tensions. Some criterion must be employed for the selection of the maxima of $`R`$, and in the absence of a clear understanding of which maximum, if any, is the most ‘physical’, one maximum was selected at random in our earlier work.
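The arithmetic behind these lattice-size estimates is easy to check. A minimal sketch, using nothing beyond the string tensions quoted above:

```python
import math

# Keep L * sqrt(K)^1.5 constant between the two couplings (see text).
K1, L1 = 0.136, 10       # beta = 2.3
K2 = 0.0346              # beta = 2.5
L2 = L1 * (math.sqrt(K1) / math.sqrt(K2)) ** 1.5
print(round(L2, 1))      # ~27.9, i.e. roughly L = 28
```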
An alternative strategy, used in gauge fixing the $`L=32`$ lattices at $`\beta =2.5115`$, is to pursue the global maximum of $`R`$. Each field configuration is fixed to the maximally Abelian gauge 10 times using a simulated annealing algorithm that already weights the distribution of maxima so selected towards those of higher $`R`$. The solution with the largest $`R`$ from these is selected. Details of this method are discussed elsewhere. The difference in procedures invites caution in comparing exact numbers between this ensemble and those studied previously; for example an $`\mathcal{O}(10\%)`$ suppression in the string tension is observed. It is likely that cluster lengths will differ by a corresponding amount and this will prevent a quantitative scaling analysis using this ensemble. The power law indices do appear, however, to be robust and it also seems likely that ratios of string tensions obtained on the same ensemble can be reliably compared with other ratios.

### 2.3 Cluster properties

The fact that the largest cluster does not belong to the same distribution as the smaller clusters is seen from the very different scaling properties of these clusters with volume. It is also apparent from the fact that the largest cluster is very much longer than the second largest cluster. Indeed for a large enough volume and for a reasonable size of the configuration ensemble, there will be a substantial gulf between the distribution of largest cluster lengths and that of the second largest clusters. By contrast the length distributions of the second and third largest clusters strongly overlap. This is the situation that prevailed for the larger lattices at $`\beta =2.3`$ and 2.4 but which broke down on the $`L=16`$ lattice at $`\beta =2.5`$. We can now compare what we find on our $`L=20`$ and $`L=32`$ lattices with the latter. This is done in Table 1. There we show the longest and shortest cluster lengths for the largest, second largest and third largest cluster respectively over the ensemble. The ensemble sizes are not exactly the same, but it is nonetheless clear that there is a real gulf between the largest and second largest clusters on the $`L=32`$ lattice while there is significant overlap in the $`L=16`$ case. The $`L=20`$ lattice is a marginal case. We conclude from this that the apparent loss of a well separated largest cluster as seen at $`\beta =2.5`$ in our earlier work was in fact a finite volume effect, and that our scaling analysis has proved reliable in predicting what volume one needs to use in order to regain the simple picture. In Figure 1 we show how the length of the largest cluster varies with the lattice volume when both are expressed in physical units (set by $`\sqrt{K}`$). To be specific, we have divided $`l_{\mathrm{max}}\sqrt{K}`$ by $`(L\sqrt{K})^4`$ and plotted the resulting numbers against $`L\sqrt{K}`$ for both our new and our old calculations. The fact that at fixed $`\beta `$ the values fall on a horizontal line tells us that the length of the largest cluster is proportional to the volume at fixed lattice spacing: $`l_{\mathrm{max}}\propto L^4`$. The fact that the various horizontal lines almost coincide tells us that the current density in the largest cluster is consistent with scaling. That is to say, it has a finite non-zero value in the continuum limit.
Thus the monopole whose world line traces out this largest cluster percolates throughout the space–time volume, and its world line is sufficiently smooth on short distance scales that its length does not show any sign of diverging as we take the continuum limit. We note that the $`L=32`$ lattice deviates by $`\sim 10\%`$ from the other values. This is consistent with what we might have expected from the different gauge fixing procedure used in that case. Turning now to the secondary clusters, we display in Figure 2 the length spectrum that we obtain at $`\beta =2.5115`$. It is clearly well described by a power law as in eqn. 3 and we fit the exponent to be $`\gamma =3.01`$ (8). This is in accord with the universal value of 3 that was postulated in our earlier work on the basis of calculations on coarser lattices. The value one fits to the spectrum obtained on the $`L=20`$ lattice at $`\beta =2.5`$ is $`\gamma =2.98`$ (7) and is equally consistent. We also examine the dependence on $`\beta `$ of the coefficient $`c_l(\beta )`$ in eqn. 3, adding to the older work our calculations at $`\beta =2.5`$ on the $`L=20`$ lattice. \[We do not use the $`L=32`$ lattice for this purpose because of the different gauge fixing procedure used.\] If we assume a constant power (which is approximately the case), then $`c_l(\beta )`$ is just proportional to the total length of the secondary clusters. At fixed $`\beta `$ we find this length to be proportional to $`L^4`$ just as one might expect. \[Small clusters in very different parts of a large volume are presumably independent.\] The dependence on $`\beta `$, on the other hand, is much less clear. Between $`\beta =2.3`$ and $`\beta =2.4`$ it varies weakly, roughly as $`K^{0.12\pm 0.13}`$. Between $`\beta =2.4`$ and $`\beta =2.5`$ it varies more strongly, roughly as $`K^{0.48\pm 0.09}`$. We can try to summarise this by saying that
$$c_l(\beta )=\text{const.}\times L^4(\sqrt{K})^\zeta $$ (7)
where $`\zeta =0.5\pm 0.5`$, which is consistent with what was found previously. The smaller clusters are compact objects in $`d=4`$, and having determined the cluster spectrum as a function of length we can then ask what is the spectrum when re-expressed as a function of the radius (of gyration) of the cluster. In our earlier work we obtained this spectrum by determining the average radius as a function of length, and folding that in with the number density as a function of length. This is an approximate procedure (forced upon us by the fact that we did not foresee the interest of this spectrum during the processing of the clusters) and one can obtain the spectrum more accurately by calculating $`r`$ for each cluster and forming the spectrum directly. Doing so for the $`L=32`$ lattice at $`\beta =2.5115`$, also in Figure 2, we find a power law as in eqn. 4 with $`\eta =4.20`$ (8). The spectrum on the $`L=20`$ lattice at $`\beta =2.5`$ yields $`\eta =4.27`$ (6). We recall that in our earlier work we claimed that the spectrum was consistent with the scale invariant result $`dr/r\times 1/r^4`$, i.e. $`\eta =5`$. This followed from the fact that we found the radius of the smaller clusters to vary with their length as $`r(l)=s+t\cdot l^{0.5}`$, i.e. just what one would expect from a random walk. Folded with a length spectrum $`N(l)\propto 1/l^3`$, this gives $`\eta =5`$. On the $`L=32`$ lattice we still find that the random walk ansatz provides an acceptable fit but we also find that $`r(l)=s+t\cdot l^{0.65}`$ works equally well over similar ranges. The latter, when folded with $`\gamma =3`$, gives $`\eta =4.2`$.
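To make the direct construction of $`N(r)`$ concrete, the sketch below (our own illustration; any standard fitting method would serve equally well) computes the radius of gyration of a cluster from the set of sites it visits, extracts a power-law exponent from a binned log-log fit, and fits $`r(l)=s+t\cdot l^u`$ with SciPy:

```python
import numpy as np

def radius_of_gyration(sites):
    """sites: array (n, 4) of lattice points visited by one cluster.
    Compact clusters are assumed not to wrap around the periodic lattice."""
    x = np.asarray(sites, dtype=float)
    return np.sqrt(((x - x.mean(axis=0)) ** 2).sum(axis=1).mean())

def power_law_exponent(values, nbins=20):
    """Fit N(v) ~ c / v^p from a sample of lengths or radii, via a
    log-log least-squares fit to the binned number density."""
    counts, edges = np.histogram(
        values, bins=np.geomspace(min(values), max(values), nbins))
    centres = np.sqrt(edges[1:] * edges[:-1])
    density = counts / np.diff(edges)
    keep = counts > 0
    slope, _ = np.polyfit(np.log(centres[keep]), np.log(density[keep]), 1)
    return -slope          # the exponent gamma (or eta)

def radius_length_power(lengths, radii):
    """Fit r(l) = s + t * l**u."""
    from scipy.optimize import curve_fit
    f = lambda l, s, t, u: s + t * l ** u
    (s, t, u), _ = curve_fit(f, lengths, radii, p0=(0.0, 1.0, 0.5))
    return s, t, u
```

Applied to the secondary clusters, routines of this kind yield fits of the sort quoted above.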
The direct calculation of $`N(r)`$ is clearly much more accurate than the indirect approach. Treating the power as a free parameter in the fit, $`r(l)=s+t\cdot l^u`$, we find $`u=0.57`$ (3) on $`L=32`$ at $`\beta =2.5115`$, consistent with $`u=0.58`$ (4) on $`L=20`$ at $`\beta =2.5`$. Thus both $`u=0.5`$ and $`u=0.65`$ lie within about two standard deviations from the fitted value. Note that what the fitted powers $`\gamma `$ and $`u`$ parameterise are the means of the distributions of lengths and radii respectively. That combining these does not give the directly calculated value of $`\eta `$ is not unexpected, and reflects the importance of fluctuations around the mean in the distributions. If the secondary monopole clusters can be associated with localised excitations of the full SU(2) vacuum (‘4–balls’), it would seem that such objects do not have an exactly scale invariant distribution in space–time, so that the number of larger radius objects is somewhat greater than would be expected were this the case. Now it is known that an isolated instanton (even with quantum fluctuations) is associated with a monopole cluster within its core (see earlier studies and references therein) and that the scale invariant semiclassical density of instantons acquires corrections due to quantum fluctuations. These corrections are, however, very large; in SU(2) the spectrum of small instantons (where perturbation theory is reliable) goes as $`N(\rho )d\rho \propto d\rho /\rho \times \rho ^{10/3}`$, where $`\rho `$ is the core size. The scale breaking we have observed for monopole clusters is negligible in comparison. Thus we cannot identify the ‘4–balls’ with instantons. Indeed, the fact that the monopole spectrum is so close to being scale invariant strongly suggests that these secondary clusters have no physical significance. In the next section we shall show explicitly that, in the large volume limit, they do not play any part in the long range confining physics.

### 2.4 Breakdown of ‘monopole dominance’?

We finish this section by asking if there are hints from our cluster analysis that ‘monopole dominance’ might be breaking down as we approach the continuum limit. This question is motivated by the observation that the monopoles are identified by a gauge fixing procedure which involves making the bare SU(2) fields as diagonal as possible. Since the theory is renormalisable, the long distance physics increasingly decouples from the fluctuations of the ultraviolet bare fields as we approach the continuum limit. For example, the ultraviolet contribution to the action density is $`O(1/\beta )`$ while the long distance contribution is $`O(e^{-c\beta })`$. Thus as $`a\to 0`$ the maximally Abelian gauge will be overwhelmingly driven by ultraviolet rather than by physical fluctuations. Moreover at the location of the monopoles the Abelian fields are far from unity and so one would expect the SU(2) fields also to be far from unity. Thus the number of monopoles would seem to be constrained by the probability of finding corresponding clumps of SU(2) fields with large plaquette values. This probability depends on the detailed form of the SU(2) lattice action far from the Gaussian minimum and one could easily choose an action where it is completely suppressed and yet which one would expect to be in the usual universality class. None of the above arguments are completely compelling of course.
In the Gaussian approximation, for example, the $`O(1/\beta )`$ ultraviolet fluctuations would not generate any monopoles at all, and in that case there would be no reason to expect any breakdown of monopole dominance. Nonetheless the arguments do suggest that it would be surprising if the long distance physics were to be usefully and simply encoded in the monopole structure (as defined on the smallest ultraviolet scales) all the way to the continuum limit. There are different ways in which monopole dominance could be lost. The most extreme possibility is that as $`a\to 0`$ the fields simply cease to contain monopole clusters that are large enough to disorder large Wilson loops. That this is indeed so has been argued in recent work, where it has been claimed that the exponent $`\gamma `$ in our eqn. 3 (but defined for loops rather than for clusters) increases rapidly with decreasing $`a`$. Of course this would not in itself preclude the existence of a large percolating cluster, as long as this cluster could be decomposed into a large number of small and correlated intersecting loops. Irrespective of this, we also note that the volumes used in that work are very small by the criterion given in eqn. 6. For example, from our scaling relations we would expect to need an $`L\simeq 46`$ lattice at $`\beta =2.6`$ and an $`L\simeq 70`$ lattice at $`\beta =2.7`$ in order to resolve our simple monopole picture, if it still holds at these values of $`\beta `$. This contrasts with the $`L=12`$ and $`L=20`$ lattices actually employed there. So it appears to us that while these claims are certainly interesting, further calculations on much larger lattices are required. Our work suggests a somewhat different form of the breakdown to the one above. We see from eqn. 7 that the ratio of the (total) monopole current residing in the physically irrelevant, smaller clusters to that residing in the large percolating cluster increases rapidly as $`a\to 0`$, as $`1/(\sqrt{K})^{3-\zeta }\propto 1/a^{3-\zeta }`$. This suggests that as $`a\to 0`$ a calculation of Wilson loops will become increasingly dominated by the fluctuating contribution of the unphysical monopoles that are ever denser on physical length scales, and that this will eventually prevent us from extracting a potential or string tension. That is to say: calculations in the maximally Abelian gauge will eventually acquire a similar problem to that which typically afflicts Abelian projections using other gauges. In our case we can overcome this problem by going to a large enough volume that the physically relevant percolating cluster can be simply identified. \[The reason this cannot be done with other typical Abelian gauge fixings is that there the unphysical monopoles are dense on lattice scales making any meaningful separation into clusters impossible.\] We can then extract the string tension using, in our Wilson loop calculation, only this largest monopole cluster. The fact that the length of this cluster scales in physical units, with apparently no significant anomalous dimension, tells us that this calculation will not be drowned in ultraviolet ‘noise’ as we approach the continuum limit. Of course, the fact that we can only do this for volumes that diverge in physical units as $`a\to 0`$ is a symptom of the underlying breakdown of the Abelian projection. The qualitative discussion in the previous paragraph over-estimates the effect of the secondary clusters; for example, the contribution that a cluster of fixed size in lattice units makes to a Wilson loop of a fixed physical size will clearly go to zero as $`a\to 0`$.
So it is useful to ask how Wilson loops are affected by the secondary clusters, and to do so using approximations that underestimate the effect of these smaller clusters. Consider an $`R\times R`$ Wilson loop. A monopole cluster that has an extent $`r`$ that is smaller than $`R`$ will affect it only weakly through higher multipole fields which cannot on their own give rise to an area law decay and a string tension. So we neglect such clusters and consider only those larger than $`R`$. Let us first neglect the observed breaking of scale invariance and simply assume that $`r\propto \sqrt{l}`$ and that $`\gamma =3`$. We then find, by integrating eqn. 3 and using eqn. 7, that the number of secondary clusters with $`r>R`$ is proportional to $`L^4(\sqrt{K})^\zeta /R^4`$. We further assume that such clusters must be within a distance $`\xi `$ from the minimal surface of the Wilson loop, where $`\xi `$ is the screening length, if they are to disorder that loop significantly. The lattice volume this encompasses is the area of the planar loop, $`R^2`$, multiplied by a factor of $`\xi `$ for each of the two orthogonal directions in $`d=4`$. So the probability for this Wilson loop to be disordered decreases with $`R`$ as $`(R^2\xi ^2/L^4)\times (L^4(\sqrt{K})^\zeta /R^4)\propto (\sqrt{K})^\zeta (\xi /R)^2`$. So if we look at a Wilson loop that is of a fixed size in physical units, i.e. $`R/\xi `$ fixed assuming $`\xi `$ scales as a physical quantity, then the influence of the secondary clusters will decrease to zero as $`a\to 0`$ as long as $`\zeta >0`$. If $`\zeta <0`$, however, then we would have to go to Wilson loops that were ever larger in physical units as we approached the continuum limit, in order that the physical contribution from the percolating cluster should not be swamped by the unphysical contribution of the secondary clusters. Of course this calculation uses a scale invariant $`dr/r\times 1/r^4`$ spectrum, whereas, as we have seen, there is significant scale breaking and the actual spectrum is closer to $`dr/r\times 1/r^{3.2}`$. If we redo the above analysis with the latter spectrum we see that we are only guaranteed to preserve this aspect of ‘monopole dominance’ if $`\zeta >0.8`$. As demonstrated in eqn. 7, there is some evidence that $`\zeta >0`$ but it is not at all clear that $`\zeta >0.8`$. All this indicates that even in a calculation that errs on the side of neglecting the effect of the smaller clusters, they nonetheless will most likely dominate the values of Wilson loops on fixed physical length scales. It is only by separating the percolating cluster from the other smaller clusters, and calculating Wilson loops just using that largest cluster, that we can hope to be able to extract the string tension as $`a\to 0`$.

## 3 Monopoles, vortices and the string tension

In this section we begin by describing how we calculate the string tension from an arbitrary set of monopole currents. We then go on to show that even at the smallest lattice spacings, the string tension arises essentially entirely from the largest cluster, as long as we use a sufficiently large volume. We then calculate the string tension for sources that have a charge of $`q=2`$, 3 and 4 times the basic charge, and compare these results to a simple toy model calculation. Finally we discuss the implications of our calculations for the question of whether it is really monopoles or vortices that drive the confining physics.

### 3.1 Monopole Wilson loops

The monopole contribution to the string tension may be estimated using Wilson loops.
If the magnetic flux due to the monopole currents through a surface $`\mathcal{S}`$ spanning the Wilson loop $`\mathcal{C}`$ (by default the minimal one) is $`\mathrm{\Phi }(\mathcal{S})`$, then the charge $`q`$ Wilson loop has value
$$W(\mathcal{C})=\mathrm{exp}[iq\mathrm{\Phi }(\mathcal{S})].$$ (8)
We may obtain the static potential from the rectangular Wilson loops
$$V(r)=\lim_{t\to \infty }V_{\text{eff}}(r,t)\equiv \lim_{t\to \infty }\mathrm{ln}\left[\frac{W(r,t)}{W(r,t+a)}\right].$$ (9)
The string tension, $`K`$, may then be obtained from the long range behaviour of this potential, $`V(r)\sim Kr`$. The string tension may also be found from the Creutz ratios
$$K=\lim_{r\to \infty }K_{\text{eff}}(r)\equiv \lim_{r\to \infty }\mathrm{ln}\left[\frac{W(r+a,r)W(r,r+a)}{W(r,r)W(r+a,r+a)}\right].$$ (10)
Square Creutz ratios at a given $`r`$ are useful because they provide a relatively precise probe for the existence of confining physics on that length scale. In addition Creutz ratios are useful where the quality of the ‘data’ precludes the double limit of the potential fit. This is so particularly when positivity is badly broken as it frequently is for our gauge dependent correlators. The magnetic flux due to the monopole currents is found by solving a set of Maxwell equations with a dual vector potential reflecting the exclusively magnetic source terms. An iterative algorithm being prohibitively slow on $`L=32`$, we utilised a fast Fourier transform method to evaluate an approximate solution as the convolution of the periodic lattice Coulomb propagator and the magnetic current sources (the FFT step is sketched below). The error in this solution was then reduced to an acceptable level by using it as the starting point for the over-relaxed, iterative method. We may use any subset of the monopole currents as the source term to calculate the contribution to the Wilson loops and potential of those currents, provided that they i) are locally conserved and ii) have net zero winding number around the periodic lattice in all directions, e.g.
$$Q_{\mu =4}\equiv \sum _{x,y,z}j_4(x,y,z,t=1)=0.$$ (11)
If we choose complete clusters, then the first condition is always satisfied but the second condition is often not met (even though the winding number for all the clusters together must be zero). In such cases we introduce a ‘fix’ as follows. At random sites in the lattice we introduce a Polyakov-like straight line of magnetic current of corrective charge $`-Q_\mu `$ for each direction, and use these as sources for a dual vector potential. Such lines represent static monopoles and a random gas of these can lead to a string tension. This introduces a systematic error to the monopole string tension that we need to estimate. We do so by placing the same corrective loop on an otherwise empty lattice, along with a second loop of charge $`+Q_\mu `$ at another random site. From this new ensemble we calculate the string tension from Creutz ratios. One half of this is a crude estimate of the bias introduced in correcting the original configurations, and this is quoted as a second error on our string tension values, as appropriate.

### 3.2 The largest cluster

In our earlier work we observed that at $`\beta =2.3`$, $`L\geq 10`$ and at $`\beta =2.4`$, $`L\geq 14`$ the $`q=1`$ monopole string tension was produced almost entirely by the largest cluster, and the other clusters had a string tension near zero.
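Before proceeding, the FFT step of Sec. 3.1 is worth making explicit. On a periodic lattice the Coulomb propagator is diagonal in momentum space, so the convolution reduces to a forward transform, a division by the lattice momentum squared, and an inverse transform. A minimal sketch for the scalar case (illustrative only; the dual vector potential components, and the subsequent over-relaxed refinement, are handled analogously but omitted here):

```python
import numpy as np

def periodic_coulomb_solve(source):
    """Solve the lattice Poisson equation  -Delta phi = source  on a
    periodic 4d lattice by convolution with the Coulomb propagator.
    The source must sum to zero (guaranteed here by conditions i) and ii))."""
    k = np.meshgrid(*[np.fft.fftfreq(n) for n in source.shape], indexing="ij")
    # Eigenvalues of minus the lattice Laplacian: sum_mu 4 sin^2(pi k_mu).
    k2 = sum(4.0 * np.sin(np.pi * km) ** 2 for km in k)
    src = np.fft.fftn(source)
    k2[0, 0, 0, 0] = 1.0        # regulate the zero mode; src vanishes there
    phi = src / k2
    phi[0, 0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(phi))
```

Wilson loops, and from them the Creutz ratios of eqn. 10, then follow from the fluxes computed with the refined solution.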
By contrast, at $`\beta =2.5`$, $`L=16`$ the situation was more confused; the smaller, power law clusters still had a very low string tension, but that of the largest cluster alone was substantially less than the full monopole string tension. This suggested some kind of constructive correlation between the two sets of clusters. In our new calculation on an $`L=20`$ lattice at $`\beta =2.5`$, we still find a situation that is confused, although somewhat less so than on the $`L=16`$ lattice, while on the $`L=32`$ lattice at $`\beta =2.5115`$ the clear picture seen at $`\beta =2.3`$ re-emerges, with nearly all the string tension being produced by the largest cluster, and the remaining clusters having a negligible contribution. To illustrate this we display in Figure 3 the effective string tensions as a function of $`r`$ for the lattices at $`\beta =2.5`$ and $`\beta =2.5115`$. The confused rôles of the clusters on finer lattices is thus a finite volume effect and does not represent a breakdown of the monopole picture as we near the continuum limit. Due to the differing scaling relations for the lengths of the two largest clusters, it is not enough to maintain a constant lattice volume in physical units to reproduce the physics as we reduce the lattice spacing. Rather the lattice must actually become larger even in physical units, as discussed in subsection 2.2. The string tension arises from ‘disordering’ — i.e. switches in sign — of the Wilson loop by the monopoles. A monopole that is sufficiently close to a large Wilson loop will multiply the loop by $`\mathrm{exp}[iq\pi ]`$, which would naïvely suggest that even-charged loops are not disordered and have no string tension. In a screened monopole plasma, however, as the monopole is moved away from the loop, the flux falls and the possibility for disorder and a string tension exists. \[This will also occur without screening, but only when the monopole is a distance away from the Wilson loop that is comparable to the size of the loop.\] Clearly the exact value of the string tension will depend upon the details of the screening mechanism, especially as we increase $`q`$. This can be calculated in the usual saddle point approximation where one finds that the string tension is proportional to $`q`$. One can obtain a crude model estimate with much less effort, and this we do in the next subsection. Returning to our lattice calculations, we list in Table 2 the monopole string tensions that we obtain using charge $`q`$ Wilson loops at $`\beta =2.5115`$ on the $`L=32`$ lattice. We see that they are indeed consistent with a scaling relation $`R(q)=q`$, at least up to $`q=4`$.

### 3.3 A simple model

It is useful to consider here a simple model for the disordering of Abelian Wilson loops of various charges by monopoles. We consider only static monopoles in $`d=4`$, with a mean field type of screening, assuming that the macroscopic, exponential fall-off in the flux with screening length $`\xi `$ could be applied on the microscopic scale also. For numerical reasons we also impose a cut-off: beyond $`N`$ screening lengths the flux is set exactly to zero.
The magnetic field is thus
$$B(d)=\begin{cases}\frac{1}{2d^2}e^{-d/\xi }&d\le N\xi \\ 0&d>N\xi .\end{cases}$$ (12)
The flux from a monopole a distance $`z\le N\xi `$ above a large (spacelike) Wilson loop through that loop is
$$\mathrm{\Phi }(N,z,\xi )=\pi \int _{z/N\xi }^1dy\,\mathrm{exp}\left(-\frac{z}{y\xi }\right).$$ (13)
Considering a slab of monopoles and antimonopoles all a distance $`z`$ above the loop (and similarly below), the charge $`q`$ Wilson loop gives a string tension
$$\delta K(N,z,q)\propto \left(1-\mathrm{cos}\left[q\mathrm{\Phi }(N,z,\xi )\right]\right).$$ (14)
Integrating over all $`|z|\le N\xi `$, the ratio of string tensions calculated using charge $`q`$ and charge $`q=1`$ Wilson loops is
$$R(N,q)\equiv \frac{K(N,q)}{K(N,q=1)}=\frac{\int _0^Nda\left(1-\mathrm{cos}\left[q\pi \int _{a/N}^1dy\,e^{-a/y}\right]\right)}{\int _0^Nda\left(1-\mathrm{cos}\left[\pi \int _{a/N}^1dy\,e^{-a/y}\right]\right)}$$ (15)
for this static monopole assumption. This may be evaluated numerically, and extrapolated as $`N\to \infty `$, where there is a well-defined limit, $`R(q)`$ (a numerical sketch follows below). The results for small $`q`$ are shown in Table 3, where the error on $`R(q)`$ reflects the extrapolation uncertainty. Comparing these numbers to the actual ratio of string tensions, we find this simplistic model works remarkably well for $`q=2`$, but becomes less reliable as we increase $`q`$. This no doubt reflects the increasing importance of the neglected fluctuations of the flux away from the mean screened values.

### 3.4 Monopoles or vortices?

The fact that the Abelian fields that one extracts in the maximally Abelian gauge, and their corresponding monopoles, successfully reproduce the SU(2) fundamental string tension, provides some evidence for the dual superconductor model of confinement. As we remarked in the introduction, however, an attractive alternative picture exists, based on vortex condensation, and one has comparable evidence for that picture, obtained by going to the maximal centre gauge and calculating Wilson loops using the singular vortices obtained after centre projection. Since the Abelian projected fields seem to contain the full string tension, it is reasonable to assume that they encode all the significant confining fluctuations in the SU(2) fields, even if these are vortices. How would one expect a vortex to be encoded in the Abelian fields? And how can we test for their presence? Recall that the kind of vortex we are interested in has a smooth core and flips the sign of any Wilson loop that it threads. Consider now a space-like Wilson loop in some time-slice of our Abelian projected lattice field. We observe that it will flip its sign if threaded by a loop of magnetic flux whose core contains a total flux equal to $`\pi `$. If the core size is not arbitrarily large, so that a (large enough) Wilson loop has negligible probability to overlap with the actual core, then a condensate of such fluxes will lead to linear confinement. Since the original SU(2) vortex has a smooth core, the simplest expectation is that this flux, if it reflects the vortex, should not have a singular monopole source; rather it should be a closed loop of magnetic flux. If its length is much larger than the size of the Wilson loop, it can easily thread the loop an odd number of times and can disorder it. So the natural way for a Z(2) vortex to be encoded in the Abelian projected fields is as a closed loop of magnetic flux, in roughly the same position, and with a smooth core of roughly the same size.
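As a brief aside to subsection 3.3: the $`N\to \infty `$ extrapolation of eqn. 15 is easy to reproduce numerically. A sketch using SciPy quadrature (our choice of tool, purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

def flux(a, N):
    # pi * integral_{a/N}^{1} dy exp(-a/y), i.e. eqn. 13 with a = z/xi.
    val, _ = quad(lambda y: np.exp(-a / y), a / N, 1.0)
    return np.pi * val

def K_toy(q, N):
    # integral_0^N da (1 - cos[q * Phi(a)]), cf. eqns. 14 and 15.
    val, _ = quad(lambda a: 1.0 - np.cos(q * flux(a, N)), 0.0, N, limit=200)
    return val

def R(q, N=40):
    return K_toy(q, N) / K_toy(1, N)
```

Increasing $`N`$ until $`R(q,N)`$ stabilises then provides the extrapolated $`R(q)`$ of Table 3. We now return to the question of vortices.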
If vortices are present in the SU(2) fields and are encoded as such closed flux loops, we would expect our Abelian fields to contain two kinds of confining fluctuations: singular magnetic monopoles and smooth closed loops of ($`\pi `$ units of) magnetic flux. Since these closed loops of flux are smooth they will be hard to identify individually in the midst of the magnetic fluxes generated by the monopoles. Their presence can however be easily tested for as follows. The flux in the U(1) fields is conserved and so any flux either originates on the monopoles or closes on itself as part of a closed flux loop. The monopoles are easy to identify and their flux can be calculated. So for any Wilson loop, $`\mathcal{C}`$, we can calculate the flux, $`B_{\text{mon}}(\mathcal{C})`$, due to the monopoles and we can subtract it from the total flux, $`B(\mathcal{C})`$, so as to obtain the remaining flux,
$$B_\delta (\mathcal{C})\equiv B(\mathcal{C})-B_{\text{mon}}(\mathcal{C}),$$ (16)
that comes from closed flux loops. The corresponding value of the Wilson loop will be $`e^{iqB_\delta (\mathcal{C})}`$. In this way we can calculate the potential due to the non-monopole flux, and if we find a non-zero string tension this demonstrates the existence of a condensate of such flux loops and provides evidence for corresponding Z(2) vortices. If the flux loops carry $`\pi `$ units of flux, Wilson loops corresponding to sources with an even charge will have zero string tension. We remark here that in U(1) lattice gauge theories, such loops of magnetic flux are not usually discussed as significant degrees of freedom. That is not because they cannot exist but rather that the dynamics is such that they usually play no significant rôle. \[One can always smoothly reduce the usual U(1) action by increasing the core size of such a loop. Ultimately they contribute a non-confining ‘spin wave’ contribution to the interaction.\] The Abelian projected fields, on the other hand, are not generated from some local U(1) action. They may possess any structures that are kinematically allowed. Vortices can also be encoded in the Abelian fields in a more subtle way than the above. This involves long-distance correlations amongst the monopoles. In $`d=2+1`$ suppose that at least some of the monopoles lie along ‘lines’ in such a way that each monopole is followed by an antimonopole (and vice versa) as we follow the line. This will generate an alternating flux of $`\pm \pi `$ along the line. So a Wilson loop threaded by this line will acquire a factor of $`-1`$. Such correlated ensembles can therefore encode the vortices in the original three dimensional SU(2) fields. A similar restriction of monopole current world lines to two dimensional sheets can be envisaged in $`d=3+1`$. In both cases, their presence would be signalled by the fact that they do not disorder Wilson loops corresponding to an even charge (unlike a plasma of monopoles). So if we calculate the string tension due to the monopoles, and if we find a significant suppression of the $`q=2`$ string tension, then this will indicate the significant presence of such correlations and hence of vortices. This latter way of encoding vortices in the Abelian projected fields might seem less natural given the smoothness of the underlying Z(2) vortices. As has been pointed out, however, such correlated monopole structures actually occur in what one usually regards as a standard example of a field theory that demonstrates linear confinement driven by monopole condensation: the Georgi-Glashow model in three dimensions.
This model couples an SU(2) gauge field to a scalar Higgs field in the adjoint representation of the gauge group. The theory has a Higgs phase, and the Higgs field drives the gauge field into a vacuum state which has only U(1) gauge symmetry, save in the cores of extended topological objects. These ’t Hooft–Polyakov monopoles are magnetically charged with respect to the U(1) fields, and give rise to the linear confining potential, at least in the semi-classical approximation which holds good when the charged vector bosons are heavy. As has been pointed out, however, this conventional picture cannot be true on large enough length scales since eventually the presence of the charged massive $`W^\pm `$ fields will lead to the breaking of strings between doubly charged sources (the $`W^\pm `$ possessing twice the fundamental unit of charge). A plasma of monopoles, on the other hand, will predict the linear confinement of such double charges. So it was argued that in this limit it is Z(2) vortices, which do not disorder doubly charged Wilson loops, that drive the confinement. The crossover between the two pictures, it is argued, would occur beyond a certain length scale dictated by the $`W^\pm `$ mass, where the distribution of monopole flux would no longer be purely Coulombic, but would be collimated into structures of lower dimension — essentially strings of alternating monopoles and antimonopoles — that reflect the Z(2) vortices of the vacuum. Of course one cannot carry this argument over in all its details to the case of the pure SU(2) gauge theory. Here there are no explicit Higgs or $`W^\pm `$ fields; any analogous objects would need to be composite. The theory also has only one scale, and so one would not expect an extended intermediate region between the onset of confining behaviour and the collimation of the flux signalling Z(2) disorder. But it does raise the possibility that the Z(2) vortices in the SU(2) fields might be encoded, after Abelian projection, in such correlations amongst the monopoles rather than in separate smooth closed loops of magnetic flux. To probe for the presence of smooth loops of magnetic flux in the Abelian projected fields, we have calculated the ‘difference’ flux, as defined in eqn. 16, and the resulting string tension; and to probe for vortex-like ensembles of monopoles we have calculated the monopole string tension, $`K(q)`$, for various source charges, $`q`$. We start with the latter. In Table 4 we show the $`q=1,2`$ monopole effective string tensions that we have obtained from Creutz ratios on the $`L=32`$ lattice at $`\beta =2.5115`$. We see that for $`q=2`$, just as for $`q=1`$, there are very few transients at small $`r`$, and the extraction of an asymptotic string tension appears to be unambiguous. We have accurate calculations out to a distance of $`r=9a`$ which corresponds to
$$r=9a\simeq \frac{1.6}{\sqrt{K}}$$ (17)
in physical units at this $`\beta `$. Out to this distance there is absolutely no hint of any reduction in the $`q=2`$ effective string tension. It has been pointed out that when the Wilson loop is not much larger than the typical vortex core, it is not completely unnatural to obtain an effective string tension comparable to the one from a monopole plasma.
Here the size is beginning to be large compared to the natural scale of the theory, however, and it is hard not to view the lack of any variation at all in the $`q=2`$ effective string tension as pointing to the absence of the kind of correlations amongst the monopoles that might be encoding Z(2) vortices. The second possibility is that the vortices might be encoded not in correlations amongst the monopoles but rather in closed loops carrying $`\pi `$ units of magnetic flux. Such loops would contribute to $`K(q=1)`$ but not to $`K(q=2)`$. We recall that there has been a calculation of $`K(q)`$ within the full Abelian fields at $`\beta =2.5115`$, and that there it was found that there is a finite $`q=2`$ effective string tension that extends out to at least as far as $`r=9a`$, and that the ratio of the U(1) string tensions is $`K(q=2)/K(q=1)=2.23`$ (5). While this suggests that closed flux loops are not important, these string tensions necessarily include the contribution from monopoles, and it would be useful to have a calculation that excludes the latter. We have therefore calculated the effective string tension using only the flux that comes from closed flux loops, as defined in eqn. 16. The results of this calculation are listed in Table 4 for $`q=1`$. We see that, within small errors, there is asymptotically no string tension from such loops (a potential fit to the Wilson loops yields $`K<0.0025`$). This shows in a direct way that there is no significant condensate of closed loops of flux in the Abelian projected fields. In conclusion, our investigations here have shown no sign of vortices encoded in the Abelian projected fields in either of the two ways that one might plausibly have expected them to be. It is worth stepping back at this point and reflecting upon the tentative nature of the above arguments. Our calculation of the monopole string tension takes each monopole to be a source of a simple Coulombic flux, as obtained by solving Maxwell’s equations. Treating the monopoles as being ‘isolated’ in this way is the obvious starting point if one wishes to ask what is the physics ‘due to monopoles’. But it is no guarantee that such a question makes any sense. Indeed it is only in the Villain model that one has the exact factorisation of Wilson loops into monopole and non-monopole pieces that is needed for this question to be clearly unambiguous. For example, it is not a priori clear that the ensemble of monopoles one obtains in the maximally Abelian gauge is even qualitatively such as one would expect from a generic U(1) action. If it is not, then one must ask what are the fluctuations in the SU(2) fields that determine the nature of the monopole ensemble, and whether these features of the ensemble have a significant effect on the calculated string tension. If they do then the question we are asking, whether the string tension is ‘due to monopoles’, becomes intrinsically ambiguous. Our demonstration that there is no suppression of the $`q=2`$ monopole string tension may be regarded as a first step, but only a first step, towards showing that the monopole ensemble does not possess such features that require additional explanation. One should also mention that the Abelian fields are periodic in $`2\pi `$ (in the sense that the number density of plaquette angles peaks at multiples of $`2\pi `$), which is the requirement for Dirac strings to be invisible, and that they possess a screening length that is characteristic of plasmas.
Equally, if we had found a significant flux loop condensate in the Abelian fields, we would have had to study carefully the (presumably) non-trivial correlations between the monopoles and flux loops in order to determine if there was any sense in claiming that some physics was ‘due to monopoles’. The fact that we have not found any sign of such a flux loop condensate, or of any anomalous features of the monopole plasma, means that we are not yet forced to confront this quite general problem. But this question clearly needs systematic exploration.

## 4 Summary

We have studied the magnetic monopole currents obtained after fixing to the maximally Abelian gauge of SU(2), on lattices that are both large in physical units and have a relatively small lattice spacing. The monopole clusters are found to divide into two clear classes both on the basis of their lengths and their physical properties. The smaller clusters have a distribution of lengths which follows a power law, and the exponent is consistent with 3, as was previously seen on coarser lattices. These clusters are compact objects, and their radii also follow a power law whose exponent we found to be 4.2 (1). This is close to, but a little less than, the scale invariant value of 5, which indicates that if the smaller clusters correspond to objects in the SU(2) vacuum, these objects have a size distribution which yields slightly more large radius objects than would be expected in a purely scale invariant theory. This scale breaking is, however, far too weak to encourage the identification of such objects with the small instantons in the theory. That is not to say that instantons are necessarily irrelevant; the correlations between the monopole currents and the action and topological charge densities (see earlier studies and references therein) indicate some connexion. It would be interesting to measure the correlations separately using the largest cluster, and the remaining, power law clusters. The small clusters do not appear relevant to the long range physics; they produce a zero, or at most very small, contribution to the string tension. Indeed the string tension is consistent with being produced by the largest cluster alone. The fact that there should be a large percolating monopole cluster associated with the long-distance physics is an old idea (see the early literature). The properties that we find for this cluster, however, are certainly not those associated with naïve percolation. In particular, as we approach the continuum limit the density of monopoles belonging to this cluster goes to zero. And indeed the fraction of the total monopole current that arises from this largest cluster also appears to go to zero. This is because this single very large cluster seems to percolate on physical and not on lattice length scales, while the physically unimportant secondary clusters have an approximately constant density in lattice units. All this reproduces the properties that we previously obtained on coarser lattices, but which seemed to be lost when going to finer lattice spacings, albeit on volumes of a smaller physical extent. This study demonstrates that the breakdown was a finite volume effect, rather than a failure of the monopole picture in the weak coupling limit. The volume at which the picture was restored was as predicted by the scaling relations derived from the coarser lattices. The fact that one has to go to volumes that are ever larger as $`a\to 0`$ can be interpreted as a breakdown of the Abelian projection.
As we remarked, something like this is not unexpected: as $`a\to 0`$ the Abelian projection will presumably be increasingly driven by the irrelevant ultraviolet fluctuations of the SU(2) link matrices. This leads to an increasing fraction of the monopole current – that belonging to the smaller clusters – containing no physics, and this contributes an increasing background ‘noise’ to attempts at extracting physical observables as we approach the continuum limit. Fortunately the unphysical gas of monopoles that one obtains by Abelian projection within the maximally Abelian gauge is sufficiently dilute that one can isolate the physically relevant ‘percolating’ cluster, even if the price is that one has to work with ever larger volumes. We also calculated the monopole contribution to Wilson loops of higher charges, and found that the corresponding monopole string tensions appear to be simply proportional to the charge, at least up to $`q=4`$. This is what is predicted by a saddle point treatment of the U(1) lattice gauge theory, and can be seen more simply, if more approximately, within our simplistic charge plasma model. Our main reason for studying these higher-$`q`$ string tensions was to probe for any sign of a condensate of Z(2) vortices in the Abelian projected fields. It might, of course, be that such vortices are simply not encoded in the Abelian fields. It is plausible, however, to infer from the observed monopole and centre dominance that both when we force the SU(2) link matrices to be as Abelian as possible, and when we force them to be as close to $`\pm 1`$ as possible, the resulting Abelian and Z(2) fields capture essentially all the long range confining disorder present in the original SU(2) fields. In the case of Z(2) fields the disorder must be encoded by vortices (there is nothing else). In the Abelian case however the disorder can be carried either by monopoles or by closed loops of ‘magnetic’ flux. We argued that such a closed loop, carrying a net magnetic flux of $`\pi `$ units, provides a plausible way for the Abelian fields to encode the presence of an underlying Z(2) vortex. Our study of the monopole–U(1) ‘difference gas’ showed, however, that there is no significant contribution to confinement from such loops of magnetic flux. An alternative possibility is that the Z(2) disorder is encoded in correlated strings of (anti)monopoles. If such correlations were important, however, they would lead to a significant suppression of the $`q=2`$ string tension, and this we do not observe. Instead we find that the effective monopole string tension satisfies $`K(q=2)=2K(q=1)`$ very accurately out to distances that are quite substantial in physical units. While there is a limit to what one can conclude about Z(2) vortices in a study that focuses solely on the Abelian projected fields, the fact that they do not manifest themselves in any of the ways that one might expect must cast some doubt on their importance in the SU(2) vacuum.

### Acknowledgments

The gauge fixed $`L=32`$ field configurations were crucial to the work of this paper and we are very grateful to Gunnar Bali for making them available to us. Our computations were performed on a UKQCD workstation and this work was supported in part by United Kingdom PPARC grant GR/L22744. The work of M.T. was supported in part by PPARC grant GR/K55752.
## 1 Knowledge Decreases with Time

George Orwell wrote, “To see what is in front of one’s nose requires a constant struggle.” If the universe is dominated by a cosmological constant this will become more true, with a vengeance, as time proceeds. The observable universe is remarkably homogeneous and isotropic on large scales. These properties enable us to parametrize the evolution of the universe’s large scale geometry in terms of one spatially homogeneous function of time, the scale factor $`a(t)`$. The observed expansion of the universe can be understood as the increase in $`a(t)`$. For objects comoving with this expansion, $`a(t)`$ describes how the distance between them changes. The evolution of the scale factor is given by the Einstein field equation appropriate for our very symmetric universe, the Lemaitre-Friedmann-Robertson-Walker (LFRW) equation:
$$\left(\frac{\dot{a}}{a}\right)^2+\frac{k}{a^2}=\frac{8\pi G\rho }{3}.$$ (1)
Here $`G`$ is Newton’s constant, $`\rho `$ is the energy density, and $`k`$ measures the curvature of space. The expansion history $`a(t)`$ depends strongly on: (1) the sign of $`k`$; (2) the dependence of $`\rho `$ on $`a`$, in particular the $`a`$-dependence of the most slowly varying component of the density. For all known equations-of-state, the time derivatives of $`\rho `$ and $`a`$ have opposite signs. If the universe becomes dominated by a constant positive energy density $`\rho _\mathrm{\Lambda }\equiv \frac{\mathrm{\Lambda }}{8\pi G}`$, then the evolution of the metric quickly approaches that associated with a flat ($`k=0`$) de Sitter universe, in which
$$a(t)=a(t_o)e^{\sqrt{\frac{\mathrm{\Lambda }}{3}}(t-t_o)}.$$ (2)
$`\mathrm{\Lambda }`$ is called the cosmological constant, and $`\rho _\mathrm{\Lambda }`$ may be interpreted as the intrinsic energy density associated with the vacuum. From equation (2), a point initially a distance $`d`$ away from an observer in such a universe will be carried away by the cosmic expansion at a velocity
$$\dot{d}=\sqrt{\frac{\mathrm{\Lambda }}{3}}d.$$ (3)
Equating this recession velocity to the speed of light $`c`$, one finds the physical distance to the so-called de Sitter horizon as measured by a network of observers comoving with the expansion. This horizon is a sphere enclosing a region, outside of which no new information can reach the observer at the center, and across which the outward de Sitter expansion carries material. Each observer has such a horizon sphere centered on them. Similarly, any signal we send out today will never reach objects currently located distances further than the horizon distance. Moreover, this distance may be comparable to the current observable region of the universe. If we accept a cosmological constant of the magnitude suggested by the current data, then $`\rho _\mathrm{\Lambda }\simeq 6\times 10^{-30}`$ gm/cm<sup>3</sup> and the distance to the horizon is approximately $`R_H\simeq 1.7\times 10^{26}\mathrm{m}\simeq 18`$ billion light years. While the effects of the de Sitter horizon are not yet directly discernible, this result suggests that they will be seen on a time-scale comparable to the present age of the universe. As objects approach the horizon, the time (as measured by the clocks of the comoving observers) between the emission of light and its reception on Earth grows exponentially. As the light travels from its source to the observer, its wavelength is stretched in proportion to the growth in $`a(t)`$.
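The horizon scale quoted above follows from eqn. 3 and the vacuum density alone; a minimal sketch in SI units (our own check, with rounded constants):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
rho_vac = 6e-27      # kg / m^3, i.e. 6e-30 gm / cm^3

Lam = 8 * math.pi * G * rho_vac       # Lambda in these units, s^-2
H = math.sqrt(Lam / 3)                # de Sitter expansion rate, s^-1
R_H = c / H                           # horizon distance, m
print(f"{R_H:.2e} m  =  {R_H / 9.461e15 / 1e9:.0f} billion light years")
```

This reproduces the quoted $`R_H`$ to within the rounding of the input density; the associated e-folding time $`\sqrt{3/\mathrm{\Lambda }}`$, which controls the disappearance times discussed below, is about 17 billion years.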
Objects approaching the horizon therefore appear exponentially redshifted. Finally, their apparent brightness declines exponentially, so that the distance of the objects inferred by an observer increases exponentially. While it strictly takes an infinite amount of time for the observer to completely lose causal contact with these receding objects, distant stars, galaxies, and all radiation backgrounds from the Big Bang will effectively “blink” out of existence in a finite time – as their signals redshift, the time scale for detecting these signals becomes comparable to the age of the universe, as we describe below. Eventually all objects not decoupled from the background expansion, i.e. those objects not bound to the local supercluster, will disappear in this fashion. The time-scale for this disappearance is surprisingly short. We can estimate it by taking a radius of $`R_{SC}=10`$ Megaparsecs (about $`3\times 10^7`$ light years, $`3\times 10^{22}`$ m) as the extent of the local supercluster of galaxies – the largest observed structure of which we are a part. Objects further than this distance now will reach an apparent distance $`R_H`$ in a time given by
$$\frac{R_H}{R_{SC}}\approx \frac{1.7\times 10^{26}\mathrm{m}}{3\times 10^{22}\mathrm{m}}\approx 5\times 10^3=\mathrm{exp}\left[\sqrt{\frac{\mathrm{\Lambda }}{3}}t\right].$$ (4)
Thus, in roughly 150 billion years light from all objects outside our local supercluster will have redshifted by more than a factor of $`5000`$, with each successive 150 billion years bringing an equal redshift factor. In a little more than two trillion years, all extra-supercluster objects will have redshifted by a factor of more than $`10^{53}`$. Even for the highest energy gamma rays, a redshift of $`10^{53}`$ stretches their wavelength to greater than the physical diameter of the horizon. (There is no contradiction here. From the point of view of a comoving observer, the horizon appears infinitely far away. Infinitely large redshift means that objects possessing such redshifts will have expanded infinitely far away by the time their light arrives at the observer.) The resolution time for such radiation will exceed the physical age of the universe. This time-scale is remarkably short, at least compared to the times we shall shortly discuss. It implies that when the universe is less than two-hundred times its present age, comparable to the lifetime of very low mass stars, any remaining intelligent life will no longer be able to obtain new empirical data on the state of large scale structure on scales we can now observe. Moreover, if today $`\mathrm{\Lambda }`$ contributes $`70\%`$ of the total energy density of a flat ($`k=0`$) universe, then the universe became $`\mathrm{\Lambda }`$-dominated at about 1/2 its present age. The “in principle” observable region of the Universe has been shrinking ever since. This loss of content of the observable universe has not yet become detectable, but it soon will. Objects more distant than the de Sitter horizon now will forever remain unobservable. On the bright side for astronomers, funding priorities for cosmological observations will become exponentially more important as time goes on.

## 2 The Recoverable Energy Content of the Observable Universe

As we shall discuss, it will be crucial for the continued existence of life for the recoverable energy in the universe to be maximized.
If the universe is dominated by a cosmological constant, then although the volume of the universe may be infinite, the amount of energy available to any civilization, like the amount of information, is limited to at most what is currently observable, and so is finite. But what if the cosmological constant is instead zero, or time varying, so that it does not ultimately dominate the energy density of the Universe? Suppose that at very late times in the history of the universe, the dominant form of energy density $`\rho _{dom}`$ scales with the expansion as $`a^{-n_{dom}}`$, with $`n_{dom}>0`$ (if $`n_{dom}=0`$, then the universe is cosmological constant-dominated). Equation (1) can then be solved for the evolution of the scale factor — $`a\propto t^{2/n_{dom}}`$. If $`n_{dom}<2`$, then the expansion is accelerating and, as in the case of a cosmological constant dominated universe, one is forever limited to the energy and information content of a finite subvolume of the universe. If on the other hand $`n_{dom}\geq 2`$, then the total energy that can eventually be contained within the causal horizon may be infinite. Knowing that there are infinite energy reserves ultimately containable within the (ever growing) causal horizon is not enough. One must be able to recover the energy to use it! Can a single civilization recover an infinite amount of energy given an infinite amount of time in an expanding universe? The answer, as we now show, appears to be no. Suppose that intelligent life-forms in the universe seeking to fuel their civilization construct machines to prospect and mine the universe for energy. The energy source they seek to collect may or may not be the dominant energy density of the universe, so its energy density $`\rho _{coll}`$ can scale as $`a^{-n_{coll}}`$, with $`n_{coll}\geq n_{dom}`$. To compete with the decreasing energy density, the number $`N`$ of such machines may be increased, so at some late time in history let $`N\propto t^b`$. The mass $`M`$ of each machine may also be changed, so that $`M\propto t^c`$. The total collected energy will therefore depend on the efficacy $`\mathcal{E}`$ of each machine, the physical volume per unit time per unit machine mass from which the machine is able to extract energy. Suppose this scales as $`t^d`$ at late times. We allow all the energy recovered to be funneled into the construction of mining machines, and ignore the ongoing energy expenditures to run the machines. Clearly, this is overly optimistic. However, we will find insurmountable difficulties even ignoring this inevitable energy sink. Since $`\rho _{coll}\propto a^{-n_{coll}}\propto t^{-2n_{coll}/n_{dom}}`$, the most optimistic rate of energy recovery is therefore
$$\mathrm{\Phi }=NM\mathcal{E}\rho \propto t^{b+c+d-2n_{coll}/n_{dom}},$$ (5)
while the rate of growth of the total mass of the machines is
$$\frac{d}{dt}\left(NM\right)\propto (b+c)t^{b+c-1}.$$ (6)
Since the total machine mass can ultimately grow no faster than the total recovered energy, we must have either
$$d\geq 2\frac{n_{coll}}{n_{dom}}-1\qquad \mathrm{or}\qquad b+c<0$$ (7)
to be able to maintain indefinitely this rate of energy recovery. If $`d\geq 2\frac{n_{coll}}{n_{dom}}-1`$, then an infinite amount of energy can be collected. However, if $`d<2n_{coll}/n_{dom}-1`$, so that $`b+c<0`$, then $`\mathrm{\Phi }\propto t^p`$, with $`p<-1`$, and the total recovered energy will be finite. The crucial question is therefore: how fast can the efficacy $`\mathcal{E}`$ grow? The answer depends on the type of energy density that one is collecting (a toy check of this bookkeeping is sketched below).

### 2.1 Prospecting for Matter

First, let us consider prospecting for non-relativistic matter ($`n_{coll}=3`$).
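Before specialising, the bookkeeping of eqns. 5–7 can be made concrete in a few lines; this is a toy check only, and the exponent values in the example are hypothetical inputs chosen to exercise the inequality:

```python
def sustainable(b, c, d, n_coll, n_dom):
    """Can machine mass N*M ~ t^(b+c) keep growing on the collected power
    Phi ~ t^(b + c + d - 2*n_coll/n_dom)?  (cf. eqns. 5-7)"""
    return d >= 2 * n_coll / n_dom - 1 or b + c < 0

# Example: a growing fleet of fixed-efficacy collectors of radiation
# (n_coll = 4) in a matter-dominated universe (n_dom = 3):
print(sustainable(b=1, c=0, d=0, n_coll=4, n_dom=3))   # False, since d < 5/3
```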
### 2.1 Prospecting for Matter

First, let us consider prospecting for non-relativistic matter ($`n_{coll}=3`$). Because the matter is effectively at rest, the prospector must bring the matter into the system. If the prospector makes use only of short range forces (those which fall off faster than the inverse square of the distance from the machine), then the prospected volume per unit mass per unit time will saturate, $`d\leq 0`$. The total recovered energy will be finite. The prospecting machine would therefore need to use a long range force to continuously increase its sphere of influence as the universe expands. The available long range forces (gravity and electromagnetism) fall off as the inverse square of the distance, but grow linearly with the mass (or charge) of the machine. Using gravity is the more optimistic option, since the Coulomb force can be screened by negative charges. We therefore consider a massive prospecting machine. Particles at rest with respect to the comoving expansion, if sufficiently close to such an object, will fall towards it.

Simple arguments based on the growth of structure imply that the volume of the sphere of influence of our mining machine cannot grow as fast as $`t`$ in an ever expanding universe. Indeed in an ever-expanding universe all objects have a finite ultimate sphere of gravitational influence. Consider a region that has a density $`\rho +\delta \rho `$ which exceeds the mean density $`\rho `$ of the universe. If the region is sufficiently large, gravity will cause the region to expand somewhat more slowly than the average. The over-density $`\delta \rho /\rho `$ of the region compared to the mean will increase. Once $`\delta \rho /\rho `$ approaches one, the region will decouple from the background expansion, grow slightly and then collapse. Because there is a uniform background density of material, the gravitational effect of any local mass distribution becomes negligible as one goes to larger volumes – all objects are gravitationally influenced only by larger mass overdensities. For $`n_{dom}\neq 3`$ (e.g. curvature, radiation, or cosmological constant dominated), expansion eventually wins out over collapse on large scales, and structure formation ceases; the gravitationally accessible mass for our “machine” is therefore finite. Only in a matter-dominated ($`n_{dom}=3`$) flat ($`k=0`$) universe does structure continue to grow hierarchically. We do not appear to live in such a universe. Nevertheless, even in this case the gravitationally accessible mass appears to be finite, though the ultimate result of large scale structure formation would depend upon the spectrum of primordial density perturbations.

Primordial density perturbations could be absent on large scales, so that ever larger structures do not form. In this case the accessible $`n_{coll}=3`$ energy contained within the collapsed perturbations is clearly finite. Alternately, non-zero density fluctuations could continue to come inside the horizon indefinitely. In this case, structures on ever larger scales will continue to form. As described above, after entering the horizon, fluctuations will grow in size until $`\delta \rho /\rho \sim 1`$. At this point they will decouple from the expansion and soon begin to recollapse. For the structures we currently observe, such as galaxies or clusters of galaxies, the recollapse has been (temporarily) halted by the internal pressure of the collapsing matter. This happened long before their average density exceeded the critical density of a black hole of that mass, $`\rho _{BH}=\frac{3c^6}{32\pi G_N^3M^2}`$.
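For intuition about the last point, here is a tiny numeric sketch of the black hole critical density quoted above (standard constants; the function name is ours):

```python
# rho_BH = 3 c^6 / (32 pi G^3 M^2): the mean density at which a mass M
# fits inside its own Schwarzschild radius.
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8       # speed of light, m/s

def rho_bh(M):
    """Mean density (kg/m^3) of a black hole of mass M (kg)."""
    return 3 * c**6 / (32 * math.pi * G**3 * M**2)

M_sun = 2.0e30
print(f"{rho_bh(M_sun):.1e}")        # ~ 1.8e19 kg/m^3 for a solar mass
print(f"{rho_bh(1e9 * M_sun):.1e}")  # ~ 18 kg/m^3: very large structures
                                     # cross the threshold while still dilute
```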
However, once the collapsing structures are sufficiently large and the collapsing matter is sufficiently cold, there is no known source of internal pressure to halt the collapse until after they have exceeded this critical density. Not only is the energy accessible to civilizations finite in such cases, but it all must ultimately end in the singularity inside a black hole. This is identical in detail with the ultimate fate of life in a collapsing universe. Thus, in a flat, matter dominated universe, life either is stranded on isolated islands of finite total energy, or is swept into a large black hole. Hence, it appears that in any cosmological model, only a finite amount of $`n_{coll}=3`$ energy can be recovered by static machines.

### 2.2 Relativistic matter, and mobile mining machines

If the energy to be mined involves radiation, rather than matter, then $`n_{coll}=4`$. This applies to a uniform background of radiation, such as the Cosmic Microwave Background. If the source of such radiation lies instead in discrete concentrations of matter, then the preceding analysis applies, and only a finite total energy can be mined. For the case of an $`n_{coll}=4`$ background, one must perform a different analysis. It is also worth recognizing that we can include here the special case in which we move mining machines to scoop up matter or energy. The case of a static detector intercepting radiation will be equivalent to a moving detector with $`v=c=`$ constant, for example. Imagine collectors of effective area $`A`$ intercepting the energy (with $`A`$ equal to the number of scattering centers times the cross section for scattering of each scattering center), so that

$$\mathrm{\Phi }=\rho NAv.$$ (8)

At late times $`v\propto t^e`$ with $`e\leq 0`$. (For a static detector receiving radiation with $`v=c`$, $`e=0`$.) Note that a moving machine will be slowed down as it sweeps up energy from the background, requiring a continuing input of energy into the machine. As the mass of the machine grows, the energy input required will also increase with time. We will ignore this need to input kinetic energy for the moment, as it is irrelevant for what turns out to be the optimal possibility: a static detector receiving radiation.

At first sight, it seems that the most efficient collectors would be black holes. As a black hole passes through the universe (or as radiation streams by the black hole), it effectively traps all material which falls within the disk spanned by its event horizon. The area of the black hole’s horizon scales as $`M_{BH}^2`$, so $`A\propto t^{2c}`$. Equivalently, we might optimistically consider investing collected photons in new collecting machines which might somehow coherently convert them into material particles. In this case, the cross section for these machines would grow as the square of the number of material particles. (Note that this is the most optimistic assumption one can make.) In either case, we can then consider a rate of energy collection optimistically given by:

$$\frac{dE}{dt}=\gamma E^2t^{-8/n_{dom}}$$ (9)

with $`\gamma =(16\pi G^2\mathcal{F}/c^3)\mathrm{\Omega }_o^{rad}\rho _ct_o^{8/n_{dom}}`$ in flat space ($`k=0`$). Here $`\mathcal{F}`$ is the gravitational focusing factor, which is a number of order $`1`$ that depends on the velocities of the particles being collected. (The curved space result is more complicated, but the final results are unchanged, as we will describe.) Here $`\rho _c`$ is the critical density of the universe. (If $`\rho >\rho _c`$ then $`k>0`$; if $`\rho <\rho _c`$ then $`k<0`$.)
$`\mathrm{\Omega }_o^{rad}\rho _c`$ is the current energy density in radiation; $`t_o`$ is the current age of the universe. The long term behavior of $`E(t)`$ in this case is:

$$\underset{t\rightarrow \infty }{\mathrm{lim}}E(t)=\frac{E_o}{1-\frac{n\gamma E_ot_o^{1-8/n}}{8-n}}.$$ (10)

This is finite so long as the initial mass $`M_o=E_o/c^2`$ is less than a critical value:

$$M_c\simeq \frac{(8-n)c}{16\pi nG^2\mathcal{F}\rho _ct_o\mathrm{\Omega }_o^{rad}}$$ (11)

This critical mass is equal to the mass within the entire visible universe times a factor of order $`1/\mathrm{\Omega }_o^{rad}`$. Since $`\mathrm{\Omega }_o^{rad}\sim 10^{-4}`$, even under this overly optimistic assumption, the radiation energy that such a machine (black hole or otherwise) can collect is finite. (For a black hole, we have the additional problem that the energy collected is stored for a long time, as the black hole lifetime goes as $`M^3`$. Hence the usable power quickly falls in this case, so that the power required to run energy metabolizers could quickly exceed the available supply.)

We can understand this general result as follows. If such a machine, say a black hole, could collect infinite energy, this would imply that the entire visible universe could collapse into such an object. But general arguments based on the growth of large scale structure tell us that only if one starts out with an extra-horizon-sized black hole can this be the case.

Next, it is worth pointing out that not only the total energy but also the number of photons received by any individual scattering center, integrated over the history of the universe, is finite. This can be seen by integrating the photon number density times the relevant scattering cross section, over time, as follows

$$N_{tot}\propto \int _{t_i}^{\infty }n_\gamma \sigma \,dt.$$ (12)

Since $`n_\gamma \propto t^{-6/n_{dom}}`$, and since the total mass of the prospector and thus the number of scattering centers is finite, this integral is finite unless the electromagnetic cross section rises steeply with decreasing energy. However, as all such cross sections approach a constant at low energy, the number of photons collected is therefore finite. We shall return to this issue later in this paper.

Finally, we note that in the case of a cosmological constant-dominated universe, Gibbons-Hawking radiation exists. One might imagine that this radiation, at a constant temperature related to the horizon size, could provide an energy source to be tapped. However, while it would take work to keep any system at a lower temperature (see below), the energy momentum of this radiation is that appropriate to a cosmological term and not a standard radiation bath, and thus it cannot be extracted for useful work without tapping the vacuum energy itself.
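To illustrate the saturation implied by eq. (10), here is a crude forward-Euler integration of eq. (9) in arbitrary units (our own sketch, with $`t_o=1`$ and all prefactors folded into $`\gamma `$):

```python
# Integrate dE/dt = gamma * E^2 * t**(-8/n) and compare with the
# analytic late-time limit of eq. (10) (arbitrary units, t0 = 1).
def collect(E0, gamma, n, t_end=1e4, steps=1_000_000):
    E, t = E0, 1.0
    dt = (t_end - 1.0) / steps
    for _ in range(steps):
        E += gamma * E**2 * t**(-8.0 / n) * dt
        t += dt
    return E

n, gamma, E0 = 3.0, 0.1, 1.0
E_limit = E0 / (1.0 - n * gamma * E0 / (8.0 - n))
print(collect(E0, gamma, n), E_limit)   # both ~ 1.064: the haul saturates
```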
### 2.3 Extended sources of energy

For $`n_{coll}<3`$, recoverable energy sources are infinitely extended objects (cosmic strings have $`n_{coll}=2`$, domain walls give $`n_{coll}=1`$) which do not fall freely into any localized static machine, thus once again $`d<1`$, and the total collectible energy is finite. One caveat to this argument is that we have assumed that the energy density to be cannibalized is, on average, uniformly distributed throughout space, so that general scaling relations for energy density are appropriate. An exception to that assumption is any topological defect such as cosmic strings or domain walls, in which the number density redshifts as $`a^{-3}`$; however, the linear/surface energy density of the defect remains constant, so that $`\rho `$ scales as $`a^{-2}`$ or $`a^{-1}`$ respectively.

Could the energy in such defects be cannibalized? The problem is that the rate at which one can extract energy from the strings (or walls) is finite (at any given time there is only a finite amount of string in the observable universe) and one cannot continue extracting the energy indefinitely. Why? Because whatever strategy one develops for mining the string, the universe can, and will, emulate. Consider cosmic strings. If they are unstable, then their energy density will eventually decline exponentially. If they are (topologically) stable, then the only way to mine them is by nucleating either monopole-antimonopole pairs or black-hole pairs along their length. However, the universe will also avail itself of precisely the same strategy. In fact, no matter what, black hole pairs will eventually nucleate on the strings and consume them. The length of string in the observable universe is growing at most as a power of time, whereas at long enough time (longer than the characteristic time for a black hole pair to nucleate on a string) the rate at which black hole pairs are eating the string becomes exponential. The total length of string which you can eventually mine may be extremely long, but it must ultimately be finite. Could the rate of black-hole pair nucleation along the string itself be a rapidly decreasing function of time? Only if the gravitational “constant” were changing appropriately – a possibility perhaps in some theories of gravity, but hardly a good bet for the ultimate success of life.

On an optimistic note, while we argue that only finite energy resources are available, it is worth noting that in all expanding cosmologies, the actual amount is very large indeed, allowing life-forms with metabolisms equivalent to our own to exist, in principle, for times in excess of $`10^{50}`$ years. Other issues, including proton decay, for example, may become relevant before an energy crisis arises. Nevertheless, we next address the question of whether, even with finite energy resources, life might, in principle, be eternal.

## 3 Living with Finite Energy in an Ever-Cooling Universe

It was Dyson who first seriously addressed the question of the ultimate fate of life in an open universe. Having assumed that the supply of energy ultimately available to life would be finite (as we have shown above always to be the case), he realized that life will be forced eventually to go on an ever stricter diet to avoid consuming all the available energy. The first question he identified is whether consciousness is associated with a specific matter content, or rather with some particular structural basis. If the former, then life would need to be maintained at its current temperature forever, and could not be sustained indefinitely with finite resources. If however consciousness could evolve into whatever material embodiment best suited its purposes at that time, “then a quantitative discussion of the future of life in the \[expanding\] universe becomes possible”. We will assume here, for the sake of argument, that it is structure which is essential; we will also assume that the embodiment of that structure must be material.
Dyson assumed a scaling law that is independent of the particular embodiment that life might find for itself, as follows:

Dyson’s “Biological Scaling Hypothesis (DBSH): If we copy a living creature, quantum state by quantum state, so that the Hamiltonian

$$H_c=\lambda UHU^{-1}$$ (13)

(where $`H`$ is the Hamiltonian of the creature, $`U`$ is a unitary operator, and $`\lambda `$ is a positive scaling factor), and if the environment of the creature is similarly copied so that the temperatures of the environments of the creature and the copy are respectively $`T`$ and $`\lambda T`$, then the copy is alive, subjectively identical to the original creature, with all its vital functions reduced in speed by the same factor $`\lambda `$.”

As Dyson pointed out, the structure of the Schrodinger equation makes the form of this scaling hypothesis plausible. We shall adopt the DBSH here and comment later on possible violations.

The first consequence of the DBSH explored by Dyson is that the appropriate measure of time as experienced by a living creature is not physical (i.e. proper) time, $`t`$, but the “subjective time”

$$u(t)=f\int _0^tT(t^{\prime })\,dt^{\prime },$$ (14)

where $`T(t)`$ is the temperature of the creature and $`f`$ is a scale factor with units of $`(\mathrm{K}\,\mathrm{sec})^{-1}`$ which is introduced to make $`u`$ dimensionless. Dyson suggests $`f\sim (300\,\mathrm{K}\,\mathrm{sec})^{-1}`$ to reflect that humans operate at approximately $`300\,\mathrm{K}`$ and a “moment of consciousness” lasts about one second; the precise value is immaterial, however, only the fact that $`f`$ is essentially constant.

The second consequence of the scaling law is that any creature is characterized by its rate Q of entropy production per unit of subjective time. A human operating at $`300\,\mathrm{K}`$ dissipates about 200 W, therefore

$$Q\sim 10^{23}$$ (15)

Dyson asserts that this is a measure of the complexity of the molecular structures involved in a single act of human awareness. Though one might question whether this entire Q should be associated with the act of awareness, since in the typical human a significant fraction of Q is devoted to intellectually non-essential functions, nevertheless this does suggest that a civilization of conscious beings requires $`\mathrm{log}_2Q>50`$–$`100`$.

A creature/society with a given Q and temperature $`T`$ will convert energy to heat at a minimum rate of

$$m=kfQT^2.$$ (16)

$`m`$ is the minimum metabolic rate in ergs per second of physical (not subjective) time and $`k`$ is Boltzmann’s constant. It is crucial that the scaling hypothesis implies that $`m\propto T^2`$, one factor of $`T`$ coming from the relationship between energy and entropy, the other coming from the assumed (isothermal) temperature dependence of the rate of vital processes.
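The $`Q\sim 10^{23}`$ estimate in eq. (15) is easy to reproduce from eq. (16); a one-line sketch (our own numbers for a human: 200 W at 300 K):

```python
# Q = m / (k f T^2), inverting eq. (16), for a human: m ~ 200 W,
# T ~ 300 K, and f ~ 1/(300 K s) as suggested by Dyson.
k = 1.381e-23            # Boltzmann constant, J/K
f = 1.0 / 300.0          # subjective-time scale factor, 1/(K s)
T, m = 300.0, 200.0      # temperature (K) and dissipated power (W)
print(f"Q ~ {m / (k * f * T**2):.0e}")   # ~ 5e22, i.e. Q ~ 10^23
```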
Suppose that life is free to choose its temperature $`T`$. There must still be a physical mechanism for radiating the creature’s excess heat into the environment. Dyson showed that there is an absolute limit on the rate of disposal of waste heat as electromagnetic radiation

$$I(T)<2.84\frac{N_ee^2}{m_e\mathrm{\hbar }^2c^3}(kT)^3$$ (17)

where $`N_e`$ is the number of electrons (or positrons) at temperature $`T`$. This limit arises from the rate of dipole radiation by the electrons. Any other form of radiation will have a stronger dependence on $`T`$, at least at low $`T`$: massless neutrinos are emitted from matter only by weak interactions, which are mediated by massive intermediate particles; gravitational radiation is coupled only to quadrupoles. Both therefore scale more strongly with temperature at low temperature. All free particles other than photons, gravitons and neutrinos are massive, thus their emission is exponentially suppressed at low temperature. The rate of energy dissipation, $`m`$, must not exceed the power that can be radiated, if the object is not to heat up, implying a fixed lower bound for the temperatures of living systems (here $`\alpha =e^2/\mathrm{\hbar }c`$):

$$T>\frac{Q\mathrm{\hbar }f}{2.84\,N_ek\alpha }\,\frac{m_ec^2}{k}\sim \frac{Q}{N_e}10^{-12}\,\mathrm{K}.$$ (18)

$`N_e`$ cannot be increased without limit, since the supply of energy (and hence mass) is finite. Q however cannot be decreased without limit. (A system of one bit complexity is probably not living; a system of less than one bit complexity is certainly not living.) The slowing down of metabolism described by the DBSH is therefore insufficient to allow life to survive indefinitely.

Dyson goes on to suggest a strategy – hibernation. Life may metabolize intermittently but continue to radiate away waste heat during hibernation. In the active phase, life will be in thermal contact with the radiator at temperature $`T`$. During hibernation, life will be at a lower temperature, so that metabolism is effectively stopped. If a society spends a fraction $`g(t)`$ of its physical time active and a fraction $`[1-g(t)]`$ hibernating, then the total subjective time will be given by

$$u(t)=f\int _0^tg(t^{\prime })T(t^{\prime })\,dt^{\prime }$$ (19)

and the average rate of dissipation of energy is

$$m=kfQgT^2$$ (20)

The constraint (18) is replaced by

$$T(t)>T_{min}\sim \frac{Q}{N_e}\,g(t)\,10^{-12}\,\mathrm{K}.$$ (21)

Life can both keep in step with this limit and have an infinite subjective lifetime. For example, if $`g(t)=\frac{T(t)}{T_o}`$, with $`T_o>(Q/N_e)10^{-12}\,\mathrm{K}`$, and we let $`T(t)`$ scale as $`t^{-p}`$, then the total subjective time is

$$u(t)\propto \int ^tt^{\prime \,-2p}\,dt^{\prime }$$ (22)

which diverges for $`p\leq 1/2`$. The total energy consumed scales as

$$\int ^tm(t^{\prime })\,dt^{\prime }\propto \int ^tt^{\prime \,-3p}\,dt^{\prime }$$ (23)

which is finite for $`p>1/3`$. Thus if $`1/3<p\leq 1/2`$, the total energy consumed is finite and the total subjective time is infinite.
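The window $`1/3<p\leq 1/2`$ can be checked with two elementary power-law integrals; the sketch below (our own, for $`p=0.4`$ and a starting time of 1 in arbitrary units) makes the point numerically:

```python
# Hibernation bookkeeping, eqs. (22)-(23): with T ~ t**(-p) and
# g ~ T/T0, subjective time grows like the integral of t**(-2p),
# while energy use grows like the integral of t**(-3p).
def power_integral(q, t_end, t0=1.0):
    """Integral of t**q from t0 to t_end (for q != -1)."""
    return (t_end**(q + 1) - t0**(q + 1)) / (q + 1)

p = 0.4
for t_end in (1e3, 1e6, 1e9):
    u = power_integral(-2 * p, t_end)   # keeps growing without bound
    E = power_integral(-3 * p, t_end)   # approaches 1/(3p - 1) = 5
    print(f"t = {t_end:.0e}: subjective time ~ {u:7.1f}, energy ~ {E:.3f}")
```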
It is clear that this strategy will not work in a cosmological constant-dominated universe. This is because a cosmological constant dominated universe is permeated by background radiation at a constant temperature $`T_{deS}=\sqrt{\mathrm{\Lambda }/12\pi ^2}`$. A particle detector (such as a radiator for radiating away energy) will register the de Sitter background radiation, and bring the radiator into thermal equilibrium with the background. (Note however, for the reasons mentioned earlier, the energy in the cosmological constant cannot be tapped or converted into useful work if the cosmological constant remains constant.) Therefore $`T_{deS}`$ is the minimum temperature at which life can function. It is then impossible to have both infinite subjective lifetime and consume a finite amount of energy. Life must end, at least in the sense of being forced to have finite integrated subjective time. (Note that one cannot use the de Sitter radiation as a perpetual source of free energy. A cold body will indeed be warmed by the radiation, but it takes more free energy to cool the body than can be extracted.)

In fact, we now argue that this hibernation strategy will fail not only in a cosmological constant dominated universe, but in any ever-expanding universe. In order to implement the hibernation strategy there are two challenges. First, one must construct alarms which must be relied on to awaken the sleeping life. Second, one must recognize that eventually thermal contact with one’s surroundings effectively ends:

1) A standard alarm clock, one which is subject to the DBSH, suffers from the same constraints as those imposed above upon life. This clock must be powered at some level to keep time and it will thus dissipate energy. If it is subject to the DBSH, then there is a minimum temperature at which it can be operated. The alarm clock is a system of some complexity $`Q_{alarm}`$, which as Dyson showed cannot therefore be operated at arbitrarily low temperature. Since $`Q_{alarm}`$ cannot be reduced forever, eventually one cannot operate a standard alarm clock. As we shall show in section 4, even if one could manage to expend energy only to wake up the hibernator, and not to run the alarm clock in the interim, the alarm clock would still eventually exhaust the entire store of energy.

2) The living system is not in thermal equilibrium. As we have shown, the integrated number of CMB photons received over all time is finite. Therefore, after a certain time the probability of detecting another CMB photon, integrated over all of future history, approaches zero. Thus, thermal contact with this background (and all other backgrounds) is lost.

Note also that in any case, the Dyson expression for dipole radiation, assumed above, clearly breaks down at some level, notably when the wavelength of thermal radiation becomes very large compared to the characteristic size of the radiating system. Put another way, the thermal energies will eventually become small compared to the characteristic quantized energy levels of the system, at which point radiation will be suppressed by a factor $`e^{-E_{char}/kT}`$ compared to the estimate of Dyson. Once this occurs, further cooling will be difficult. The only alternative to avoid this is to increase the characteristic size of the system with $`a`$, which presents its own challenges.

Lastly, another problem ultimately presents itself independent of the above roadblocks. Alarm clocks are eventually guaranteed to fail. In the low temperature mode these failures may be statistical or quantum mechanical. If the number of material particles which can be assembled is finite, the catastrophic failure lifetime may be large, but it cannot be made arbitrarily so. In the absence of a sentient being to repair the broken alarm clock, hibernation would continue forever. In fact this argument about broken alarm clocks applies equally well to living beings themselves. Eventually, the probability of a catastrophic failure induced by quantum mechanical fluctuations resulting in a loss of consciousness becomes important. One might hope to avoid this fate by keeping the structures in contact with their surroundings (which can suppress quantum fluctuations such as tunneling). However, hibernation requires precisely the opposite, and moreover, we have seen that such contact gets smaller over time. In any case, for a plethora of reasons, under the DBSH, it appears that consciousness is eventually lost in any eternally expanding universe.

## 4 Beyond the Biological Scaling Hypothesis

Clearly, if consciousness is to persist indefinitely, one must consider moving beyond the DBSH. The DBSH assumes implicitly that only rescalings and no fundamental improvements or alterations can be made in the mechanisms of consciousness. A particular consequence is that the rate of entropy production scales as $`T^2`$. Can one do better?
It may appear that a full answer to this question requires that we understand the mechanisms of consciousness. However, in fact, our above discussions indirectly point to an approach which demonstrates that as long as the mechanism of consciousness is physical, life cannot endure forever.

Let us return momentarily to the question of whether there are non-standard alarm clocks which can be operated at arbitrarily low temperature, with arbitrarily low energy per cycle. This possibility hearkens back to recent results on the thermodynamics of computation, and more importantly, issues of reversible quantum computation. It was long thought that computation is an entropy generating process, and thus a heat generating process. More recently it has been pointed out that as long as (a) one is in contact with a heat bath, and (b) one is willing to compute arbitrarily slowly, then computing itself can be a reversible process. This opens the possibility that if living systems can alter their character so that consciousness can be reduced to computation, one could in principle reduce the amount of entropy, and hence the amount of heat produced per computation, arbitrarily, if one is willing to take arbitrarily long to complete the computation. Thus, metabolism, and the continued existence of consciousness, could violate the DBSH.

There are two problems. First, as we have shown, living things cannot remain in thermal equilibrium with the cosmic background forever, so inevitably the process of computation becomes irreversible. Also, the question of computational reversibility is in some sense irrelevant, since the process of erasing, or resetting, registers inevitably produces entropy. If one simply reshuffled data back and forth between registers, reversibility would be adiabatically possible in principle. However, we have shown that only a finite number of material particles are accessible. Thus any civilization can have only a finite total memory available, and resetting registers is therefore essential for any organism interacting with its environment, or initiating new calculations. While an existence, even nirvana, might be possible without this, we do not believe it is sensible to define this as life. Life therefore cannot proceed reversibly, and organisms cannot continue to metabolize energy computationally into heat at less than essentially $`kT`$ per computation.
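The $`kT`$-per-operation floor invoked here is the Landauer bound, $`kT\mathrm{ln}2`$ per erased bit; a minimal sketch of the resulting waste-heat rate (the erasure rate is an illustrative number of ours, not from the text):

```python
# Landauer bound: erasing one bit dissipates at least k*T*ln(2),
# so R erasures per second radiate at least R*k*T*ln(2) watts.
import math

k = 1.381e-23   # Boltzmann constant, J/K

def landauer_power(erasures_per_sec, T):
    return erasures_per_sec * k * T * math.log(2)

# e.g. 1e20 bit erasures per second at today's CMB temperature (~2.7 K):
print(landauer_power(1e20, 2.7))   # ~ 2.6e-3 W, falling only as T falls
```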
In this case, one must perform a detailed analysis to determine if the energy radiated can continue to cool a system so that its metabolism falls fast enough to allow progressively less energy utilization, leading to a finite integrated total energy usage. We find that the constraints on such radiation even in the most optimistic case require the density of the radiating system to decrease along with the expansion. We do not provide the details of this analysis here because we believe there is a more general argument which establishes that consciousness cannot be eternal.

In order to perform any computation, quantum or classical, at least two states are needed. One can in principle force the computation to proceed in one direction or another, reversibly, by altering adiabatically the external conditions. However, if erasures are performed, or if heat is generated because one is not in perfect equilibrium with the environment, then after the computation one must be in a lower energy state than before the computation, as heat has been radiated. To perform an infinite number of calculations then implies one must have an infinite tower of states. This does not require infinite energy, if the states approach an accumulation point near the ground state. However, no finite system has such a property. (The emission of arbitrarily many massless particles of ever lower energy should not be regarded as adding new states, since such particles cannot be confined in a finite material system.) Hence, no finite system can perform an infinite number of computations. Thus, if consciousness can be reduced to computation, life, at least life which involves more than eternal reshuffling of the same data, cannot be eternal. It may be that this reductionist view of consciousness as computation is incorrect. However, it is hard to imagine a physical basis for consciousness which avoids the scaling relationships we have described.

Finally, it is worth emphasizing another issue that we have only peripherally noted thus far. We have shown that it is impossible to collect more than a finite amount of any quantity that scales as $`1/a^3`$. However, the entropy density of the universe scales in this fashion. Thus, independent of issues of whether there is infinite information in an infinite universe, it is impossible to collect more than a finite amount. Effectively even an infinite universe allows only a finite computational system.

## 5 Conclusions

The picture we have painted here is not optimistic. If, as the current evidence suggests, we live in a cosmological constant dominated universe, the boundaries of empirical knowledge will continue to decrease with time. The universe will become noticeably less observable on a time-scale which is fathomable. Moreover, in such a universe, the days — either literal or metaphorical — are numbered for every civilization. More generally, perhaps surprisingly, we find that eternal sentient material life is implausible in any universe. The eternal expansion which Dyson found so appealing is a chimera.

We can take solace from two facts. The constraints we provide here are ultimate constraints on eternal life which may be of more philosophical than practical interest. The actual time frames of interest, which limit the longevity of civilization on physical grounds, are extremely long, in excess of $`10^{50}`$–$`10^{100}`$ years, depending upon cosmological and biological issues. On such time-scales much more pressing issues, including the death of stars, and the possible ultimate instability of matter, may determine the evolution of life. Next, and perhaps more important, strong gravitational effects on the geometry or topology of the universe might effectively allow life, or information, to propagate across apparent causal boundaries, or otherwise obviate the global spatial constraints we claim here. For example, it might one day be possible to manipulate such effects to artificially create baby-universes via wormholes or black hole formation, or via the collision of monopoles. Then one might hope that in such baby universes conscious life could eventually appear, or that one might be able to move an arbitrarily large amount of information into or out of small or distant regions of the universe. While these are interesting possibilities, at this point they are vastly more speculative than the other possibilities we have discussed here.

## Acknowledgements

We thank C. Fuchs, H. Mathur, J. Peebles, C. Taylor, M. Trodden and T. Vachaspati for discussions and J. Peebles and F. Wilczek for comments on the manuscript. This research is supported in part by the DOE and NSF.
## Figure Captions

Fig. 1 : Diquark breaking component for nucleon-nucleus scattering and two inelastic collisions.

Fig. 2 : DP component for nucleon-nucleus scattering for two inelastic collisions.

Fig. 3 : Baryon yields for $`pPb`$ and $`PbPb`$ collisions at 160 GeV per nucleon calculated from eqs. (1-5) using the values in Table 1 before (dotted line) and after (full line) final state interaction. The dashed lines are the results with final state interaction using the Monte-Carlo results for the conventional DP and string breaking components. The data are from the WA 97 collaboration.

Fig. 4 : Same as in Fig. 3 for the ratios $`\overline{B}/B`$.

## Table

| | $`p`$ | $`\mathrm{\Lambda }`$ | $`\mathrm{\Xi }^{-}`$ | $`\mathrm{\Omega }`$ |
| --- | --- | --- | --- | --- |
| $`dN_{DB}^{\mathrm{\Delta }B}/dy`$ | $`1.87\times 10^{-1}`$ | $`6.22\times 10^{-2}`$ | $`3.26\times 10^{-3}`$ | $`2.38\times 10^{-4}`$ |
| $`dN_{sea}^{\overline{B}}/dy`$ | $`4.36\times 10^{-3}`$ | $`2.44\times 10^{-3}`$ | $`2.92\times 10^{-4}`$ | $`2.91\times 10^{-5}`$ |
| $`dN_{DP}^{\mathrm{\Delta }B}/dy`$ | $`6.90\times 10^{-2}`$ | $`1.40\times 10^{-2}`$ | 0 | 0 |
| $`dN_{string}^{\overline{B}}/dy`$ | $`8.50\times 10^{-3}`$ | $`2.83\times 10^{-3}`$ | $`1.65\times 10^{-4}`$ | $`5.05\times 10^{-6}`$ |
| $`dN_{pp}^{\mathrm{\Delta }B}/dy`$ | $`3.60\times 10^{-2}`$ | $`7.00\times 10^{-3}`$ | 0 | 0 |
| $`dN_{pp}^{\overline{B}}/dy`$ | $`1.70\times 10^{-2}`$ | $`5.65\times 10^{-3}`$ | $`3.30\times 10^{-4}`$ | $`1.01\times 10^{-5}`$ |

Table I : Values of the rapidity densities at $`y^*=0`$ in eqs. (1,3-5). The values in the first four rows are for central $`PbPb`$ collisions ($`\overline{n}_A=178`$, $`\overline{n}=858`$) and those in the last two rows for $`pp`$. Three numbers in the first row and one in the second one have been adjusted to the data (see main text).
# Neutrino Masses and Mixing with Non-Anomalous Abelian Flavor Symmetries

hep-ph/9902293, WIS-99/3/Feb-DPP

Yosef Nir and Yael Shadmi

Department of Particle Physics, Weizmann Institute of Science, Rehovot 76100, Israel

ftnir@clever.weizmann.ac.il, yshadmi@wicc.weizmann.ac.il

The experimental data on atmospheric and solar neutrinos are used to test the framework of non-anomalous Abelian horizontal gauge symmetries with only three light active neutrinos. We assume that the hierarchy in mass-squared splittings is not accidental and that the small breaking parameters are not considerably larger than 0.2. We find that the small angle MSW solution of the solar neutrino problem can only be accommodated if the $`\nu _\mu `$-$`\nu _\tau `$ mass hierarchy depends on the charges of at least three sterile neutrinos. The large angle MSW solution can be accommodated in simpler models if $`\nu _e`$ and $`\nu _\mu `$ form a pseudo-Dirac neutrino, but it is difficult to induce large enough deviation from maximal mixing. The vacuum oscillation solution can be accommodated rather simply. We conclude that it is possible to accommodate the neutrino parameters in the framework of Abelian horizontal symmetries, but it seems that these parameters by themselves will not provide convincing evidence for this framework.

2/99

1. Introduction and Results

Approximate Abelian horizontal symmetries can explain the smallness and the hierarchy in the flavor parameters (fermion masses and mixing angles) in a natural and simple way. One can think of three types of evidence for such symmetries: First, the full theory involves fields that are related to the spontaneous symmetry breaking and to the communication of the breaking to the observable sector. Direct discovery of such particles is, however, very unlikely because constraints from flavor changing neutral current (FCNC) processes and from Landau poles imply that they should be very heavy. Second, the supersymmetric flavor parameters are also determined by the selection rules of the horizontal symmetry [3,4]. (This is likely to be the case if supersymmetry breaking is mediated to the observable sector by Planck-scale interactions; in contrast, gauge mediation would erase the effects of the horizontal symmetry from the sfermion flavor parameters.) The spectrum of supersymmetric particles and, in particular, supersymmetric effects on FCNC and on CP violation could then provide evidence for the horizontal symmetry. Third, it could be that the Yukawa parameters themselves obey simple order of magnitude relations that follow from the horizontal symmetry. In this context, Abelian horizontal symmetries have much more predictive power in the lepton sector than in the quark sector. Neutrino parameters then provide an important input for testing and refining this framework.
As concerns neutrino parameters, recent measurements of the flux of atmospheric neutrinos (AN) suggest the following mass-squared difference and mixing between $`\nu _\mu `$ and $`\nu _\tau `$:

$$\mathrm{\Delta }m_{23}^2\sim 2\times 10^{-3}\,eV^2,\qquad \mathrm{sin}^22\theta _{23}\sim 1.$$

On the other hand, measurements of the solar neutrino (SN) flux can be explained by one of the following three options for the parameters of $`\nu _e\nu _x`$ ($`x=\mu `$ or $`\tau `$) oscillations (for a recent analysis, see ):

$$\begin{array}{ccc}& \mathrm{\Delta }m_{1x}^2\,[eV^2]& \mathrm{sin}^22\theta _{1x}\\ \mathrm{MSW}(\mathrm{SMA})& 5\times 10^{-6}& 6\times 10^{-3}\\ \mathrm{MSW}(\mathrm{LMA})& 2\times 10^{-5}& 0.8\\ \mathrm{VO}& 8\times 10^{-11}& 0.8\end{array}$$

Here MSW refers to matter-enhanced oscillations, VO refers to vacuum oscillations, and SMA (LMA) stand for small (large) mixing angle. Only central values are quoted for the various parameters. Our basic assumption will be that eqs. (1.1) and (1.2) imply that the ratio between the mass splittings is suppressed by the small breaking parameter of an Abelian horizontal symmetry, $`\lambda \sim 0.2`$, while the $`\nu _\mu \nu _\tau `$ mixing angle is not:

$$\begin{array}{ccc}\mathrm{sin}\theta _{23}& \sim & 1,\\ \frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}& \sim & \{\begin{array}{cc}\lambda ^2-\lambda ^4& \text{MSW,}\\ \lambda ^{10}-\lambda ^{12}& \text{VO.}\end{array}\end{array}$$

We note, however, that it is not impossible that, if the solar neutrino problem is a result of the MSW mechanism, the ratio between $`\mathrm{\Delta }m_{\mathrm{SN}}^2`$ and $`\mathrm{\Delta }m_{\mathrm{AN}}^2`$ is accidentally, rather than parametrically, suppressed [10–13]. Then, the analysis of this work and of ref. is irrelevant, and the framework of Abelian horizontal symmetries can accommodate the neutrino parameters in a simple way.
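For orientation, the powers of $`\lambda `$ implied by the measured central values can be read off directly; a short sketch (our own arithmetic, using the numbers in the table above):

```python
# How many powers of lambda separate Delta m^2(SN) from Delta m^2(AN)?
import math

dm2_AN = 2e-3   # eV^2, atmospheric splitting
solutions = {"MSW(SMA)": 5e-6, "MSW(LMA)": 2e-5, "VO": 8e-11}

for lam in (0.2, 0.1):
    for name, dm2_SN in solutions.items():
        n = math.log(dm2_SN / dm2_AN) / math.log(lam)
        print(f"lambda = {lam}: {name}: ratio ~ lambda^{n:.1f}")
# MSW comes out near lambda^2-lambda^4 and VO near lambda^10-lambda^12,
# consistent with the ranges assumed above.
```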
In a previous work we investigated the implications of (1.1) for models where an Abelian horizontal symmetry $`H`$ is broken by a single small parameter. (For recent related work, see [15–32].) This assumption is best-motivated in models where the horizontal symmetry is an anomalous $`U(1)_H`$ gauge symmetry [34–37]. The anomaly is cancelled by the Green-Schwarz mechanism. The contribution of the Fayet-Iliopoulos term to the D-term cancels against the contribution from a VEV of a Standard Model (SM) singlet field $`S`$ with $`H`$-charge that is opposite in sign to $`\mathrm{tr}H`$. (Without loss of generality, we choose the $`H`$-charge of $`S`$ to be $`-1`$). The information about the breaking is communicated to the observable sector (MSSM) at the string scale. The ratio $`\lambda =\langle S\rangle /m_{\mathrm{Pl}}\sim \sqrt{\mathrm{tr}H/192\pi ^2}`$ [39–41] provides the small breaking parameter of $`H`$. The single VEV assumption is also plausible if the horizontal symmetry is discrete. In the single VEV framework, as explained in , it is non-trivial to get large mixing together with a large hierarchy as implied by (1.1). To obtain that one needs to invoke either holomorphic zeros or discrete symmetries, often with a symmetry group that is a direct product of two factors.

The situation is different if the Abelian horizontal symmetry is a non-anomalous gauge symmetry. If supersymmetry is not to be broken at the scale of spontaneous $`H`$-breaking, then $`H`$ should be broken along a D-flat direction. The simplest possibility then is that two scalars, $`S`$ and $`\overline{S}`$, of opposite $`H`$-charges (say, $`\pm 1`$) assume equal VEVs, $`\langle S\rangle =\langle \overline{S}\rangle `$ [42–48,13]. In this work we investigate the implications of (1.1) in the two-VEVs framework. It is straightforward to see that our previous mechanisms are irrelevant in the new framework. First, with discrete symmetries there is no motivation for the two-VEVs scenario. There is also no sense in talking about negative charges. Second, with VEVs of opposite charges there can be no holomorphic zeros. On the other hand, this framework is in some sense less predictive and, consequently, allows new mechanisms to accommodate simultaneous large mixing and mass hierarchy between neutrinos. In particular, this situation can be obtained naturally with a single $`U(1)`$ horizontal symmetry.

As we shall see, if the small-angle MSW solution is found to be valid, it would mean that the light neutrino masses depend crucially on the horizontal charges of (at least three) heavy sterile neutrinos. If, on the other hand, the large-angle MSW solution is confirmed, two possibilities are allowed. One is that the light neutrino masses are again affected by the charges of heavy sterile neutrinos. The second is that the MSW solution corresponds to oscillations between the first and second generation neutrinos, which form a pseudo-Dirac neutrino. In the latter case, there is another constraint on the neutrino parameters, $`\mathrm{sin}2\theta _{12}<0.9`$. This constraint is not simple to satisfy since the mixing in the pseudo-Dirac case is close to maximal. In fact, the deviation from maximal mixing is at most $`𝒪(\lambda ^2)`$. These models are therefore marginally viable. They are only consistent if the $`𝒪(\lambda ^2)`$ correction is enhanced by a factor of about three.

The plan of this paper is as follows. In section 2 we specify our theoretical framework. In Section 3 we argue that, in the framework of Abelian horizontal symmetries, $`\nu _\mu `$ and $`\nu _\tau `$ cannot pair to a pseudo-Dirac neutrino. In section 4 we review previous results in the framework of an anomalous $`U(1)`$ symmetry. In section 5 we analyze in detail the lepton parameters in the framework of non-anomalous Abelian gauge symmetry. (Proofs of some of the statements of this section are given in the appendix.) We conclude in section 6, emphasizing the difficulties that the data on solar and atmospheric neutrinos pose to the framework of Abelian horizontal gauge symmetries.

2. The Theoretical Framework

Our theoretical framework is defined as follows. We consider a low energy effective theory with particle content that is the same as in the Supersymmetric Standard Model. In addition to supersymmetry and to the Standard Model gauge symmetry, there is an approximate $`U(1)_H`$ symmetry that is broken by two small parameters $`\lambda `$ and $`\overline{\lambda }`$ [49,42–48,13]. The two parameters are assumed to be equal in magnitude:

$$\lambda =\overline{\lambda }\simeq 0.2.$$

(The choice of numerical value comes from the quark sector, where the largest small parameter is $`\mathrm{sin}\theta _C=0.22`$.)
To derive selection rules, we attribute to the breaking parameters $`U(1)_H`$ charges:

$$H(\lambda )=+1,\qquad H(\overline{\lambda })=-1.$$

Then, the following selection rule applies: Terms in the superpotential or in the Kahler potential that carry (integer) $`H`$-charge $`n`$ are suppressed by $`\lambda ^{|n|}`$.<sup>1</sup> The same selection rule would apply in a theory with a single breaking parameter and no supersymmetry.

We assume that the active neutrinos (coming from lepton doublet supermultiplets $`L_i`$) are light because of a seesaw mechanism involving heavy sterile neutrinos (coming from singlet supermultiplets $`N_i`$). However, the relations (1.1) do not necessarily depend on the charges of all active and sterile neutrinos. For example, if $`\nu _e`$ is much lighter than all other neutrinos, it does not enter (1.1). Likewise, if all sterile neutrinos may be integrated out at the $`H`$-breaking scale, they do not affect the effective light neutrino mass matrix. We will refer to a model containing $`n_a`$ relevant active neutrinos and $`n_s`$ relevant sterile neutrinos as an $`(n_a,n_s)`$ model.

We will only consider models where all fields carry integer $`H`$-charges (in units of the charge of the breaking parameters). It is possible that some or all of the lepton fields carry half-integer charges. Then there is a residual, unbroken discrete symmetry. Such models can be phenomenologically viable and lead to interesting predictions. We leave the investigation of this class of models to a future publication.

Note that if the horizontal symmetry is a continuous symmetry with a single breaking parameter, as would be the case for an anomalous $`U(1)`$, the selection rule stated above is modified. Superpotential terms that carry negative $`H`$-charge cannot appear, as they would require powers of $`\lambda ^{*}`$, which is forbidden by holomorphy. We refer to these absent terms as “holomorphic zeros”.

The theory is limited in the sense that it cannot predict the exact coefficients of $`𝒪(1)`$ for the various terms. Wherever we use the symbol “$`\sim `$” below we mean to say that the unknown coefficients of $`𝒪(1)`$ are omitted.

RGE effects could enhance the neutrino mixing angle [51–55,13]. The enhancement can take place if $`\mathrm{tan}\beta `$ is large and if the mass ratio between the corresponding neutrinos is not small. These enhancement effects are not important in our framework and we will not take them into account.
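The selection rule lends itself to a one-line implementation; the sketch below (our own helper, with made-up charges for illustration) returns the power of $`\lambda `$ suppressing each entry of a lepton mass matrix:

```python
# Selection rule of section 2: an operator of net H-charge n is
# suppressed by lambda**abs(n).
def texture(H_left, H_right):
    """Powers of lambda in a mass matrix: M_ij ~ lambda**|H_i + H_j|."""
    return [[abs(hi + hj) for hj in H_right] for hi in H_left]

# Illustrative (hypothetical) doublet charges; a Majorana mass matrix
# for the active neutrinos uses the doublet charges on both sides:
H_L = [2, 1, 0]
print(texture(H_L, H_L))   # [[4, 3, 2], [3, 2, 1], [2, 1, 0]]
```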
3. On Pseudo-Dirac Neutrinos

As a first step in our discussion, we would like to make a general comment about pseudo-Dirac neutrinos in the framework of Abelian horizontal symmetries. In many of our examples (here and in ), two of the three active neutrinos pair to form a pseudo-Dirac neutrino. In all of these examples, the parameters of the pseudo-Dirac neutrino (maximal mixing and very small mass splitting) are fitted to solve the SN problem. Since AN observations seem to favor maximal, and not just generic $`𝒪(1)`$, mixing, one may wonder whether we could find a model where, indeed, the mass splitting between the components of the pseudo-Dirac neutrino corresponds to $`\mathrm{\Delta }m_{\mathrm{AN}}^2`$. We argue now that this is impossible.

The argument goes as follows. Let us define $`m_{\mathrm{pD}}`$ to be the mass of the pseudo-Dirac neutrino and $`\delta _{\mathrm{pD}}\ll m_{\mathrm{pD}}`$ to be the mass splitting between its components. An Abelian symmetry cannot give an exact relation between three entries in the mass matrix. (The symmetric structure of $`M_\nu `$ relates pairs of entries, which enables us to find models with a pseudo-Dirac neutrino.) Therefore, the mass-squared splitting between the pseudo-Dirac neutrino and the other mass eigenstate is at least $`𝒪(m_{\mathrm{pD}}^2)`$.<sup>2</sup> It is of course possible to fine tune the $`𝒪(1)`$ coefficients to get a stronger degeneracy. On the other hand, the mass-squared splitting between the components of the pseudo-Dirac neutrino is $`𝒪(\delta _{\mathrm{pD}}m_{\mathrm{pD}})`$. Since $`\delta _{\mathrm{pD}}\ll m_{\mathrm{pD}}`$, the mass-squared splitting between the components of the pseudo-Dirac neutrino is much smaller than the mass-squared splitting between the pseudo-Dirac neutrino and the third mass eigenstate. Therefore, the former corresponds to $`\mathrm{\Delta }m_{\mathrm{SN}}^2`$ and the latter to $`\mathrm{\Delta }m_{\mathrm{AN}}^2`$.<sup>3</sup> If one considers only the AN problem, then it is of course possible that $`\nu _\mu `$ and $`\nu _\tau `$ form a pseudo-Dirac neutrino. It is worth emphasizing that the discussion of this section applies to any Abelian symmetry, continuous or discrete.

4. Continuous Symmetry with a Single Breaking Parameter

Before moving on to our discussion of continuous symmetries with two breaking parameters, let us recall some results obtained in the single VEV framework. The main obstacle in obtaining, within the single-VEV framework, large mixing between hierarchically separated neutrinos, can be explained as follows. Consider a single $`U(1)_H`$ symmetry. Large mixing between, say, $`\nu _2`$ and $`\nu _3`$ can only be obtained in two cases: either the $`H`$-charges of the lepton doublets are equal, $`H(L_2)=H(L_3)`$, or they are opposite, $`H(L_2)=-H(L_3)`$. In the first case the mixing is $`𝒪(1)`$ but the masses are of the same order of magnitude, $`m(\nu _2)\sim m(\nu _3)`$. In the second case, to a good approximation, the mixing is maximal, $`\mathrm{sin}^22\theta _{23}=1`$ and the masses are equal, $`m(\nu _2)=m(\nu _3)`$. (This is the case of a pseudo-Dirac neutrino.) In either case, there is no mass hierarchy.

If the symmetry is continuous but more complicated, say $`U(1)_1\times U(1)_2`$ with the respective breaking parameters of order $`\lambda ^m`$ and $`\lambda ^n`$, then one can still define an effective $`H`$-charge,

$$H_{\mathrm{eff}}=mH_1+nH_2.$$

Large mixing can only be obtained for $`H_{\mathrm{eff}}(L_2)=\pm H_{\mathrm{eff}}(L_3)`$, so that, again, there is no mass hierarchy. This conclusion can only be evaded if holomorphic zeros appear in the neutrino mass matrix such that one of the two mass eigenvalues vanishes. In other words, in the single VEV framework, when considering $`\nu _2`$ and $`\nu _3`$ only, the only way to get large mixing and a large hierarchy between them is to make $`\nu _2`$ massless. Clearly though, we need to generate a mass for $`\nu _2`$ as well in order to account for (1.1). This can be arranged by having $`\nu _2`$ combine with $`\nu _1`$ to form a pseudo-Dirac neutrino. But then $`\mathrm{sin}2\theta _{12}`$ is large, and we cannot obtain the small angle MSW solution. In the two VEV framework, the notion of effective charge is not very useful. It is the absence of similarly powerful constraints that, on the one hand, makes the two VEV models less predictive but, on the other hand, allows one to accommodate the neutrino parameters more easily.
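The opposite-charge (pseudo-Dirac) case above is easy to see numerically; a minimal sketch with an illustrative $`2\times 2`$ texture (the coefficients and suppressions are chosen by us):

```python
# Opposite charges H(L2) = -H(L3): the off-diagonal entry is O(1) while
# the diagonal entries are suppressed, giving near-maximal mixing and
# nearly degenerate masses -- a pseudo-Dirac pair.
import numpy as np

lam = 0.2
M = np.array([[lam**4, 1.0],
              [1.0,    lam**2]])
m = np.sort(np.abs(np.linalg.eigvalsh(M)))
print(m)                       # both ~ 1: no hierarchy
print(m[1]**2 - m[0]**2)       # ~ 2*lam**2 << 1: tiny mass-squared split
```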
5. Continuous Symmetry with Two Breaking Parameters

5.1. No pseudo-Dirac neutrino

We start our investigation of Abelian horizontal symmetries with two breaking parameters by assuming that the solution of the SN problem does not involve a pseudo-Dirac neutrino. Then the hierarchy of mass-squared splittings is simply related to the hierarchy of masses:

$$\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}\sim \frac{m_2^2}{m_3^2}.$$

The case $`m_2\sim m_1`$ is only relevant to the VO solution of the SN problem and does not affect our conclusions in this subsection. These assumptions allow us to investigate the relevant parameters in a $`(2,n_s)`$ framework.

A simple mechanism for inducing large mixing between hierarchically separated masses can be demonstrated in a simple $`(2,0)`$ model. In this model, the lepton mass matrices have the form:

$$M_\nu \sim \frac{\langle \varphi _u\rangle ^2}{M}\left(\begin{array}{cc}\lambda ^{2|H(L_2)|}& \lambda ^{|H(L_2)+H(L_3)|}\\ \lambda ^{|H(L_2)+H(L_3)|}& \lambda ^{2|H(L_3)|}\end{array}\right),$$

$$M_{\mathrm{\ell }}^\pm \sim \langle \varphi _d\rangle \left(\begin{array}{cc}\lambda ^{|H(L_2)+H(\overline{\mathrm{\ell }}_2)|}& \lambda ^{|H(L_2)+H(\overline{\mathrm{\ell }}_3)|}\\ \lambda ^{|H(L_3)+H(\overline{\mathrm{\ell }}_2)|}& \lambda ^{|H(L_3)+H(\overline{\mathrm{\ell }}_3)|}\end{array}\right).$$

To have $`m_2\ll m_3`$, we need $`|H(L_2)|>|H(L_3)|`$. We can still get a mixing of $`𝒪(1)`$ if

$$H(L_2)+H(L_3)=-2H(\overline{\mathrm{\ell }}_3).$$

In a general $`(n_a,n_s)`$ model, large mixing could arise also from the neutrino Dirac mass matrix or the Majorana mass matrix of the sterile neutrinos. But in all these cases, it is required, similarly to (5.1), that

$$H(L_2)-H(L_3)=0\ (\mathrm{mod}\ 2).$$

Eq. (5.1) has interesting implications for the mass hierarchy. From eq. (5.1) we learn that

$$\frac{m_2^2}{m_3^2}\sim \lambda ^{4[|H(L_2)+H(L_3)|-2|H(L_3)|]}\sim \lambda ^{8n}\qquad (n=\mathrm{integer}).$$

This creates a phenomenological problem. If the hierarchy is $`\lambda ^0`$, $`\mathrm{\Delta }m_{\mathrm{SN}}^2`$ is too large. The next weakest possibilities are $`\lambda ^8`$ or $`\lambda ^{16}`$. But for the MSW solutions we need $`\lambda ^2`$–$`\lambda ^4`$ and for the VO solution we need $`\lambda ^{10}`$–$`\lambda ^{12}`$. We can achieve neither in this framework. The MSW solutions are particularly disfavored; the VO solution may still correspond to the $`\lambda ^8`$ hierarchy if $`\lambda `$ is actually close to 0.1 (rather than the value of 0.2 that we usually use).<sup>4</sup> The MSW solution can still be accommodated if the small breaking parameters are large, $`\lambda \sim 0.5`$.

Of course, we have only considered the (2,0) framework. However, one can show that eq. (5.1) holds also in the (2,2) case. Within ($`2,n_s\geq 3`$) models, however, the hierarchy is an integer power of $`\lambda ^4`$ and, consequently, could be milder.<sup>5</sup> We thank F. Feruglio for pointing out a $`(2,3)`$ example with $`\lambda ^4`$ hierarchy. A proof of these statements can be found in the Appendix. We conclude then that one of the following options has to happen in order to achieve neutrino parameters that are consistent with the MSW solutions: (i) There are at least three sterile neutrinos that affect the mass hierarchy $`m_2/m_3`$. (A $`(2,3)`$ model which accommodates the MSW(SMA) solution can be found in .) (ii) $`\nu _e`$ and $`\nu _\mu `$ form a pseudo-Dirac neutrino. This case is, of course, relevant only to the MSW(LMA) solution.
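The $`\lambda ^{8n}`$ statement follows from the parity condition just derived; a brute-force sketch over small integer charges (our own check):

```python
# If H(L2) - H(L3) is even, then 4*[ |H(L2)+H(L3)| - 2*|H(L3)| ]
# is automatically a multiple of 8, as stated in the text.
for h2 in range(-8, 9):
    for h3 in range(-8, 9):
        if (h2 - h3) % 2 == 0:
            exponent = 4 * (abs(h2 + h3) - 2 * abs(h3))
            assert exponent % 8 == 0
print("m2^2/m3^2 is always an integer power of lambda^8 in this framework")
```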
A simple example of how the VO solution can be implemented in this framework, with $`\lambda \sim 0.1`$, is achieved with the following set of charges:

$$L_1(+9),\ L_2(+3),\ L_3(+1),\qquad \overline{\mathrm{\ell }}_1(-15),\ \overline{\mathrm{\ell }}_2(-6),\ \overline{\mathrm{\ell }}_3(-2).$$

The neutrino mass matrix is

$$M_\nu \sim \frac{\langle \varphi _u\rangle ^2}{M}\left(\begin{array}{ccc}\lambda ^{18}& \lambda ^{12}& \lambda ^{10}\\ \lambda ^{12}& \lambda ^6& \lambda ^4\\ \lambda ^{10}& \lambda ^4& \lambda ^2\end{array}\right),$$

which yields

$$\frac{\mathrm{\Delta }m_{\mathrm{SN}}^2}{\mathrm{\Delta }m_{\mathrm{AN}}^2}\sim \lambda ^8.$$

For the charged lepton mass matrix we find

$$M_{\mathrm{\ell }}\sim \langle \varphi _d\rangle \left(\begin{array}{ccc}\lambda ^6& \lambda ^3& \lambda ^7\\ \lambda ^{12}& \lambda ^3& \lambda \\ \lambda ^{14}& \lambda ^5& \lambda \end{array}\right),$$

which gives $`\mathrm{sin}\theta _{23}\sim 1`$ and, for $`\mathrm{tan}\beta \sim \lambda ^{-2}`$, the required charged lepton mass hierarchy.
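The two textures just quoted follow mechanically from the listed charges; a short verification sketch (our own, applying the selection rule of section 2):

```python
# Rebuild the example's textures from L = (9, 3, 1) and
# lbar = (-15, -6, -2): each entry is lambda**|H_i + H_j|.
H_L = [9, 3, 1]
H_lbar = [-15, -6, -2]

M_nu = [[abs(a + b) for b in H_L] for a in H_L]
M_l = [[abs(a + b) for b in H_lbar] for a in H_L]
print(M_nu)   # [[18, 12, 10], [12, 6, 4], [10, 4, 2]]  as in the text
print(M_l)    # [[6, 3, 7], [12, 3, 1], [14, 5, 1]]     as in the text
# Light masses scale as lambda^6 and lambda^2, so the splitting ratio is
print(2 * (M_nu[1][1] - M_nu[2][2]))   # 8, i.e. lambda^8 as quoted
```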
5.2. A pseudo-Dirac neutrino: Hierarchy of mass splittings without hierarchy of masses

A large $`\nu _\mu `$–$`\nu _\tau `$ mixing, relevant to the AN problem, and a large $`\nu _e`$–$`\nu _\mu `$ mixing, relevant to the SN problem, could arise from very different mechanisms: $`𝒪(1)`$ $`\nu _\mu `$–$`\nu _\tau `$ mixing from unequal charges (as discussed in the previous subsection), and maximal $`\nu _e`$–$`\nu _\mu `$ mixing from their pairing to a pseudo-Dirac combination. Such a situation opens up the interesting possibility that there is actually no mass hierarchy: All three neutrino masses may be of the same order of magnitude, which is the scale set by AN, with the mass-squared splittings hierarchically separated.

It is simple to see that all three neutrino masses are of the same order of magnitude if we take

$$|H(L_1)+H(L_2)|=2|H(L_3)|.$$

Instead of (5.1) we now have

$$\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}\sim \frac{m_{\mathrm{pD}}\delta _{\mathrm{pD}}}{m_3^2}\sim \frac{\delta _{\mathrm{pD}}}{m_{\mathrm{pD}}}.$$

The dependence of this ratio on the lepton charges is given by

$$\frac{\mathrm{\Delta }m_{\mathrm{SN}}^2}{\mathrm{\Delta }m_{\mathrm{AN}}^2}\sim \lambda ^{2[|H(L_2)|-|H(L_3)|]}\leq \lambda ^4.$$

In contrast to (5.1), the mass hierarchy is appropriate for the MSW parameters. Yet, the MSW(LMA) solution cannot be achieved because the deviation from maximal mixing is suppressed by at least $`𝒪(\lambda ^4)`$. The reason is that a deviation of $`𝒪(\lambda ^2)`$ can be achieved only if $`H(L_1)-H(L_2)`$ is odd, but this is impossible because of (5.1). Therefore, in this class of models, only the VO solution can be accommodated. We now demonstrate this by an explicit example.

Consider the following set of $`H`$-charges for the lepton fields in the (3,0) framework:

$$L_1(+7),\ L_2(-5),\ L_3(+1),\qquad \overline{\mathrm{\ell }}_1(-15),\ \overline{\mathrm{\ell }}_2(+10),\ \overline{\mathrm{\ell }}_3(+2).$$

The neutrino mass matrix is of the form

$$M_\nu \sim \frac{\langle \varphi _u\rangle ^2}{M}\left(\begin{array}{ccc}\lambda ^{14}& A\lambda ^2& \lambda ^8\\ A\lambda ^2& \lambda ^{10}& \lambda ^4\\ \lambda ^8& \lambda ^4& B\lambda ^2\end{array}\right).$$

For later purposes we explicitly wrote down the $`𝒪(1)`$ coefficients, $`A`$ and $`B`$, of the dominant entries. We see that all three neutrinos have masses of the same order of magnitude, that is

$$m(\nu _i)\sim \frac{\langle \varphi _u\rangle ^2}{M}\lambda ^2\qquad \mathrm{for}\ i=1,2,3.$$

The mass splittings are, however, hierarchical:

$$\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}\sim \lambda ^8,$$

which fits the VO solution for $`\lambda \sim 0.1`$. A large 2–3 mixing is obtained from the charged lepton sector:

$$M_{\mathrm{\ell }}\sim \langle \varphi _d\rangle \left(\begin{array}{ccc}\lambda ^8& \lambda ^{17}& \lambda ^9\\ \lambda ^{20}& \lambda ^5& \lambda ^3\\ \lambda ^{14}& \lambda ^{11}& \lambda ^3\end{array}\right).$$

We learn that

$$\mathrm{sin}^22\theta _{12}\simeq 1,\qquad \mathrm{sin}\theta _{13}\sim \lambda ^6,\qquad \mathrm{sin}\theta _{23}\sim 1.$$

Note that this scenario of hierarchical mass-squared splittings is not included in ref. . The form of the neutrino mass matrix (5.1) in the charged lepton mass basis is given to a good approximation by

$$M_\nu \sim \frac{\langle \varphi _u\rangle ^2}{M}\lambda ^2\left(\begin{array}{ccc}0& Ac& As\\ Ac& Bs^2& Bcs\\ As& Bcs& Bc^2\end{array}\right),$$

where $`c\equiv \mathrm{cos}\theta _{23}`$ and $`s\equiv \mathrm{sin}\theta _{23}`$. Indeed, this corresponds to neither of the two forms advocated in . It is amusing to note, however, that it can be presented as the sum of these two forms.

5.3. Large mixing from equal effective charges

A different way of obtaining large mixing together with a large hierarchy is the analog of the ‘holomorphic zeros’ mechanism of ref. . In both frameworks we take the horizontal symmetry to be $`U(1)_1\times U(1)_2`$, with equal effective charges (see eq. (4.1)) for $`L_2`$ and $`L_3`$, so that the 2–3 mixing is $`𝒪(1)`$. The separate charges can, however, be chosen so as to induce a holomorphic zero in the single VEV framework and to suppress one of the masses in the two VEV framework. As an example consistent with the MSW(LMA) solution, consider the following set of charges for the lepton fields within a (3,0) model:

$$L_1(1,0),\ L_2(-1,1),\ L_3(0,0),\qquad \overline{\mathrm{\ell }}_1(3,4),\ \overline{\mathrm{\ell }}_2(3,2),\ \overline{\mathrm{\ell }}_3(3,0).$$

The lepton mass matrices are of the form

$$M_\nu \sim \frac{\langle \varphi _u\rangle ^2}{M}\left(\begin{array}{ccc}\lambda ^2& \lambda & \lambda \\ \lambda & \lambda ^4& \lambda ^2\\ \lambda & \lambda ^2& 1\end{array}\right),\qquad M_{\mathrm{\ell }}^\pm \sim \langle \varphi _d\rangle \left(\begin{array}{ccc}\lambda ^8& \lambda ^6& \lambda ^4\\ \lambda ^7& \lambda ^5& \lambda ^3\\ \lambda ^7& \lambda ^5& \lambda ^3\end{array}\right).$$

Without the positively charged $`\overline{\lambda }_1`$, the (22), (23) and (32) entries in $`M_\nu `$ would vanish because of holomorphy. Here, the holomorphic zeros are lifted, but the new entries are small and affect neither the mass hierarchy nor the mixing. Thus, the analysis of ref. is still valid, yielding

$$\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}\sim \lambda ^3,\qquad \mathrm{sin}2\theta _{12}=1-𝒪(\lambda ^2),\qquad s_{23}\sim 1,\qquad s_{13}\sim \lambda .$$

The VO solution is similarly obtained with

$$L_1(1,-4),\ L_2(-2,2),\ L_3(0,0),\qquad \overline{\mathrm{\ell }}_1(6,5),\ \overline{\mathrm{\ell }}_2(3,2),\ \overline{\mathrm{\ell }}_3(3,0).$$

This gives

$$M_\nu \sim \frac{\langle \varphi _u\rangle ^2}{M}\left(\begin{array}{ccc}\lambda ^{10}& \lambda ^3& \lambda ^5\\ \lambda ^3& \lambda ^8& \lambda ^4\\ \lambda ^5& \lambda ^4& 1\end{array}\right),$$

yielding $`\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}\sim \lambda ^{11}`$.
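As a sanity check of the MSW(LMA) texture above, one can dress the quoted powers of $`\lambda `$ with random $`𝒪(1)`$ coefficients and diagonalize numerically (our own sketch):

```python
# Numerical check of the MSW(LMA) texture: M_nu entries are O(1)
# coefficients times the powers of lambda quoted in the text.
import numpy as np

rng = np.random.default_rng(0)
lam = 0.2
powers = np.array([[2, 1, 1],
                   [1, 4, 2],
                   [1, 2, 0]])
C = rng.uniform(0.5, 2.0, (3, 3))
C = (C + C.T) / 2                      # symmetric Majorana matrix
M = C * lam**powers

m = np.sort(np.abs(np.linalg.eigvalsh(M)))
ratio = (m[1]**2 - m[0]**2) / (m[2]**2 - m[1]**2)
print(np.log(ratio) / np.log(lam))     # ~ 3 up to O(1) scatter: lambda^3
```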
Conclusions In models with a non-anomalous Abelian horizontal gauge symmetry, one expects that the symmetry is spontaneously broken by fields of opposite horizontal charges that acquire equal VEVs. We find two general mechanisms by which such models can accommodate the neutrino parameters that explain both the atmospheric neutrino and the solar neutrino problems. In particular, these mechanisms allow for $$\mathrm{sin}\theta _{23}1,\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}1.$$ The possible scenarios divide into two general classes: (i) The three neutrino masses are hierarchical. Then the hierarchy between the mass-squared splittings of eq. (6.1) is an integer power of $`\lambda ^4`$. The bound can only be saturated in models where at least three sterile neutrinos affect the mass hierarchy. Otherwise, the hierarchy is an integer power of $`\lambda ^8`$ which is inconsistent with the MSW solutions. The VO solution can be accommodated in the simple models that have $`\lambda ^{8n}`$ hierarchy if the small breaking parameter is somewhat smaller than the ‘canonical’ value of $`0.2`$ related to the Cabibbo mixing. (ii) The two lighter neutrinos form a pseudo-Dirac neutrino. The mixing related to the solar neutrino solution is then close to maximal, so that obviously only large angle solutions to the SN problem are possible. It is not simple, though not impossible, to obtain large enough deviation from maximal mixing as necessary for the large angle MSW solution. The VO solution is, again, simply accommodated. A special case in this class is that of three same order-of-magnitude neutrino masses, with hierarchical splittings. We emphasize that our arguments are not valid for discrete Abelian symmetries . Moreover, they can be circumvented even if the symmetry is continuous but the spontaneous symmetry breaking is not complete, leaving a residual exact discrete symmetry \[17,,50\]. In particular, it has been demonstrated that the MSW(SMA) solution can be easily generated in these cases \[14,,17\]. While the conclusion of both this work and of ref. is that large mixing and large hierarchy can be accommodated in the framework of Abelian horizontal symmetries, we would still like to emphasize the following points: a. The most predictive class of models of Abelian horizontal symmetries is that of an anomalous $`U(1)`$ with holomorphic zeros having no effect on the physical parameters . The various solutions suggested here and in require that either the symmetry is non-anomalous with two breaking parameters, or the symmetry is discrete, or that holomorphic zeros do play a role. In all these cases there is a loss of predictive power. If, indeed, (6.1) holds in nature, it would mean that neutrino parameters by themselves will not make a convincing case for the Abelian horizontal symmetry idea, even if they cannot rule it out. b. We argued here that, if (6.1) holds, the neutrino parameters that correspond to the atmospheric neutrino oscillations are not related to a pseudo-Dirac neutrino. Consequently, while Abelian horizontal symmetries allow for $`𝒪(1)`$ $`\nu _\mu \nu _\tau `$ mixing, they cannot explain maximal mixing (except as an accidental result). If the case for $`\mathrm{sin}^22\theta _{23}=1`$ is experimentally made with high accuracy, and the solar neutrino problem is indeed solved by neutrino oscillations, the framework of Abelian horizontal symmetries would become less attractive. 
Acknowledgements We are grateful to Ferruccio Feruglio for providing us with a counter-example to statements made in the original version of this paper. We thank Enrico Nardi for helpful discussions. This work was supported in part by the United States $``$ Israel Binational Science Foundation (BSF) and by the Minerva Foundation (Munich). Appendix A. Mass Hierarchy with Large Mixing We study models with $`n_a`$ active neutrinos and $`n_s`$ sterile ones (‘($`n_a,n_s`$) models’). We will argue that, as concerns the implications for $`\mathrm{\Delta }m_{12}^2`$, the models divide into three classes: a. Effective $`(2,n_s3)`$ models, where we find that $`\mathrm{\Delta }m_{12}^2/\mathrm{\Delta }m_{23}^2\lambda ^{4n}`$ ($`n=`$ integer). b. Effective $`(2,2)`$ models, where we find that $`\mathrm{\Delta }m_{12}^2/\mathrm{\Delta }m_{23}^2\lambda ^{8n}`$ ($`n=`$ integer). c. Models where $`\nu _e`$ and $`\nu _\mu `$ form a pseudo-Dirac neutrino, where $`\mathrm{\Delta }m_{12}^2m_{1,2}^2`$. The first step in our argument is to show that the large mixing between $`\nu _2`$ and $`\nu _3`$ requires that the horizontal charges of $`L_2`$ and $`L_3`$ obey $$H(L_2)H(L_3)=0(\mathrm{mod}2).$$ There are three possible sources (in the interaction basis) for large mixing: (i) the charged lepton mass matrix $`M_{\mathrm{}}`$, (ii) the neutrino Dirac mass matrix $`M_\nu ^{\mathrm{Dir}}`$, and (iii) the sterile neutrino Majorana mass matrix $`M_{\nu _s}^{\mathrm{Maj}}`$. We now examine the three mechanisms in turn. (i) If the large mixing arises from $`M_{\mathrm{}}`$, then $$(M_{\mathrm{}})_{23}(M_{\mathrm{}})_{33}|H(L_2)+H(\overline{\mathrm{}}_3)|=|H(L_3)+H(\overline{\mathrm{}}_3)|.$$ Therefore, either $`H(L_2)=H(L_3)`$, or $`H(L_2)+H(L_3)=2H(\overline{\mathrm{}}_3)`$. In either case, (A.1) holds. (ii) If the large mixing arises from $`M_\nu ^{\mathrm{Dir}}`$, then a similar argument holds with $`H(\overline{\mathrm{}}_3)`$ replaced by $`H(N_3)`$. (iii) Let us examine the conditions for large mixing induced by $`M_{\nu _s}^{\mathrm{Maj}}`$. To do so, we consider a (2,2) model. We define the relevant matrix elements by $$(M_{\nu _s}^{\mathrm{Maj}})^1=\left(\begin{array}{cc}r_{22}& r_{23}\\ r_{23}& r_{33}\end{array}\right).$$ For simplicity, we take $`M_\nu ^{\mathrm{Dir}}`$ to be diagonal (consistent with our assumption that the large mixing does not come from this matrix): $$M_\nu ^{\mathrm{Dir}}=\left(\begin{array}{cc}d_2& \\ & d_3\end{array}\right).$$ Then $$M_{\nu _a}^{\mathrm{Maj}}=(M_\nu ^{\mathrm{Dir}})(M_{\nu _s}^{\mathrm{Maj}})^1(M_\nu ^{\mathrm{Dir}})^T=\left(\begin{array}{cc}d_2^2r_{22}& d_2d_3r_{23}\\ d_2d_3r_{23}& d_3^2r_{33}\end{array}\right).$$ Large mixing can be induced in two cases: first, $`d_2d_3r_{23}d_3^2r_{33}`$ which leads to a pesudo-Dirac neutrino in contrast to our assumptions; second, $`d_2d_3r_{23}d_3^2r_{33}`$, which can be achieved with $$\frac{d_2r_{23}}{d_3r_{33}}\lambda ^{|H(L_2)+H(N_2)||H(L_3)+H(N_3)|+|H(N_2)+H(N_3)|2|H(N_2)|}1.$$ The condition on the exponent is then of the form $$a_2H(L_2)+a_3H(L_3)+2b_2H(N_2)+2b_3H(N_3)=0,$$ where $`a_2,a_3=\pm 1`$, $`b_3=0,\pm 1`$ and $`b_2=0,\pm 1,\pm 2`$. Clearly, it leads to (A.1). The second step is to find the hierarchy between the masses in terms of the lepton charges. 
If no pair among the active neutrinos forms a pseudo-Dirac state, then we can estimate the mass ratio from $$\frac{m_2}{m_3}\frac{detM^{(n_s+2)}detM^{(n_s)}}{[detM^{(n_s+1)}]^2}.$$ Here, $`detM^{(n_s)}`$ is the product of the masses of the sterile neutrinos, which is approximately equal to $`detM_{\nu _s}^{\mathrm{Maj}}`$, and $`detM^{(n_s+n_a^{})}`$ is the product of the masses of the sterile neutrinos and the masses of the $`n_a^{}`$ heaviest active neutrinos. To estimate the masses, we use $$(M_\nu ^{\mathrm{Dir}})_{ij}\varphi _u\lambda ^{|H(L_i)+H(N_j)|},$$ $$(M_{\nu _s}^{\mathrm{Maj}})_{ij}M\lambda ^{|H(N_i)+H(N_j)|}.$$ In $`detM^{(n_s)}`$, each $`H(N_i)`$ appears twice in the exponent, each time with either a plus or a minus sign. Consequently, we obtain the following type of dependence on lepton charges: $$detM^{(n_s)}M^{n_s}\lambda ^{a_iH(N_i)},a_i=0,\pm 2.$$ Similarly, in $`detM^{(n_s+1)}`$, each $`H(N_i)`$ appears twice in the exponent, each time with a plus or a minus sign. As concerns $`H(L_i)`$, there are two possibilities: either a single $`H(L_i)`$ appears twice, each time with either a plus or a minus sign, or two different charges appear once: $$\begin{array}{cc}\hfill detM^{(n_s+1)}& M^{n_s1}\varphi _u^2\mathrm{max}\{\lambda ^{a_iH(N_i)+bH(L_j)},\lambda ^{a_iH(N_i)+c[H(L_j)+dH(L_k)]}\}\hfill \\ & a_i,b=0,\pm 2,c,d=\pm 1.\hfill \end{array}$$ In a similar manner, we obtain $$detM^{(n_s+2)}M^{n_s2}\varphi _u^4\lambda ^{a_iH(N_i)+b_jH(L_j)},a_i,b_j=0,\pm 2.$$ It is straightforward to see that each of $`detM^{(n_s)}`$, $`detM^{(n_s+2)}`$ and $`[detM^{(n_s+1)}]^2`$ depends on $`\lambda ^{2n}`$ where $`n`$ is an integer. We remind the reader that eq. (A.1) holds only when there is no pseudo-Dirac light neutrino. In other words, it holds in effective $`(2,n_s)`$ models. Then, $`H(L_1)`$ does not appear in eqs. (A.1)-(A.1); in particular, the $`H(L_i)`$-dependence of the second factor in (A.1) is of the form $`c[H(L_2)+dH(L_3)]`$. Eq. (A.1) implies that this combination of charges is even. Generically, however, this fact has no special consequences and we find $$\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}\frac{m_2^2}{m_3^2}\lambda ^{4n}[(2,n_s)\mathrm{models}].$$ The situation is more constrained if the charges of only two of the sterile neutrinos (say, $`N_2`$ and $`N_3`$) affect $`s_{23}`$ and $`m_2/m_3`$. This is the $`(2,2)`$ model. There are two additional special features in this case: (i) For $`n_s=2`$, we have $`detM^{(n_s=2)}M^2\lambda ^{2|H(N_2)+H(N_3)|}`$ which, in particular, allows only $`a_i=\pm 2`$ in (A.1). (ii) For any $`(n,n)`$ model, we have $`detM_\nu =[detM_\nu ^{\mathrm{Dir}}]^2`$. Consequently, $`a_i,b_j=\pm 2`$ (and cannot vanish) in eq.(A.1). As a result of all these features, we now find $$\frac{\mathrm{\Delta }m_{12}^2}{\mathrm{\Delta }m_{23}^2}\frac{m_2^2}{m_3^2}\lambda ^{8n}[(2,2)\mathrm{models}].$$ References relax C.D. Froggatt and H.B. Nielsen, Nucl. Phys. B147 (1979) 277. relax M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B398 (1993) 319, hep-ph/9212278. relax M. Dine, A. Kagan and R. Leigh, Phys. Rev. D48 (1993) 4269, hep-ph/9304299. relax Y. Nir and N. Seiberg, Phys. Lett. B309 (1993) 337, hep-ph/9304307. relax Y. Grossman, Y. Nir and R. Rattazzi, in Heavy Flavours II, eds. A.J. Buras and M. Lindner, Advanced Series on Directions in High Energy Physics, (World Scientific, Singapore), hep-ph/9701231. relax M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B420 (1994) 468, hep-ph/9310320. relax Y. Grossman and Y. Nir, Nucl. Phys. 
B448 (1995) 30, hep-ph/9502418. relax Y. Fukuda et al., the Super-Kamiokande Collaboration, Phys. Rev. Lett. 81 (1998) 1562, hep-ex/9807003. relax J.N. Bahcall, P.I. Krastev and A. Yu. Smirnov, Phys. Rev. D58 (1998) 096016, hep-ph/9807216. relax P. Binetruy, S. Lavignac and P. Ramond, Nucl. Phys. B477 (1996) 353, hep-ph/9802334. relax N. Irges, S. Lavignac and P. Ramond, Phys. Rev. D58 (1998) 035003, hep-ph/9802334. relax J.K. Elwood, N. Irges and P. Ramond, Phys. Rev. Lett. 81 (1998) 5064, hep-ph/9807228. relax J. Ellis, G.K. Leontaris, S. Lola and D.V. Nanopoulos, hep-ph/9808251. relax Y. Grossman, Y. Nir and Y. Shadmi, JHEP 10 (1998) 007, hep-ph/9808355. relax P. Binetruy, S. Lavignac, S. Petcov and P. Ramond, Nucl. Phys. B496 (1997) 3, hep-ph/9610481. relax P. Binetruy, N. Irges, S. Lavignac and P. Ramond, Phys. Lett. B403 (1997) 38, hep-ph/9612442. relax R. Barbieri, L.J. Hall, D. Smith, A. Strumia and N. Weiner, JHEP 12 (1998) 017, hep-ph/9807235. relax G. Altarelli and F. Feruglio, hep-ph/9809596; hep-ph/9812475. relax M. Jezabek and Y. Sumino, Phys. Lett. B440 (1998) 327, hep-ph/9807310. relax G. Eyal, Phys. Lett. B441 (1998) 191, hep-ph/9807308. relax E. Ma, Phys. Lett. B442 (1998) 238, hep-ph/9807386. relax R.N. Mohapatra and S. Nussinov, Phys. Lett. B441 (1998) 299, hep-ph/9808301. relax R. Barbieri, L.J. Hall and A. Strumia, hep-ph/9808333. relax E. Ma, D.P. Roy and U. Sarkar, Phys. Lett. B444 (1998) 391, hep-ph/9810309. relax F. Vissani, JHEP 11 (1998) 025, hep-ph/9810435. relax C.D. Froggatt, M. Gibson and H.B. Nielsen, hep-ph/9811265. relax E. Ma and D.P. Roy, hep-ph/9811266. relax K. Choi, E.J. Chun and K. Hwang, hep-ph/9811363. relax Z. Berezhiani and A. Rossi, hep-ph/9811447. relax Q. Shafi and Z. Tavartkiladze, hep-ph/9811463. relax C. Liu and J. Song, hep-ph/9812381. relax M. Tanimoto, hep-ph/9807517; hep-ph/9901210. relax L. Ibanez, Phys. Lett. B303 (1993) 55. relax P. Binetruy and P. Ramond, Phys. Lett. B350 (1995) 49, hep-ph/9412385. relax V. Jain and R. Shrock, Phys. Lett. B352 (1995) 83, hep-ph/9412367. relax E. Dudas, S. Pokorski and C.A. Savoy, Phys. Lett. B356 (1995) 45, hep-ph/9504292. relax Y. Nir, Phys. Lett. B354 (1995) 107, hep-ph/9504312. relax M. Green and J. Schwarz, Phys. Lett. B149 (1984) 117. relax M. Dine, N. Seiberg and E. Witten, Nucl. Phys. B289 (1987) 589. relax J. Atick, L. Dixon and A. Sen, Nucl. Phys. B292 (1987) 109. relax M. Dine, I. Ichinoise and N. Seiberg, Nucl. Phys. B293 (1987) 253. relax H. Dreiner, G.K. Leontaris, S. Lola, G.G. Ross and C. Scheich, Nucl. Phys. B436 (1995) 461, hep-ph/9409369. relax G.K. Leontaris, S. Lola and G.G. Ross, Nucl. Phys. B454 (1995) 25, hep-ph/9505402. relax C.H. Albright and S. Nandi, Phys. Rev. D53 (1996) 2699, hep-ph/9507376. relax G.K. Leontaris, S. Lola, C. Scheich and J. Vergados, Phys. Rev. D53 (1996) 6381, hep-ph/9509351. relax E. Papagregoriu, Zeit. Phys. C64 (1995) 509. relax B.C. Allanach, hep-ph/9806294. relax M.E. Gomez, G.K. Leontaris, S. Lola and J.D. Vergados, hep-ph/9810291. relax L.E. Ibanez and G.G. Ross, Phys. Lett. B332 (1994) 100. relax Y. Nir and Y. Shadmi, work in progress. relax P.H. Chankowski and Z. Pluciennik, Phys. Lett. B316 (1993) 312, hep-ph/9306333. relax K.S. Babu, C.N. Leung and J. Pantaleone, Phys. Lett. B319 (1993) 191, hep-ph/9309223. relax M. Tanimoto, Phys. Lett. B360 (1995) 41, hep-ph/9508247. relax N. Haba and T. Matsuoka, Prog. Theor. Phys. 99 (1998) 831, hep-ph/9710418. relax N. Haba, N. Okamura and M. Sugiura, hep-ph/9810471.
no-problem/9902/cond-mat9902073.html
ar5iv
text
# Comment on “Dynamic Scaling in the Spatial Distribution of Persistent Sites” Recently, Manoj and Ray investigated the spatial distribution of unvisited sites in the one-dimensional single-species annihilation process $`A+A0`$. They claimed that this distribution is characterized by a new length scale $`(t)t^z`$, and that the dynamical exponent $`z`$ depends upon the initial concentration. We show numerically that this assertion is erroneous. Regardless of the initial concentration, this spatial distribution is characterized by the diffusive length scale $`_D(t)(Dt)^{1/2}`$. The spatial distribution of unvisited sites is described by $`F_l(t)`$, the probability density that two consecutive unvisited sites are separated by $`l`$ sites. This quantity satisfies $`P_0(t)=_lF_l(t)`$ and $`1=_l(l+1)F_l(t)`$, with $`P_0(t)`$ the fraction of unvisited sites. The latter condition follows from length conservation. In the diffusive annihilation process $`A+A0`$, the particle density $`n(t)`$ decays algebraically according to $`n(t)(8\pi Dt)^{1/2}`$, where $`D`$ is the hopping rate. This behavior is independent of the initial concentration. Furthermore, writing $`n(t)1/_D(t)`$ suggests that the diffusive length is the only asymptotically relevant length scale. As shown in Fig. 1, the following scaling form holds $$F_l(t)t^1\left(lt^{1/2}\right),$$ (1) indicating that $`F_l(t)`$ is characterized by $`_D(t)`$ alone. The scaling form (1) is consistent with the normalization $`1=_l(l+1)F_l(t)`$. We stress that the scaling function $`(x)`$ is also independent of the initial conditions. The known decay of the number of unvisited sites $`P_0(t)t^\theta `$ ($`\theta =3/8`$ is the persistence exponent) together with the requirement $`P_0(t)=_lF_l(t)`$ can now be used to infer the small $`x`$ divergence of $`(x)`$ $$(x)x^{2(1\theta )}x0.$$ (2) In the complementary $`x\mathrm{}`$ limit, we observed an exponential decay $`(x)\mathrm{exp}(Ax)`$, indicating independence of distant unvisited sites. Similar small argument divergence and exponential tail underly a related quantity $`P_n(t)`$ , the probability that a site has been visited $`n`$ times. In both cases, the corresponding exponent follows directly from $`\theta `$. Furthermore, Eqs. (1)-(2) extend to persistent sites in the $`q`$-state Potts-Glauber model if $`\theta `$ is replaced by $`\theta (q)`$ . In summary, the spatial distribution of unvisited sites is characterized by the diffusive length scale. Although the initial concentration affects the transient behavior, it is irrelevant asymptotically (see Fig. 2). The scaling form suggested in Ref. is correct only if the choices $`z=1/2`$, $`\omega =\theta `$, and $`\tau =2(1\theta )=5/4`$ are made. Thus, no additional independent exponents emerge from $`F_l(t)`$.
no-problem/9902/quant-ph9902047.html
ar5iv
text
# One-Parameter Gaussian State of Anharmonic Oscillator: Nonlinear Realization of Bogoliubov Transformation ## Abstract We find a one-parameter Gaussian state for an anharmonic oscillator with quadratic and quartic terms, which depends on the energy expectation value. For the weak coupling constant, the Gaussian state is a squeezed state of the vacuum state. However, for the strong coupling constant, the Gaussian state represents a different kind of condensation of bosonic particles through a nonlinear Bogoliubov transformation of the vacuum state. preprint: quant-ph/9902047 Harmonic oscillator has been used to describe a non-interacting ideal bosonic gas. One of physically interesting quantum states is the Gaussian state. Recently we have found a one-parameter squeezed Gaussian state for a time-independent harmonic oscillator whose squeezing parameter is the energy expectation value . It was also shown there that even for a time-dependent harmonic oscillator this one-parameter Gaussian state is still the squeezed state of the vacuum state with the minimum uncertainty. This squeezed Gaussian state forms a condensation (squeezed vacuum) of bosonic particles, and represents a higher energy state. On the other hand, anharmonic oscillator, as a $`(0+1)`$-dimensional toy model for a quantum field, describes a self-interacting bosonic gas. Therefore it would be physically interesting to have an analogous Gaussian state for the anharmonic oscillator. In this paper we find a one-parameter Gaussian state for the anharmonic oscillator whose parameter depends on the energy expectation value, and study the relation between the Gaussian state and the (approximate) vacuum state. It is found that for the weak coupling constant the one-parameter Gaussian state is a squeezed vacuum of the vacuum state, but for the strong coupling constant it can not be obtained from the vacuum state through a linear Bogoliubov transformation. That is, the Gaussian state for the strong coupling constant does not represent merely the squeezed vacuum, rather it exhibits a different kind of condensation of bosonic particles through a nonlinear Bogoliubov transformation. We now wish to find the Gaussian state for an anharmonic oscillator with quadratic and quartic potential terms. This anharmonic oscillator has been frequently used as a toy model for the $`\mathrm{\Phi }^4`$-theory . From a field theoretical point of view, one of physically interesting quantum states is the Gaussian state, for it represents a resummation of loop diagrams such as daisies. Let us consider a model Hamiltonian of the form $$\widehat{H}_{a.h.o.}=\frac{1}{2}\widehat{p}^2+\frac{\omega ^2}{2}\widehat{x}^2+\frac{\lambda }{4}\widehat{x}^4.$$ (1) It was shown in Ref. that the Hamiltonian (1) has the following Gaussian state $$\mathrm{\Psi }(x)=\left(\frac{1}{2\pi \chi ^2}\right)^{1/4}\mathrm{exp}\left[\left(\frac{1}{4\chi ^4}i\frac{\dot{\chi }}{2\chi }\right)x^2\right],$$ (2) where $`\chi `$ satisfies the nonlinear equation $$\ddot{\chi }+\omega ^2\chi +3\lambda \chi ^3\frac{1}{4\chi ^3}=0.$$ (3) Eq. (3) can be easily integrated to yield $$\frac{1}{2}\dot{\chi }^2+\frac{\omega ^2}{2}\chi ^2+\frac{3\lambda }{4}\chi ^4+\frac{1}{8\chi ^2}=ϵ,$$ (4) where $`ϵ`$ is a constant of motion. The $`ϵ`$ is nothing but the energy expectation value $$\mathrm{\Psi }|\widehat{H}|\mathrm{\Psi }=\frac{1}{2}\left[\dot{\chi }^2+\omega ^2\chi ^2+\frac{3\lambda }{2}\chi ^4+\frac{1}{4\chi ^2}\right]=ϵ.$$ (5) The solution to Eq. (4) can be found by interpreting Eq. 
(3) as describing a one-dimensional system of unit mass with an effective potential $$V_{\mathrm{eff}.}=\frac{\omega ^2}{2}\chi ^2+\frac{3\lambda }{4}\chi ^4+\frac{1}{8\chi ^2}.$$ (6) The effective potential, though one-dimensional, has a centrifugal term $`\frac{1}{8\chi ^2}`$ from quantization in addition to the other classical potential terms. The energy expectation value has the minimum value at $`\chi ^2=\frac{1}{2\mathrm{\Omega }_G}`$: $$ϵ_{\mathrm{min}.}=\frac{\mathrm{\Omega }_G}{2}\frac{3\lambda }{16\mathrm{\Omega }_G^2},$$ (7) where $`\mathrm{\Omega }_G`$ is determined from the gap equation $$\mathrm{\Omega }_G^2=\omega ^2+\frac{3\lambda }{2\mathrm{\Omega }_G}.$$ (8) For an energy above $`ϵ_{\mathrm{min}.}`$, there are two turning points $`\chi _1`$ and $`\chi _2`$. By setting $`y=\chi ^2`$, we integrate Eq. (4) by quadrature method $$_{y_1}^y\frac{dy}{\sqrt{2ϵy\omega ^2y^2\frac{3\lambda }{2}y^3\frac{1}{4}}}=2t+\phi _0,$$ (9) where $`\phi _0`$ is an integration constant. The integrand has two positive roots $`y_2y_1>0`$ corresponding to physical turning points and an unphysical negative root $`y_3<0`$: $$2ϵy\omega ^2y^2\frac{3\lambda }{2}y^3\frac{1}{4}=\frac{3\lambda }{2}(y_2y)(yy_1)(yy_3).$$ (10) By doing the integral (9), we find the solution $$F(\kappa ,p)=\sqrt{\frac{3\lambda }{2}(y_2y_3)}\left(t+\frac{\phi _0}{2}\right),$$ (11) where $`F(\kappa ,p)`$ is the first elliptic integral and $$\kappa =\mathrm{arcsin}\left\{\sqrt{\frac{(y_2y_3)(yy_1)}{(y_2y_1)(yy_3)}}\right\},p=\sqrt{\frac{y_2y_1}{y_2y_3}}.$$ (12) We have thus shown that the Gaussian state (2) is the one-parameter state whose parameter is the energy expectation value. In order to find the meaning and relation of this one-parameter Gaussian state (2) with the (approximate) vacuum state, we shall compare it with that of a harmonic oscillator, which is relatively well understood. We first consider the case of the harmonic oscillator with $`\lambda =0`$ $$\widehat{H}_{h.o.}=\frac{1}{2}\widehat{p}^2+\frac{\omega ^2}{2}\widehat{x}^2.$$ (13) Then one has $`p=0`$ in Eq. (12) and $`F(\kappa ,0)=\kappa `$ in Eq. (11), and the solution (11) reduced to $$y=\chi ^2=\frac{ϵ}{\omega ^2}+\frac{ϵ}{\omega ^2}\sqrt{1\frac{\omega ^2}{4ϵ^2}}\mathrm{cos}(2\omega t),$$ (14) where we have chosen a constant $`\phi _0=\frac{\pi }{\omega }`$. The connection between the vacuum (minimum energy) state and the one-parameter Gaussian state (2) with $`y=\chi ^2`$ from Eq. (14) can be manifestly shown in the Liouville-Neumann approach . Moreover, the Liouville-Neumann approach provides a better physical intuition. In this approach one looks for the following creation and annihilation operators $$\widehat{a}^{}(t)=i\left(u(t)\widehat{p}\dot{u}(t)\widehat{x}\right),\widehat{a}(t)=i\left(u^{}(t)\widehat{p}\dot{u}^{}(t)\widehat{x}\right),$$ (15) which satisfy the Liouville-Neumann equations $$i\frac{}{t}\widehat{a}^{}(t)+[\widehat{a}^{}(t),\widehat{H}_{h.o.}]=0,i\frac{}{t}\widehat{a}(t)+[\widehat{a}(t),\widehat{H}_{h.o.}]=0,.$$ (16) The great advantage of this approach is that the exact quantum state of either time-independent or time-dependent harmonic oscillator is an eigenstate of such operators up to some time-dependent phase factor. From the fact that any functionals of $`\widehat{a}(t)`$ and $`\widehat{a}^{}(t)`$ also satisfy Eq. (16), it follows that the (number) eigenstate of $`\widehat{N}(t)=\widehat{a}^{}(t)\widehat{a}(t)`$ is an exact quantum state of the harmonic oscillator . The operators (15) satisfy Eq. 
(16) when $`u(t)`$ is a complex solution to the classical equation of motion $$\ddot{u}(t)+\omega ^2u(t)=0.$$ (17) One can make $`\widehat{a}(t)`$ and $`\widehat{a}(t)`$ the creation and annihilation operators by imposing the standard commutation relation $`[\widehat{a}(t),\widehat{a}^{}(t)]=1`$, which is the Wronskian condition $$\dot{u}^{}(t)u(t)u^{}(t)\dot{u}(t)=i.$$ (18) The Gaussian state that is defined by $`\widehat{a}(t)|0=0`$ has the coordinate representation $$\mathrm{\Psi }_u=\left(\frac{1}{2\pi u^{}(t)u(t)}\right)^{1/4}\mathrm{exp}\left[i\frac{\dot{u}^{}(t)}{2u^{}(t)}x^2\right].$$ (19) In fact, one can show the equivalence between Eq. (17) and the nonlinear equation (3) with $`\lambda =0`$, by expressing $`u(t)`$ in a polar form $`u(t)=\chi (t)e^{i\theta (t)}`$ and by writing Eq. (18) as $`\dot{\theta }(t)=\frac{1}{2\chi ^2(t)}`$, which is integrated to be $`\theta =\omega t`$. It should be noted that a solution with the particular positive frequency $`\omega `$, $$u_0(t)=\frac{1}{\sqrt{2\omega }}e^{i\omega t},$$ (20) provides the true vacuum state with the minimum energy. The vacuum state meets the selection rule of the minimum uncertainty . Eq. (17) being linear and $`u_0^{}(t)`$ another independent solution, any superposition of $`u_0(t)`$ and $`u_0^{}(t)`$ also constitutes a solution. We may parameterize such coefficients so that they satisfy the Wronskian condition (18): $$u_\mathrm{r}(t)=\mathrm{cosh}(r)u_0(t)+\mathrm{sinh}(r)u_0^{}(t).$$ (21) By identifying $`\chi ^2=u_\mathrm{r}^{}u_\mathrm{r}`$ in Eq. (14), the squeezing parameter $`r`$ is found in terms of the energy as $$\mathrm{cosh}(r)=\sqrt{\frac{ϵ}{\omega }+\frac{1}{2}},\mathrm{sinh}(r)=\sqrt{\frac{ϵ}{\omega }\frac{1}{2}}.$$ (22) The creation and annihilation operators defined in terms of $`u_\mathrm{r}(t)`$ $$\widehat{a}_\mathrm{r}^{}(t)=i\left(u_\mathrm{r}(t)\widehat{p}\dot{u}_\mathrm{r}(t)\widehat{x}\right),\widehat{a}_\mathrm{r}(t)=i\left(u_\mathrm{r}^{}(t)\widehat{p}\dot{u}_\mathrm{r}^{}(t)\widehat{x}\right),$$ (23) are related with those for the vacuum state defined in terms of $`u_0(t)`$ through the Bogoliubov transformation $`\widehat{a}_\mathrm{r}^{}(t)`$ $`=`$ $`\mathrm{sinh}(r)\widehat{a}_0(t)+\mathrm{cosh}(r)\widehat{a}_0^{}(t),`$ (24) $`\widehat{a}_\mathrm{r}(t)`$ $`=`$ $`\mathrm{cosh}(r)\widehat{a}_0(t)\mathrm{sinh}(r)\widehat{a}_0^{}(t).`$ (25) Hence, the one-parameter Gaussian state for harmonic oscillator is the squeezed state (squeezed vacuum) of the true vacuum state that is annihilated by $`\widehat{a}_0(t)`$ and has $`r=0`$ . We now turn to the case of the anharmonic oscillator $`(\lambda 0)`$. Years ago Rajagopal and Marshall found a coherent and Gaussian state for time-independent anharmonic oscillator with a polynomial potential , and recently one (SPK) of the authors used the Liouville-Neumann approach to find a similar coherent and Gaussian state for time-dependent anharmonic oscillator . The idea of Ref. is to find the following operators $$\widehat{A}^{}(t)=i\left(v(t)\widehat{p}\dot{v}(t)\widehat{x}\right),\widehat{A}(t)=i\left(v^{}(t)\widehat{p}\dot{v}^{}(t)\widehat{x}\right),$$ (26) which satisfy the Liouville-Neumann equations for the full Hamiltonian (1) $$i\frac{}{t}\widehat{A}^{}(t)+[\widehat{A}^{}(t),\widehat{H}_{a.h.o.}]=0,i\frac{}{t}\widehat{A}(t)+[\widehat{A}(t),\widehat{H}_{a.h.o.}]=0.$$ (27) It was found that Eq. 
(27) is satisfied only when $`v(t)`$ is a complex solution to $$\ddot{v}(t)+0(t)|\frac{\delta ^2}{\delta \widehat{x}^2}V(\widehat{x})|0(t)v(t)=0,$$ (28) where the expectation value is taken with respect to the state defined by $`\widehat{A}(t)|0(t)=0`$. For the Hamiltonian (1), Eq. (28) becomes $$\ddot{v}(t)+\omega ^2v(t)+3\lambda \left(v^{}(t)v(t)\right)v(t)=0.$$ (29) It is worthy of noting that the same mean-field equation (29) can also be obtained by expressing the Hamiltonian (1) in terms of the operators in Eq. (26), by arraying it in a normal ordering, and by truncating it up to the quadratic terms $`\widehat{H}_G`$ $`=`$ $`\left[\dot{v}^{}\dot{v}+\omega ^2v^{}v+3\lambda (v^{}v)^2\right]\left(\widehat{A}^{}\widehat{A}+{\displaystyle \frac{1}{2}}\right){\displaystyle \frac{3\lambda }{4}}(v^{}v)^2`$ (31) $`+{\displaystyle \frac{1}{2}}\left[\dot{v}^2\dot{v}+\omega ^2v^2+3\lambda (v^{}v)v^2\right]\widehat{A}^2+{\displaystyle \frac{1}{2}}\left[\dot{v}^2\dot{v}+\omega ^2u^2+3\lambda (v^{}v)v^2\right]\widehat{A}^2,`$ and finally by solving Eq. (16) with the approximate Hamiltonian $`\widehat{H}_G`$. A particular solution to Eq. (29) with $`\mathrm{\Omega }_G`$ determined by the gap equation (8), $$v_0(t)=\frac{1}{\sqrt{2\mathrm{\Omega }_G}}e^{i\mathrm{\Omega }_Gt},$$ (32) provides an approximate vacuum (minimum energy) state. As for the harmonic oscillator case, by setting $`v(t)=\chi (t)e^{i\theta (t)}`$ and using $`\dot{\theta }(t)=\frac{1}{2\chi ^2(t)}`$ from the commutation relation $`[\widehat{A}(t),\widehat{A}^{}(t)]=1`$, one can show that Eq. (29) becomes identically the nonlinear equation (3) with $`\lambda 0`$. As a special case, we consider the weak coupling constant $`(\lambda \omega ^2)`$, and from now on compute all quantities to the linear order of $`\lambda `$. 
From the three roots approximately given by $`y_1`$ $`=`$ $`{\displaystyle \frac{ϵ}{\omega ^2}}\left[1\sqrt{1{\displaystyle \frac{\omega ^2}{4ϵ^2}}}\right]{\displaystyle \frac{3\lambda }{16\omega ^6}}\left[16ϵ^2\omega ^2{\displaystyle \frac{16ϵ^23\omega ^2}{\sqrt{1\frac{\omega ^2}{4ϵ^2}}}}\right]+𝒪(\lambda ^2),`$ (33) $`y_2`$ $`=`$ $`{\displaystyle \frac{ϵ}{\omega ^2}}\left[1+\sqrt{1{\displaystyle \frac{\omega ^2}{4ϵ^2}}}\right]{\displaystyle \frac{3\lambda }{16\omega ^6}}\left[16ϵ^2\omega ^2+{\displaystyle \frac{16ϵ^23\omega ^2}{\sqrt{1\frac{\omega ^2}{4ϵ^2}}}}\right]+𝒪(\lambda ^2),`$ (34) $`y_3`$ $`=`$ $`{\displaystyle \frac{2\omega ^2}{3\lambda }}{\displaystyle \frac{ϵ}{\omega ^2}}+{\displaystyle \frac{6ϵ^2\lambda }{\omega ^6}}\left[1{\displaystyle \frac{\omega ^2}{16ϵ^2}}\right]+𝒪(\lambda ^2),`$ (35) the solution (11) is found to be $`y=\chi ^2`$ $`=`$ $`\left[{\displaystyle \frac{ϵ}{\omega ^2}}{\displaystyle \frac{9\lambda }{16\omega ^6}}\left(8ϵ^2\omega ^2\right)+𝒪(\lambda ^2)\right]`$ (36) $`+`$ $`\left[{\displaystyle \frac{ϵ}{\omega ^2}}\sqrt{1{\displaystyle \frac{\omega ^2}{4ϵ^2}}}{\displaystyle \frac{3\lambda }{16\omega ^6}}{\displaystyle \frac{16ϵ^23\omega ^2}{\sqrt{1\frac{\omega ^2}{4ϵ^2}}}}+𝒪(\lambda ^2)\right]\mathrm{cos}(2\mathrm{\Omega }t),`$ (37) where $$\mathrm{\Omega }=\omega +\frac{3\lambda }{4\omega }\left[\frac{2ϵ}{\omega ^2}+\sqrt{1\frac{\omega ^2}{4ϵ^2}}\right]+𝒪(\lambda ^2).$$ (38) It should be remarked that near the minimum energy $`ϵ_{\mathrm{min}.}`$, $`\mathrm{\Omega }`$ is equal to $`\mathrm{\Omega }_G`$ from the gap equation (8): $$\mathrm{\Omega }=\omega +\frac{3\lambda }{4\omega ^2}+𝒪(\lambda ^2)=\mathrm{\Omega }_G.$$ (39) This can be understood intuitively because near the minimum of potential the anharmonic oscillator is approximated by a harmonic potential with corrections from the quartic term. We are thus able to show the following linear superposition $$v_\mathrm{r}(t)=\mathrm{cosh}(r)v_0(t)+\mathrm{sinh}(r)v_0^{}(t),$$ (40) where $`\mathrm{cosh}(r)`$ $`=`$ $`\sqrt{{\displaystyle \frac{\mathrm{\Omega }y_2}{2}}+{\displaystyle \frac{\mathrm{\Omega }(y_2y_1)^2}{8y_3}}}+\sqrt{{\displaystyle \frac{\mathrm{\Omega }y_1}{2}}+{\displaystyle \frac{\mathrm{\Omega }(y_2y_1)^2}{8y_3}}},`$ (41) $`\mathrm{sinh}(r)`$ $`=`$ $`\sqrt{{\displaystyle \frac{\mathrm{\Omega }y_2}{2}}+{\displaystyle \frac{\mathrm{\Omega }(y_2y_1)^2}{8y_3}}}\sqrt{{\displaystyle \frac{\mathrm{\Omega }y_1}{2}}+{\displaystyle \frac{\mathrm{\Omega }(y_2y_1)^2}{8y_3}}}.`$ (42) Hence, to the linear order of the coupling constant (weak coupling limit), the Bogoliubov transformation between $`\widehat{A}_\mathrm{r}(t),\widehat{A}_\mathrm{r}^{}(t)`$ and $`\widehat{A}_0(t),\widehat{A}_0^{}(t)`$ holds $`\widehat{A}_\mathrm{r}^{}(t)`$ $`=`$ $`\mathrm{sinh}(r)\widehat{A}_0(t)+\mathrm{cosh}(r)\widehat{A}_0^{}(t),`$ (43) $`\widehat{A}_\mathrm{r}(t)`$ $`=`$ $`\mathrm{cosh}(r)\widehat{A}_0(t)\mathrm{sinh}(r)\widehat{A}_0^{}(t),`$ (44) where $$\widehat{A}_\mathrm{r}^{}(t)=i\left(v_\mathrm{r}(t)\widehat{p}\dot{v}_\mathrm{r}(t)\widehat{x}\right),\widehat{A}_\mathrm{r}(t)=i\left(v_\mathrm{r}^{}(t)\widehat{p}\dot{v}_\mathrm{r}^{}(t)\widehat{x}\right).$$ (45) Therefore, one-parameter Gaussian state for the weak coupling constant is the squeezed vacuum of the Gaussian state with $`\mathrm{\Omega }_G`$. In summary, we have found a one-parameter (energy expectation value) dependent Gaussian state for a quartic anharmonic oscillator. 
This Gaussian state is not in general a squeezed vacuum of the vacuum state through Bogoliubov transformation. This fact is in strong contrast with the harmonic oscillator case where any Gaussian state is always a squeezed state of the true vacuum. Only for the weak coupling constant, this Gaussian state is the squeezed state of the vacuum state. However, for the strong coupling constant, one can not obtain the Gaussian state through a linear Bogoliubov transformation of the vacuum state. The one-parameter Gaussian state with an energy much higher than the vacuum energy represents a different kind of condensation of bosonic particles in a nonlinear way. Though not shown explicitly in this paper, the operator formalism for the nonlinear transformation between the vacuum state and the other Gaussian states would be interesting and and will be addressed in a future publication. ###### Acknowledgements. We would like to thank Dr. Dongsu Bak, Prof. Sung Ku Kim, Prof. Kwang-Sup Soh, and Prof. Jae Hyung Yee for many useful discussions and comments. This work was supported by the Non-Directed Research Fund, Korea Research Foundation, 1997.
no-problem/9902/cond-mat9902351.html
ar5iv
text
# Theory of the Josephson effect in superconductor / one-dimensional electron gas / superconductor junction \[ ## Abstract We present a theory for the Josephson effect in an unconventional superconductor / one-dimensional electron gas / unconventional superconductor ($`s/o/s`$) junction, where the Josephson current is carried by components injected perpendicular to the interface. When superconductors on both sides have triplet symmetries, the Josephson current is enhanced at low temperature due to the zero-energy states formed near the interface. Measuring Josephson current in this $`s/o/s`$ junction, we can identify parity of the superconductor. preprint: s/o/s, Feb. 1999 \] Nowadays, novel interference effects of the quasiparticle tunneling in unconventional superconductor junctions, where pair potentials change sign on the Fermi surface, have been paid much attention. One of the remarkable features is the formation of the zero-energy states (ZES) localized near surfaces of unconventional superconductors . The ZES are detectable by tunneling spectroscopy as conductance peaks. Experimental observations of the ZES on surfaces of high-$`T_c`$ superconductors have been reported in several papers. Motivated by these works, general formulas for the Josephson current in (even parity) unconventional superconductors were presented by taking account of the ZES. Calculated results show several anomalous properties including the strong enhancement of the Josephson current at low-temperature under the influence of the ZES formation. Recently, Maeno, $`et`$ $`al`$. discovered superconductivity in Sr<sub>2</sub>RuO<sub>4</sub>, where symmetry of the pair potential is believed to be triplet.. In (odd parity) triplet superconductor junctions, it is also expected that the Josephson current is enhanced by the formation of the ZES similarly to the even parity cases. Since the ZES formation is a universal phenomenon for any pair potential with the sign change on the Fermi surface irrespective of parity of the pair potential, it is not so easy to determine the parity of the unconventional superconductor using usual Josephson junctions. In order to distinguish odd parity superconductors from even parity ones, we propose a new method using a superconductor / one dimensional electron gas (1DEG) /superconductor ($`s/o/s`$) junction. Anomalous behaviors in the Josephson effect are expected only for odd parity superconductor in this junction configuration. This is because direction of quasiparticle injection, which is a decisive factor for the formation of the ZES, is restricted to be normal to the interface. In this configuration, the appearance of the ZES is governed by the parity of the superconductor, as precisely discussed below. Thus $`s/o/s`$ junction provides a simple strategy to determine the parity of the superconductor. Recent rapid progress in the technology of superconductor / semiconductor hybrid structure makes it possible to fabricate and to study $`s/o/s`$ junctions. Hence the way is promising enough. Several theories have already been presented about the effect of interaction in 1DEG on the Josephson effect using superconductor / Luttinger liquid (LL) / superconductor ($`s/LL/s`$) junctions. In these works, however, the superconductor is assumed to be BCS-type $`s`$-wave and cases for unconventional superconductors are not clarified yet. In this paper, a formula of the Josephson current is presented for $`s/o/s`$ junctions assuming that the 1DEG is non-interacting. 
The Josephson current is shown to be sensitive to the parity of the superconductor. We further study the effect of interaction for the 1DEG using the Tomonaga-Luttinger (TL) model. A Josephson-current formula for general $`s/o/s`$ junctions with normal boundary reflections is obtained by generalizing the method by Maslov et al., which again shows sensitivity of the current to the parity of the superconductor. Let us consider a semi-infinite superconductor with a flat interface at $`x=0`$ as shown in Fig.1. The effective potentials for injected and reflected quasiparticles with spin index $`\sigma `$ are given by $`\mathrm{\Delta }_{L\sigma }(\theta )`$ and $`\mathrm{\Delta }_{L\sigma }(\pi \theta )`$, respectively. In usual Josephson junctions, ZES at a surface are formed if a condition $`\mathrm{\Delta }_{L\sigma }(\theta )\mathrm{\Delta }_{L\sigma }(\pi \theta )<0`$ is satisfied . On the other hand, as we stated above, the most remarkable difference in $`s/o/s`$ junctions from usual Josephson junctions is that only the components of the current which flow perpendicular to the interface ($`\theta `$=0) contribute to the Josephson current. For singlet superconductors, since $`\mathrm{\Delta }_{L\sigma }(0)=\mathrm{\Delta }_{L\sigma }(\pi )`$, the condition for the ZES is never satisfied. On the other hand, for triplet superconductors, since $`\mathrm{\Delta }_{L\sigma }(0)=\mathrm{\Delta }_{L\sigma }(\pi )`$ is satisfied, ZES are always expected . This is the reason why we propose a $`s/o/s`$ junction to distinguish the parity of the superconductor. To perform the simplest model calculation, we consider $`s/o/s`$ junctions with perfectly flat interfaces in the clean limit. In this model, the interface is perpendicular to the $`x`$-axis and is located at $`x=0`$ and $`x=d`$ where $`d`$ is the length of the 1DEG region. In real junctions, insulator is inevitably located between the superconductors and the 1DEG. We model the insulator by a delta functions, namely $`H\delta (x)`$ and $`H\delta (xd)`$, where $`H`$ denotes strength of the barrier. We assume that the superconductors are two-dimensional. The Fermi wave number $`k_F`$ and the effective mass $`m`$ are assumed to be equal in the left- and right superconductors. In the 1DEG, the magnitude of the Fermi wave number and the effective mass are also chosen as $`k_F`$ and $`m`$, respectively. In the following, we will calculate the Josephson current in the $`s/o/s`$ junction shown in Fig.2. For simplicity, the Cooper pair is assumed to be formed by two electrons with antiparallel spins both for the singlet pairing and for the triplet pairing ($`S=1`$, $`S_z=0`$). We first consider the case with non-interacting 1DEG. In the framework of the quasi-classical approximation, the effective pair potentials for the quasiparticles depend on their directions of their motions. We assume an electron-like quasiparticle (ELQ) is injected from the left. The effective pair potentials for the injected ELQ \[a reflected hole-like quasiparticle (HLQ)\], the reflected ELQ, the transmitted ELQ and the transmitted HLQ are given by $`\mathrm{\Delta }_{L\sigma }(0)\mathrm{exp}(i\phi _L)`$, $`\mathrm{\Delta }_{L\sigma }(\pi )\mathrm{exp}(i\phi _L)`$, $`\mathrm{\Delta }_{R\sigma }(0)\mathrm{exp}(i\phi _R)`$, and $`\mathrm{\Delta }_{R\sigma }(\pi )\mathrm{exp}(i\phi _R)`$, respectively (see Fig. 2). The quantities $`\phi _L`$ and $`\phi _R`$ denote the macroscopic phases, which are measured along the $`x`$-axis, of the left and right superconductors, respectively. 
The Josephson current through the junction is expressed in terms of the coefficients of the Andreev reflection \[$`a_\sigma (\phi )`$\] as $$R_NI(\phi )=\frac{\pi k_BT}{e\sigma _T}\underset{\omega _n,\sigma }{}\frac{\mathrm{\Delta }_{L\sigma }(0)}{2\mathrm{\Omega }_n}[a_\sigma (\phi )a_\sigma (\phi )]$$ (1) with $`\mathrm{\Omega }_n=\sqrt{\omega _n^2+\mathrm{\Delta }_{L\sigma }(0)^2}`$, $`\phi =\phi _L\phi _R`$, and $`\omega _n=2\pi k_BT(n+1/2)`$ with an integer, $`n`$ . Conductance of the junction in the normal state $`\sigma _T`$ is given by $`\sigma _T`$ $`=`$ $`{\displaystyle \frac{\sigma _N^2}{\{1+(1\sigma _N)^2+F(1\sigma _N)\}}}`$ (2) $`F`$ $`=`$ $`[2(2\sigma _N1)\mathrm{cos}(2k_Fd)+4\sqrt{\sigma _N(1\sigma _N)}\mathrm{sin}(2k_Fd)]`$ (3) with $`\sigma _N=4/(4+Z^2)`$ and $`Z=2mH/\mathrm{}^2`$. Coefficients of the Andreev reflection are obtained by solving the following equations, $`\mathrm{\Psi }(x=0_{})=\mathrm{\Psi }(x=0_+),\mathrm{\Psi }(x=d_{})=\mathrm{\Psi }(x=d_+),`$ $`{\displaystyle \frac{d}{dx}}\mathrm{\Psi }(x)_{x=0_+}{\displaystyle \frac{d}{dx}}\mathrm{\Psi }(x)_{x=0_{}}={\displaystyle \frac{2mH}{\mathrm{}^2}}\mathrm{\Psi }(x)_{x=0_+}`$ $$\frac{d}{dx}\mathrm{\Psi }(x)_{x=d_+}\frac{d}{dx}\mathrm{\Psi }(x)_{x=d_{}}=\frac{2mH}{\mathrm{}^2}\mathrm{\Psi }(x)_{x=d_+},$$ (4) where $`\mathrm{\Psi }(x)`$ denotes the two component wave functions. In the following, we will consider two cases; (1) singlet superconductor / 1DEG / singlet superconductor ($`ss/o/ss`$) junction \[$`\mathrm{\Delta }_{L(R)\sigma }(0)=\mathrm{\Delta }_{L(R)\sigma }(\pi )=s\mathrm{\Delta }_0`$\], (2) triplet superconductor / 1DEG / triplet superconductor ($`ts/o/ts`$) junction \[$`\mathrm{\Delta }_{L(R)\sigma }(0)=\mathrm{\Delta }_0`$, $`\mathrm{\Delta }_{L(R)\sigma }(\pi )=\mathrm{\Delta }_0`$\], with $`s=1`$ ($`s=1`$) for up (down) spin electron injection. The Josephson current is expressed as (1)$`ss/o/ss`$ junction case; $$R_NI(\phi )=\frac{\pi k_BT}{e\sigma _T}\underset{\omega _n}{}\frac{4\gamma \eta ^2\sigma _N^2\mathrm{sin}\phi }{\sigma _N\mathrm{\Lambda }+(1\sigma _N)(1+\eta ^2)^2t}$$ (5) (2)$`ts/o/ts`$ junction case; $$R_NI(\phi )=\frac{\pi k_BT}{e\sigma _T}\underset{\omega _n}{}\frac{4\gamma \eta ^2\sigma _N^2\mathrm{sin}\phi }{\sigma _N\mathrm{\Lambda }+(1\sigma _N)(1\eta ^2)^2t}$$ (6) where $`\mathrm{\Lambda }=(1+\gamma ^2\eta ^4+2\gamma \eta ^2\mathrm{cos}\phi )(1\sigma _N)(\gamma ^2+\eta ^4+2\gamma \eta ^2\mathrm{cos}\phi ),`$ $`\eta ={\displaystyle \frac{\mathrm{\Delta }_0}{\mathrm{\Omega }_n+\omega _n}},\gamma =\mathrm{exp}[2\omega _nd/\mathrm{}v_F],`$ $$t=1+\gamma ^2\gamma (t_s\delta +\frac{1}{t_s\delta }),t_s=\frac{2iZ}{2+iZ},\delta =\mathrm{exp}(2ik_Fd).$$ (7) Temperature dependence of the maximum Josephson current $`I_C(T)`$ of $`ss/o/ss`$ and $`ts/o/ts`$ junctions is plotted in Fig. 3. With increasing $`Z`$, magnitude of $`R_NI_C(T)`$ for $`ss/I/ss`$ junction is reduced. On the other hand, for $`ts/o/ts`$ junction, it is enhanced oppositely with increasing $`Z`$. The enhancement of the Josephson current for larger $`Z`$ is due to the resonating current through the ZES formed near the interface. In real junctions, an insulating barrier inevitably exists near the interface. Such a situation corresponds to the larger magnitude of $`Z`$ in our calculations. The present result suggests that we can distinguish the parity of the superconductor, whether $`I_C(T)`$ shows an upturn curvature (triplet case) or not (singlet case). Now, we consider an effect of the interaction in the 1DEG. 
We derive the Josephson current formula for unconventional superconductors with arbitrary barrier heights by taking account of both the Andreev reflection and normal reflection at the interfaces. The effect of interaction in 1DEG is introduced following Maslov et al. using the TL-model. Basis for 1DEG is spanned by bound states formed in the superconducting gap. For simplicity, we consider here only the low temperature limit and assume that relevant excitations determining the Josephson current have energy, $`\epsilon \mathrm{\Delta }_0`$. Within these conditions, a difference between $`ss/LL/ss`$ and $`ts/LL/ts`$ junctions appears only in the following generalized boundary conditions for the fermion field operators, $`\psi _{\pm ,s}(x+2d)=\lambda \psi _{\pm ,s}(x)`$ (8) $`\psi _{+,s}(x)=s\psi _{,s}^{}(x)`$ (9) Here $`\psi _{\pm ,s}`$ represents right-going (left-going) fermion field with spin $`s`$. Extra phase factor $`\lambda `$, which is a function of $`k_F`$, $`d`$, $`\sigma _N`$, and $`\phi `$, coincide with the factor in Eqs. (16a), (16b) of Ref. , when a $`ss/LL/ss`$ junction with only the Andreev refrection at the boundary is considered. Following the bosonization technique for the open boundary conditions, $`\psi _{\pm ,s}`$ can be represented by chiral boson fields. We see that only zero modes are affected by the parity of the superconductor through the boundary condition, (7), and $`\chi `$ in the equation (28) of Ref. is replaced by a complicated function of $`\phi `$ for general situations considered here. Explicit formulas for $`\psi `$ as well as $`\lambda `$ will be presented elsewhere. The current is obtained by $`I(\phi )=\frac{2ek_BT}{\mathrm{}}\frac{}{\phi }\mathrm{log}Z(\phi )`$ where $`Z(\phi )`$ is the partition function. As Maslov et al. have claimed, the Josephson current in the present limit is determined by the zero mode (the topological excitations) and non-zero modes do not contribute. Our general formula of the Josephson current for interacting 1DEG systems shows essentially the same feature as non-interacting cases in that $`I(\phi )`$ is enhanced for $`ts/LL/ts`$ compared with $`ss/LL/ss`$. In this paper, we propose a new method to identify the parity of a superconductor using a $`s/o/s`$ junction. We derive a formula for the Josephson current assuming that the 1DEG is non-interacting. Anomalous behavior in the Josephson effect is expected only in triplet superconductor with odd parity. This is because the direction of quasiparticle injection, which is a decisive factor for the formation of ZES, is selected to be normal to the interface . For the singlet superconductor with even parity, the ZES never appear in the present geometry as precisely discussed. In the present calculation, the suppression of the pair potential near the interface is neglected. Even if we take into account of this effect, qualitative features in the upturn curvature due to the ZES at low temeratures will not be changed, then the present results are still valid. We have further studied the effect of interaction for the 1DEG using the TL model. It is shown that the essential feature is determined by the parity of the superconductor and the influence of the interaction effect is not so important within the TL model at the low temperature limit. We will report detailed properties of general $`s/LL/s`$ junctions in a forthcoming paper using a bosonization technique with further consideration of the inter-electron interaction . 
This work has been supported by the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Corporation (JST) and a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture.
no-problem/9902/astro-ph9902165.html
ar5iv
text
# BOK GLOBULES IN THE LMCBased on observations with the NASA/ESA Hubble Space Telescope obtained at Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, under NASA contract NAS5-26555. ## 1 Introduction Fifty years ago, Bok & Reilly (1947) described the dark clouds now known as Bok globules, and proposed that these clouds were likely undergoing gravitational collapse to form new stars. It was many years until that speculation was confirmed, however. Since then, a number of recent infrared and radio studies have demonstrated that Bok globules show a variety of evidence for new stars: embedded warm IR sources, often showing evidence for local dust heating, nebulosity, or the signatures of protostars (Keene et al. 1983; Yun & Clemens 1990, 1995), Herbig-Haro objects (Reipurth, Heathcote, & Vrba 1992), and molecular gas outflows (Yun & Clemens 1992). At the same time, detailed millimeter-wave emission line studies of some of these objects have provided some of the best direct evidence yet for infall of material onto an accreting protostar (e.g., Zhou et al. 1993). Isolated Bok globules have thus turned out to be important laboratories for the study of protostellar collapse. Nevertheless, a few important uncertainties remain regarding the properties of the Galactic globules, mainly because distances to individual Bok globules are difficult to determine, although new developments promise to improve this situation in the near future (Peterson & Clemens 1998). In this short article, we present evidence for small, isolated dark clouds in the Large Magellanic Cloud, discovered serendipitously through Hubble Space Telescope imaging, which are likely counterparts to the Galactic Bok globules. Although the amount of information we have is limited, we describe some derived properties of the LMC globules. Because these objects are all at a well-constrained distance, Bok globules in the Magellanic Clouds provide an opportunity to determine the ensemble properties of globules and their embedded protostars, as well as the connection between the globules and the local environment. ## 2 Observations The images discussed here were obtained as part of a parallel imaging program of diffuse ionized gas in the Magellanic Clouds; a complete description of the program goals and the data obtained is given in Walsh et al. (1998, in preparation). All of the images were obtained with the WFPC-2 camera on HST. Of the four fields in which globules were found, all four were imaged with the F656N filter (H$`\alpha `$), three were imaged in F675W (R band equivalent), three were observed in F547M (emission-free V band), and two in F502N (\[O III\]). For this paper we discuss only the F656N images, which have the highest signal/noise. The broad-band images are short exposures intended only to identify continuum sources in the F656N band, while the \[O III\] images are generally less well-exposed than the H$`\alpha `$ images. Total exposure times for the H$`\alpha `$ images ranged from 1800 to 5000 seconds. Table 1 gives a journal of the H$`\alpha `$ images and the positions of the fields. For a distance of 50 kpc to the LMC (Panagia et al. 1991), we can resolve structures as small as 0.05 pc (0$`\mathrm{}`$.2). All of the images were pipeline-processed with the most appropriate calibration files. 
We mosaiced and registered the individual exposures for each separate field when necessary, then combined them by averaging with the gcombine task in STSDAS, using the ‘crreject’ option to remove radiation hits. We generally had only two or three H$`\alpha `$ exposures per field, so CR rejection was not perfect with the long exposure times involved. We did not apply a correction for charge transfer efficiency effects; the dust globules are very small-scale objects, and in general H$`\alpha `$ emission was present at an average signal level of at least a few DN all across the images, so CTE effects are not expected to exceed a few percent across each chip. ## 3 Identification and Properties of Globules ### 3.1 Criteria for Identification Clemens & Barvainis (1988; hereafter CB88) presented a set of relatively specific criteria for identifying a dark cloud as a Bok globule: (1) small size ($`<`$ 10 for Milky Way globules); (2) relative isolation, that is, not part of larger dust complexes; (3) no constraint on ellipticity. In compiling their catalog of small molecular clouds, they rejected dense cores of larger clouds, as well as stringy, filamentary dust clouds (which were considered unlikely to be hosts for collapsing protostars). We employ the CB88 identification criteria to determine what dust clouds in our images qualify as candidates for Bok globules. Some comments on the identification process are required, however. First, there is an obvious scale difference between the Galactic objects and LMC candidates. CB88 presented two arguments favoring an average distance of about 600 pc for the Galactic Bok globules. At a distance of 50 kpc, LMC objects could be expected to be smaller by about a factor of 80. The globules in the CB88 catalog range in angular size from 1 to 10, with a mean size of 4, but with a strong peak at 1-2. Direct scaling to the LMC distance thus implies angular sizes less than 7-10$`\mathrm{}`$, with most of the objects around 1$`\mathrm{}`$ in size. Second is that ‘isolation’ is not always obvious in our LMC images: it becomes clear from examination that we are looking at multiple overlapping structures, and more than one dust structure may be found along a given line of sight. Related to this is the possibility that the LMC H$`\alpha `$ emission may have components both foreground and background to a dust globule, and in many cases a globule may be largely washed out by foreground emission line gas. Thus, although we see many structures that may in fact be globules, for this paper we point out only the most obvious structures. Deep continuum imaging and future millimeter-wave interferometer studies may find many more Magellanic Cloud globules. Within these guidelines, we identify five small dust clouds which appear to be excellent candidates to be Bok globules. We list these candidates in Table 2 and show an atlas of them in Figure 1. The coordinates for the rough center of each globule listed in the table were obtained using the ’metric’ task in STSDAS. The sizes that we list are approximately the FWHM along the major and minor axes; these are only approximate because the globules do not always show symmetric profiles and can be affected by foreground H$`\alpha `$ emission. ### 3.2 Optical Depths and Extinction Estimates For these five globules, we provide estimates in Table 2 for the peak absorption optical depth in the F656N band, determined from the relation $$\frac{I}{I_0}=e^\tau .$$ (1) This estimate is affected by a number of sources of contamination. 
Our H$`\alpha `$ images in general show emission at some level all across each field, and we are unable to identify blank regions to subtract the background, which arises from several sources. First, there is the contribution from the foreground Galactic and zodiacal light, as well as possible contamination from geocoronal H$`\alpha `$. To estimate the contribution from these sources, we examined deep FOS spectra of a blank sky field at 2<sup>h</sup> 59<sup>m</sup> 47<sup>s</sup>, $``$20 13 06$`\mathrm{}`$, taken with the G570H grating (program GO-5968; PI Freedman). The total 8800 second exposure yielded a surface brightness within the F656N bandpass of about 7$`\times `$10<sup>-17</sup> ergs cm<sup>-2</sup> s<sup>-1</sup> arcsec<sup>-2</sup>. This converts to a count rate of about 1.1$`\times `$10<sup>-3</sup> counts per second per pixel in the WFPC-2 images. In comparison, the WFPC-2 Exposure Time Calculator predicts a background count rate of 9$`\times `$10<sup>-4</sup> counts per second per pixel at the blank sky position, and 7$`\times `$10<sup>-4</sup> at our LMC positions. For our purposes, we use a background count rate of 10<sup>-3</sup> counts per second per pixel, with an uncertainty of $`\pm `$50%. This contributed less than 10% to the observed H$`\alpha `$ signal. We scaled the countrates by the effective exposure times of our averaged images and subtracted the background before computing the optical depth. A second source of contamination is foreground H$`\alpha `$ within the LMC. It is apparent in some cases that there is H$`\alpha `$ emission in front of the dust globules, which will cause us to underestimate the optical depth. We make no correction for such emission but note that it is likely to be present, and we treat the optical depths as lower limits only. Our lower limits for $`\tau `$(6563) correspond to lower limits on the visual extinction $`A_V`$ of 1.5-3 magnitudes, based on the Galactic extinction curve for $`R_V`$ = 3.1 (Cardelli, Clayton, & Mathis 1989); at the wavelengths considered, the Galactic extinction curve is sufficiently similar to the LMC curve, especially for small values of extinction. On the other hand, the nature of dust grains in metal-poor environments such as the LMC is not well known. This is an area greatly needing future investigation. As expected, our derived values for $`A_V`$ are significantly lower than those obtained for Galactic globules from star counts. It is likely that diffuse foreground H$`\alpha `$ emission, which is seen all across our F656N images, contributes significant emission in the cores of the globules. A deep broad-band continuum study would be better suited for estimating accurate extinctions within these clouds. ### 3.3 Estimates of Globule Masses We can estimate roughly the masses of the LMC globules by extrapolating the properties of Galactic globules. Clemens, Yun, & Heyer (1991) inferred an average H<sub>2</sub> number density $`<`$N(H<sub>2</sub>)$`>`$ $``$ 10<sup>3</sup> cm<sup>-3</sup> from millimeter studies of CO in Galactic Bok globules. If we approximate the LMC clouds as spheres with a radius $`r`$ = $`(ab)^{0.5}`$, (where $`a`$ and $`b`$ are the semi-major and semi-minor axes as obtained from the sizes listed in Table 2), and if we assume the LMC clouds have the same average H<sub>2</sub> density as Galactic globules, we obtain the cloud masses listed in Table 2. The masses range between 0.3 and 80 solar masses, with four of the five clouds under one solar mass. In comparison, Clemens et al. 
(1991) found a range of 0.6 to 200 solar masses for the Galactic sample, with an average mass of about 11 solar masses. It is possible we may underestimate the LMC globule masses somewhat if foreground H$`\alpha `$ emission causes us to underestimate the angular sizes of the clouds. For example, underestimating the sizes of the smallest globules by only 25% (about 2 WFC pixels) would lead us to underestimate their masses by a factor of two. Again, deep broad-band imaging could reduce the uncertainty. The discrepancy between the typical masses for our LMC globules and the Milky Way globules is curious, and may bring the assumed density into question. If we use our estimated extinctions and the diffuse gas N(H)/E(B−V) ratio for the LMC (Fitzpatrick 1985, Koornneef 1982), we obtain densities and masses about an order of magnitude larger than those listed in Table 2. On the other hand, it is likely that our estimated extinctions are severe underestimates, and it is not clear that the diffuse gas dust/gas ratio should be applicable to dark clouds. Note that the four small clouds in our LMC sample would be among the smallest objects in the CB88 sample and thus might be expected to have small masses as well. We also note that the recent distance estimate for the Milky Way globule CB24 (Peterson & Clemens 1998) results in a mass estimate only one-fifth of that based on the estimated average distance of 600 pc. It may be that many Galactic globules have quite low masses as well. The mass function of LMC globules will become better defined as more globules are detected in high resolution images. ### 3.4 Space Density of LMC Globules We can estimate crudely the number of small globules that might exist in the LMC from the surface density of detected globules. We detected five small globules in four WFPC2 fields, which cover an angular area of 23 square arcminutes. At a distance of 50 kpc, this translates to a globule surface density of approximately 1000 per kpc<sup>2</sup> (not corrected for the LMC’s inclination). If we assume that the small globules are distributed over the full 50 square degrees of the main body of the LMC, we would predict a total of 4$`\times `$10<sup>4</sup> LMC globules. For comparison, Clemens et al. (1991) estimated a total of 3.2$`\times `$10<sup>5</sup> small globules in the Milky Way, corresponding to a surface density of about 400 globules per kpc<sup>2</sup> within 15 kpc of the galactic center. Thus, the simplest calculation leads us to estimate that the LMC is forming globules at about twice the rate per unit area as the Galaxy. By coincidence, this is also the ratio of the estimated star formation rates per unit area for the two galaxies (R. C. Kennicutt, private communication). One could easily question some of the assumptions behind this simple estimate. First, we have already noted that our sample is biased toward the most obvious globules. We could be missing many objects because of low contrast due to the strong foreground emission in this area. On the other hand, there is no evidence that globules should be distributed homogeneously across the LMC main body. It may be that such structures are largely located in the region of dark clouds/molecular gas and strong star formation in the vicinity of 30 Doradus, where our fields are located. Hodge (1988) cataloged 146 dark clouds in the LMC based on CTIO 4m prime focus photographic plates. However, 115 of these clouds were located in two fields at the east end of the LMC bar, near 30 Dor.
If we assume that small globules are preferentially located in these fields (covering a little over one square degree), we would predict that only about 1000 globules would be found in the LMC. Therefore, although the coincidence between the ratios of globule surface densities and star formation rates is intriguing, the numbers are too uncertain at present to interpret reliably. As more fields are imaged, the estimate of the number of globules in the LMC will improve greatly. ## 4 Other Dust Structures Figures 2 and 3 show a number of other structures which are analogous to structures observed in Galactic dust clouds. Figure 2 shows an “elephant trunk” structure in the same field as globule BGJ053933-691338. This feature is about 12″ (about 3 pc) long, but only about 0.1 pc wide in its narrowest parts. Such coherent dust structures are thought to be clouds stretched out by the interaction between the dark clouds and a less dense flow of neutral or ionized gas past the cloud (e.g., Schneps, Ho, & Barrett 1980), although magnetic fields may play an important role as well (Carlqvist, Kristen, & Gahm 1998). Figure 3 shows a larger dust cloud, in the same field as BGJ053615-691822, which shows evidence for several dense clumps, similar in size to the isolated globules. This may be a molecular cloud with multiple cores that will eventually become a cluster of stars. It is interesting that the densest clumps appear to be on a side of the dark cloud that is sharply defined by a photoionized edge. This may be a case in which gravitational collapse is organized and possibly triggered by the mechanical and radiation energy from the hot massive stars that are responsible for the ionized gas. A survey for IR and millimeter sources in this region could prove interesting. It is also interesting to note that to the lower left of this cloud in Figure 3 is a star which is immediately adjacent to a small, arc-shaped dust cloud. This star could be a very young star that has just broken out of its dusty cocoon. ## 5 Discussion It is very likely that the small globules described in this paper are the smallest dust structures ever detected in any galaxy other than the Milky Way. The smallest dust structures detected by Hodge (1988) in the LMC were about 2 pc across, about 10″ at the distance of the LMC. The high-resolution imaging capability of $`HST`$ has allowed us to push this scale down by another factor of ten. At this spatial scale the complexity of the structure of both the emission and the dust absorption in the LMC becomes very evident. Hodge (1988) noted the difficulty of detecting dark clouds against a faint and variable background. This is also true of our WFPC2 images, to which we add the confusion caused by overlapping structures and the low contrast due to foreground emission. Hodge also noted a curious lack of correspondence between CO emission (Cohen et al. 1988) and dark clouds in some parts of the LMC. This may partly reflect the low resolution of the CO surveys, but it also suggests that both visual surveys for dark clouds and CO emission studies are necessary to characterize the population of dark clouds in galaxies. In many ways the parallel imaging shows that the structure of the ISM and dust clouds in the LMC is very similar to that observed in the Galaxy.
We see in our LMC fields many of the same dust structures that have been identified in the Galaxy, and the tentative inferred properties of those structures appear to be similar to those of their Galactic counterparts, although the estimated masses appear to be a bit low. Future studies of globules in the Magellanic Clouds are of importance for understanding star formation in environments very different from the Milky Way. The Clouds are at comparatively well-determined distances, and so the ensemble properties of LMC and SMC globules in principle could be well constrained with high-resolution infrared and millimeter studies. Potentially interesting future observations include high resolution K-band imaging surveys for protostars within the globules. The near-IR sources discovered by Yun & Clemens (1994) in Galactic Bok globules have K-band magnitudes between 6 and 13. Assuming an average distance of 600 pc for the Galactic objects, the same near-IR sources would have K = 16-23 in the Magellanic Clouds. Deep IR imaging of LMC and SMC globules, especially with future 8-m class telescopes or HST, will likely detect embedded protostellar sources if they exist. Such observations will begin to constrain the properties of low-mass protostars in the Magellanic Clouds. High resolution (∼ 1″ or better) molecular line observations would also be of great interest for density and mass determinations, from which the mass function of globules could be derived. This would help provide a refined estimate of the star formation efficiency in globules, and may facilitate comparisons between isolated globules and the embedded dark cores in larger clouds. Also of interest is whether the lower dust-to-gas ratios in the Magellanic Clouds systematically affect the properties of globules compared to their Galactic counterparts. The lower dust-to-gas ratio leads to greater penetration of UV radiation into the dark clouds, affecting the cloud structure. This is one suggested cause for the lower I(CO)/N(H<sub>2</sub>) ratios observed in the Magellanic Clouds and other dwarf irregulars (Poglitsch et al. 1995; Israel et al. 1996; Madden et al. 1997), and may lead to evaporation of small dark clouds. Measurements of velocity widths should show whether the LMC globules are virialized and stable or expanding and evaporating. Finally, the largely unobscured view of the structure of the Magellanic Clouds allows one to examine the isolated globules in the context of large-scale star formation and the relation to other dark cloud structures. We thank Tony Keyes at STScI for assistance in locating deep blank-field FOS spectra to estimate the H$`\alpha `$ foreground. DRG thanks Neal Evans for helpful discussions regarding the potential interest of Bok globules in the Magellanic Clouds. We are grateful to the anonymous referee for several excellent suggestions which enhanced the content of this paper. Support for the U.S. investigators on the parallel imaging program was provided by NASA and STScI through grants GO-3589-91A and GO-4497-92A. DRG acknowledges support from NASA-LTSARP grant NAG5-6416, as well as the sponsorship of the European Southern Observatory in April/May 1998, where much of this paper was written. YHC and DRG also acknowledge the hospitality of the Aspen Center for Physics in June 1998, which provided a very pleasant forum for discussions of the results presented here.
# Universal scaling, beta function, and metal-insulator transitions ## Abstract We demonstrate a universal scaling form of the longitudinal resistance in the quantum critical region of metal-insulator transitions, based on numerical results for three-dimensional Anderson transitions (with and without magnetic field) and the two-dimensional quantum Hall plateau to insulator transition, as well as on experimental data for the recently discovered two-dimensional metal-insulator transition. The associated reflection symmetry and a peculiar logarithmic form of the beta function exist over a wide range in which the resistance can change by more than one order of magnitude. Interesting implications for the two-dimensional metal-insulator transition are discussed. The scaling theory predicts that noninteracting electrons are always localized in two-dimensional (2D) disordered systems. Recently, a new scaling argument was put forward in order to accommodate the newly-found 2D metal-insulator transition (MIT) in zero magnetic field ($`B=0`$), where the Coulomb interaction presumably becomes very important. Although the microscopic mechanism remains unclear, without violating any general scaling principles the authors assumed the following leading behavior of the “beta function” $`\beta (g)=d[\mathrm{ln}(g)]/d[\mathrm{ln}(L)]`$ for large conductance $`g`$ at a finite length scale $`L`$: $$\beta (g)=(d-2)+A/g^\alpha +\mathrm{\cdots }$$ (1) in which $`A`$ becomes positive in the aforementioned $`B=0`$ MIT systems, leading to a metallic phase ($`\beta >0`$) at dimensionality $`d=2`$. Since $`\beta (g)<0`$ at small $`g`$ (localized region), the beta function is then no longer a monotonic function and has to change sign at some finite $`g=g_c`$, which corresponds to a quantum critical point. Experimental measurements have indicated an exponential form for the conductance with a peculiar reflection symmetry relating the conductance and the resistance on both sides of the MIT, which implies the following logarithmic form of the beta function in the quantum critical region (QCR): $$\beta (g)=\frac{1}{\nu }\mathrm{ln}(g/g_c).$$ (2) In particular, $`\nu `$ here is the correlation length exponent, and (2) holds over a wide range ($`1/4<g/g_c<4`$) far beyond a simple small-variable expansion around $`g=g_c`$. The logarithmic form of the beta function (2) looks quite remarkable. Recall that in the strong localization limit one finds, exactly, $`\beta (g)=\mathrm{ln}(g)-\mathrm{constant}`$. But the inverse exponent $`1/\nu `$ does not show up in front of $`\mathrm{ln}(g)`$ as in (2), and the corresponding behavior of $`g`$ should be quite different. So far a good theoretical understanding of (2) from a microscopic model is still lacking. Nevertheless, one may ask an equally important question: whether (2) is a property of the beta function unique to the $`B=0`$ 2D MIT system, or whether it actually represents a generic scaling behavior of quantum phase transitions, including other MIT systems with different symmetry and dimensionality. Unfortunately, so far there is no direct answer to this question, since it is not known how the scaling function of the conductance behaves, nor what form the beta function takes, in the QCR of various MIT systems, although much effort has been focused on the critical conductance and exponent within each universality class.
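Before turning to the numerical evidence, it may help to note that the exponential resistance form discussed below follows directly from integrating the flow equation (2). A minimal sketch, with ν and g<sub>c</sub> set to illustrative values rather than fitted ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

nu, g_c = 1.6, 0.37     # illustrative values, of the order of the 3D results below

def beta(lnL, lng):
    # Eq. (2): d ln(g) / d ln(L) = (1/nu) * ln(g / g_c)
    return (lng - np.log(g_c)) / nu

lnL = np.linspace(0.0, 3.0, 31)
for g0 in (1.2 * g_c, g_c / 1.2):                 # metallic / insulating start
    sol = solve_ivp(beta, (lnL[0], lnL[-1]), [np.log(g0)],
                    t_eval=lnL, rtol=1e-10, atol=1e-12)
    u = sol.y[0] - np.log(g_c)
    # closed form: ln(g/g_c) = ln(g0/g_c) * (L/L0)**(1/nu)
    u_exact = np.log(g0 / g_c) * np.exp((lnL - lnL[0]) / nu)
    assert np.allclose(u, u_exact, atol=1e-6)
```

The solution ln(g/g<sub>c</sub>) ∝ (L/L<sub>0</sub>)<sup>1/ν</sup> is precisely the exponential form demonstrated below, and it makes the reflection symmetry g(s)g(−s) = g<sub>c</sub>² manifest.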
In this paper, we present direct numerical evidence showing that the beta function (2) in fact holds for the following systems as well: three-dimensional (3D) Anderson transitions with and without magnetic field (representing the unitary and orthogonal classes, respectively), and 2D electrons in a strong magnetic field, i.e. the quantum Hall effect (QHE) system. Strikingly, $`\nu \beta (g)=\mathrm{ln}(g/g_c)`$ is found to be a universal function in the QCR, where $`g/g_c`$ may change by up to two orders of magnitude. Correspondingly, the resistance is of an exponential form $`\rho _{xx}\propto e^{-s}`$ with $`s=\pm (c_0L/\xi )^{1/\nu }`$, which also implies a reflection symmetry in the same region (here $`\xi `$ is the correlation length, and $`c_0`$ is a non-universal dimensionless constant $`O(1)`$). Thus (2) may well represent a “super” universality property associated with general quantum phase transitions. Furthermore, deep into the metallic region, the beta function shows distinct behavior depending on how the resistance $`\rho _{xx}`$ deviates from the exponential form: the 3D MITs and the $`B=0`$ 2D MIT experimental data seem to belong to the same group, where $`d[\mathrm{ln}(1/\rho _{xx})]/ds`$ is a monotonically decreasing function of the scaling variable $`s`$ in the whole scaling region; on the other hand, the QHE system falls into a different group, where $`d[\mathrm{ln}(1/\rho _{xx})]/ds`$ becomes a monotonically increasing function of $`s`$. Interestingly, the experimental data of the superconductor-insulator transition also fall into the second group, in accord with the speculation that the MIT in the QHE and the superconductor-insulator transition may belong to a similar universality class. We consider disordered electron systems described by the Anderson Hamiltonian: $`H={\displaystyle \underset{<ij>}{\sum }}e^{ia_{ij}}c_i^+c_j+H.c.+{\displaystyle \underset{i}{\sum }}w_ic_i^+c_i,`$ where the hopping integral is taken as the unit, and $`c_i^+`$ is a fermionic creation operator, with $`<ij>`$ referring to nearest neighboring sites. A uniform magnetic flux per plaquette (along the $`z`$ direction) can be imposed by requiring $`\varphi =\sum _{\mathrm{\square }}a_{ij}=2\pi /M`$, where the summation runs over the four links around a plaquette in the $`xy`$ plane. $`w_i`$ is a random potential uniformly distributed between ($`-W/2,W/2`$). We first study the 3D electron system without magnetic field ($`a_{ij}=0`$), which belongs to the orthogonal class. The longitudinal conductance $`G_{xx}`$ is calculated using the Landauer formula. By changing the disorder strength $`W`$, a metal-insulator transition is found at a critical disorder strength $`W_c=16.5`$ at the Fermi energy $`E_f=0`$, with a critical conductance $`G_c=0.37`$ (in units of $`e^2/h`$) and correlation exponent $`\nu =1.6`$. All the data at different sample sizes ($`L=8,10,12,14`$ and $`16`$) can then be collapsed onto two branches as a function of $`L/\xi `$, as shown in Fig. 1(a) ($``$ curve). Note that the resistance data are plotted in the figure. (More than $`2000`$ configurations are taken in the average for $`L=16`$, and more for smaller $`L`$’s.) A 3D MIT is similarly obtained in the presence of a strong magnetic field (unitary class). We have chosen two different flux strengths $`\varphi =2\pi /M`$: $`M=5`$ at sample sizes $`L=10`$ and $`15`$; and $`M=4`$ at sample sizes $`L=8,12,`$ and $`16`$, respectively. All the longitudinal resistance data with different $`\varphi `$’s and $`L`$’s again can be scaled onto two branches ($`+`$ curve in Fig. 1(a)).
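The two-branch collapse described above can also be automated by treating ν as a fit parameter that minimizes the scatter of the collapsed data. A minimal sketch on synthetic conductance data (the arrays and noise level are invented for illustration; they are not our Landauer data):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
nu_true, W_c, G_c = 1.43, 18.3, 0.294          # unitary-class values quoted below
W = np.linspace(15.0, 21.5, 14)                # disorder strengths (no point at W_c)
L = np.array([8.0, 10.0, 12.0, 14.0, 16.0])    # sample sizes

# synthetic data obeying ln(G/G_c) = sign(W_c - W) * (L * |W - W_c|**nu)**(1/nu)
s = np.sign(W_c - W)[:, None] * (L[None, :] * np.abs(W - W_c)[:, None] ** nu_true) ** (1.0 / nu_true)
G = G_c * np.exp(s + 0.01 * rng.standard_normal(s.shape))

def spread(nu):
    """Roughness of ln G plotted against x = ln L + nu * ln|W - W_c|, per branch."""
    total = 0.0
    for branch in (W < W_c, W > W_c):
        x = (np.log(L)[None, :] + nu * np.log(np.abs(W[branch] - W_c))[:, None]).ravel()
        y = np.log(G[branch]).ravel()
        o = np.argsort(x)
        total += np.sum(np.diff(y[o], 2) ** 2)  # a good collapse gives a smooth curve
    return total

nu_fit = minimize_scalar(spread, bounds=(1.0, 2.0), method='bounded').x
print(f"fitted nu = {nu_fit:.2f} (input {nu_true})")
```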
At the critical point, $`W_c=18.3`$, $`\nu =1.43`$ and $`G_c=0.294`$ at $`E_f=0`$. Since the universality of the MIT in the unitary class is distinct from that of the orthogonal class, as expected from scaling theory, the two scaling curves shown in Fig. 1(a) are generally different from each other. However, if we re-plot the data in terms of the scaling variable $$s=\pm \left(c_0L/\xi \right)^{1/\nu },$$ (3) where the sign $`+`$ ($`-`$) corresponds to the metallic (insulating) branch, the two curves of longitudinal resistance in the QCR can be precisely scaled together, as shown in Fig. 1(b). Here the dimensionless constant $`c_0`$ has the non-universal values $`2.27`$ and $`1.82`$ for the orthogonal and unitary classes, respectively. As shown in the insert of Fig. 1(b), the resistance in the transition region well follows a simple exponential form $$\rho _{xx}/\rho _c=\mathrm{exp}(-s)$$ (4) over a rather broad region: $`-2<s<2`$, or $`1/8<\rho _{xx}/\rho _c<8`$. Because of such a wide range of $`s`$ (instead of a small parameter expansion), the exponential behavior appears very robust. In the same region, one always finds the so-called reflection symmetry $`\rho _{xx}(s)/\rho _c=\rho _c/\rho _{xx}(-s)`$ between the metallic and insulating branches. Now let us consider a qualitatively different MIT in the QHE system, where a 2D electron gas is subjected to a strong magnetic field. By tuning the Fermi energy (or the density of electrons) near the lowest Landau Level (LL), an insulator to metal transition can be induced which is characterized by a one-parameter scaling theory with an exponent $`\nu =7/3`$ and $`\rho _c=1`$ (in units of $`h/e^2`$). The Hall conductance here is calculated using the Kubo formula. By going to large sample sizes ($`L=24,32,48,56,64`$), we were able to obtain the scaling behavior of $`\rho _{xx}`$ for the QHE systems. In Fig. 2, $`\rho _{xx}`$ is plotted as a function of the scaling variable $`s`$ defined in (3). Again the resistance exhibits the same exponential dependence $`\rho _{xx}=\mathrm{exp}(-s)`$ (solid line) in the critical region, covering a similarly wide range of resistance ($`1/5<\rho _{xx}/\rho _c<5`$) as in the 3D MITs. In Fig. 2, two different disorder strengths, $`W=1`$ and $`W=4`$, are considered, which represent the weak and strong LL coupling limits, respectively. The corresponding scaling functions start to deviate from the exponential form beyond the critical region and simultaneously become $`W`$-dependent in the insulating region. We would like to point out an interesting reflection symmetry for the $`W=1`$ case: as shown in the insert of Fig. 2, $`\rho _{xx}(-s)`$ on the insulating side and $`1/\rho _{xx}(s)`$ on the metallic side perfectly coincide with each other over the whole scaling region, covering a resistance range $`1/100<\rho _{xx}<100`$, which is way beyond the critical region. Our interpretation is that at weak disorder ($`W=1`$) the particle-hole symmetry is still approximately maintained near the lowest LL, such that the Hamiltonian is self-dual in the Chern–Simons boson language, which then leads to the wide range of the reflection symmetry. By contrast, when disorder is strong and all the LLs are coupled together without the particle-hole symmetry, the reflection symmetry only exists around the QCR where the exponential behavior (4) is followed. As demonstrated by the above numerical calculations, the scaling function of longitudinal resistance shows a universal exponential behavior over a wide range in the 3D and QHE MITs. In Fig.
3, these data are plotted together with the experimental data obtained for the $`B=0`$ 2D MIT in a $`Si`$ sample. Note that the experimental data were measured at finite temperature, so the length scale $`L`$ should be replaced by the dephasing length $`L_{in}\propto T^{-1/z}`$. (Here $`z=1`$ is the dynamical exponent.) The correlation length $`\xi \propto 1/T_0\propto |\delta _n|^{-\nu }`$ in the transition region ($`\delta _n`$ is the electron density measured from the critical point). So the scaling variable $`s`$ in this case becomes $`\pm (c_0T_0/T)^{1/\nu }`$, with $`\nu =1.6`$, where $`c_0`$ is a dimensionless constant. In Fig. 3, a universal scaling function of the longitudinal resistance is clearly shown for all these systems in the QCR (with $`-1.5<s<1.5`$), despite their different symmetry classes, dimensionalities, and microscopic mechanisms of the MIT. The corresponding beta functions for those systems are shown in the insert of Fig. 3 if we define $`g\equiv 1/\rho _{xx}`$, where the straight dashed line represents the logarithmic form of (2), which can be obtained straightforwardly from (4). Note that the beta function is multiplied by the critical exponent $`\nu `$ in the insert, such that the resulting function becomes universal in the QCR. Furthermore, we would like to comment on an interesting trend in the metallic region for those systems. In the weak disorder limit of the 3D MITs, the resistance approaches zero as a power law: $`\rho _{xx}\propto \xi /L=c_0s^{-\nu }`$. The curve for the $`B=0`$ 2D MIT system follows very closely the ones of the 3D MITs on the same side of the solid line in Fig. 3, as it drops to zero slower than $`\mathrm{exp}(-s)`$. In contrast, in the QHE system, the resistance deviates from the solid line on the opposite side, which means it approaches zero even more quickly than in the QCR. This behavior can be easily seen in its asymptotic form: $`\rho _{xx}\approx \sigma _{xx}\propto \mathrm{exp}(-s^\nu /c_0)`$ in the large $`s`$ limit (since in the QHE plateau region electrons are also localized, such that in the large $`L/\xi `$ limit $`\sigma _{xx}\propto \mathrm{exp}(-L/\xi )`$, $`\sigma _{xy}=1`$ and $`\rho _{xx}\approx \sigma _{xx}`$). One may then define a generalized dimensionless function as follows: $$\beta _1\equiv d[\mathrm{ln}(g)]/ds=\nu \beta /s$$ (5) As shown in Fig. 4, all the data fall onto the straight line with $`\beta _1=1`$ in the QCR. In the metallic phase the distinctive large-$`s`$ behavior of $`\beta _1`$ separates the metallic regime into two regions. METAL denotes the region where $`\beta _1`$ scales to zero, which is followed by the 3D MITs as well as the experimental data of the $`B=0`$ 2D MIT system. On the other hand, $`\beta _1`$ for the MIT in the QHE system diverges to infinity as $`s\to \mathrm{\infty }`$, which is denoted as the METAL(B) region, known as the “Bose” metal following the theoretical description. $`\beta _1`$ for the disorder-tuned superconductor-insulator transition in the metallic region is also plotted in Fig. 4 ($``$ curve), which indeed shows a quick increase as in the QHE system. (Here the scaling variable $`s`$ is of the same form as used in plotting the experimental data of the B=0 2D MIT.) According to Ref., these two systems should belong to the same category as classified by the “Bose” metal here. We conclude by making several comments on the nature of the $`B=0`$ 2D MIT systems based on the present work.
First, no matter what the microscopic mechanism is, such a 2D MIT seems to belong to a quantum phase transition instead of a classical phase transition (or a crossover): the experimental resistance data precisely coincide with those of the other known MITs in the QCR, plotted as a function of the scaling variable $`(L_{in}/\xi )^{1/\nu }`$, over a range of resistance covering more than one order of magnitude. Second, the reflection symmetry of the resistance is a natural consequence of the universal resistance scaling in the QCR. Finally, the metallic phase behaves more like a normal metal than a superconductor, as revealed by the classification based on the $`\beta _1`$ function shown in Fig. 4. Acknowledgments - The authors would like to thank S.V. Kravchenko for stimulating discussions and for providing his experimental data for comparison. This work is supported by the ARP grant No. 3652707, and by the State of Texas through the Texas Center for Superconductivity at the University of Houston.
# Quantum versus Semiclassical Description of Selftrapping: Anharmonic Effects ## I Introduction Recent work by Grigolini and collaborators, and by Salkola and the present authors, has uncovered subtle features associated with selftrapping of quasiparticles in interaction with vibrations. The vibrations considered in all those analyses have been harmonic. The question of how polaron dynamics and selftrapping are affected by anharmonicities in the vibrations was raised by Kenkre several years ago at the level of the discrete nonlinear Schrödinger equation (DNLSE) and analyzed by Kenkre and collaborators in the context of rotational polarons, exponential saturation, and general considerations. Since the validity of the DNLSE has been called into question by recent considerations, it is important to examine the issue of what polarons, or selftrapping, owe to harmonic features from a starting point which is fully quantum. The present paper is devoted to such an examination for a two-site system. We will focus here on confined systems rather than on periodic systems such as those which may lead to rotational polarons. In a certain sense, the most anharmonic potential conceivable is that of a box with infinitely high walls, since it corresponds to a harmonic piece with vanishing frequency throughout the interior of the box but one with infinite frequency at the walls. We choose the symmetric Pöschl-Teller potential given by $$V_{PT}(x)=U_0\mathrm{tan}^2(ax),$$ (1) because it allows a continuous transition between the harmonic oscillator and the box limits, and because it can be treated analytically with ease. In Eq. (1), $`U_0`$ and $`a`$ are constants defining, respectively, the strength and the confining region of the potential. The potential becomes infinitely steep at $`x=\pm \pi /2a`$. By rewriting the strength of the potential $`U_0`$ as $`\lambda (\lambda -1)\mathrm{\hbar }^2a^2/2m`$, where $`m`$ is the mass of the particle, we introduce the parameter $`\lambda `$, which interpolates between the box and harmonic oscillator limits. In the limit $`\lambda \to 1`$, the Pöschl-Teller potential becomes the infinite square well of width $`\pi /a`$. In the opposite limit $`\lambda \to \mathrm{\infty }`$, $`a\to 0`$, with $`\lambda a^2`$ remaining constant (and finite), one recovers the harmonic oscillator potential. The eigenenergies of the Pöschl-Teller potential (1) are given by $$E_n=\frac{\mathrm{\hbar }^2a^2}{2m}(n^2+2n\lambda +\lambda ),\qquad n=0,1,2,\mathrm{\dots }$$ (2) and the corresponding eigenfunctions are $$\varphi _n(x)\equiv \langle x|\varphi _n\rangle =N_n\mathrm{cos}^{1/2}(ax)P_{n+\lambda -1/2}^{1/2-\lambda }(\mathrm{sin}ax),$$ (3) where $`P_\alpha ^\beta (t)`$ are the associated Legendre functions, with $`N_n=\left(\frac{a(n+\lambda )\mathrm{\Gamma }(n+2\lambda )}{\mathrm{\Gamma }(n+1)}\right)^{1/2}`$. ## II Anharmonic Polaron - Stationary aspects Consider a two-site system consisting of a quasiparticle, like an electron or an exciton, whose inter-site hopping is described by a matrix element of strength V. The quasiparticle also strongly interacts with a vibrational mode between the two sites. This vibrational mode is described by the Pöschl-Teller potential (1). In the harmonic case the usual interaction is linear in the vibrational amplitude and consequently connects nearest-neighbour eigenstates of the oscillator. These two features are distinct from each other. In developing a scheme for analyzing the effects of a generalization to anharmonic situations, we must maintain either one or the other of the two features.
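As a quick numerical check of the two limits just described, the spectrum (2) can be evaluated directly. A minimal sketch (units ℏ = m = 1; the parameter values are arbitrary illustrations):

```python
import numpy as np

def pt_levels(n, lam, a=1.0):
    """Poschl-Teller eigenvalues of Eq. (2), with hbar = m = 1:
    E_n = (a**2 / 2) * (n**2 + 2*n*lam + lam)."""
    n = np.asarray(n, dtype=float)
    return 0.5 * a**2 * (n**2 + 2.0 * n * lam + lam)

n = np.arange(6)

# Box limit (lam = 1): E_n proportional to (n + 1)**2, the infinite well of width pi/a.
E_box = pt_levels(n, lam=1.0)
assert np.allclose(E_box / E_box[0], (n + 1) ** 2)

# Harmonic limit: lam -> infinity, a -> 0 with lam * a**2 fixed; the level spacings
# become uniform and equal to omega_0 = a**2 * (lam + 1/2), cf. Eq. (5) below.
lam, a = 1.0e6, 1.0e-3                  # lam * a**2 = 1 held fixed
spacings = np.diff(pt_levels(n, lam, a))
assert np.allclose(spacings, a**2 * (lam + 0.5), rtol=1e-5)
```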
We have studied both cases. Here we present results of maintaining the second feature, viz., an interaction which joins nearest-neighbour energy eigenstates. As Nieto and Simmons and Crawford and Vrscay have pointed out in a different context, a sinusoidal interaction possesses this feature for the Pöschl-Teller potential, and also reduces to the linear form in the harmonic limit. We thus take the full Hamiltonian of our system to be $$H=\frac{\omega _0}{2(\lambda +1/2)}\left(\widehat{\pi _z}^2+\lambda (\lambda -1)\mathrm{tan}^2\widehat{z}\right)+\omega _0g\sqrt{\left(\lambda +\frac{1}{2}\right)}\widehat{p}\mathrm{sin}\widehat{z}+V\widehat{r},$$ (4) where $$\omega _0=\frac{a^2}{m}(\lambda +\frac{1}{2})$$ (5) is the difference between the energies of the first excited state and the ground state of the Pöschl-Teller potential, $`z`$ is the dimensionless oscillator coordinate $`ax`$, and $`g`$ is the quasiparticle-oscillator coupling constant. Here and henceforth we put $`\mathrm{\hbar }=1`$ for simplicity. The operators $`\widehat{p}`$ and $`\widehat{r}`$ describe the quasiparticle, with $`\widehat{p}=c_1^{\dagger }c_1-c_2^{\dagger }c_2`$ and $`\widehat{r}=c_1^{\dagger }c_2+c_2^{\dagger }c_1`$, where the $`c`$’s are quasiparticle creation and destruction operators. The factorization or semiclassical approximation (SCA) consists of assuming, equivalently, that the oscillator operators behave classically or that products of quasiparticle-oscillator operators can be factorized. We compare the SCA with the fully quantum mechanical results first by computing the polaron binding energies. This is done easily in the strong-coupling limit by freezing the quasiparticle hopping dynamics. We note first that, in the harmonic oscillator limit, the polaron binding energy is proportional to $`g^2`$ whereas, in the opposite limit of the infinite square well, the width of the well remains finite and the interaction produces a lowering of energy that is proportional to $`g`$. This cross-over behaviour becomes evident from the full quantum-mechanical calculations, as shown in Fig. 1. Plotted in the main figure is the binding energy (in arbitrary units) as a function of $`g`$. In the inset, the same quantities are plotted on a logarithmic scale. The bold lines indicate the fully quantum-mechanical calculation and the dashed lines indicate the results of the SCA. In all the cases, the oscillator frequency $`\omega _0`$ has been kept fixed, and $`\lambda `$ varied, allowing the oscillator to pass smoothly from the box ($`\lambda =1`$) limit to the harmonic ($`\lambda \to \mathrm{\infty }`$) limit. Two key results are evident in Fig. 1. First, the SCA results agree with the exact ones only in the harmonic oscillator limit; in the square-well limit, the departure becomes quite drastic. Second (see inset), when the system is in the box limit ($`\lambda =1`$), the fully quantum system behaves harmonically for small $`g`$ but exhibits a cross-over for larger values of $`g`$, showing the true box-limit slope of unity. We also calculate the overlap between the adiabatically displaced ground-state wavefunctions. This overlap, basically the Huang-Rhys factor, governs the polaronic tunneling rate in the strong-coupling limit. One knows, for instance, that in the harmonic oscillator limit the tunneling rate is proportional to $`e^{-g^2}`$. In Fig. 2, we plot the overlap factor (logarithmically) as a function of $`g`$ for the box limit, i.e., $`\lambda =1`$ (dashed line), and for the harmonic oscillator limit, i.e., $`\lambda \to \mathrm{\infty }`$ (solid line).
The quadratic dependence on $`g`$ is clearly seen for the latter. However, for $`\lambda =1`$ the fall-off is sublinear, showing that the dependence of the overlap factor on $`g`$ is much weaker. ## III Heisenberg equations We discuss in this section the temporal evolution of the system, and show how the SCA differs from the fully quantum mechanical treatment. The equations of motion corresponding to the Hamiltonian (4) can be written as $`\dot{\widehat{p}}`$ $`=`$ $`-2V\widehat{q}`$ (7) $`\dot{\widehat{q}}`$ $`=`$ $`2V\widehat{p}-2\omega _0g(\lambda +1/2)^{1/2}\widehat{r}\mathrm{sin}\widehat{z}`$ (8) $`\dot{\widehat{r}}`$ $`=`$ $`2\omega _0g(\lambda +1/2)^{1/2}\widehat{q}\mathrm{sin}\widehat{z}`$ (9) $`\dot{\widehat{z}}`$ $`=`$ $`{\displaystyle \frac{\omega _0}{(\lambda +1/2)}}\widehat{\pi }_z`$ (10) $`\dot{\widehat{\pi }_z}`$ $`=`$ $`-\omega _0\left({\displaystyle \frac{\lambda (\lambda -1)}{(\lambda +1/2)}}\mathrm{tan}\widehat{z}\mathrm{sec}^2\widehat{z}+g(\lambda +1/2)^{1/2}\widehat{p}\mathrm{cos}\widehat{z}\right),`$ (11) where $`\widehat{q}=i(c_1^{\dagger }c_2-c_2^{\dagger }c_1)`$ and the quasiparticle operators $`\widehat{p},\widehat{q},\widehat{r}`$ cyclically satisfy the commutation relations $`[\widehat{p},\widehat{q}]=2i\widehat{r}`$, while $`[\widehat{z},\widehat{\pi _z}]=i`$. As stated earlier, the SCA consists in assuming the oscillator operators $`\widehat{z},\widehat{\pi _z}`$ to be $`c`$-numbers. In the temporal analysis, we compare the results of such an approximation with those given by the full quantum evolution described by Eqs. (7)-(11). We plot in Figs. 3-5 the evolution of the population difference of the quasiparticle between the two sites, $`p(t)`$, as a function of the dimensionless time $`Vt`$. In all our calculations, the initial condition used for the quantum system is the ground state of the quasiparticle-oscillator system projected onto the one-site localized part of the Hilbert space, such that $`\widehat{p}(0)=1`$. The initial condition used for the SCA calculation is $`p(0)=1`$, $`z(0)=\pi _z(0)=0`$. In all the plots, the solid line indicates the full quantum evolution and the dashed line indicates the evolution due to the SCA. The polaron binding energy has been kept constant to facilitate comparison. This value (which we take to be 1.5 V) equals $`g^2\omega _0/2`$ in the harmonic oscillator limit. In Fig. 3, with $`\lambda =200`$, the oscillator potential is essentially that of the harmonic oscillator. The oscillator energy $`\omega _0`$ takes on the values $`10V,V,0.1V`$ in (a), (b), and (c) respectively. As discussed elsewhere, whereas the SCA shows self-trapping for all the values of the oscillator frequency, the full quantum evolution differs substantially, except in the limit of low oscillator energy, $`\omega _0=0.1V`$. In this limit, for short times, the full quantum evolution and the SCA agree in that both show self-trapping, with nearly the same average value of self-trapping and oscillation frequency. However, the quantum evolution shows a considerably richer structure involving collapses and revivals. At much longer times, the dressed quasiparticle tunnels from one site to the other. This is evident in Fig. 3 (b). When $`\lambda =10`$ (Fig. 4), the potential is more ‘square-well’-like and some departures, especially in the small oscillator frequency regime (Fig. 4 (b)), are visible. For instance, the ‘silent runs’ separating the collapse and revival sequence are less quiescent, and the agreement between the SCA and the full quantum evolution is slightly worse. In Fig.
5, we take $`\lambda =1`$, wherein the potential is essentially the infinite square well. Whereas the agreement between the SCA and the full quantum evolution is best when the oscillator energy is least, $`\omega _0=0.1V`$ (Fig. 5 (c)), the agreement is far worse than for the harmonic potential (Fig. 3 (c)). Further, the ‘silent runs’ are barely noticeable, with the collapses and revivals intruding into each other. A key time-scale in the temporal evolution is the one associated with the polaronic tunneling between the two sites. Since this time scale is intimately connected with the polaron binding energy, we plot in Fig. 6 the logarithm of the tunneling time as a function of the binding energy. The solid line denotes the harmonic oscillator limit, $`\lambda \to \mathrm{\infty }`$, whereas the dashed line denotes the infinite square-well limit, $`\lambda =1`$. Note that for small values of the binding energy, the time-scales for both cases are only weakly dependent on the energy. However, for larger coupling (binding energy), both show a clear linear dependence (albeit with different slopes). This clearly indicates that even for the box-like potential, the polaronic tunneling time scale is exponentially dependent on the binding energy. While well known for harmonic polarons, this exponential dependence constitutes an important new result for anharmonic polarons emerging from the present analysis. ## IV Summary By analyzing the dynamics and energetics of a quasiparticle interacting with a tunably anharmonic oscillator, specifically described by a Pöschl-Teller potential, we find that, in the limit of strong coupling between the quasiparticle and the oscillator, selftrapping is robust and persists for strong anharmonicities, with the polaron tunneling time scale being exponentially dependent on the polaron binding energy, a feature that has been earlier known to be true for harmonic polarons. We further find that the full quantum result agrees with the predictions of the semiclassical approximation only in this strong-coupling, low-frequency regime, in agreement with earlier findings for harmonic polarons. One of us (VMK) acknowledges the financial support of the National Science Foundation under grant no. DMR-9614848, and of the Los Alamos National Laboratory under grant no. 0409J0004-3P.
# Double coverings of octic arrangements with isolated singularities ## 1. Introduction In this paper we study a class of Calabi–Yau manifolds. By a Calabi–Yau manifold we mean a Kähler, smooth threefold with trivial canonical bundle and no global 1-forms. One method of constructing Calabi–Yau manifolds is to study a double covering of ℙ³ branched along an octic surface. Let $`B`$ be a surface in ℙ³ of degree eight, and let ℒ = 𝒪<sub>ℙ³</sub>(4); then ℒ is a line bundle on ℙ³ such that ℒ² = 𝒪<sub>ℙ³</sub>(B). Then ℒ defines a double covering of ℙ³ branched along $`B`$, and the singularities of the double cover are in one–to–one correspondence with the singularities of $`B`$. If $`B`$ is smooth then the resulting double covering is a Calabi–Yau threefold; if $`B`$ has only nodes (ordinary double points) then the double cover has also only nodes, and these nodes can be resolved by means of a small resolution. In this case again the resulting threefold is Calabi–Yau. This construction was precisely described in . The case of $`B`$ being an octic arrangement (i.e. a surface which locally looks like a plane arrangement) was studied in . In this paper we shall use methods introduced in to study a larger class of octic surfaces; namely, we shall consider arrangements with ordinary multiple points of multiplicity 2, 4 and 5. Altogether we allow 11 types of singularities of the branch locus. Our main result is the following theorem ###### Theorem 1.1. If an octic arrangement $`B`$ contains only * double and triple curves, * arrangement $`q`$–fold points, $`q=2,3,4,5`$, * isolated $`q`$–fold points, $`q=2,4,5`$, then the double covering of ℙ³ branched along $`B`$ has a non–singular model $`\widehat{X}`$ which is a Calabi–Yau threefold. Moreover, if $`B`$ contains no triple elliptic curves then $$e(\widehat{X})=8-\sum _i(d_i^3-4d_i^2+6d_i)+2\sum _{i<j}(4-d_i-d_j)d_id_j-\sum _{i<j<k}d_id_jd_k+4p_4^0+3p_4^1+16p_5^0+18p_5^1+20p_5^2+l_3+2m_2+36m_4+56m_5.$$ The idea of the proof of this theorem is to give a resolution of singularities of $`X`$ by a sequence of admissible blowing-ups (i.e. blowing-ups that do not affect the first Betti number and the canonical divisor of the double covering, cf. ). We apply Theorem 1.1 to give examples of Calabi–Yau manifolds with 206 different Euler numbers (we realize every even number from the interval $`[-296,104]`$ as the Euler number of a Calabi–Yau manifold). ## 2. Surfaces with ordinary multiple points Let $`B`$ be a surface in ℙ³ with only ordinary multiple points. That means that if we consider the blowing–up σ: ℙ̃³ → ℙ³ of ℙ³ at all singular points of $`B`$, then the strict transform $`\stackrel{~}{B}`$ of $`B`$ is smooth and intersects the exceptional divisor of σ transversally. Let $`m_p`$ denote the number of $`p`$–fold points on $`B`$. The following Proposition contains the numerical data of $`B`$ and $`\stackrel{~}{B}`$. ###### Proposition 2.1.
(1) $`c_1^2(\stackrel{~}{B})=d(d-4)^2-\sum _p(p-2)^2p\,m_p`$ (2) $`c_2(\stackrel{~}{B})=d^3-4d^2+6d-\sum _p(p-2)p^2m_p`$ (3) $`e(B)=d^3-4d^2+6d-\sum _p(p-1)^3m_p`$ (4) $`p_a(\stackrel{~}{B})=\binom{d-1}{3}-\sum _p\binom{p}{3}m_p`$ ## 3. Octic arrangements with isolated singularities ###### Definition 3.1. An octic arrangement with isolated singularities is a surface $`B`$ ⊂ ℙ³ of degree 8 which is a sum of irreducible surfaces $`B_1,\mathrm{},B_r`$ with only ordinary multiple points and which satisfies the following conditions: 1. For any $`i\ne j`$ the surfaces $`B_i`$ and $`B_j`$ intersect transversally along a smooth irreducible curve $`C_{i,j}`$ or they are disjoint, 2. The curves $`C_{i,j}`$ and $`C_{k,l}`$ are either equal or disjoint or they intersect transversally. This definition is a generalization of the notion of an octic arrangement introduced in (where the surfaces $`S_i`$ are assumed to be smooth). Observe that from (1) we have Sing $`B_i`$ ∩ $`B_j`$ = ∅ for $`i\ne j`$. We shall denote $`d_i:=\mathrm{deg}B_i`$. A singular point of a $`B_i`$ we shall call an isolated singular point of the arrangement. A point $`P`$ ∈ $`B`$ which belongs to $`p`$ of the surfaces $`B_1,\mathrm{},B_r`$ we shall call an arrangement $`p`$–fold point. We say that an irreducible curve $`C`$ ⊂ $`B`$ is a $`q`$–fold curve if exactly $`q`$ of the surfaces $`B_1,\mathrm{},B_r`$ pass through it. We shall use the following numerical data for an arrangement: * $`p_q^i`$ — the number of arrangement $`q`$–fold points lying on exactly $`i`$ triple curves, * $`l_3`$ — the number of triple lines, * $`m_q`$ — the number of isolated $`q`$–fold points. ## 4. Proof of Theorem 1.1 Let $`B`$ be an octic arrangement satisfying the assumptions of the Theorem and let $`X`$ be a double covering of ℙ³ branched along $`B`$. We shall find a sequence of admissible blowing–ups (i.e. blowing–ups of double or triple curves and 4–fold or 5–fold points) σ: ℙ′ → ℙ³ and a reduced divisor $`B^{}`$ ⊂ ℙ′ with ordinary double points (nodes) as the only singularities and such that * $`\stackrel{~}{B}`$ ⊆ $`B^{}`$ ⊆ σ*$`B`$ (where $`\stackrel{~}{B}`$ is the strict transform and σ*$`B`$ the pullback of $`B`$ by σ), * $`B^{}`$ is even as an element of the Picard group Pic ℙ′. Let us now describe an algorithm to obtain σ; the method is in fact a modification of the method introduced in . We resolve all singularities of $`B`$ except the nodes: 1. Resolution of multiple curves and arrangement multiple points. In these cases we apply the method described in . 2. Resolution of isolated 4–fold points. We blow–up a 4–fold point, and then replace the branch locus by its strict transform. 3. Resolution of isolated 5–fold points. We blow–up a 5–fold point, and then replace the branch locus by its strict transform plus the exceptional divisor. The proper transform intersects the exceptional divisor transversally along a smooth plane curve of degree 5. We treat this curve in the same way as an arrangement double curve, i.e. we blow-up this curve, and replace the branch locus by its strict transform. The double covering of ℙ′ branched along $`B^{}`$ has nodes (corresponding to nodes of $`B^{}`$) as the only singularities. 4. Resolution of nodes.
There are two possibilities for the resolution of a node on a 3–dimensional variety: a blow–up or a small resolution (for details see ). We shall denote the non–singular model of $`X`$ by $`\stackrel{~}{X}`$ if in step 4 we choose a blow–up, and by $`\widehat{X}`$ if we choose a small resolution. To any node of $`B`$ we have an associated line on $`\widehat{X}`$. $`\stackrel{~}{X}`$ is the blowing-up of $`\widehat{X}`$ along all these lines. As a consequence we see that $`e(\stackrel{~}{X})=e(\widehat{X})+2m_2`$. The blow-ups used in steps 1–3 are (according to ) admissible, i.e. they do not affect the first Betti number and the canonical divisor of $`\stackrel{~}{X}`$. We see therefore that $$K_{\stackrel{~}{X}}=E_2$$ where $`E_2`$ denotes the exceptional divisor on $`\stackrel{~}{X}`$ associated to all nodes of $`B`$, and hence $$K_{\widehat{X}}=0.$$ In order to compute $`e(\stackrel{~}{X})`$ we compare our case with the one studied in . From Proposition 2.1 we see that in our case e(ℙ̃³) increases by $`2m_2+2m_4-8m_5`$, whereas $`e(\stackrel{~}{B})`$ decreases by $`32m_4+72m_5`$. The Euler number $`e(\widehat{X})`$ is hence greater by $$2m_2+36m_4+56m_5$$ in comparison with the case with no isolated singular points. Using \[7, Thm. 3.4\] proves the theorem. ∎ ## 5. Examples In this section we apply Theorem 1.1 to study various examples of octic arrangements. As a result we obtain 206 examples of Calabi–Yau manifolds with different Euler numbers; we shall, for instance, realize every even number from the interval $`[-296,104]`$ as the Euler number of a Calabi–Yau manifold. We shall need information about the number of nodes allowed on a nodal surface of degree 8 in ℙ³. Using results from we can formulate the following proposition ###### Proposition 5.1. * For $`m_2=0,1,2,3,4`$ there exists a nodal cubic surface with exactly $`m_2`$ nodes, * For $`m_2=0,1,\mathrm{},16`$ there exists a nodal quartic surface with exactly $`m_2`$ nodes, * For $`m_2=0,1,\mathrm{},65`$ there exists a nodal sextic surface with exactly $`m_2`$ nodes, * For $`m_2=0,1,\mathrm{},107`$ there exists a nodal octic surface with exactly $`m_2`$ nodes. Using the above Proposition and Theorem 1.1 we can compile a table containing numerical data of octic arrangements and the corresponding Euler numbers. Most of the Euler numbers can be obtained from several different arrangements; in the table we give one example per number. In the table we avoid arrangements with 4–fold and 5–fold points, as they do not lead to new Euler numbers. On the other hand, arrangements with 4–fold and 5–fold points usually have a higher Picard number than the ones with only nodes. Most of the examples in the table are modifications of the ones given in , obtained by adding isolated singularities. In many cases it is easy to write down an explicit equation of the branch locus. The proof of Theorem 1.1 gives a detailed description of a resolution of singularities. Although the resolution of singularities is not uniquely determined, different resolutions of the same double solid differ only by a flop. Consequently most of the numerical data (like, for instance, the Euler number) are uniquely determined (cf. ). ### Acknowledgment Part of this work was done during the author’s stay at the Erlangen–Nürnberg University, supported by DFG (project number 436 POL 113/89/0). I would like to thank Prof. W. Barth for suggesting the problem and for valuable remarks.
# Drifter dispersion in the Adriatic Sea: Lagrangian data and chaotic model ## I Introduction Understanding the mechanisms of transport and mixing processes is an important and challenging task, which has wide relevance from a theoretical point of view, e.g. for the study of diffusion and chaos in geophysical systems in general, or for validating simulation results from a general circulation model. It is also a necessary tool in the analysis of problems of general interest and social impact, such as the dispersion of nutrients or pollutants in sea water, with consequent effects on marine life and on the environment (Adler et al., 1996). Recently, a number of oceanographic programs have been devoted to the study of the surface circulation of the Adriatic Sea by the observation of Lagrangian drifters, within the larger framework of drifter-related research in the whole Mediterranean Sea (Poulain, 1999). The Adriatic Sea is a quasi-enclosed basin, about 800 $`km`$ long by 200 $`km`$ wide, connected to the rest of the Mediterranean Sea through the Otranto Strait. From a topographic point of view, three major regions can be considered: the northern part, which is the shallowest, about 100 $`m`$ maximum depth, and extends down to the latitude of Ancona; the central part, which deepens down to about 260 $`m`$ in the Jabuka Pit; and the southern part, which extends from the Gargano promontory to the Otranto Strait. The southern part is the deepest, reaching about 1200 $`m`$ in the South Adriatic Pit. Reviews on the oceanography of the Adriatic Sea can be found in Artegiani et al. (1997), Orlic et al. (1992), Poulain (1999) and Zore (1956). Lagrangian data offer the opportunity to employ techniques of analysis, well established in the theory of chaotic dynamical systems, to study the behavior of actual trajectories and compare them with a kinematic model. Let us assume that the Lagrangian drifters are passively advected in a two-dimensional flow, e.g. as would be the case in a frictionless barotropic approximation (Ottino, 1989; Crisanti et al., 1991): $$\frac{dx}{dt}=u(x,y,t)\quad \text{and}\quad \frac{dy}{dt}=v(x,y,t),$$ (1) where $`(x(t),y(t))`$ is the position of a fluid particle at time $`t`$ in terms of longitude and latitude, and $`u`$ and $`v`$ are the zonal and meridional velocity fields, respectively. For the Eulerian description of a geophysical system, one should in principle use numerical solutions of the Navier-Stokes equations (or other suitable equations, e.g. the quasi-geostrophic model) to obtain the velocity fields. In practice, direct numerical simulation of these equations on oceanographic length scales is of course not possible, and one has to invoke approximations, i.e. turbulence modeling. This motivates using instead a simplified kinematic approach, by adopting a given Eulerian velocity field. The criteria for the construction of such a field follow from phenomenological arguments and/or experimental observation, and have recently been reviewed in (Yang, 1996; Samelson, 1996). Let us consider the relationship between Eulerian and Lagrangian properties of a system. A wide literature on this topic (e.g. Ottino, 1989; Crisanti et al., 1991) allows us to state that, in general, motion in Eulerian and Lagrangian variables can be rather different. It is not rare to have regular Eulerian behavior, e.g. a time-periodic velocity field, co-existing with Lagrangian chaos, or vice-versa. In quasi-enclosed basins like the Adriatic Sea, a characterization of the mixing mechanisms is highly non-trivial.
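In practice, Eq. (1) is integrated numerically for each drifter or synthetic particle. A minimal sketch of such an integration is given below; the single-gyre streamfunction used here is a placeholder for illustration only, not the Adriatic velocity field introduced in section 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

def velocity(t, xy):
    """Eulerian field (u, v) for Eq. (1). Placeholder: one steady recirculating
    gyre from the streamfunction psi = (1/pi) sin(pi x) sin(pi y)."""
    x, y = xy
    u = -np.sin(np.pi * x) * np.cos(np.pi * y)   # u = -d(psi)/dy
    v = np.cos(np.pi * x) * np.sin(np.pi * y)    # v =  d(psi)/dx
    return [u, v]

# advect one particle for 30 nondimensional time units, sampled every 0.25
t_eval = np.arange(0.0, 30.0, 0.25)
sol = solve_ivp(velocity, (0.0, 30.0), [0.3, 0.1],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)
x, y = sol.y
```

With a time-dependent perturbation added to such a field, the same integrator produces the chaotic trajectories whose dispersion is analyzed below.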
We first observe (see below for a detailed discussion) that the use of standard diffusion coefficients can have rather limited applicability (see Artale et al., 1997). Already classical studies on Lagrangian particles in ocean models contain remarks on the intrinsic difficulties in using one-particle diffusion statistics (Taylor, 1921). In situations where the advective time is not much longer than the typical decorrelation time scale of the Lagrangian velocity, the diffusivity parameter related to small-scale turbulent motion cannot converge to its asymptotic value (Figueroa and Olson, 1994). On the other hand, a generalization of the standard Lyapunov exponent, the Finite-Scale Lyapunov Exponent (FSLE), originally introduced for the predictability problem (Aurell et al., 1996; 1997), has been shown to be a suitable tool to describe non-asymptotic properties of transport. This finite-scale approach to Lagrangian transport measures effective rates of particle dispersion without assumptions about small-scale turbulent processes. For an alternative method see Buffoni et al. (1997); for a recent review and systematic discussion of non-asymptotic properties of transport and mixing in realistic cases, see Boffetta et al. (2000). In this paper we report a data analysis of surface drifter motion in the Adriatic Sea using relative dispersion, the FSLE and the Lagrangian Structure Function (LSF), a quantity related to the FSLE. We also introduce a chaotic model for the Lagrangian dynamics, and use the FSLE and LSF characteristics to compare model and data. We show that it can be very difficult to get an estimate of the diffusion coefficient in a quasi-enclosed basin, and/or to look for deviations from the standard diffusion law. In fact, the time a cluster of particles takes to spread uniformly and reach the boundaries is not much longer than the largest characteristic time of the system. In contrast, the FSLE and the LSF do characterize the transport properties of Lagrangian trajectories at a fixed spatial scale. We will finally show that a simple kinematic model reproduces the data. In section 2 we describe the data set we have used, and review relevant concepts and analysis techniques for Lagrangian transport and chaos. In section 3 we introduce a kinematic model of the Lagrangian dynamics, and in section 4 we compare the data and the model. Section 5 contains a summary and a discussion of the results. ## II Data set and analysis techniques ### A Data set In a large drifter research program in the Mediterranean Sea, started in the late 80’s and continued into the 90’s, Lagrangian data from surface drifters deployed in the Adriatic Sea have been recorded from December 1994 to March 1996. These drifters are similar to the CODE (COastal Dynamics Experiment) system (Davis, 1985) and they are designed to be sufficiently wind-resistant so as to effectively describe the circulation at their actual depth (1 meter). The drifters were tracked by the Argos Data Collection and Location System (DCLS) carried by the NOAA polar-orbiting satellites. It is assumed that after data processing drifter positions are accurate to within 200-300 $`m`$, and velocities to within 2-3 $`cm/s`$. For a description of the experimental program, see Poulain (1999). Technical details about the treatment of raw data can also be found in Hansen and Poulain (1996), Poulain et al. (1996) and Poulain and Zanasca (1998). The data have been stored in separate files, one for each drifter.
In the format used by us, each file contains: the number of records (i.e. the number of points of the trajectory); the time in days; the position of the drifter in longitude and latitude; the velocity of the drifter along the zonal and meridional directions; and the temperature in degrees centigrade. The sampling time is 6 hours. We can identify five main deployments on which we will concentrate our attention. Selecting the tracks by the time of the first record, it is easy to verify that these five subsets consist of drifters deployed in the same area in the Otranto Strait, near 19 degrees longitude east and 40 degrees latitude north. The experimental strategy of simultaneously releasing the drifters within a distance of some kilometers allows us to study dispersion quantitatively. From a qualitative point of view, what we observe from the plot of all the trajectories (Fig. 1) is the shape of two (cyclonic) basin-wide gyres, located in the middle and southern regions respectively, and an anti-clockwise boundary current which moves the drifters north-westward along the east coast and south-eastward down the west coast. The latter is a permanent feature of the Adriatic Sea (Poulain, 1999). On the other hand, it is known that, within a year, the pattern of basin-wide gyres may change between one, two and three gyres over a time-scale of months. The southern gyre is the most steady of the three. The data also suggest the presence of small scale structures, even though these are likely much more variable in time. The time-scale of the typical recirculation period around a basin-wide gyre is about one month, and the time needed to travel along the coasts and complete one lap of the full basin is of the order of a few months. ### B Analysis techniques We recall here some basic concepts about dynamical systems, diffusion and chaos, and the quantities that we shall use to characterize the properties of Lagrangian trajectories. If we have $`N_c`$ clusters of initially close particles, each cluster containing $`n_k`$ elements, relative dispersion can be characterized by the diffusion coefficient $$D_i=\lim _{t\to \mathrm{\infty }}\frac{1}{2t}S_i^2(t)$$ (2) with $$S_i^2(t)=\frac{1}{N_c}\sum _{k=1}^{N_c}\frac{1}{n_k}\sum _{j=1}^{n_k}\left(x_i^{(k,j)}(t)-\langle x_i(t)\rangle ^{(k)}\right)^2$$ (3) where $$\langle x_i(t)\rangle ^{(k)}=\frac{1}{n_k}\sum _{j=1}^{n_k}x_i^{(k,j)}(t)$$ (4) $`x_i^{(k,j)}`$ is the $`i`$-th spatial coordinate of the $`j`$-th particle in the $`k`$-th cluster; $`S^2=\sum _iS_i^2`$ is the mean square displacement of the particles relative to their time-evolving mean position. If $`\delta (t)`$ is the distance between two trajectories $`𝐱^{(1)}`$ and $`𝐱^{(2)}`$ in a cluster at time $`t`$, relative dispersion is defined as $$\langle \delta ^2(t)\rangle =\langle \mathrm{}𝐱^{(1)}(t)-𝐱^{(2)}(t)\mathrm{}^2\rangle $$ (5) where the average is over all pairs of trajectories in the cluster. In a standard diffusive regime, $`𝐱^{(1)}(t)`$ and $`𝐱^{(2)}(t)`$ become independent variables and, for $`t\to \mathrm{\infty }`$, we have $`\langle \delta ^2(t)\rangle =2S^2(t)`$. In the following, we shall consider the cluster mean square radius $`S^2(t)`$ as a measure of relative dispersion. Absolute dispersion, which is defined as the mean square displacement from an initial position, will not be taken into account in our analysis. If, in the asymptotic limit, $`S_i^2(t)\sim t^{2\alpha }`$ with $`\alpha =1/2`$, we have the linear law of standard diffusion for the mean square displacement, and the $`D_i`$’s are finite; if $`\alpha \ne 1/2`$ we have so-called anomalous diffusion (Bouchaud and Georges, 1990).
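A minimal sketch of the estimator of Eqs. (3)-(4) is given below; it assumes clusters of equal size and positions already converted from longitude/latitude to kilometres, a simplification of Eq. (3), where each cluster may have its own $`n_k`$:

```python
import numpy as np

def mean_square_radius(traj):
    """S^2(t) of Eqs. (3)-(4), summed over the two spatial components.

    traj -- array of shape (n_clusters, n_particles, n_times, 2) with (x, y)
            positions in km; clusters of equal size are assumed here.
    """
    centroid = traj.mean(axis=1, keepdims=True)       # <x_i(t)> per cluster, Eq. (4)
    sq_dist = ((traj - centroid) ** 2).sum(axis=-1)   # squared distance to the centroid
    return sq_dist.mean(axis=(0, 1))                  # average over particles and clusters

# S2 = mean_square_radius(traj)
# A log-log fit of S2 against t at late times estimates 2*alpha:
# alpha = 1/2 signals standard diffusion, alpha != 1/2 anomalous diffusion.
```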
The difficulty that often arises when measuring the exponent $`\alpha `$ is that, because of the finite size of the domain, dispersion cannot reach its true asymptotic behavior. In other words, diffusion may not be observable over sufficiently large scales, i.e. much larger than the largest Eulerian length scale, and, therefore, we cannot have a robust estimate of the exponent of the asymptotic power law. Moreover, the relevance of asymptotic quantities, like the diffusion coefficients, is questionable in the study of realistic cases concerning the transport problem in finite-size systems (Artale et al., 1997). The diffusion coefficients characterize long-time (large-scale) dispersion properties. In contrast, at short times (small scales) the relative dispersion is related to the chaotic behavior of the Lagrangian trajectories. A quantitative measure of instability for the time evolution of a dynamical system (Lichtenberg and Lieberman, 1992) is commonly given by the Maximum Lyapunov Exponent (MLE) $`\lambda `$, which gives the rate of exponential separation of two nearby trajectories

$$\lambda =\lim_{t\to \infty }\lim_{\delta (0)\to 0}\frac{1}{t}\mathrm{ln}\frac{\delta (t)}{\delta (0)}$$ (6)

where $`\delta (t)=\|𝐱^{(1)}(t)-𝐱^{(2)}(t)\|`$ is the distance between two trajectories at time $`t`$. When $`\lambda >0`$ the system is said to be chaotic. There exists a well-established algorithm, introduced by Benettin et al. (1980), to compute the MLE numerically. A characteristic time, $`T_\lambda `$, associated with the MLE is the predictability time, defined as the minimum time after which the error on the state of the system becomes larger than a tolerance value $`\mathrm{\Delta }`$, if the initial uncertainty is $`\delta `$ (Lichtenberg and Lieberman, 1992):

$$T_\lambda =\frac{1}{\lambda }\mathrm{ln}\frac{\mathrm{\Delta }}{\delta }$$ (7)

Let us recall that $`\lambda `$ is a mathematically well-defined quantity which measures the growth of infinitesimal errors. In physical terms, at any time, $`\delta `$ has to be much smaller than the characteristic size of the smallest relevant length of the velocity field. For example, in 3-D fully developed turbulence $`\delta `$ has to be much smaller than the Kolmogorov length. When the uncertainty grows up to non-infinitesimal sizes, i.e. macroscopic scales, the perturbation $`\delta `$ is governed by the nonlinear terms, and that renders its growth rate a scale-dependent index (Aurell et al., 1996, 1997; Artale et al., 1997). It is useful to introduce the Finite Scale Lyapunov Exponent (FSLE), $`\lambda (\delta )`$. Assuming $`r>1`$ is a fixed amplification ratio and $`<\tau _r(\delta )>`$ is the mean time that $`\delta `$ takes to grow up to $`r\delta `$, we have:

$$\lambda (\delta )=\frac{1}{<\tau _r(\delta )>}\mathrm{ln}r$$ (8)

The average $`<>`$ is performed over all the trajectory pairs in a cluster. We note the following properties of the FSLE:

* in the limit of infinitesimal separation between trajectories, $`\delta \to 0`$, the FSLE tends to the maximum Lyapunov exponent (MLE);
* in the case of standard diffusion, $`<\delta (t)^2>\sim t`$, we find that $`\lambda (\delta )\sim \delta ^{-2}`$, with a proportionality constant of the order of the diffusion coefficient;
* any slope larger than $`-2`$ of $`\lambda (\delta )`$ vs $`\delta `$ indicates super-diffusive behavior, i.e.
non-negligible correlations persist at long times and advection is still relevant;
* in particular, when $`\lambda (\delta )=constant`$ over a range of scales, we have exponential separation between trajectories at a constant rate, within that range of scales (chaotic advection).

Another interesting quantity related to the FSLE is the Lagrangian Structure Function (LSF) $`\nu (\delta )`$, defined as

$$\nu (\delta )=<\|\frac{d𝐱^{}}{dt}-\frac{d𝐱}{dt}\|>_\delta $$ (9)

where the value of the velocity difference is taken at the times for which the distance between the trajectories enters the scale $`\delta `$, and the average is performed over a large number of realizations. The LSF, $`\nu (\delta )`$, is a measure of the velocity at which two trajectories depart from each other, as a function of scale. By dimensional arguments, we expect that the LSF is proportional to the scale of the separation and to the FSLE:

$$\nu (\delta )\sim \delta \lambda (\delta )$$ (10)

so that we should find similar behavior for $`\lambda (\delta )`$ and $`\nu (\delta )/\delta `$, if they are independently measured. In order to study the transport properties of the drifter trajectories, we have focused our interest on the measurement of $`S_i^2(t)`$, $`\lambda (\delta )`$ and $`\nu (\delta )`$. With regard to the practical definitions of the FSLE and the LSF, we have chosen a range of scales $`\delta =(\delta _0,\delta _1,\mathrm{},\delta _n)`$ separated by a factor $`r>1`$ such that $`\delta _{i+1}=r\delta _i`$ for $`i=0,\mathrm{},n-1`$. The ratio $`r`$ is often referred to as the "doubling" factor even though it is not necessarily equal to $`2`$; in our case we fixed it at $`\sqrt{2}`$. The $`r`$ value naturally has a lower bound because of the finite temporal resolution of the trajectories (i.e. it cannot be arbitrarily close to $`1`$), and it must not be much larger than $`1`$ if we want to resolve the scale separation in the system. The smallest threshold, $`\delta _0`$, is placed just above the initial mean separation between two drifters, $`\sim 10km`$, and the largest one, $`\delta _n`$, is naturally selected by the finite size of the domain, $`\sim 500km`$. Following the same procedure, it is straightforward to compute the LSF as the mean velocity difference between two trajectories at the moment at which the separation reaches a scale $`\delta `$:

$$\nu (\delta )=<\sqrt{(u_1-u_2)^2+(v_1-v_2)^2}>_\delta $$ (11)

where the average is performed over all the pairs within a set of particles, at the time at which $`\|𝐱^{}-𝐱\|=\delta `$. In section 4 below we shall show the results of our data analysis and compare them with the simulations from our chaotic model for the Lagrangian dynamics of the Adriatic drifters.

## III The chaotic model

In phenomenological kinematic modeling of geophysical flows, two possible approaches can be considered: stochastic and chaotic. Both procedures generally involve a mean velocity field, which gives the motion over large scales, and a perturbation, which describes the action of the small scales. The model is stochastic or chaotic if the perturbation is a random process or a deterministic time-dependent function, respectively. Examples of kinematic mechanisms proposed to model the mixing process can be found in Bower (1991), Samelson (1992), Bower and Lozier (1994) and Cencini et al. (1999). The choice of one or the other approach depends on what one is interested in, and on what experimental information is available.
In our case, we have opted for a deterministic model, since there are indications that, at the sea surface, the instabilities of the Eulerian structures are mostly due to air-sea interactions, which are nearly periodic perturbations. We want to consider a simple model, so let us assume as the main features of the surface circulation the following elements: an anti-clockwise coastal current; two large cyclonic gyres; and some natural irregularities in the Lagrangian motion induced by the small-scale structures. Note that the actual drifters may leave the Adriatic Sea through the Otranto Strait, but we model our domain as a closed basin, in order to study the effects of the finite scales on the transport, and treat it as a $`2D`$ system, since the drifters explore the circulation in the upper layer of the sea, within the first meters of water. On the basis of the previous considerations, we introduce our kinematic model for the Lagrangian dynamics. Under the incompressibility hypothesis we write the $`2D`$ velocity field in terms of a stream function:

$$u=-\frac{\partial \mathrm{\Psi }}{\partial y}\quad \text{and}\quad v=\frac{\partial \mathrm{\Psi }}{\partial x}.$$ (12)

Let us write our stream function as a sum of three terms:

$$\mathrm{\Psi }(x,y,t)=\mathrm{\Psi }_0(x,y)+\mathrm{\Psi }_1(x,y,t)+\mathrm{\Psi }_2(x,y,t)$$ (13)

defined as follows:

$$\mathrm{\Psi }_0(x,y)=\frac{C_0}{k_0}[\mathrm{sin}(k_0(y+\pi ))+\mathrm{cos}(k_0(x+2\pi ))]$$ (14)

$$\mathrm{\Psi }_1(x,y,t)=\frac{C_1}{k_1}\mathrm{sin}(k_1(x+ϵ_1\mathrm{sin}(\omega _1t)))\mathrm{sin}(k_1(y+ϵ_1\mathrm{sin}(\omega _1t+\varphi _1)))$$ (15)

$$\mathrm{\Psi }_2(x,y,t)=\frac{C_2}{k_2}\mathrm{sin}(k_2(x+ϵ_2\mathrm{sin}(\omega _2t)))\mathrm{sin}(k_2(y+ϵ_2\mathrm{sin}(\omega _2t+\varphi _2)))$$ (16)

where $`k_i=2\pi /\lambda _i`$, for $`i=0,1,2`$, and the $`\lambda _i`$ are the wavelengths of the spatial structure of the flow; analogously $`\omega _j=2\pi /T_j`$, for $`j=1,2`$, where the $`T_j`$ are the periods of the perturbations. In the non-dimensional expression of the equations, the units of length and time have been set to $`200km`$ and $`5days`$, respectively. The choice of the values of the parameters is discussed below. The stationary term $`\mathrm{\Psi }_0`$ defines the boundary large-scale circulation with positive vorticity. $`\mathrm{\Psi }_1`$ contains the two cyclonic gyres and it is explicitly time-dependent through a periodic perturbation of the streamlines. The term $`\mathrm{\Psi }_2`$ gives the motion over scales smaller than the size of the large gyres and it is time-dependent as well. A plot of the $`\mathrm{\Psi }`$-isolines at fixed time is shown in Fig. 2. The actual basin is the inner region with negative $`\mathrm{\Psi }`$ values, and the zero isoline is taken as a dynamical barrier which defines the boundary of the domain. The main difference from reality is that the model domain is strictly a closed basin, whereas the Adriatic Sea communicates with the rest of the Mediterranean through the Otranto Strait. That is not crucial as long as we observe the two evolutions, of experimental and model trajectories, within time scales smaller than the mean exit time from the sea, typically of the order of a few months. Furthermore, the presence of the quasi-steady cyclonic coastal current is compatible with the interplay between the Po river southward inflow at the north-western side and the Otranto channel northward inflow at the south-eastern side of the sea.
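For concreteness, here is a minimal numerical sketch of the velocity field of eqs. (12)-(16), together with one possible tracer integration scheme; the parameter values are those quoted in the next paragraph, while the finite-difference evaluation of the derivatives and the fixed-step Runge-Kutta integrator are our own choices, not prescriptions of the paper.

```python
import numpy as np

# Model parameters (non-dimensional; length unit 200 km, time unit 5 days).
C = (1.0, 1.0, 1.0)                      # velocity scales C_0, C_1, C_2
k = (0.5, 1.0, 4.0 * np.pi)              # wave numbers k_0, k_1, k_2
om = (1.0, 2.0 * np.pi)                  # pulsations omega_1, omega_2
eps = (np.pi / 5.0, np.pi / 50.0)        # amplitudes eps_1, eps_2 = eps_1/10
phi = (np.pi / 4.0, np.pi / 4.0)         # phase factors phi_1, phi_2

def psi(x, y, t):
    """Stream function of eqs. (13)-(16)."""
    p0 = C[0] / k[0] * (np.sin(k[0] * (y + np.pi)) + np.cos(k[0] * (x + 2 * np.pi)))
    p1 = C[1] / k[1] * (np.sin(k[1] * (x + eps[0] * np.sin(om[0] * t)))
                        * np.sin(k[1] * (y + eps[0] * np.sin(om[0] * t + phi[0]))))
    p2 = C[2] / k[2] * (np.sin(k[2] * (x + eps[1] * np.sin(om[1] * t)))
                        * np.sin(k[2] * (y + eps[1] * np.sin(om[1] * t + phi[1]))))
    return p0 + p1 + p2

def velocity(x, y, t, h=1e-5):
    """u = -dPsi/dy, v = dPsi/dx (eq. 12), via central differences."""
    u = -(psi(x, y + h, t) - psi(x, y - h, t)) / (2 * h)
    v = (psi(x + h, y, t) - psi(x - h, y, t)) / (2 * h)
    return u, v

def step_rk4(x, y, t, dt):
    """One fixed-step fourth-order Runge-Kutta step for a passive tracer."""
    k1 = velocity(x, y, t)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
    x_new = x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    y_new = y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x_new, y_new
```

A particle can be treated as having left the basin when $`\mathrm{\Psi }\ge 0`$ at its position, mimicking the zero-isoline barrier described above.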
The non-stationarity of the stream function is a necessary feature of a $`2D`$ velocity field in order to have Lagrangian chaos and mixing properties, that is, so that a fluid particle will visit any portion of the domain after a sufficiently long interval of time. We have chosen the parameters as follows. The velocity scales $`C_0`$, $`C_1`$ and $`C_2`$ are all equal to 1, which, in physical dimensions, corresponds to $`\sim 0.5m/s`$. The wave numbers $`k_0`$, $`k_1`$ and $`k_2`$ are fixed at $`1/2`$, $`1`$ and $`4\pi `$, respectively. In Fig. 2a,b we can see two snapshots of the streamlines at fixed time. The length scales of the model Eulerian structures are $`\sim 1000km`$ (coastal current), $`\sim 200km`$ (gyres) and $`\sim 50km`$ (eddies). The typical recirculation times, for gyres and eddies, turn out to be of the order of 1 month and a few days, respectively. As regards the time-dependent terms in the stream function, the pulsations are $`\omega _1=1`$ and $`\omega _2=2\pi `$, which determine oscillations of the two large-scale vortices over a period $`T_1\sim 30days`$ and oscillations of the small-scale vortices over a period $`T_2\sim 5days`$; the respective oscillation amplitudes are $`ϵ_1=\pi /5`$ and $`ϵ_2=ϵ_1/10`$, which correspond to $`\sim 100km`$ and $`\sim 10km`$. The choice of the phase factors, $`\varphi _1`$ and $`\varphi _2`$, determines how much the vortex pattern changes during a perturbation period. We have chosen to set both $`\varphi _1`$ and $`\varphi _2`$ to $`\pi /4rad`$. This choice of the parameters for the time-dependent terms in the stream function is only meant to be physically reasonable, for the experimental data give us limited information about the time variability of the Eulerian structures.

The chaotic advection (Ottino, 1989; Crisanti et al., 1991) occurring in our model makes an ensemble of initially close trajectories spread apart from one another, until the size of the mean relative displacement reaches a saturation value corresponding to the finite length scale of the domain. The scale-dependent degree of chaos is given by the FSLE. Because of the relatively sharp separation between large and small scales in the model, we expect $`\lambda (\delta )`$ to display a step-like behavior with two plateaus, one for each characteristic time, and a cut-off at scales comparable with the size of the domain. In the limit of small perturbations, the FSLE gives an estimate of the MLE of the system. The LSF, $`\nu (\delta )`$, on the other hand, is expected to be proportional to the size of the perturbation and to $`\lambda `$, as discussed above. Therefore the quantity $`\nu (\delta )/\delta `$ is expected to be qualitatively proportional to $`\lambda (\delta )`$, in the sense that the mean slopes have to be compatible with each other. In the following section we will show the results of our simulations together with the outcome of the data analysis.

## IV Comparison between data and model

The statistical quantities relating to the drifter trajectories have been computed according to the following prescription. The number of drifters selected for the analysis is 37, distributed in 5 different deployments in the Strait of Otranto, containing, respectively, 4, 9, 7, 7 and 10 drifters. These are the only drifter trajectories out of the whole data set which are long enough to study the Lagrangian motion on the basin scale.
To get as high statistics as possible, at the price of losing information on the seasonal variability, the times of all of the 37 drifters are measured as $`t-t_0`$, where $`t_0`$ is the time of deployment. Moreover, to restrict the analysis to the Adriatic basin only, we impose the condition that a drifter is discarded as soon as its latitude goes south of $`39.5`$ N or its longitude exceeds $`19.5`$ E. Let us consider the reference frame in which the axes are aligned, respectively, with the short side of the basin, orthogonal to the coasts, which we call the transverse direction, and the long side, along the coasts, which we call the longitudinal direction. Before presenting the data analysis, let us briefly discuss the problem of finding characteristic Lagrangian times. A first obvious candidate is

$$\tau _L^{(1)}=\frac{1}{\lambda }$$ (17)

Of course $`\tau _L^{(1)}`$ is related to small-scale properties. Another characteristic time, at least if the diffusion is standard, is the so-called integral time scale (Taylor, 1921)

$$\tau _L^{(2)}=\frac{1}{<v^2>}\int_0^{\infty }C(\tau )d\tau $$ (18)

where $`C(\tau )=\sum_{i=1}^d<v_i(t)v_i(t+\tau )>`$ is the Lagrangian velocity correlation function and $`<v^2>`$ is the velocity variance. We want to stress that it is always possible (at least in principle) to define $`\tau _L^{(1)}`$, while to compute $`\tau _L^{(2)}`$ (the integral time scale) it is necessary to be in a standard diffusion case (Taylor, 1921). The relative dispersion curves along the two natural directions of the basin, for data and model trajectories, are shown in Figs. 3a and 3b. The curves from the numerical simulation of the model are computed by observing the spreading of a cluster of $`10^4`$ initial conditions. When a particle reaches the boundary ($`\mathrm{\Psi }=0`$) it is eliminated. Along with the observational and simulation data, we also plot a straight line corresponding to standard diffusion with coefficient $`10^3m^2/s`$ (Falco et al., 2000). We discuss this comparison below. Considering the effective diffusion properties, one should expect that the shape of $`S_i^2(t)`$, before the saturation regime, can still be affected by the action of the coherent structures. Actually, neither the data nor the model dispersion curves display a clear power-law behavior, and they are indeed quite irregular. The growth of the mean square radius of a cluster of drifters appears to be still strongly affected by the details of the system, and the saturation begins no later than $`\sim `$ 1 month ($`\sim `$ the largest characteristic Lagrangian time). This prevents any attempt at defining a diffusion coefficient for the effective dispersion in this system. Although the saturation values are very similar, we can see that, in the intermediate range, the agreement between observation and simulation is not good. We point out that the trouble in reproducing the drifter dispersion in time does not depend much on the statistics; no matter whether 37 (data) or $`10^4`$ (model) trajectories are used, the problem is that the classic relative dispersion is not the most suitable quantity to be measured (irregular behavior even at high statistics). Let us now discuss the FSLE results. The curve measured from the data has been averaged over the total number of pairs out of the 37 trajectories ($`\sim 700`$), under the condition that the evolution of the distance between two drifters is no longer followed when either of the two exits the Adriatic basin (see above). In Fig. 4 the FSLEs for data and model are plotted.
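To make the procedure explicit, the following is a simplified sketch, our own, of the FSLE estimate of eq. (8) from a set of pair-separation time series; the growth time from $`\delta _i`$ to $`r\delta _i`$ is taken here from the first crossings of the two thresholds.

```python
import numpy as np

def fsle(pair_separations, dt, delta0, r=np.sqrt(2.0), n_scales=12):
    """Finite-Scale Lyapunov Exponent, eq. (8).

    pair_separations: list of 1-D arrays, each giving the distance
    delta(t) between one pair of trajectories sampled every dt.
    Returns the thresholds delta_i and lambda(delta_i) = ln(r)/<tau_r>.
    """
    thresholds = delta0 * r ** np.arange(n_scales + 1)
    times = [[] for _ in range(n_scales)]
    for sep in pair_separations:
        for i in range(n_scales):
            # first times at which delta crosses delta_i and r*delta_i
            above_lo = np.nonzero(sep >= thresholds[i])[0]
            above_hi = np.nonzero(sep >= thresholds[i + 1])[0]
            if above_lo.size and above_hi.size and above_hi[0] > above_lo[0]:
                times[i].append((above_hi[0] - above_lo[0]) * dt)
    lam = np.array([np.log(r) / np.mean(tl) if tl else np.nan for tl in times])
    return thresholds[:-1], lam
```

The LSF of eq. (11) can be accumulated in the same loop, by recording the velocity difference of a pair at the time its separation first enters each scale $`\delta _i`$.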
Phenomenologically, fluid particle motion is expected to be faster at small scales and slower at large scales. The decrease of $`\lambda (\delta )`$ with increasing $`\delta `$ reflects the presence of several scales of motion (at least two) involved in the dynamics. In particular, looking at the values of $`\lambda (\delta )^{-1}`$ at the extreme points of the $`\delta `$-range, we see that the small-scale (mesoscale) dispersion has a characteristic time $`\sim 4`$ days and the large-scale (gyre scale) dispersion has a characteristic time $`\sim 1`$ month. The ratio between the gyre scale and the mesoscale is of the same order as the ratio between the inverses of their respective characteristic times ($`\sim 10`$), so the slope of $`\lambda (\delta )`$ at intermediate scales is about $`-1`$. The fact that the slope is larger than $`-2`$ indicates that relative dispersion is faster than standard diffusion up to sub-basin scales, i.e. Lagrangian correlations are non-vanishing because of coherent structures. It is interesting to compare this Lagrangian technique of measuring the effective Lagrangian dispersion on finite scales with the more traditional technique of extracting a (standard) diffusivity parameter from the reconstruction of the small-scale anomalies in the velocity field (Falco et al., 2000). Estimates of the zonal and meridional diffusivity in Falco et al. (2000) are of the order of $`10^3m^2/s`$ and are compatible with the value of the effective finite-scale diffusive coefficient given by the FSLE, defined as $`\lambda (\delta )\delta ^2`$, computed at $`\delta =20km`$ ($`\sim `$ the mesoscale). The FSLE computed in the numerical simulations shows two plateaus, one at small scales and the other at large scales, describing a system with two characteristic time scales, and presents the same behavior, both qualitative and quantitative, as the FSLE computed for the drifter trajectories. It is worth noting that it is much simpler for the model to reproduce, even quantitatively, the relation between the characteristic times and scales of the drifter dynamics (FSLE) than the behavior of the relative dispersion in time. The LSF, in Fig. 5, shows that the behavior of $`\nu (\delta )`$, the mean velocity difference between two particle trajectories as the scale of the separation varies, is compatible with the behavior of the FSLE, as expected by dimensional arguments, i.e. $`\nu (\delta )/\delta `$, the LSF divided by the scale at which it is computed, has the same slope as $`\lambda (\delta )`$. This attests to the robustness of the information given by the finite-scale analysis. We see that the theoretical predictions of the FSLE and LSF are fairly well comparable with the corresponding quantities observed from the data, considering the relatively simple model which we used for the numerical simulations. Of course, an agreement exists because of the appropriate choice of the parameters of the model, capable of reproducing the correct relation between scales of motion and characteristic times, and because large-scale ($`\sim `$ sub-basin scale) Lagrangian dispersion is weakly dependent on the small-scale ($`\sim `$ mesoscale) details of the velocity field.

## V Discussion and conclusions

In this paper we have analyzed an experimental data set recorded from Lagrangian surface drifters deployed in the Adriatic Sea. The data span the period from December 1994 to March 1996, during which five sets of drifters were released at different times in the vicinity of the same point on the eastern side of the Otranto Strait.
Adopting techniques borrowed from the theory of dynamical systems, we have studied the Lagrangian transport properties by measuring the relative dispersion, $`S_i^2`$, the finite-scale Lyapunov exponent, $`\lambda (\delta )`$, and the Lagrangian structure function, $`\nu (\delta )`$. Relative dispersion as a function of time does not provide much information beyond an idea of the size of the domain, where saturation sets in at long times. The behavior of $`S_i^2`$ looks quite irregular, and this is due not to poor statistics but rather to intrinsic reasons. In contrast, the results obtained with the FSLE, i.e. dispersion rates at different scales of motion, give a more useful description of the properties of the drifter spreading. In particular, $`\lambda (\delta )`$ detects the characteristic times associated with the Eulerian characteristic lengths of the system. We have also introduced a simple chaotic model of the Lagrangian evolution and compared it with the observations. In our view, the actual meaning of the chaotic model, in relation to the behavior of the drifters, is not that of a best-fitting model. We do not claim that the quite difficult task of modeling the marine surface circulation driven by wind forcing can be accomplished by a simple dynamical system. But a simple dynamical system can give satisfactory results if we are interested in the large-scale properties of Lagrangian dispersion, since these depend mostly on the topology of the velocity field and only weakly on the small-scale details of the velocity structures. In this respect, chaotic advection, very likely present in every geophysical fluid flow, is crucial for what concerns tracer dispersion, since it can easily overwhelm the effects of small-scale turbulent motions on large-scale transport (Crisanti et al., 1991). Even when a standard diffusivity parameter can be computed from the variance and the self-correlation time of the Lagrangian velocity, its relevance for reproducing the effective dispersion on finite scales, in the presence of coherent structures, is questionable. In addition, practical difficulties arising from both finite resolution and boundary effects suggest a revision of the analysis techniques to be used for studying Lagrangian motion on finite scales, i.e. in non-asymptotic conditions. Considering that, generally, there is more physical information in a scale-dependent indicator ($`\lambda (\delta )`$) than in a time function ($`S^2(t)`$), we come to the conclusion that the FSLE is a more appropriate tool for the investigation of finite-scale transport properties. It is important to remark (Aurell et al., 1996; Artale et al., 1997; Boffetta et al., 2000) that, in realistic cases, $`\lambda (\delta )`$ is not just another way to look at $`S^2(t)`$ vs $`t`$; in particular, it is not true that $`\lambda (\delta )`$ behaves like $`(d\mathrm{ln}S^2(t)/dt)_{S^2=\delta ^2}`$. This is because $`\lambda (\delta )`$ is a quantity which characterizes Lagrangian properties at the scale $`\delta `$ in a non-ambiguous way. On the contrary, $`S^2(t)`$ can depend strongly on $`S^2(0)`$, so that, in non-asymptotic conditions, it is relatively easy to reach erroneous conclusions by looking only at the shape of $`S^2(t)`$. In fact, at a given time, the relative dispersion inside a sub-cluster of drifters can be rather different from that in other sub-clusters, e.g. because of fluctuations in the cross-over time between the exponential and diffusive regimes.
Therefore, when performing an average over the whole set of trajectories, one may obtain a quite spurious and inconclusive behavior. On the other hand, we have seen that the analysis in terms of the FSLE (and LSF), studying the transport properties at a given spatial scale rather than at a given time, can provide more reliable information on the relative dispersion of tracers.

## VI Acknowledgements

The drifter data set used in this work was kindly made available to us by P.-M. Poulain. We warmly thank J. Nycander, P.-M. Poulain, R. Santoleri and E. Zambianchi for constructive readings of the manuscript and for clarifying discussions about oceanographic matters. We also thank E. Bohm, G. Boffetta, A. Celani, M. Cencini, K. Döös, D. Faggioli, D. Fanelli, S. Ghirlanda, D. Iudicone, A. Kozlov, E. Lindborg, S. Marullo, P. Muratore-Ginanneschi and V. Rupolo for useful discussions. This work was supported by a European Science Foundation "TAO exchange grant" (G.L.), by the Swedish Natural Science Research Council under contract M-AA/FU/MA 01778-334 (E.A.) and the Swedish Technical Research Council under contract 97-855 (E.A.), and by the I.N.F.M. "Progetto di Ricerca Avanzata TURBO" (A.V.) and MURST, program 9702265437 (A.V.). G.L. thanks the K.T.H. (Royal Institute of Technology) in Stockholm for hospitality. We thank the European Science Foundation and the organizers of the 1999 TAO Study Center for invitations, and for an opportunity to write up this work.

REFERENCES

Adler R.J., P. Müller and B. Rozovskii (eds.). 1996. Stochastic modeling in physical oceanography. Birkhäuser, Boston.

Artale V., G. Boffetta, A. Celani, M. Cencini and A. Vulpiani. 1997. Dispersion of passive tracers in closed basins: beyond the diffusion coefficient. Phys. of Fluids, 9, 3162.

Artegiani A., D. Bregant, E. Paschini, N. Pinardi, F. Raicich and A. Russo. 1997. The Adriatic Sea general circulation, parts I and II. J. Phys. Oceanogr., 27, 8, 1492-1532.

Aurell E., G. Boffetta, A. Crisanti, G. Paladin and A. Vulpiani. 1996. Predictability in systems with many degrees of freedom. Phys. Rev. E, 53, 2337.

Aurell E., G. Boffetta, A. Crisanti, G. Paladin and A. Vulpiani. 1996. Growth of non-infinitesimal perturbations in turbulence. Phys. Rev. Lett., 77, 1262-1265.

Aurell E., G. Boffetta, A. Crisanti, G. Paladin and A. Vulpiani. 1997. Predictability in the large: an extension of the concept of Lyapunov exponent. J. of Phys. A, 30, 1.

Benettin G., L. Galgani, A. Giorgilli and J.M. Strelcyn. 1980. Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems: a method for computing all of them. Meccanica, 15, 9.

Boffetta G., A. Celani, M. Cencini, G. Lacorata and A. Vulpiani. 2000. Non-asymptotic properties of transport and mixing. Chaos, 10, 1, 50-60.

Boffetta G., M. Cencini, S. Espa and G. Querzoli. 1999. Experimental evidence of chaotic advection in a convective flow. Europhys. Lett., 48, 629-633.

Bouchaud J.P. and A. Georges. 1990. Anomalous diffusion in disordered media: statistical mechanics, models and physical applications. Phys. Rep., 195, 127.

Bower A.S. 1991. A simple kinematic mechanism for mixing fluid parcels across a meandering jet. J. Phys. Oceanogr., 21, 173.

Bower A.S. and M.S. Lozier. 1994. A closer look at particle exchange in the Gulf Stream. J. Phys. Oceanogr., 24, 1399.

Buffoni G., P. Falco, A. Griffa and E. Zambianchi. 1997. Dispersion processes and residence times in a semi-enclosed basin with recirculating gyres. The case of the Tyrrhenian Sea. J. Geophys. Res., 102, C8, 18699.
Cencini M., G. Lacorata, A. Vulpiani and E. Zambianchi. 1999. Mixing in a meandering jet: a Markovian approach. J. Phys. Oceanogr., 29, 2578-2594.

Crisanti A., M. Falcioni, G. Paladin and A. Vulpiani. 1991. Lagrangian chaos: transport, mixing and diffusion in fluids. La Rivista del Nuovo Cimento, 14, 1.

Davis R.E. 1985. Drifter observations of coastal currents during CODE. The method and descriptive view. J. Geophys. Res., 90, 4741-4755.

Falco P., A. Griffa, P.-M. Poulain and E. Zambianchi. 2000. Transport properties in the Adriatic Sea as deduced from drifter data. J. Phys. Oceanogr., in press.

Figueroa H.O. and D.B. Olson. 1994. Eddy resolution versus eddy diffusion in a double gyre GCM. Part I: the Lagrangian and Eulerian description. J. Phys. Oceanogr., 24, 371-386.

Hansen D.V. and P.-M. Poulain. 1996. Processing of WOCE/TOGA drifter data. J. Atmos. Oceanic Technol., 13, 900-909.

Lichtenberg A.J. and M.A. Lieberman. 1992. Regular and Chaotic Dynamics. Springer-Verlag.

Orlic M., M. Gacic and P.E. La Violette. 1992. The currents and circulation of the Adriatic Sea. Oceanol. Acta, 15, 109-124.

Ottino J.M. 1989. The kinematics of mixing: stretching, chaos and transport. Cambridge University Press.

Poulain P.-M., A. Warn-Varnas and P.P. Niiler. 1996. Near-surface circulation of the Nordic seas as measured by Lagrangian drifters. J. Geophys. Res., 101(C8), 18237-18258.

Poulain P.-M. 1999. Drifter observations of surface circulation in the Adriatic Sea between December 1994 and March 1996. J. Mar. Sys., 20, 231-253.

Poulain P.-M. and P. Zanasca. 1998. Drifter observations in the Adriatic Sea (1994-1996). Data report, SACLANTCEN Memorandum, SM, SACLANT Undersea Research Centre, La Spezia, Italy. In press.

Samelson R.M. 1992. Fluid exchange across a meandering jet. J. Phys. Oceanogr., 22, 431.

Samelson R.M. 1996. Chaotic transport by mesoscale motions, in R.J. Adler, P. Müller and B.L. Rozovskii (eds.): Stochastic Modeling in Physical Oceanography. Birkhäuser, Boston, 423.

Taylor G.I. 1921. Diffusion by continuous movements. Proc. Lond. Math. Soc., (2) 20, 196-212.

Yang H. 1996. Chaotic transport and mixing by ocean gyre circulation, in R.J. Adler, P. Müller and B.L. Rozovskii (eds.): Stochastic Modeling in Physical Oceanography. Birkhäuser, Boston, 439.

Zore M. 1956. On gradient currents in the Adriatic Sea. Acta Adriatica, 8 (6), 1-38.

FIGURE CAPTIONS

FIGURE 1: Plot of the 37 drifter trajectories in the Adriatic Sea used for the data analysis. The longitude east and latitude north coordinates are in degrees. The drifters were deployed on the eastern side of the Otranto Strait.

FIGURE 2: Model stream function isolines at a) $`t=0`$ and b) $`t=T_1/2`$, with $`T_1\sim 30`$ days. The boundary of the domain is the zero isoline. The coordinates ($`x,y`$) are in $`km`$.

FIGURE 3: Relative dispersion curves, for data (plus symbols) and model (dashed line) trajectories, along the two natural directions in the basin geometry: a) transverse component, b) longitudinal component. The time is measured in $`days`$ and the dispersion in $`km^2`$. The experimental curve is computed over the 37 drifter trajectories; the model curve is computed over a cluster of $`10^4`$ particles, initially placed at the border of the southern gyre with a mean square displacement of $`\sim 50km^2`$. The straight line with slope 1 has been plotted for comparison with a standard diffusive scaling with a corresponding diffusion coefficient of $`10^3m^2/s`$, typical of marine turbulent motions.
FIGURE 4: Finite-Scale Lyapunov Exponent for data (continuous line) and model (dashed line) trajectories. $`\delta `$ is in $`km`$ and $`\lambda (\delta )`$ is in $`day^{-1}`$. The experimental FSLE is computed over all the pairs of trajectories out of the 37 drifters; the FSLE from the model is averaged over $`10^4`$ simulations. The simulated $`\lambda (\delta )`$ has a step-like behavior with one plateau at small scales ($`<50km`$) and one at basin scales ($`>100km`$), corresponding to doubling times of $`\sim 3days`$ and $`\sim 30days`$, respectively.

FIGURE 5: Lagrangian Structure Function for data (continuous line) and model (dashed line) trajectories. $`\delta `$ is in $`km`$ and $`\nu (\delta )/\delta `$ is in $`day^{-1}`$. The experimental LSF is computed over all the pairs of drifters; the LSF from the model is averaged over $`10^4`$ simulations.
no-problem/9902/hep-ph9902278.html
ar5iv
text
# Comment on “Chiral Corrections in Hadron Spectroscopy”

## Abstract

It is shown that the principal pattern in baryon spectroscopy, which is associated with the flavor-spin dependent hyperfine interaction, is due to the spontaneous breaking of chiral symmetry in QCD and thus cannot be addressed by chiral perturbation theory, which is based on the explicit chiral symmetry breaking.

In a recent preprint Thomas and Krein questioned the foundations of the description of the baryon spectrum with a chiral constituent quark model . The argument made was that the splitting pattern implied by the operator

$$\vec{\tau }_i\cdot \vec{\tau }_j\,\vec{\sigma }_i\cdot \vec{\sigma }_j,$$ (1)

should be inconsistent with chiral symmetry because it is inconsistent with the leading nonanalytic contribution to the baryon mass predicted by chiral perturbation theory (ChPT). Here I show that ChPT has no bearing on this issue.

There are two distinct aspects of chiral symmetry in QCD. The first one is the spontaneous (dynamical) breaking of chiral symmetry, which leads to the constituent (dynamical) masses of quarks (which are related to the quark condensates), massless Goldstone bosons, and their couplings to constituent quarks. As a result there appears, in the chiral limit, the massless Goldstone boson exchange interaction between the constituent quarks:

$$V_\chi =-\frac{g^2}{4\pi }\frac{1}{12m^2}\vec{\tau }_i\cdot \vec{\tau }_j\,\vec{\sigma }_i\cdot \vec{\sigma }_j\,4\pi \delta (\vec{r}),$$ (2)

where the tensor force component, which is irrelevant to the discussion here, has been dropped. In reality the contact interaction (2) is smeared by the finite size of the constituent quarks and pions. (Note that in the chiral limit the volume integral constraint, $`\int d\vec{r}\,V_\pi (\vec{r})=0`$, does not apply, as in this case the pion Green function $`\frac{1}{\vec{q}^{\,2}}`$ exactly cancels the $`\vec{q}^{\,2}`$ behaviour supplied by the pion-quark vertices.) Following Thomas and Krein, I consider for simplicity only the pion-exchange part of the complete Goldstone boson exchange interaction.

The second aspect of chiral symmetry is its explicit breaking by the nonzero mass of the current quarks. This slightly modifies the constituent mass $`m`$, both through the direct contribution of the current quark mass and through pion loop self-energy corrections, which are subject to renormalization. The other implication of the explicit chiral symmetry breaking is that the pion obtains a finite mass $`\mu `$, which to leading order is illustrated by current algebra (the Gell-Mann-Oakes-Renner relation). As a consequence there appears a long-range Yukawa potential interaction between quarks:

$$V_\chi =\frac{g^2}{4\pi }\frac{1}{12m^2}\vec{\tau }_i\cdot \vec{\tau }_j\,\vec{\sigma }_i\cdot \vec{\sigma }_j\left(\mu ^2\frac{e^{-\mu r}}{r}-4\pi \delta (\vec{r})\right).$$ (3)

Chiral perturbation theory concerns only the implications of the explicit chiral symmetry breaking term in (3) and hence by definition cannot be used to derive the expression (2). The idea of the chiral constituent quark model is that the main features of the baryon spectrum are supplied by the spontaneous breaking of chiral symmetry, i.e.
by the constituent mass of the quarks and the interaction (2) between them. (A short-range interaction of the same type also arises from $`\rho `$-meson exchange and/or correlated two-pion exchange, etc., and there are important reasons to believe that the latter contributions are also important .) As a consequence the $`N`$ and $`\mathrm{\Delta }`$ are split already in the chiral limit, as must be the case. The expressions (in the notation of ref. )

$$M_N=M_0-15P_{00}^\pi ,$$

$$M_\mathrm{\Delta }=M_0-3P_{00}^\pi ,$$ (4)

where $`P_{00}^\pi `$ is positive, arise from the interaction (2). The long-range Yukawa tail, which has the opposite sign, represents only a small perturbation. It is in fact possible to obtain a near perfect fit of the baryon spectrum in a dynamical 3-body calculation in the chiral limit, neglecting the long-range Yukawa tail contribution, with a quality even better than that of . The implication is that ChPT has no bearing on the interactions (1) and (2), nor on the expressions (4), which should be considered as leading order contributions ($`m_c^0`$, where $`m_c`$ is the current quark mass) within chiral perturbation theory. This does not mean, however, that the systematic corrections from the finite meson (current quark) mass should be ignored. The pion-exchange Yukawa tail contribution contains at the same time part of the corrections from the leading nonanalytic order, $`\mu ^3`$, as well as part of the corrections from the other orders, with opposite sign. A rough idea of the importance of these corrections for the $`N`$ and $`\mathrm{\Delta }`$ can be obtained by comparing the contributions of the first and second terms in (3) in nonperturbative calculations . The former turns out to be much smaller than the latter. This is because of the small matter radius of the $`N`$ and $`\mathrm{\Delta }`$ . For highly excited states, however, the role of the Yukawa tail increases because of the bigger baryon size, and thus the importance of the ChPT corrections should be expected to increase.
no-problem/9902/astro-ph9902239.html
ar5iv
text
# Small-scale anisotropy of cosmic rays above 1019eV observed with the Akeno Giant Air Shower Array ## 1 Introduction Investigation on anisotropy of extremely high energy cosmic rays is one of the most important aspects to reveal their origin. In energies $``$ 10<sup>19</sup>eV, cosmic rays slightly deflect in the galactic magnetic field if they are protons of galactic origin, so that one could observe the correlation of their arrival directions with the galactic structure. Especially in the highest observed energy range, correlation of cosmic rays with the local structure of galaxies may be expected if their origins are nearby astrophysical objects and the intergalactic magnetic field is less than 10<sup>-9</sup> gauss. In the 1980’s, Wdowczyk, Wolfendale and their collaborators (Wdowczyk and Wolfendale (1984); Szabelski, Wdowczyk and Wolfendale (1986)) have shown that excess of cosmic rays from the direction of the galactic plane increases systematically with energy until a little above 10<sup>19</sup>eV, though the available data was not statistically enough at that time. Gillman and Watson (1993) have summarized anisotropies in right ascension and galactic latitude combining the Haverah Park data set with the data sets from the arrays at Volcano Ranch (Linsley (1980)), Sydney (Winn et al. (1986)) and Yakutsk (Efimov et al. (1986)). No convincing anisotropies were observed; but large amplitude of the second harmonics at (4 – 8) $`\times `$ 10<sup>18</sup>eV was reported. Ivanov (1998) showed, with the Yakutsk data set, a north-south asymmetry in the galactic latitude distribution which is the southern excess with 3.5 $`\sigma `$ deviation from an isotropic distribution in (5 – 20) $`\times `$ 10<sup>18</sup>eV. Recently, we have shown a significant anisotropy with first harmonic amplitude of $``$ 4 % in (0.8 – 2.0) $`\times `$ 10<sup>18</sup>eV, which corresponds to the chance probability of 0.2 % due to fluctuation of an isotropic distribution (Hayashida et al. (1998)). This anisotropy shows broad cosmic-ray flow from the directions of the galactic center and the Cygnus regions. In the higher energies, no significant large-scale anisotropy was found. Bird et al. (1998) have shown the galactic plane enhancement in the similar energy range. These experiments show that significant fraction of cosmic rays around 10<sup>18</sup>eV come from galactic sources. In the much higher energy range $``$ 4 $`\times `$ 10<sup>19</sup>eV, Stanev et al. (1995) have claimed that cosmic rays exhibit a correlation with the direction of the supergalactic plane and the magnitude of the observed excess is 2.5 – 2.8 $`\sigma `$ in terms of Gaussian probabilities. Their result was mainly based on the Haverah Park data set. In the same energy range, such large-scale correlation with the supergalactic plane was not observed in the data sets of the AGASA (Hayashida et al. (1996)), SUGAR (Kewley, Clay and Dawson (1996)) and Fly’s Eye (Bird et al. (1998)) experiments. However, AGASA observed three pairs of cosmic rays above 4 $`\times `$ 10<sup>19</sup>eV within a limited solid angle of the experimental accuracy and the chance probability is 2.9 % if cosmic rays distribute uniformly in the AGASA field of view. Two out of three are located nearly on the supergalactic plane. 
# Small-scale anisotropy of cosmic rays above $`10^{19}`$eV observed with the Akeno Giant Air Shower Array

If cosmic rays in each of these pairs come from the same source, a detailed study of the energy, arrival time and direction distribution of these clusters may bring information on their sources and on the intergalactic magnetic field (Sigl and Lemoine (1998); Medina Tanco (1998)). In the observed energy spectrum, there are two distinctive energies: $`E\sim 10^{19}`$eV and $`4\times 10^{19}`$eV. The former is the energy where the spectral slope changes (Lawrence, Reid and Watson (1991); Efimov et al. (1991); Bird et al. (1994); Yoshida et al. (1995); Takeda et al. (1998)). This is interpreted as a transition from galactic to extragalactic origin. The latter is the energy where the GZK effect (Greisen (1966); Zatsepin and Kuz'min (1966)), a series of energy losses through interactions with the cosmic microwave background photons, becomes important during propagation from the sources. It is important to study whether the arrival direction distribution of cosmic rays changes at these energies. The recent AGASA energy spectrum shows an extension beyond the expected GZK cutoff (Takeda et al. (1998)). Since the distance to the sources of cosmic rays above the expected GZK cutoff is limited to 50 Mpc (Hill and Schramm (1985); Berezinsky and Grigor'eva (1988); Yoshida and Teshima (1993)), their arrival directions may be correlated with the luminous matter distribution if they originate from astrophysical sources such as hot spots of radio galaxies (Biermann and Strittmatter (1987); Takahara (1990); Rachen and Biermann (1993); Ostrowski (1998)), active galactic nuclei (Blandford (1976); Lovelace (1976); Rees et al. (1982)), accretion flow onto a cluster of galaxies (Kang, Rachen and Biermann (1997)), relativistic shocks in gamma-ray bursts (Vietri (1995); Waxmann (1995)), and so on. There is another possibility: that the most energetic cosmic rays are generated through the decay of supermassive "X" particles related to topological defects (Bhattacharjee and Sigl (1998), and references therein). In this case, the arrival directions of the most energetic cosmic rays are not necessarily associated with luminous matter. If such particles are part of the Dark Matter and are concentrated in the galactic halo, an anisotropy associated with our galactic halo is expected (Kuzmin and Rubakov (1997); Berezinsky, Kachelriess and Vilenkin (1997)).

In this paper, we first examine the large-scale anisotropy in terms of various coordinates, using the data set of the Akeno Giant Air Shower Array (AGASA) until August 1998, including the old data set of the Akeno 20 $`km^2`$ array (A20) before 1990. Then we search for small-scale anisotropy above $`10^{19}`$eV with the AGASA data set.

## 2 Experiment

The Akeno Observatory is situated at 138$`\mathrm{°}`$ 30$`\mathrm{}`$ E and 35$`\mathrm{°}`$ 47$`\mathrm{}`$ N. AGASA consists of 111 surface detectors deployed over an area of about 100 $`km^2`$, and has been in operation since 1990 (Chiba et al. (1992); Ohoka et al. (1997)). A20 is a prototype detector system of AGASA, operated from 1984 to 1990 (Teshima et al. (1986)), and has been a part of AGASA since 1990. Each surface detector consists of plastic scintillators of 2.2 $`m^2`$ area. The detectors are placed with a separation of about 1 km. They are controlled and operated from a central computer through an optical fiber network.
Relative time differences among the detectors are measured with 40 ns accuracy; all clocks at the detector sites are synchronized to the central clock, and the signal-propagation times in cables and electronic devices are regularly measured at the start of each run (twice a day). The details of the AGASA instruments have been described in Chiba et al. (1992) and Ohoka et al. (1997). The accuracy of the shower-parameter determination is evaluated through the analysis of a large number of artificial events. These artificial events are generated taking account of experimentally determined air shower features and fluctuations. Figure 1 shows the accuracy of the arrival direction determination for cosmic-ray induced air showers as a function of energy. The vertical axis denotes the opening angle $`\mathrm{\Delta }\theta `$ between the input (simulated) and output (analyzed) arrival directions. The opening angles including 68 % and 90 % of the data are plotted. By analyzing artificial events with the same algorithm used above, the accuracy of the energy determination is estimated to be $`\pm `$ 30 % above $`10^{19}`$eV (Yoshida et al. (1995)). Table 1 lists the number of selected events, N(E), with zenith angles smaller than 45$`\mathrm{°}`$ and with core locations inside the array area. Events below $`10^{19}`$eV are used only as a reference in this paper. The difference in the ratio N(E $`\ge `$ 3.2 $`\times `$ $`10^{19}`$eV) $`/`$ N(E $`\ge `$ $`10^{19}`$eV) between A20 and AGASA arises from the difference in the detection efficiency of each system. Seven events are observed above $`10^{20}`$eV, including one event observed after Takeda et al. (1998).

## 3 Results

Figure 2(a) shows the arrival directions of cosmic rays with energies above $`10^{19}`$eV in equatorial coordinates. Dots, open circles, and open squares represent cosmic rays with energies of (1 – 4) $`\times `$ $`10^{19}`$eV, (4 – 10) $`\times `$ $`10^{19}`$eV, and $`\ge `$ $`10^{20}`$eV, respectively. The shaded regions indicate the celestial regions excluded in this paper due to the zenith angle cut of $`\le `$ 45$`\mathrm{°}`$. The galactic and supergalactic planes are drawn with dashed lines. "GC" designates the galactic center. Figure 2(b) shows the arrival directions of cosmic rays only above 4 $`\times `$ $`10^{19}`$eV in galactic coordinates. Details of the cosmic rays above 4 $`\times `$ $`10^{19}`$eV are listed in Table 2.

### 3.1 Analysis in the Equatorial Coordinates

#### 3.1.1 Harmonic Analysis

In order to search for cosmic-ray anisotropy, it is required to compare observed and expected event frequencies in each region. An expected frequency is easily estimated as far as the exposure in each direction can be obtained; the uniformity of the observation time in solar time over several years, which results in uniform observation in right ascension, is expected for a surface array detection system operating as stably as AGASA. The fluctuation of the observation time in local sidereal time is (0.2 $`\pm `$ 0.1) %, which is small enough compared with the anisotropy in this energy range, so that the exposure (observation time $`\times `$ collection area) in right ascension is quite uniform.
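The harmonic analysis reported next follows the standard Rayleigh formalism; a minimal sketch, ours rather than the actual AGASA code, is:

```python
import numpy as np

def harmonic_analysis(ra_deg, m=1):
    """m-th harmonic in right ascension (Rayleigh analysis).

    ra_deg: event right ascensions in degrees.  Returns the amplitude r,
    the phase in degrees, and the chance probability exp(-n r^2 / 4) of
    drawing an amplitude >= r from an isotropic distribution.
    """
    alpha = m * np.radians(np.asarray(ra_deg))
    n = alpha.size
    a = 2.0 / n * np.sum(np.cos(alpha))
    b = 2.0 / n * np.sum(np.sin(alpha))
    r = np.hypot(a, b)                              # harmonic amplitude
    phase = (np.degrees(np.arctan2(b, a)) % 360.0) / m
    p_chance = np.exp(-n * r ** 2 / 4.0)
    return r, phase, p_chance
```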
Figure 3 shows the results of the first (left) and second (right) harmonics in right ascension. The amplitude (top), the phase (middle), and the chance probability (bottom) are shown in each energy bin. In the top panels of the harmonic amplitude, the shaded region is that expected from statistical fluctuation of an isotropic distribution with a chance probability larger than 10 %. No significant anisotropy above this level is found above 3.2 $`\times `$ $`10^{18}`$eV. This is consistent with our previous paper (Hayashida et al. (1998)), in which zenith angles up to 60$`\mathrm{°}`$ were used.

#### 3.1.2 Declination Distribution

Figure 4 shows the declination distribution of events above $`10^{19}`$eV (light shaded histogram) and $`10^{20}`$eV (dark shaded histogram). The solid curve is a third-order polynomial function fitted to the light shaded histogram. This curve is consistent with the zenith angle dependence of the AGASA exposure and is considered to be the expected distribution if cosmic rays are distributed isotropically on the celestial sphere. Since the trigger efficiency is independent of energy above $`10^{19}`$eV for zenith angles less than 45$`\mathrm{°}`$, this distribution is applied to the higher energies as well. An excess with 2.5 $`\sigma `$ deviation is found in $`\delta `$ $`=`$ \[30$`\mathrm{°}`$, 40$`\mathrm{°}`$\], and this will be discussed later.

### 3.2 Analysis in the Galactic and Supergalactic Coordinates

#### 3.2.1 Galactic and Supergalactic Plane Enhancement

If cosmic rays have an origin associated with nearby astrophysical objects, we may expect cosmic-ray anisotropy correlated with the galactic or supergalactic plane. Figure 5 shows the latitude distribution in the galactic (left) and supergalactic (right) coordinates in three energy ranges: (1 – 2) $`\times `$ $`10^{19}`$eV (top), (2 – 4) $`\times `$ $`10^{19}`$eV (middle), and $`\ge `$ 4 $`\times `$ $`10^{19}`$eV (bottom). The solid line in each panel indicates the cosmic-ray intensity expected from an isotropic distribution. In order to examine any preference of arrival directions along the galactic and supergalactic planes, the plane enhancement parameter $`f_E`$ introduced by Wdowczyk and Wolfendale (1984) was used. The $`f_E`$ value characterizes the anisotropy expressed by:

$$I_{obs}(b)/I_{exp}(b)=(1-f_E)+1.402f_E\mathrm{exp}(-b^2),$$ (1)

where $`b`$ is the galactic or supergalactic latitude in radians, and $`I_{obs}(b)`$ and $`I_{exp}(b)`$ are the observed and expected intensities at latitude $`b`$. A positive $`f_E`$ value suggests a galactic or supergalactic plane enhancement, $`f_E=0`$ indicates that the arrival direction distribution is isotropic, and a negative $`f_E`$ shows a depression around the plane. Figure 6 shows the dependence of $`f_E`$ on the primary energy for the galactic (left) and supergalactic (right) coordinates. Some excess can be seen around the supergalactic plane in the seventh energy bin ($`\mathrm{log}`$(E\[eV\]) $`=`$ \[19.1, 19.2\]), where $`f_E^{SG}=0.36\pm 0.15`$. At other energies, the arrival direction distribution is consistent with an isotropic distribution.
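Since eq. (1) is linear in $`f_E`$, the enhancement parameter can be extracted with a closed-form weighted least-squares fit; the sketch below is our own illustration, with hypothetical variable names.

```python
import numpy as np

def fit_plane_enhancement(b_deg, ratio, sigma):
    """Weighted least-squares estimate of f_E in eq. (1).

    b_deg: bin-center latitudes in degrees; ratio: I_obs/I_exp per bin;
    sigma: statistical error on each ratio.
    """
    b = np.radians(np.asarray(b_deg))
    g = 1.402 * np.exp(-b ** 2) - 1.0          # d(ratio)/d(f_E) from eq. (1)
    w = 1.0 / np.asarray(sigma) ** 2
    f_e = np.sum(w * g * (np.asarray(ratio) - 1.0)) / np.sum(w * g ** 2)
    f_e_err = np.sqrt(1.0 / np.sum(w * g ** 2))
    return f_e, f_e_err
```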
#### 3.2.2 $`\theta _{GC}`$ Distribution

Figure 7 shows the $`\mathrm{cos}`$($`\theta _{GC}`$) distribution, where $`\theta _{GC}`$ is the opening angle between the cosmic-ray arrival direction and the galactic center direction, for energies above $`10^{19}`$eV (top), 2 $`\times `$ $`10^{19}`$eV (middle), and 4 $`\times `$ $`10^{19}`$eV (bottom). The histograms are the observed distributions and the solid curves are those expected from an isotropic distribution. The observed distribution is consistent with the solid curve in all energy ranges. The dashed and dotted curves are expected from the Dark Matter Halo model (Berezinsky and Mikhailov (1998)) and will be discussed in Section 4.2.

### 3.3 Significance Map of Cosmic-Ray Excess/Deficit

There is no statistically significant large-scale anisotropy in the above one-dimensional analyses. Here, we search for two-dimensional anisotropy, taking account of the angular resolution event by event. Figures 8 and 9 show the contour maps of the cosmic-ray excess or deficit with respect to an isotropic distribution above $`10^{19}`$eV and 4 $`\times `$ $`10^{19}`$eV, respectively. A bright region indicates that the observed cosmic-ray intensity is larger than the expected intensity, and a dark region shows a deficit region. For each observed event, we calculate a point spread function which is assumed to be a normalized Gaussian probability distribution with a standard deviation equal to the angular resolution $`\mathrm{\Delta }\theta `$ obtained from Figure 1. The probability densities of all events are folded into cells of 1$`\mathrm{°}`$ $`\times `$ 1$`\mathrm{°}`$ in equatorial coordinates. At each cell, we sum up the densities within a 4.0$`\mathrm{°}`$ radius for Figure 8 and within a 2.5$`\mathrm{°}`$ radius for Figure 9. These radii are obtained from $`\sqrt{2}\times \mathrm{\Delta }\theta `$, and they make excess regions clearer. The reference distribution is obtained from an isotropic distribution. In these figures, the small statistics of observed and expected events result in bright regions at the lowest and highest declinations, and hence bright spots below $`\delta =0\mathrm{°}`$ are not significant. Two distinctive bright regions, broader than the angular resolution, are found in Figure 8. They are referred to as broad clusters: BC1 ($`20^h50^m`$, 32$`\mathrm{°}`$) and BC2 ($`1^h40^m`$, 35$`\mathrm{°}`$). The member events within a 4$`\mathrm{°}`$ radius of BC1 are listed in Table 3. Four brighter regions at middle declinations are found in Figure 9: the C1 – C4 clusters, which are noted in the eighth column of Table 2.
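A minimal sketch of this map-making step follows (ours; the grid and loop structure are illustrative, and a proper great-circle distance should replace the small-angle approximation near the poles).

```python
import numpy as np

def smoothed_density(ra_ev, dec_ev, dtheta_ev, grid_ra, grid_dec):
    """Fold each event's Gaussian point-spread function (width = its
    angular resolution dtheta, in degrees) into 1x1 degree cells."""
    dens = np.zeros((len(grid_dec), len(grid_ra)))
    for r0, d0, s in zip(ra_ev, dec_ev, dtheta_ev):
        for j, d in enumerate(grid_dec):
            # small-angle angular distance; adequate away from the poles
            dra = (grid_ra - r0) * np.cos(np.radians(d))
            dist2 = dra ** 2 + (d - d0) ** 2
            dens[j] += np.exp(-dist2 / (2 * s * s)) / (2 * np.pi * s * s)
    return dens

def summed_map(dens, grid_ra, grid_dec, radius):
    """Sum the cell densities within `radius` degrees of each cell,
    with radius = 4.0 (2.5) degrees for the two maps described above."""
    out = np.zeros_like(dens)
    for j, d in enumerate(grid_dec):
        for i, r in enumerate(grid_ra):
            dra = (grid_ra[None, :] - r) * np.cos(np.radians(d))
            ddec = grid_dec[:, None] - d
            out[j, i] = dens[(dra ** 2 + ddec ** 2) <= radius ** 2].sum()
    return out
```

The significance of the excess or deficit in each cell is then obtained by comparing with maps built in the same way from simulated isotropic event sets.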
The C1 – C3 clusters follow the notation used in our previous analysis (Hayashida et al. (1996)). The C2 cluster is observed in both energy ranges. The contour maps have eight steps in \[$`-3\sigma `$, $`+3\sigma `$\]; the lower two steps, below $`-1.5\sigma `$, are absent. The significances of the deviation from an isotropic distribution are estimated to be 2.4 $`\sigma `$ at the C2 cluster, 2.7 $`\sigma `$ at the BC1 cluster, and 2.8 $`\sigma `$ at the BC2 cluster. The arrival directions of cosmic rays around the BC1 cluster are shown in Figure 10(a), where the radius of each circle corresponds to the logarithm of its energy. Shaded circles have energies above $`10^{19}`$eV and open circles below $`10^{19}`$eV. Figure 10(b) shows the arrival time – energy relation; open circles denote members of the BC1 cluster. The members of the BC1 cluster have energies between $`10^{19}`$eV and 2.5 $`\times `$ $`10^{19}`$eV, and no excess of cosmic rays is observed below $`10^{19}`$eV around this direction. Five members of the BC1 cluster were observed around MJD 50,000. This cluster is in the direction of a famous supernova remnant, the Cygnus Loop, which extends about 3$`\mathrm{°}`$ around ($`20^h50^m`$, 30$`\mathrm{°}`$ 34$`\mathrm{}`$). The BC2 cluster is the broader cluster, without a clear boundary. The BC1 and BC2 clusters contribute the excess around $`\delta =35\mathrm{°}`$ shown in Figure 4. The C2 and BC2 clusters are located near the supergalactic plane and lead to the largest $`f_E^{SG}`$ value in Section 3.2.1. Because of the small statistics of observed events, Figure 9 reflects the arrival directions of the individual events (open squares and open circles in Figure 2(a)). The brightest peak is at the C2 cluster, where three cosmic rays are observed against 0.05 expected events. It is possible that some of these clusters are observed by chance coincidence. It should be noted, however, that two of these clusters, the doublet (C1) including the AGASA highest energy event and the triplet (C2), lie near the supergalactic plane, as pointed out in our previous analysis (Hayashida et al. (1996)). The arrival directions (left) and the arrival time – energy relation (right) for the C1 (top) and C2 (bottom) clusters are shown in Figure 11. The radius of each circle in the left panels corresponds to the logarithm of its energy, and open circles in the right panels denote members of the C1 and C2 clusters. Around the C2 cluster, several lower energy cosmic rays are observed very close to the cluster direction.

### 3.4 Cluster Analysis

The threshold energy of 4 $`\times `$ $`10^{19}`$eV is one distinctive energy at which the GZK effect becomes large, as mentioned in Section 1.
It is, however, quite important to examine how the results depend on the threshold energy. To begin with, we estimate the chance probability of observing one triplet and three doublets among the 47 cosmic rays above 4 $`\times `$ 10<sup>19</sup>eV. A cluster of cosmic rays is defined as follows:

1. Select the $`i`$-th event;
2. Count the number of events within a circle of radius 2.5$`\mathrm{°}`$ centered on the arrival direction of the $`i`$-th event;
3. If this number of events reaches a certain threshold value $`N_{th}`$, the $`i`$-th event is counted as a cluster.

This procedure was repeated for all 47 events and the total number of clusters for each $`N_{th}`$ was determined. The chance probability $`P_{ch}`$ of observing this number of clusters under an isotropic distribution is obtained from the distribution of the number of clusters in 10,000 simulated data sets, which were analyzed by the same procedure described above (see also the sketch below). Out of the 10,000 simulations, 32 trials had as many or more doublets ($`N_{th}=2`$) than the observed data set, so that $`P_{ch}=0.32\%`$. For triplets ($`N_{th}=3`$), $`P_{ch}=0.87\%`$.

The energy dependence of the chance probability for observing (a) doublets and (b) triplets is then estimated. When a new cluster is added above a threshold energy, the histogram changes discontinuously at that energy. At the maximum threshold energy at which the triplet is detected, we find $`P_{ch}=0.16\%`$ in panel (b). The narrow peaks of $`P_{ch}\lesssim 0.1\%`$ above 4 $`\times `$ 10<sup>19</sup>eV in panel (a) result from the C1, C3 and C4 doublets, and another doublet, C5, is found just below 4 $`\times `$ 10<sup>19</sup>eV. These chance probabilities are estimated without taking into account the freedom in the choice of threshold energy. However, the chance probabilities are smaller than 1 % and do not vary abruptly with energy above 4 $`\times `$ 10<sup>19</sup>eV. This suggests that the threshold energy of 4 $`\times `$ 10<sup>19</sup>eV for the doublets and the triplet may mark a critical energy, and that their sources are not very distant and differ from the sources of cosmic rays below this energy.

### 3.5 10<sup>20</sup>eV Events

Seven events have been observed with energies above 10<sup>20</sup>eV; their energies and coordinates are also listed in Table 2. Their declinations lie near $`\delta \simeq 20\mathrm{°}`$, while the expectation from an isotropic distribution is shown by the solid curve in the corresponding figure. To check whether these seven events are distributed isotropically or not, we compare the celestial distribution of the seven 10<sup>20</sup>eV events with that of the events between 10<sup>19</sup>eV and 10<sup>20</sup>eV in ten different coordinates. The Kolmogorov-Smirnov (KS) test (Press et al. (1988)) was used to avoid any binning effect. The results are summarized in Table 4.
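The counting procedure of Section 3.4 (steps 1 – 3 above) and the Monte Carlo estimate of $`P_{ch}`$ can be sketched in a few lines. Everything here is illustrative: the simulated skies assume a uniform exposure over a declination band, whereas the real analysis draws events from the AGASA acceptance:

```python
import numpy as np

def sep(ra1, dec1, ra2, dec2):
    """Opening angle in degrees between directions given in degrees."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    c = np.sin(d1)*np.sin(d2) + np.cos(d1)*np.cos(d2)*np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def n_clusters(ra, dec, n_th, radius=2.5):
    """Events with at least n_th events (themselves included) within
    `radius` degrees, following steps 1-3 above.  Each member of a
    doublet is flagged; the same convention is applied to the
    simulated sets, so P_ch is insensitive to this choice."""
    n = 0
    for i in range(len(ra)):
        if np.sum(sep(ra[i], dec[i], ra, dec) <= radius) >= n_th:
            n += 1
    return n

def p_chance(ra_obs, dec_obs, n_th, n_sim=10000, seed=0):
    """Fraction of simulated isotropic skies with at least as many
    clusters as observed (illustration: uniform exposure over
    -10 deg < dec < 90 deg)."""
    rng = np.random.default_rng(seed)
    n_obs = n_clusters(ra_obs, dec_obs, n_th)
    n_ev, worse = len(ra_obs), 0
    smin = np.sin(np.radians(-10.0))
    for _ in range(n_sim):
        ra = rng.uniform(0.0, 360.0, n_ev)
        dec = np.degrees(np.arcsin(rng.uniform(smin, 1.0, n_ev)))
        worse += n_clusters(ra, dec, n_th) >= n_obs
    return worse / n_sim
```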
The smallest KS probability in Table 4 is 2.5 % for the declination distribution, but this probability becomes larger when the data set above 6.3 $`\times `$ 10<sup>19</sup>eV is used. One interesting feature is that five of the 10<sup>20</sup>eV cosmic rays come from the south-west of the AGASA array, where the strength of the geomagnetic field component perpendicular to an air shower axis is larger than in other directions (Stanev and Vankov (1997)).

## 4 Discussion

### 4.1 Comparison with other experiments

Above 3 $`\times `$ 10<sup>18</sup>eV, no large-scale anisotropy has been found with the harmonic analysis and the $`f_E^G`$ fit. Gillman and Watson (1993) summarized the $`f_E^G`$ values using the data sets obtained mainly from the Haverah Park experiment, and obtained no significant deviation from $`f_E^G=0`$. The result from the Fly's Eye experiment (Bird et al. (1998)) is consistent with an isotropic distribution of cosmic rays with E $`\ge `$ 10<sup>19</sup>eV. The analysis of the Yakutsk data set (Ivanov et al. (1997)) shows no significant galactic plane enhancement above 10<sup>18</sup>eV. The results from all experiments are thus consistent with the present result that cosmic rays above 10<sup>19</sup>eV show no correlation with the galactic plane. This points to an extragalactic origin of cosmic rays above 10<sup>19</sup>eV if they are mostly protons.

The BC1, BC2 and C1 – C5 clusters are found at energies above 10<sup>19</sup>eV or above 4 $`\times `$ 10<sup>19</sup>eV. The C2 and BC2 clusters give rise to the slight preference for the supergalactic plane in the energy range $`\mathrm{log}`$(E\[eV\]) $`=`$ \[19.1, 19.2\]. Combining the data sets of Haverah Park, Yakutsk, Volcano Ranch (Uchihori et al. (1996)) and AGASA, another triplet is found at the position of the C1 cluster, within the experimental error box of the arrival-direction determination. This triplet at the C1 position includes the AGASA highest-energy event and a 10<sup>20</sup>eV Haverah Park event. It should be noted that these triplets at the C1 and C2 positions are close to the supergalactic plane.

### 4.2 Correlation with Galactic Halo

Kuzmin and Rubakov (1997) and Berezinsky et al. (1997) have suggested a cosmic-ray source model associated with the Dark Matter distribution in our galactic halo. In this model, the most energetic cosmic rays are generated through the decay of supermassive particles which are trapped in the galactic halo and thus distributed symmetrically around the galactic center. The arrival directions of the most energetic cosmic rays therefore exhibit an anisotropy at the Earth (Berezinsky (1998)). From recent studies by Berezinsky and Mikhailov (1998) and Medina Tanco and Watson (1998), a significant anisotropy would be expected in the first harmonic of the right ascension distribution, with an amplitude of 40 % at a phase of about 250$`\mathrm{°}`$, independent of whether the ISO or the NFW model of the dark matter distribution in the galactic halo is adopted. The ISO and NFW models are described in Kravtsov et al. (1997) and Navarro, Frenk and White (1996), respectively. This expected anisotropy is consistent with the results of the harmonic analysis above 4 $`\times `$ 10<sup>19</sup>eV. However, the observed amplitude can also be explained by the statistical fluctuation of an isotropic distribution.
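The first-harmonic analysis referred to above is compact enough to state explicitly. A sketch using the standard Rayleigh formulae (`ra_deg` is a hypothetical array of right ascensions):

```python
import numpy as np

def first_harmonic(ra_deg):
    """Rayleigh first-harmonic analysis of a right-ascension sample.
    Returns the amplitude r, the phase in degrees, and the chance
    probability exp(-n r^2 / 4) of exceeding r under isotropy."""
    a = np.radians(np.asarray(ra_deg, dtype=float))
    n = a.size
    c, s = np.cos(a).sum(), np.sin(a).sum()
    r = 2.0*np.hypot(c, s)/n
    phase = np.degrees(np.arctan2(s, c)) % 360.0
    return r, phase, np.exp(-n*r*r/4.0)
```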
As shown by the dashed and dotted curves in the $`\mathrm{cos}(\theta _{GC})`$ figure, the ISO and NFW models of the Dark Matter distribution in the galactic halo lead to an excess toward the galactic center. Table 5 shows the reduced-$`\chi ^2`$ values of the observed $`\mathrm{cos}(\theta _{GC})`$ distribution with respect to the isotropic, ISO and NFW models. Although the distributions expected from the ISO and NFW models are quite different from the observed distribution at energies above 10<sup>19</sup>eV, the reduced-$`\chi ^2`$ values are close to one another above 2 $`\times `$ 10<sup>19</sup>eV and 4 $`\times `$ 10<sup>19</sup>eV. Above 2 $`\times `$ 10<sup>19</sup>eV, all three models are acceptable and it is hard to distinguish one from another.

### 4.3 Correlation with Nearby Galaxies

In Section 3.4, we calculated the chance probability of observing clusters under an isotropic distribution. If cosmic rays originate from astrophysical sources, the non-uniform distribution of galaxies or luminous matter should be taken into account, as claimed by Medina Tanco (1998). He calculated trajectories of cosmic rays above 4 $`\times `$ 10<sup>19</sup>eV in the intergalactic magnetic field under the assumption that the flux of cosmic rays is proportional to the local density of galaxies. The expected distribution of cosmic-ray intensity is then no longer uniform, and this may result in a strong anisotropy. Since this differs from what we observe, our estimate of the chance probability of observing clusters under an isotropic distribution is experimentally reliable. His calculation does, however, show important results: the C2 cluster lies on top of a maximum of the arrival probability for sources located between 20 and 50 Mpc, and the C1 cluster lies in a region of high arrival probability for sources at more than 50 Mpc. This suggests the possibility that the members of these clusters were generated at different sources. One needs to accumulate further statistics to clarify the relation between arrival direction, arrival time and energy (Medina Tanco (1998); Sigl and Lemoine (1998)), and to distinguish whether the members of the clusters come from a single source or from unrelated sources.

### 4.4 Correlation with Known Astrophysical Objects

As mentioned in Section 3.3, the BC1 cluster is in the direction of the Cygnus Loop (NGC6992/95). From the Hillas confinement condition on (magnetic field $`\times `$ size) for cosmic-ray acceleration (Hillas (1984)), the magnetic field in the shock of the Cygnus Loop is too small to accelerate cosmic rays up to 10<sup>19</sup>eV. Moreover, the observed energy distribution and the bunching in arrival time of the cluster members do not favor diffusive shock acceleration. Another possible candidate is PSR 2053$`+`$36, with a period of 0.2215 s and a magnetic field of about 3 $`\times `$ 10<sup>11</sup> gauss (Manchester and Taylor (1981)). It is plausible that such a highly magnetized pulsar could have accelerated cosmic rays up to 10<sup>19</sup>eV within a short time (Gunn and Ostriker (1969); Goldreich and Julian (1969)). It is highly desirable to search for signals from this direction in other energy ranges around MJD 50,000.

For the C1 – C5 clusters and the 10<sup>20</sup>eV cosmic rays, coincidences with known astrophysical objects were searched for in three catalogs: the second EGRET catalog (Thompson et al. (1995, 1996)), the CfA redshift catalog (Huchra et al. (1995)), and the eighth extragalactic redshift catalog (Veron-Cetty and Veron (1998)). The selection criteria are the following: (i) a separation angle within 4.0$`\mathrm{°}`$ of a member of each cluster, and within 2.5$`\mathrm{°}`$ of a 10<sup>20</sup>eV cosmic ray; (ii) a redshift within 0.02. In the CfA catalog, only QSOs/AGNs are selected. Candidate objects are listed in Table 6. Of these objects, Mrk 40 (VV 141, Arp 151) is an interacting galaxy and may be the most interesting. It should be noted that Al-Dargazelli et al. (1996) claimed that nearby colliding galaxies are the most favored sources of the clusters (regions of excess events) defined by them using the world data available before 1996.

## 5 Summary

In conclusion, there is no statistically significant large-scale anisotropy related to the galactic or the supergalactic plane. A slight supergalactic-plane enhancement is observed just above 10<sup>19</sup>eV and arises mainly from the BC2 and C2 clusters. Above 4 $`\times `$ 10<sup>19</sup>eV, one triplet and three doublets are found, and the probability of observing these clusters by chance coincidence is smaller than 1 %. In particular, the triplet is observed against an expectation of 0.05 events. Of these clusters, the C2 (AGASA triplet) and C1 (doublet including the AGASA highest-energy event, or a triplet together with the Haverah Park 10<sup>20</sup>eV event) clusters are the most interesting; they are triplets found in the world data sets and are located near the supergalactic plane. One must wait for further high-rate observations to distinguish whether the members of the clusters come from a single source or from different sources. The $`\mathrm{cos}(\theta _{GC})`$ distribution expected from the Dark Matter Halo model fits the data as well as an isotropic distribution does above 2 $`\times `$ 10<sup>19</sup>eV and 4 $`\times `$ 10<sup>19</sup>eV, but is a poorer fit than isotropy above 10<sup>19</sup>eV. The arrival-direction distribution of the 10<sup>20</sup>eV cosmic rays is consistent with that of cosmic rays at lower energies and is uniform. It is noteworthy that three of the seven 10<sup>20</sup>eV cosmic rays are members of doublets. The BC1 cluster is in the direction of the Cygnus Loop or the PSR 2053$`+`$36 region; it is desirable to examine signals from this direction in other energy bands around MJD 50,000. We hope that other experiments in the TeV – PeV region will explore the C1 – C5 cluster and 10<sup>20</sup>eV cosmic-ray directions.

We are grateful to Akeno-mura, Nirasaki-shi, Sudama-cho, Nagasaka-cho, Ohizumi-mura, Tokyo Electric Power Co. and Nihon Telegram and Telephone Co. for their kind cooperation. The authors are indebted to other members of the Akeno group for the maintenance of the AGASA array. The authors are grateful to Prof. V. Berezinsky for his suggestions on the analysis of the Dark Matter Halo hypothesis. M. Takeda acknowledges the receipt of a JSPS Research Fellowship. The authors thank Paul Sommers for his valuable suggestions on the preparation of the manuscript.
# Thermodynamics of the incommensurate state in $`Rb_2WO_4`$: on the Lifshitz point in $`A^{}A^{\prime \prime }BX_4`$ compounds

## I Introduction

The orientational ordering of $`BX_4`$ tetrahedra drives a rich sequence of structural phases in ionic $`A^{}A^{\prime \prime }BX_4`$ compounds of the $`K_2SO_4`$ type. In the present communication we are interested in the nature of the phase transition from the parent high-symmetry phase $`P6_3/mmc`$ (like $`\alpha `$-$`K_2SO_4`$) to the orthorhombic phase $`Pmcn`$ (like $`\beta `$-$`K_2SO_4`$) that occurs at high temperatures ($`T`$ between roughly 600 and 800 K), either directly:
$$Pmcn\stackrel{T_c}{\longleftrightarrow }P6_3/mmc$$ (1)
(e.g. in $`K_2SO_4`$, $`Rb_2SeO_4`$, $`K_2SeO_4`$) or via an intermediate $`1q`$ incommensurate ($`Inc`$) phase:
$$Pmcn\stackrel{T_l}{\longleftrightarrow }Inc\stackrel{T_i}{\longleftrightarrow }P6_3/mmc$$ (2)
as in the molybdates and tungstates $`Rb_2WO_4`$, $`K_2MoO_4`$, $`K_2WO_4`$. The incommensurate phase has the modulation vector $`𝐪=(0,q_b,0)`$, which can alternatively be directed along the two other equivalent directions of the $`120^{\circ }`$ star of the hexagonal Brillouin zone. All the transitions are of the order-disorder type and are characterized by the vertical (up/down) orientations of the $`BX_4`$ tetrahedra. Other, low-temperature transitions in $`A^{}A^{\prime \prime }BX_4`$ compounds, related to the planar orientation of the tetrahedra, are beyond our consideration (for details, see Refs. ).

From the viewpoint of the Landau theory of phase transitions, only the lock-in $`Pmcn`$$`Inc`$ transition should be of the first order. The $`Pmcn`$$`P6_3/mmc`$ transition should be of the second order, since $`Pmcn`$ is a subgroup of $`P6_3/mmc`$ and neither third-order nor Lifshitz terms are present in the Landau functional. The transition $`Inc`$$`P6_3/mmc`$ should also be of the second order, as a transition to an incommensurate phase of type II. The recently proposed hcp Ising model correctly describes the high-temperature phase diagram of $`A^{}A^{\prime \prime }BX_4`$ compounds; in this model the $`Pmcn`$$`P6_3/mmc`$ and $`Inc`$$`P6_3/mmc`$ transitions are of the second order.

The experimental properties of the $`Pmcn`$$`P6_3/mmc`$ transitions in various compounds of the $`A^{}A^{\prime \prime }BX_4`$ family are collected from Refs. in Table I as a function of the geometrical factor $`c/a`$ of their hcp structure. As was shown in our previous study, this is the unique parameter that drives the actual phase sequence: for $`c/a>1.26`$ the transition is direct, whereas for $`c/a<1.26`$ the sequence (2) takes place. In disagreement with the theoretical prediction, the direct $`Pmcn`$$`P6_3/mmc`$ transition is of the first order, with a large jump of the molar entropy (of order $`R\mathrm{ln}2`$) and of the lattice constants (about 2 %).

The incommensurate phase in $`Rb_2WO_4`$, $`K_2MoO_4`$ and $`K_2WO_4`$ has been relatively poorly studied because these compounds are highly hygroscopic. It is known that the $`Inc`$$`P6_3/mmc`$ transition shows a substantial discontinuity of the lattice parameter (0.2 – 0.7 %). To characterize the thermodynamics of the $`Pmcn`$$`Inc`$$`P6_3/mmc`$ transitions we performed Differential Scanning Calorimetry (DSC) measurements on $`Rb_2WO_4`$; these are reported in Sec. II. It was found that both transitions are of the first order, with entropy jumps of $`0.2R\mathrm{ln}2`$ and $`0.3R\mathrm{ln}2`$. The $`Inc`$$`P6_3/mmc`$ transition is a rare example of an incommensurate transition that occurs discontinuously.
At $`c/a\simeq 1.26`$ the critical temperatures $`T_l`$, $`T_i`$, $`T_c`$ coincide and the $`A^{}A^{\prime \prime }BX_4`$ compounds seem to reveal a triple Lifshitz point, which has previously been found in only a few experimental systems (for a review see Ref. ). The particular property of this Lifshitz point in $`A^{}A^{\prime \prime }BX_4`$ compounds is that all the incoming transition lines are of the first order. This possibility was studied theoretically in Ref. , where the discontinuities were modeled by negative fourth-order terms in the Landau functional. To our knowledge this is also a unique example of a Lifshitz point in a system where the modulation vector can be directed along more than one equivalent direction.

The main question raised by these systems is why strong discontinuities appear at the $`Pmcn`$$`P6_3/mmc`$ and $`Inc`$$`P6_3/mmc`$ transitions. Note first that they cannot be ascribed to fluctuation effects, which have been widely studied in recent decades in connection with transitions to modulated phases. In that case the first-order character is attributed to the lack of a stable fixed point, as in $`BaMnF_4`$, and the discontinuity is expected to be small because of the smallness of the critical region. We propose instead that the observed discontinuities are caused by the coupling of the order parameter to the elasticity of the crystal, which is known to be able to change the order of a transition. Introducing the corresponding coupling into the mean-field treatment of the hcp Ising model in Sec. III and comparing the results with the measured jumps of the lattice constant and molar entropy, we demonstrate that this coupling can be responsible for the discontinuity of the transitions.

## II Experiment

DSC experiments were performed on $`Rb_2WO_4`$ crystals to characterize the thermodynamics of the $`Pmcn`$$`Inc`$$`P6_3/mmc`$ transitions. Because of the very high hygroscopicity of the material, powder samples were prepared in a special chamber under a dry nitrogen atmosphere. The DSC experiments were performed using a Mettler-TA3000 apparatus between room temperature and $`820K`$, with a heating/cooling rate of $`5K/min`$.

The DSC thermograms of the investigated sample show the presence of two reversible enthalpic anomalies at about $`T_l=660K`$ and $`T_i=746K`$: the lock-in and the incommensurate phase transitions, respectively (see Fig. I). The measured molar entropy jumps are $`\mathrm{\Delta }S_{Tl}=1.4J/Kmol`$ and $`\mathrm{\Delta }S_{Ti}=1.8J/Kmol`$ (approximately $`80\%`$ of $`\mathrm{\Delta }S_{Ti}`$ comes from the $`\delta `$-peak of the DSC anomaly at $`T_i`$ and the other $`20\%`$ from the residual specific-heat decrease in a temperature interval of about $`8K`$ below $`T_i`$; see Fig. I). These values are given in Table I in units of $`R\mathrm{ln}2`$. A hysteresis of $`12.5K`$ was observed for the $`Pmcn`$$`Inc`$ transition ($`T_l=664\pm 0.5K`$ on heating and $`651.5\pm 0.5K`$ on cooling). In contrast, the $`Inc`$$`P6_3/mmc`$ transition reveals no hysteresis within the error bar of $`\pm 0.5K`$. This is consistent with the extremely small hysteresis of about $`1K`$ observed for the $`Pmcn`$$`P6_3/mmc`$ transition in $`K_2SO_4`$.

## III Discussion

The high-temperature order-disorder transitions in $`A^{}A^{\prime \prime }BX_4`$ compounds are described by the on-site averages of the vertical orientation of the $`BX_4`$ tetrahedra, $`\sigma _i=<S_i>`$, where the pseudo-spin $`S_i`$ is equal to $`\pm 1`$ for the up/down tetrahedra orientations. The variables $`\sigma _i`$ are equal to zero in the disordered high-temperature phase $`P6_3/mmc`$.
In the low-temperature phase $`Pmcn`$ they take equal amplitudes $`\sigma _i=\pm \sigma `$ and alternate according to the $`Pmcn`$ symmetry. In the incommensurate phase a modulation $`\sigma _i=\sigma _q(e^{i\mathrm{𝐪𝐫}_i}+e^{-i\mathrm{𝐪𝐫}_i})=2\sigma _q\mathrm{cos}\mathrm{𝐪𝐫}_i`$ occurs. The absolute values of $`\sigma _i`$, and hence the amplitudes $`\sigma `$ and $`2\sigma _q`$ (which define the corresponding order parameters), are smaller than one; the smaller they are, the more disordered the $`BX_4`$ tetrahedra. Because of the discontinuity of the $`Pmcn`$$`P6_3/mmc`$ and $`Inc`$$`P6_3/mmc`$ transitions, the amplitudes $`\sigma `$ and $`2\sigma _q`$ have nonvanishing values just below the critical temperatures $`T_c`$ and $`T_i`$. We estimate $`\sigma `$ and $`2\sigma _q`$ in the ordered states from the entropy jump at the transition,
$$\frac{\mathrm{\Delta }S}{R}=<\frac{1}{2}((1+\sigma _i)\mathrm{ln}(1+\sigma _i)+(1-\sigma _i)\mathrm{ln}(1-\sigma _i))>_i$$ (3)
which is measured experimentally (see Table I). For $`K_2SeO_4`$ and $`K_2SO_4`$ the inequality $`\mathrm{\Delta }S>R\mathrm{ln}2`$ holds, which means that in the low-temperature phase the $`BX_4`$ tetrahedra are perfectly ordered ($`\sigma \simeq 1`$) and, possibly, that other degrees of freedom are involved in the transition. Taking $`\sigma _i`$ in the incommensurate phase of $`Rb_2WO_4`$ as $`2\sigma _q\mathrm{cos}\mathrm{𝐪𝐫}_i`$, from $`\mathrm{\Delta }S=0.3R\mathrm{ln}2`$ we get $`2\sigma _q\simeq 0.8`$, which again demonstrates the high degree of tetrahedra ordering.

In the mean-field approach to the hcp Ising model, the phase transitions from $`P6_3/mmc`$ to the $`Pmcn`$ and $`Inc`$ phases were found to be continuous, and the free energy (per molecule) was expanded in the small parameters $`\sigma `$ and $`2\sigma _q`$ as
$$f_{com}=\frac{k}{2}(T-T_c)\sigma ^2+\frac{kT}{12}\sigma ^4$$ (4)
for the $`Pmcn`$$`P6_3/mmc`$ transition, and as
$$f_{inc}=\frac{k}{4}(T-T_i)(2\sigma _q)^2+\frac{kT}{32}(2\sigma _q)^4$$ (5)
for the $`Inc`$$`P6_3/mmc`$ transition. The critical temperatures $`T_c`$, $`T_i`$ are functions of the interaction parameters $`J_{ij}`$. They coincide at the Lifshitz point and are correlated with the geometrical factor $`c/a`$ as follows: $`T_c<T_i`$ when $`c/a<1.26`$ and $`T_c>T_i`$ when $`c/a>1.26`$.

To account for the discontinuity of the transitions, we propose that the coupling of the tetrahedra orientation to the elasticity of the crystal is responsible. Our treatment is analogous to the compressible Ising model proposed by Domb. For estimation purposes we consider here only the coupling with the strain $`e_3`$ along the hexagonal axis and omit the other elastic degrees of freedom; taking them into account is difficult because of the absence of experimental data, but it could only improve our estimates. The elastic contribution to the free energy is written as
$$f_{el}=\gamma \sigma ^2e_3+\frac{1}{2}V_{mol}C_{33}e_3^2$$ (6)
where $`\gamma \sigma ^2e_3`$ is the coupling of the order parameter with the elastic strain and $`\frac{1}{2}V_{mol}C_{33}e_3^2`$ is the proper elastic energy of the crystal ($`V_{mol}`$ being the volume per molecule). After minimization over $`e_3`$ we obtain the strain in the $`Pmcn`$ phase: $`e_3=\mathrm{\Delta }c/c=-\gamma \sigma ^2/V_{mol}C_{33}`$.
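As a consistency check on the amplitude quoted above, Eq. (3) can be averaged over one period of the sinusoidal modulation and solved numerically for $`2\sigma _q`$. A sketch (SciPy assumed; the single-harmonic form of $`\sigma _i`$ is the approximation used in the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def site_entropy(sigma):
    """Entropy change (units of R) of a site ordered to <S> = sigma, Eq. (3)."""
    return 0.5*((1.0+sigma)*np.log1p(sigma) + (1.0-sigma)*np.log1p(-sigma))

def delta_s(amp):
    """Eq. (3) averaged over one period of sigma_i = amp*cos(x)."""
    return quad(lambda x: site_entropy(amp*np.cos(x)), 0.0, 2*np.pi)[0]/(2*np.pi)

# solve <s> = 0.3 R ln2 for the modulation amplitude 2*sigma_q
amp = brentq(lambda a: delta_s(a) - 0.3*np.log(2.0), 1e-6, 1.0 - 1e-9)
print(amp)   # ~0.85; the text quotes the rounded value 2*sigma_q ~ 0.8
```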
Substituting it back into (6) we find that the coupling with the elastic strain renormalizes the quartic term in (4), and the total free energy becomes
$$f_{com}+f_{el}=\frac{k}{2}(T-T_c)\sigma ^2+(kT/12-\gamma ^2/2C_{33}V_{mol})\sigma ^4$$ (8)
The quartic term becomes negative when the elastic contribution $`\gamma ^2/2C_{33}V_{mol}`$ exceeds the Ising thermal energy $`kT/12`$. The transition is then of the first order, and the amplitude of the $`Pmcn`$ order parameter is stabilized by higher-order terms.

We estimate the value of the coupling constant $`\gamma `$ from the relation $`\gamma =(\mathrm{\Delta }c/c)V_{mol}C_{33}/\sigma ^2`$ which, in $`K_2SO_4`$, with $`C_{33}=55\times 10^9N/m^2`$, $`\sigma ^2\simeq 1`$, $`V_{mol}\simeq 120\AA ^3`$ and $`\mathrm{\Delta }c/c=0.025`$, gives $`\gamma \simeq 1.7\times 10^{-19}J`$. The elastic contribution $`\gamma ^2/2C_{33}V_{mol}\simeq 2.1\times 10^{-21}J`$ is then indeed larger than $`kT_c/12\simeq 10^{-21}J`$, which justifies the role of the elastic degrees of freedom in the discontinuity of the $`Pmcn`$$`P6_3/mmc`$ transition.

Consider now the $`Inc`$$`P6_3/mmc`$ transition. Assuming that the elastic coupling is given by $`\gamma <\sigma _i^2>e_3=\frac{1}{2}\gamma (2\sigma _q)^2e_3`$, we arrive at the effective functional
$$f_{inc}+f_{el}=\frac{k}{4}(T-T_i)(2\sigma _q)^2+(kT/32-\gamma ^2/8C_{33}V_{mol})(2\sigma _q)^4$$ (10)
We estimate the quartic coefficient for $`Rb_2WO_4`$ in an analogous way, taking $`2\sigma _q\simeq 0.8`$ and $`V_{mol}\simeq 142\AA ^3`$. Since the elastic constant $`C_{33}`$ is not available, we assume it has the same value as in $`K_2SO_4`$. The jump of the lattice parameter $`\mathrm{\Delta }c/c`$ is assumed to be of the same order, 0.007, as in $`K_2MoO_4`$. The calculation gives $`\gamma =2(\mathrm{\Delta }c/c)V_{mol}C_{33}/(2\sigma _q)^2\simeq 2\times 10^{-19}J`$ and $`\gamma ^2/8C_{33}V_{mol}\simeq 13\times 10^{-22}J`$, which again is larger than the bare fourth-order coefficient $`kT_i/32=3\times 10^{-22}J`$.

To conclude, we suggest that a Lifshitz point occurs in $`A^{}A^{\prime \prime }BX_4`$ compounds at $`c/a\simeq 1.26`$, where three first-order transition lines meet. One can expect to reach this point experimentally either by preparing the solid solution $`Rb_2W_xMo_{1-x}O_4`$ or by subjecting $`K_2SeO_4`$ or $`Tl_2SeO_4`$ (with $`c/a=1.27`$ and $`1.26`$) to a uniaxial pressure along $`c`$. Analyzing the experimental data, we have demonstrated that the coupling of the order parameter with the crystal elasticity can be responsible for the discontinuity of the transition.

We stress another peculiar feature of the $`Pmcn`$$`P6_3/mmc`$ and $`Inc`$$`P6_3/mmc`$ transitions: despite the strong entropy jump (of order $`R\mathrm{ln}2`$), they show a very low hysteresis (less than $`1K`$), which cannot be explained on the basis of the available models. It is interesting to note that the $`Pmcn`$$`P6_3/mmc`$ transition also occurs in another compound of the $`A^{}A^{\prime \prime }BX_4`$ family, $`KLiSO_4`$, which has a large ratio $`c/a=1.69`$. Unlike the other cases, this transition is either of the second order or weakly first order, with an entropy jump of less than $`0.1R\mathrm{ln}2`$ and no visible jump of the lattice constants. It is then quite probable that the order of the transition changes from first to second, and that the $`Pmcn`$$`P6_3/mmc`$ transition line contains a tricritical point as $`c/a`$ increases. More systematic experiments are, however, needed to verify this hypothesis.

We are grateful to M. A. Pimenta, R. L. Moreira, W. Selke, A. S. Chaves, F. C.
de Sá Barreto and J. A. Plascak for helpful discussions, and to A. M. Moreira for technical assistance. The work of I. L. was supported by the Brazilian agency Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) and by the Russian Foundation for Fundamental Investigations (RFFI), Grant No. 960218431a. The work of A. J. was supported by the Brazilian agency Fundação Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
# Rigidity for periodic magnetic fields

## 1 Introduction

It was proved by E. Hopf that for a Riemannian 2-torus with non-vanishing curvature there always exist geodesics with conjugate points. This result was finally generalised to higher dimensions in . We refer the reader to , , and for an incomplete list of previous important contributions. It was discovered in (see also ) that the nature of E. Hopf's rigidity is not entirely Riemannian and can be established for other variational problems. In each case this requires a new integral-geometric tool adapted to the system under consideration.

In this paper we give a proof of such a rigidity result for the motion of a charged particle on a torus in the presence of a magnetic field, provided the Riemannian metric is conformally flat. This result gives a twisted version of a theorem by A. Knauf and C. Croke–A. Fathi ( and ). We formulate the conjecture that the restriction of conformal flatness is not essential for the result. This, if true, would provide a twisted version of the Burago–Ivanov theorem.

In the situation we consider, it would be reasonable to expect that there always exists a periodic orbit with conjugate points. This fact is, however, much more difficult to approach; to the best of our knowledge it is not proved even in the usual untwisted case. Moreover, it seems that even the existence of a periodic orbit is not completely settled for an arbitrary magnetic field on the torus. We refer the reader to a survey paper by V. Ginzburg for known results and techniques.

## 2 Main Results

A magnetic field on a torus $`T^n`$ is a closed 2-form which we will denote by $`\beta `$. Consider the cotangent bundle $`T^{}T^n`$ and define the symplectic structure twisted by $`\beta `$:
$$\omega =\omega _0+\pi ^{}\beta $$
where $`\omega _0`$ is the standard structure and $`\pi `$ is the canonical projection. For a Riemannian metric $`g`$ on $`T^n`$ consider the Hamiltonian flow $`g^t`$ of the function
$$H=\frac{1}{2}<p,p>_g$$
computed with the help of the symplectic structure $`\omega `$. Let us recall that the no-conjugate-points condition for an orbit $`\vartheta `$ of $`g^t`$ means that for any two points $`x,y\in \vartheta `$ the subspace $`g_{}^tV(x)`$ is transversal to $`V(y)`$, where $`t`$ is the time difference between $`y`$ and $`x`$ and $`V(x)`$, $`V(y)`$ are the vertical subspaces at $`x,y`$.

###### Theorem 1

Suppose that the Riemannian metric $`g`$ is conformally flat. Then there always exist orbits of $`g^t`$ on the level $`\{H=1/2\}`$ with conjugate points, unless the 2-form $`\beta `$ vanishes identically.

###### Remark 1

In other words, the no-conjugate-points condition implies $`\beta 0`$, and then it follows from and that the metric is flat as well. We will in fact see this once more in our computations below.

The proof suggested in this paper follows the original scheme of E. Hopf; however, it might need serious modifications if one tries to generalise the result to an arbitrary Riemannian metric. The first ingredient of the proof is the construction of a measurable field of Lagrangian subspaces $`l(x)\subset T_x(T^{}T^n)`$ for every $`x\in \{H=1/2\}`$. This field can be constructed by the following limit procedure:
$$l(x)=\lim_{t\to +\infty }g_{}^tV\left(g^{-t}(x)\right)$$
It was first used by E. Hopf and L. Green in the Riemannian case. We refer the reader to the recent paper for the proof in the general optical case. It follows from the very construction of $`l`$ that the field $`l`$ is invariant under $`g^t`$ and is transversal to the vertical field $`V`$ everywhere.
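Before turning to the results, it may help to see the twisted flow concretely. For $`\beta =B(q)dq_1\wedge dq_2`$ on $`T^2`$ and $`H=\lambda (q)(p_1^2+p_2^2)/2`$, Hamilton's equations with respect to $`\omega =\omega _0+\beta `$ acquire a Lorentz-force term. The following sketch integrates them numerically; the conformal factor, the field $`B`$ and the sign convention $`i_X\omega =-dH`$ are illustrative choices, not taken from the paper:

```python
import numpy as np

# illustrative choices: conformal factor and magnetic field on the torus
lam = lambda q: 1.0 + 0.3*np.cos(2.0*np.pi*q[0])
dlam = lambda q: np.array([-0.6*np.pi*np.sin(2.0*np.pi*q[0]), 0.0])
B = lambda q: 0.5 + 0.2*np.sin(2.0*np.pi*q[1])

def rhs(y):
    """Twisted Hamilton equations for H = lam(q)(p1^2+p2^2)/2 and
    omega = omega_0 + B dq1^dq2 (sign convention i_X omega = -dH):
    qdot = dH/dp,  pdot = -dH/dq + B*(qdot_2, -qdot_1)."""
    q, p = y[:2], y[2:]
    qdot = lam(q)*p
    pdot = -0.5*np.dot(p, p)*dlam(q) + B(q)*np.array([qdot[1], -qdot[0]])
    return np.concatenate([qdot, pdot])

def rk4(y, h, nsteps):
    for _ in range(nsteps):
        k1 = rhs(y); k2 = rhs(y + 0.5*h*k1)
        k3 = rhs(y + 0.5*h*k2); k4 = rhs(y + h*k3)
        y = y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        y[:2] %= 1.0                    # periodic coordinates on T^2
    return y

y = rk4(np.array([0.1, 0.2, 1.0, 0.0]), 1e-3, 5000)
print(y, lam(y[:2])*np.dot(y[2:], y[2:])/2)   # H is (approximately) conserved
```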
With the construction of $`l`$, Theorem 1 is a corollary of the following:

###### Theorem 2

Let $`l`$ be a measurable field of Lagrangian subspaces invariant under the flow $`g^t`$. If $`l`$ is transversal to $`V`$ everywhere, then $`\beta `$ vanishes identically and the metric $`g`$ is flat.

This last theorem has the following dynamical interpretation.

###### Theorem 3

Suppose that the energy shell $`\{H=1/2\}`$ is smoothly foliated by Lagrangian tori homologous to the zero section of $`T^{}T^n`$. Then the 2-form $`\beta `$ vanishes identically and the metric is flat.

###### Remark 2

In the paper , such a situation is called total integrability, and simple examples of totally integrable magnetic geodesic flows are given there. Let me remark that the Lagrangian tori in all these examples cannot be homologous to the zero section, as follows from Theorem 3. It is an interesting, completely open problem to characterise totally integrable magnetic geodesic flows.

Acknowledgements. I was introduced to the subject of magnetic fields by Viktor Ginzburg (UC Santa Cruz), who suggested to me the question of rigidity for magnetic fields. I am deeply grateful to him for many very useful discussions. The results of this paper were presented at the Symplectic Geometry meeting in Warwick in 1998 and at the Arthur Besse Geometry Seminar. I am thankful to D. Salamon, F. Laudenbach and P. Gauduchon for inviting me to speak there. I would also like to thank the EPSRC and Arc-en-Ciel for their support.

## 3 Proofs

Let me explain first how Theorem 3 follows from Theorem 2. Since the tori are Lagrangian and homologous to the zero section, the 2-form $`\beta `$ must be exact. Denote by $`\alpha `$ a primitive 1-form. Then one can easily see that the flow $`g^t`$ is equivalent to the Hamiltonian flow $`\stackrel{~}{g}^t`$ of the function $`\stackrel{~}{H}=\frac{1}{2}<p-\alpha ,p-\alpha >_g`$ with respect to the standard structure $`\omega _0`$. This equivalence is given by the fiber-preserving diffeomorphism $`(q,p)\mapsto (q,p+\alpha )`$. Note that the function $`\stackrel{~}{H}`$ is strictly convex in $`p`$. It follows from the generalised Birkhoff theorem (see for its most general form, and for a survey and discussion) that all the Lagrangian tori are sections of the cotangent bundle. But then the distribution of their tangent spaces meets the conditions of Theorem 2.

Proof of Theorem 2. We shall work in standard coordinates $`(q,p)`$ on $`T^{}T^n`$, in which the Riemannian metric $`g`$ is given by
$$ds^2=\frac{1}{2\lambda }\left(dq_1^2+\mathrm{}+dq_n^2\right).$$
Then
$$H(p,q)=\frac{\lambda }{2}\left(p_1^2+\mathrm{}+p_n^2\right)$$ (3.1)
Write $`\beta =d\alpha +\gamma `$, where $`\alpha `$ is a 1-form and $`\gamma `$ is a 2-form with constant coefficients in the coordinates $`(q_1\mathrm{}q_n)`$, $`\gamma =\mathrm{\Sigma }_{i<j}\gamma _{ij}dq_i\wedge dq_j`$. By the change of coordinates
$$(q,p)\mapsto (q,p+\alpha )$$
we obtain an equivalent Hamiltonian flow $`\stackrel{~}{g}^t`$ of
$$\stackrel{~}{H}(p,q)=H(p-\alpha ,q)=\frac{1}{2}\lambda \left((p_1-\alpha _1)^2+\mathrm{}+(p_n-\alpha _n)^2\right)$$
with respect to the symplectic form
$$\stackrel{~}{\omega }=\omega _0+\gamma =\mathrm{\Sigma }_{i=1}^ndp_i\wedge dq_i+\mathrm{\Sigma }_{i<j}\gamma _{ij}dq_i\wedge dq_j.$$
Denote by $`\stackrel{~}{l}`$ the invariant distribution of Lagrangian subspaces with respect to $`\stackrel{~}{\omega }`$.
At any point $`x=(q,p)\in \{\stackrel{~}{H}=1/2\}`$ it is given by a matrix $`A(x)`$:
$$dp=A(p,q)dq$$
The condition of being Lagrangian is equivalent to
$$A^T-A=-\mathrm{\Gamma }$$
where $`\mathrm{\Gamma }`$ is the skew-symmetric matrix of $`\gamma `$. Then $`A`$ is a measurable matrix function and satisfies the following Riccati equation along the flow $`\stackrel{~}{g}^t`$:
$$\dot{A}+(A+\mathrm{\Gamma })\stackrel{~}{H}_{pp}A+(A+\mathrm{\Gamma })\stackrel{~}{H}_{pq}+\stackrel{~}{H}_{qp}A+\stackrel{~}{H}_{qq}=0$$
where $`\stackrel{~}{H}_{pp}`$, $`\stackrel{~}{H}_{pq}`$, $`\stackrel{~}{H}_{qq}`$ are the matrices of second derivatives of $`\stackrel{~}{H}`$. It can be rewritten in the form
$$\dot{A}+\left((A+\mathrm{\Gamma })\stackrel{~}{H}_{pp}^{1/2}+\stackrel{~}{H}_{qp}\stackrel{~}{H}_{pp}^{-1/2}\right)\left(\stackrel{~}{H}_{pp}^{1/2}A+\stackrel{~}{H}_{pp}^{-1/2}\stackrel{~}{H}_{pq}\right)+\left(\stackrel{~}{H}_{qq}-\stackrel{~}{H}_{qp}\stackrel{~}{H}_{pp}^{-1}\stackrel{~}{H}_{pq}\right)=0$$
Introduce the function $`a(p,q)=trA(p,q)`$. Then it follows that
$$\dot{a}+tr\left(\stackrel{~}{H}_{qq}-\stackrel{~}{H}_{qp}\stackrel{~}{H}_{pp}^{-1}\stackrel{~}{H}_{pq}\right)\le 0$$
with equality possible only when $`A=-\stackrel{~}{H}_{pp}^{-1}\stackrel{~}{H}_{pq}`$. Integrate now this inequality with respect to the invariant measure $`\stackrel{~}{\mu }`$ on the energy level $`\{\stackrel{~}{H}=1/2\}`$; since the integral of $`\dot{a}`$ vanishes, we get
$$\sigma (\stackrel{~}{H})=\int tr\left(\stackrel{~}{H}_{qq}-\stackrel{~}{H}_{qp}\stackrel{~}{H}_{pp}^{-1}\stackrel{~}{H}_{pq}\right)𝑑\stackrel{~}{\mu }\le 0$$ (3.2)
with equality possible only for $`A=-\stackrel{~}{H}_{pp}^{-1}\stackrel{~}{H}_{pq}`$ everywhere on the level $`\{\stackrel{~}{H}=1/2\}`$. On the other hand, we shall compute $`\sigma (\stackrel{~}{H})`$ with the help of the following two lemmas.

###### Lemma 1

For any Hamiltonian function $`H(p,q)`$ convex and symmetric in $`p`$, $`H(-p,q)=H(p,q)`$, one has $`\sigma (\stackrel{~}{H})=\sigma (H)`$, where $`\stackrel{~}{H}(p,q)=H(p-\alpha (q),q)`$ for any 1-form $`\alpha `$. Here $`\sigma (H)`$ is computed by the integral (3.2) with respect to the Hamiltonian $`H`$ and the invariant measure $`\mu `$ of $`g^t`$.

###### Lemma 2

If $`H`$ is given by (3.1) then $`\sigma (H)\ge 0`$, and equality is achieved only when $`\lambda =const`$.

###### Remark 3

The last Lemma is in fact the inequality proved in , appearing here in its Hamiltonian version.

Let me complete the proof of the theorem, postponing the proofs of the Lemmas. Comparing (3.2) with Lemmas 1 and 2, one concludes that the metric is Euclidean and
$$A=-\stackrel{~}{H}_{pp}^{-1}\stackrel{~}{H}_{pq},\text{where}\stackrel{~}{H}=\frac{1}{2}\mathrm{\Sigma }_{i=1}^n\left(p_i-\alpha _i\right)^2$$
Then
$$A=\left(A_{ij}\right)=\left(-\stackrel{~}{H}_{p_iq_j}\right)=\left(\frac{\partial \alpha _i}{\partial q_j}\right)$$ (3.3)
Compute and plug the second derivatives into the Riccati equation.
We have
$$(tr\dot{A})=-tr\left(\stackrel{~}{H}_{qq}-\stackrel{~}{H}_{qp}\stackrel{~}{H}_{pp}^{-1}\stackrel{~}{H}_{pq}\right)=\mathrm{\Sigma }_{i,j=1}^n\left(p_j-\alpha _j\right)\frac{\partial ^2\alpha _j}{\partial q_i^2}$$
Differentiating $`trA`$ explicitly along the flow $`\stackrel{~}{g}^t`$, one gets another expression:
$$(tr\dot{A})=\mathrm{\Sigma }_{j=1}^n\frac{\partial (trA)}{\partial q_j}\stackrel{~}{H}_{p_j}=\mathrm{\Sigma }_{i,j=1}^n(p_j-\alpha _j)\frac{\partial ^2\alpha _i}{\partial q_i\partial q_j}$$
Comparing these two expressions we obtain
$$\frac{\partial (trA)}{\partial q_j}=\mathrm{\Delta }\alpha _j\text{for}j=1,\mathrm{},n.$$
The compatibility condition then gives
$$\mathrm{\Delta }\left(\frac{\partial \alpha _k}{\partial q_j}-\frac{\partial \alpha _j}{\partial q_k}\right)=0\text{for any}j,k=1,\mathrm{},n.$$
This implies
$$\frac{\partial \alpha _k}{\partial q_j}-\frac{\partial \alpha _j}{\partial q_k}\equiv const=0\text{and thus}d\alpha =0.$$
Also, by (3.3) we have $`A^T-A=-\mathrm{\Gamma }=0`$. So $`\beta =d\alpha +\gamma =0`$. This completes the proof of the theorem. $`\mathrm{}`$

Proof of Lemma 1. We have to show that $`\sigma `$ is the same for $`H`$ and $`\stackrel{~}{H}=H(p-\alpha (q),q)`$. A direct computation gives the following expressions for the matrices of second derivatives:
$$\stackrel{~}{H}_{pp}=H_{pp}$$
$$\stackrel{~}{H}_{pq}=-H_{pp}D\alpha +H_{pq},\stackrel{~}{H}_{qp}=-(D\alpha )^TH_{pp}+H_{qp}$$
$$\stackrel{~}{H}_{qq}=(D\alpha )^TH_{pp}(D\alpha )-H_{qp}D\alpha -(D\alpha )^TH_{pq}+H_{qq}+M$$
$$\text{where}M=(M_{ij}),M_{ij}=-\mathrm{\Sigma }_{k=1}^nH_{p_k}\frac{\partial ^2\alpha _k}{\partial q_i\partial q_j}$$
Thus, after performing the change of variables $`(q,p)\mapsto (q,p-\alpha )`$, we obtain
$$\sigma (\stackrel{~}{H})=\sigma (H)-\int _{\left\{H=\frac{1}{2}\right\}}\mathrm{\Sigma }_{k,i=1}^nH_{p_k}\frac{\partial ^2\alpha _k}{\partial q_i^2}𝑑\mu $$
By the symmetry assumption the additional integral vanishes, because the integrand is an odd function of $`p`$ while the level is even. This completes the proof of the lemma. $`\mathrm{}`$

Proof of Lemma 2. The second derivatives of the Hamiltonian $`H=\frac{\lambda }{2}\left(p_1^2+\mathrm{}+p_n^2\right)`$ are given by
$$H_{p_ip_j}=\lambda \delta _{ij},H_{q_iq_j}=\frac{1}{2}\lambda _{q_iq_j}\left(p_1^2+\mathrm{}+p_n^2\right),H_{p_iq_j}=\lambda _{q_j}p_i$$
Thus, on the energy level $`\{H=\frac{1}{2}\}`$, where $`p_1^2+\mathrm{}+p_n^2=1/\lambda `$,
$$tr\left(H_{qq}-H_{qp}H_{pp}^{-1}H_{pq}\right)=\frac{1}{2\lambda }\mathrm{\Delta }\lambda -\frac{1}{\lambda ^2}(grad\lambda )^2$$
The invariant measure $`\mu `$ on $`\{H=\frac{1}{2}\}`$ is determined by the condition
$$d\mu \wedge dH=\omega _0^n$$
and can easily be computed; it equals
$$d\mu =\left(\frac{1}{\sqrt{\lambda }}\right)^nd\sigma dq$$
where $`d\sigma `$ is the standard measure on the unit sphere in $`p`$-space. Then we have
$$\sigma (H)=Vol(S^{n-1})\int \left(\frac{1}{2\lambda }\mathrm{\Delta }\lambda -\frac{1}{\lambda ^2}(grad\lambda )^2\right)\left(\frac{1}{\sqrt{\lambda }}\right)^ndq$$
Integrating by parts we obtain
$$\sigma (H)=Vol(S^{n-1})\left(\frac{n-2}{4}\right)\int \lambda ^{-2-\frac{n}{2}}(grad\lambda )^2dq$$
From this expression the assertion follows. $`\mathrm{}`$

## 4 Some Open Problems

Let me discuss here some natural open problems. In the proof of Theorem 3 we noticed first that the form $`\beta `$ must be exact and then showed that it vanishes. It may happen that the existence of an invariant distribution of Lagrangian non-vertical subspaces by itself already implies that the form $`\beta `$ is exact; we were not able, however, to find a direct proof of this fact. It is an important problem to study those magnetic fields having integrable twisted geodesic flow.
The simplest possible topology of the phase portrait is that of the totally integrable flows. It is not clear how to characterise those flows even in dimension 2. One natural class of examples is given by $`S^1`$-symmetric magnetic fields $`\beta =\beta (q_1)dq_1\wedge dq_2`$. We do not know if there are other examples. Note that the Lagrangian tori of such a foliation cannot be homologous to the zero section (Theorem 3).

One would like to generalise our results to the case of an arbitrary Riemannian metric on $`T^n`$. It may happen, however, that a twisted version of the Burago–Ivanov proof has to be found. It should be mentioned that our approach works whenever a good choice of coordinates in the phase space is available. It is not clear to me, however, whether the coordinates of are good for our approach.
# RELATIVISTIC ACCRETION

## 1 Introduction

The general relativistic theory of black holes was mostly developed in the decade spanning the discovery of the Kerr (1963) metric to the collation of their properties in the classic text of Misner, Thorne & Wheeler (1973) (which is still a necessary and nearly sufficient reference for most astrophysical purposes). Despite 25 years of perspective, it still seems almost miraculous that equations as complicated as the field equations of general relativity should produce such an elegant solution, and that this should have such magical properties.

After the relativists did their job so well and assured us that black holes could exist, the astrophysicists were left with the rather messier business of demonstrating that black holes should exist and of calculating (or guessing) their observable characteristics. At the stellar level, the first task is a problem in stellar evolution. Chandrasekhar (1931, in Cambridge) first showed that there is a maximum mass for a white dwarf; Oppenheimer & Volkoff (1939) did the same, in principle, for neutron stars, although we still do not know its precise value. However, assuming that it is comfortably less than $`3M_{\odot }`$, we know of about eight securely-measured compact object masses in binary systems well in excess of the Chandrasekhar and Oppenheimer-Volkoff bounds. These are black holes, the argument goes; what else can they be? This is about as strong a degree of "proof" as one typically gets in astronomy and, if one accepts it, the observers have, in turn, done their job. The existence of black holes is no longer an issue.

Turning to massive holes (with masses of millions to billions $`M_{\odot }`$), it has long been suspected that these lurk in the nuclei of most normal galaxies (at least those with luminosities $`L\gtrsim L^{}`$), and that they become active (and are classified as quasars, Seyferts, radio galaxies and so on) when fueled at an appropriate rate. Noting again that the most secure evidence comes from dynamics, "massive, dark objects" with masses in the range $`2\times 10^6M_{\odot }`$ to $`3\times 10^9M_{\odot }`$ have been located in the nuclei of over 20 nearby galaxies. Among the astrophysical alternatives that have been discussed in the past are "superstars", which are unstable and would be far too luminous, and clusters of compact objects, which are quite contrived from an evolutionary point of view and have very short dynamical lifetimes in the best studied examples. Again, the observers have come through, and we can conclude, beyond all reasonable doubt, that most large galaxies contain massive holes in their nuclei.

The most pressing current questions centre on understanding how accreting black holes actually behave in situ. (Evolutionary issues are also quite important and provide some constraints on this behaviour.) This type of research is very difficult to approach deductively from pure physics, because it involves so many non-elementary processes - magnetohydrodynamical turbulence, radiative transfer, astrophysical particle acceleration and so on. Therefore, it is prudent to adopt a more phenomenological approach and to try to formulate astrophysical models, involving techniques that range from order of magnitude estimates to three-dimensional numerical hydrodynamical simulations, that can meet the burgeoning observational database in some middle ground.
In what follows, I will first provide a very quick summary of some salient black hole properties and then go on to summarize some properties of Newtonian accretion disks from a slightly idiosyncratic perspective, emphasizing recent progress that has been made in understanding the nature of the internal torque. Next, I will consider some recent ideas on slow accretion onto black holes. These ideas are also relevant to the corresponding problem of fast accretion, and here some possibilities are sketched. Another topic of contemporary interest, which I briefly discuss, is the role of black hole spin energy in non-thermal emission from, and jet formation by, black holes.

## 2 Summary of Black Hole Properties

Astrophysical black holes (at least those currently observed) form a two parameter family. They are characterized firstly by their gravitational mass $`M`$ (as measured by the orbital period of a distant satellite), which provides a scale of length and time through the combinations $`GM/c^2\equiv m`$ and $`GM/c^3`$, respectively. Numerically,
$$m\simeq 1.5\left(\frac{M}{M_{\odot }}\right)\mathrm{km}\simeq 5\left(\frac{M}{M_{\odot }}\right)\mu \mathrm{s}.$$ (1)
It also furnishes a natural scale for luminosity, the Eddington luminosity $`L_{\mathrm{Edd}}=4\pi GMc/\kappa _T`$, where $`\kappa _T`$ is the Thomson opacity, and an associated characteristic accretion rate, $`\dot{M}_{\mathrm{Edd}}\equiv L_{\mathrm{Edd}}/c^2`$. (Note that we are not concerned with any quantum mechanical features of black holes like Hawking radiation or string entropy. Astrophysical black holes are far too large for these effects to be relevant. They are also far too large for any electrical charge that they may carry to be of gravitational significance.)

The second parameter is of both geometrical and dynamical significance. Traditionally, it is chosen to be the spin angular momentum per unit mass of the hole (expressed as a length in units of $`c`$) and denoted by $`a`$. This is the quantity that appears in the Boyer-Lindquist form of the metric (e.g. Misner, Thorne & Wheeler 1973) and is bounded above by $`m`$. Operationally, it can be measured by the precession of a distant gyroscope. However, it is often convenient to use, instead, the angular velocity of the hole (also measured, in units of $`c`$, as a reciprocal length), which I shall call $`\mathrm{\Omega }`$. The relation between these two quantities is given by
$$\mathrm{\Omega }=\frac{1-(1-a^2/m^2)^{1/2}}{2a}<\frac{1}{2m}$$ (2)
$`\mathrm{\Omega }`$ is the angular frequency that an observer at infinity would ascribe to experimentalists hovering just outside the event horizon, provided that she could overcome the strong redshift and see them (eg Shapiro & Teukolsky 1983).

These two parameters fully characterize the geometry of a black hole when the gravitational effect of the surrounding matter is ignorable and spacetime is asymptotically flat. Given the Kerr metric, we can compute the orbits of material particles (and photons). The simplest case is circular orbits in the equatorial plane, which is relevant to the structure of thin accretion disks. When prograde, these have period (measured by a distant observer)
$$P_K=2\pi (r^{3/2}/m^{1/2}+a)$$ (3)
These orbits are stable for radii $`r>r_{\mathrm{ms}}`$, the radius of the marginally stable circular orbit. The binding energy of the marginally stable orbit is denoted $`e_{\mathrm{ms}}`$ and increases from 0.06 to 0.42 $`c^2`$ per unit mass as $`a`$ increases from 0 to $`m`$.
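These numbers are easy to reproduce. A sketch in geometrized units ($`G=c=1`$, radii in units of $`m`$), using the standard Kerr expressions for the marginally stable prograde orbit:

```python
import numpy as np

def r_ms(a):
    """Boyer-Lindquist radius (units of m) of the marginally stable
    prograde circular orbit, for spin a in units of m (0 <= a <= 1)."""
    z1 = 1.0 + (1.0 - a*a)**(1/3)*((1.0 + a)**(1/3) + (1.0 - a)**(1/3))
    z2 = np.sqrt(3.0*a*a + z1*z1)
    return 3.0 + z2 - np.sqrt((3.0 - z1)*(3.0 + z1 + 2.0*z2))

def e_ms(a):
    """Binding energy per unit mass at r_ms: 1 - sqrt(1 - 2/(3 r_ms))."""
    return 1.0 - np.sqrt(1.0 - 2.0/(3.0*r_ms(a)))

def omega_hole(a):
    """Angular velocity of the hole, Eq. 2 (units of 1/m)."""
    return 0.0 if a == 0.0 else (1.0 - np.sqrt(1.0 - a*a))/(2.0*a)

for a in (0.0, 0.5, 0.998, 1.0):
    print(a, r_ms(a), e_ms(a), omega_hole(a))
# e_ms runs from ~0.06 at a=0 (r_ms=6) to ~0.42 at a=m (r_ms=1),
# the range quoted in the text.
```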
It is commonly supposed that gas spirals inward through the disk towards the horizon under the action of viscous stress, releasing this binding energy locally as radiation, until $`r=r_{\mathrm{ms}}`$, when it plunges quickly into the hole. For this reason, accreting gas is widely thought to release energy at a rate $`\sim 10^{20}`$ erg g<sup>-1</sup>.

Non-equatorial orbits are more complex. The most important effect is that their orbital angular momentum will precess about that of the hole with the Lense-Thirring precession frequency, given, to lowest order, by
$$\mathrm{\Omega }_{\mathrm{LT}}=\mathrm{\Omega }(r/2m)^{-3}$$ (4)
One of the more remarkable features of black holes is that they are not truly black. A sizeable fraction of their mass energy can be associated with their rotation and is extractable, both in principle and, I assert, in practice. To make this plausible (though not actually prove anything), allow our experimentalists hovering just outside the horizon to be sufficiently thoughtful and to change the spin angular momentum $`S`$ of the hole, measured in units of $`G/c^3`$, by sending tiny packets of mass energy across the horizon. This may be in the form of particles, photons, electromagnetic field and so on. Now, if the angular momentum is introduced with angular velocity $`\omega `$, we have
$$dm=\omega dS=\omega d(am)$$ (5)
It seems reasonable that if the observers add this angular momentum with angular velocity $`\omega =\mathrm{\Omega }`$, then there will be no dissipation. (Just consider applying a torque to the surface of a spinning disk.) If we spin up a black hole from rest in this fashion, we can substitute Eq. 2 and integrate the differential Eq. 5 to obtain
$$\frac{m^2}{2}[1+(1-a^2/m^2)^{1/2}]=\mathrm{const}=m_0^2$$ (6)
where $`m_0`$ is the initial (or irreducible) mass. The limiting mass to which we can spin up the hole in this manner is $`2^{1/2}m_0`$, attained when $`a=m`$. Equivalently, a little algebra shows that
$$m_0=m(1+4\mathrm{\Omega }^2m^2)^{-1/2}\ge 2^{-1/2}m$$ (7)
This change is reversible. As Penrose (1969) first showed, there exist negative energy particle orbits that cross the horizon, and particles on them decrease the mass of the hole. It is therefore possible to reduce the spin to zero and return the mass to $`m_0`$. All of this becomes more interesting if we use the Kerr metric to compute the area of the horizon and find that it equals $`A=16\pi m_0^2`$. Therefore, reversible processes are those that leave the area of the horizon unchanged. If we change the angular momentum with $`\omega \ne \mathrm{\Omega }`$, the area increases, consistent with its interpretation as being proportional to the entropy of the hole. (Amusingly, if we define an effective radius $`r_0=(A/4\pi )^{1/2}=2m_0`$ and a rotational speed $`\beta =\mathrm{\Omega }r_0/c`$, we can derive the quasi-Newtonian relation $`a=r_0^2\mathrm{\Omega }`$ and the quasi-special relativistic identity $`m=m_0(1-\beta ^2)^{-1/2}`$.)

We can therefore extract an energy, up to the difference between the gravitational and the irreducible masses, $`m-m_0`$, which can be as large as $`0.29m`$, from a spinning hole through Penrose-style processes. However, it turns out that this extraction of energy is unlikely to be realized using particles, because it is hard to confine them to the requisite negative energy orbits. The situation is far more promising with an ordered magnetic field supported by external currents (eg Thorne, Price & MacDonald 1986 and references therein).
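The spin-up integration leading to Eqs. 6 and 7 can be checked directly: step $`dm=\mathrm{\Omega }dS`$ with $`\mathrm{\Omega }`$ from Eq. 2 and watch the irreducible mass stay constant. A sketch (simple Euler stepping, adequate here):

```python
import numpy as np

def omega(m, s):
    """Hole angular velocity, Eq. 2, with spin s = a*m (geometrized units)."""
    a = s/m
    return 0.0 if a == 0.0 else (1.0 - np.sqrt(1.0 - (a/m)**2))/(2.0*a)

def spin_up(m0=1.0, ds=1e-6):
    """Feed in angular momentum reversibly (omega = Omega), Eq. 5."""
    m, s = m0, 0.0
    while s < m*m:                    # stop at extremality, a = m
        m += omega(m, s)*ds           # dm = Omega dS
        s += ds
    return m, s

m, s = spin_up()
a = s/m
m_irr = np.sqrt(0.5*m*m*(1.0 + np.sqrt(max(0.0, 1.0 - (a/m)**2))))
print(m, m_irr, 1.0 - m_irr/m)
# m -> 2**0.5 * m0 while m_irr stays ~ m0 (Eq. 6); the extractable
# fraction tends to 1 - 2**-0.5 ~ 0.29, as stated in the text.
```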
The magnetic field lines can thread the horizon of a spinning black hole. A very strong electromotive force will be induced, which will turn the vacuum into an essentially perfect conductor (eg through pair production by $`\gamma `$-rays), so that the field lines become equipotentials. Currents will flow, and angular momentum and energy will be exchanged with the hole. The relevant angular velocity, $`\omega `$, is that with which our experimentalist must move so that the electric field vanishes. (If the experimentalist insists upon maintaining a constant distance from the hole, then this can only be accomplished within a finite range of radial coordinate.) We can think of this as the angular velocity of the magnetic field lines; in a steady state, it must be constant along a given field line. The actual value of this angular velocity depends upon the boundary conditions. Under typical circumstances it is roughly $`\omega \sim 0.5\mathrm{\Omega }`$.

In the frame of an experimentalist hovering above the horizon with an angular velocity $`\omega <\mathrm{\Omega }`$, a Poynting flux of electromagnetic energy will be seen to enter the hole. However, when we transform into the frame non-rotating with respect to infinity, we must also include the rate of doing work by the electromagnetic torque, and we are left with an outwardly directed energy flux that is conserved along a flux tube. Roughly half of the spin energy of a hole may be extracted in this manner; the remainder ends up within the horizon as an increase in the irreducible mass.

## 3 Newtonian Accretion Disks

First, we review some principles that can be abstracted from the discussion in, for example, Frank, King & Raine (1992), Shapiro & Teukolsky (1983), Pringle (1981), Kato, Fukue & Mineshige (1998) and Holt & Kallman (1998). Consider a thin accretion disk with angular velocity $`\mathrm{\Omega }`$, inflow speed $`v\ll \mathrm{\Omega }r`$, disk mass per unit radius $`\mu `$ and specific angular momentum $`\mathrm{}`$. (Henceforth, we measure all radii in units of $`m`$.) In assuming that the disk is thin, we are implicitly supposing that the gas can remain cold by radiating away its internal energy. Let the torque that the disk interior to radius $`r`$ exerts upon the exterior disk be $`G(r)`$. The equations of mass and angular momentum conservation are then
$$\frac{\partial \mu }{\partial t}=\frac{\partial (\mu v)}{\partial r};\frac{\partial (\mu \mathrm{})}{\partial t}=\frac{\partial (\mu v\mathrm{})}{\partial r}-\frac{\partial G}{\partial r}.$$ (8)
Adding thermodynamic terms to the energy equation, we obtain $$\frac{\mu (e+u)}{t}+\frac{(\mathrm{\Omega }G\mu v(e+h))}{r}=G\frac{\mathrm{\Omega }}{r}+\mu T\frac{ds}{dt}$$ (11) where $`u`$ is the vertically-averaged internal energy density, $`h`$ is the enthalpy density, and $`s`$ is the entropy density (e.g. Landau & Lifshitz 1987). As there are no sources or sinks of energy, the right-hand side must vanish: $$\mu T\frac{ds}{dt}=T\left[\frac{\mu s}{t}\frac{\mu vs}{r}\right]=G\frac{\mathrm{\Omega }}{r}.$$ (12) As the gas has pressure, we must also satisfy the radial equation of motion: $$\frac{v}{t}v\frac{v}{r}+\mathrm{\Omega }^2r=\frac{1}{r^2}+\frac{1}{\rho }\frac{P}{r}.$$ (13) ### 3.1 Magnetic torques In order to make further progress, it is necessary to specify the torque, $`G`$. A traditional prescription, (Shakura & Sunyaev 1973), is to suppose that the shear stress acting in the fluid is directly proportional to the pressure, with constant of proportionality $`\alpha `$. In this case, $$G=2\pi \alpha r^2𝑑zP.$$ (14) Traditionally, it has been supposed that $`\alpha 0.010.1`$ on the basis of unconvincing theoretical and observational arguments. However, in recent years a hydromagnetic instability has been rediscovered (Balbus & Hawley 1998, and references therein) and it is clear that ionized disks will generate a dynamically important internal magnetic field on an orbital timescale. The nature of the linear instability can be understood by considering a weak, vertical magnetic field line threading the disk. If gas in the midplane is displaced radially outward, it will drag the magnetic field along with it. (It is a consequence of electromagnetic induction in the presence of an excellent conductor, like an ionized accretion disk, that magnetic field appears to be frozen into the fluid.) As angular momentum is conserved, the displaced fluid element will lag and stretch the magnetic field lines. The magnetic tension associated with the magnetic field will have an azimuthal component which will further increase the angular momentum of the displaced gas and push it further out, amplifying the instability. (A similar effect is exhibited by a tethered, artificial satellite.) The non-linear development of this instability has been investigated numerically and although many uncertainties remain, it appears that the traditional prescription for $`\alpha `$ is not wildly wrong at least as long as the disk is ionized. (Empirically, it appears that predominantly neutral disks, as our found in young stellar objects, for example, exhibit lower values of $`\alpha `$.) A major unsolved problem is the nature of the torque when the accretion rate is large enough for the radiation to be trapped by Thomson scattering, so that the disk fluid becomes radiation-dominated, like the early universe. Under these conditions, we expect the short wavelength modes to be damped by radiation drag and radiative viscosity and the longer wavelength components may escape through buoyancy (Agol & Krolik 1998 and references therein). More numerical simulations, including radiation transfer, are necessary to help us understand what actually happens. As we have just emphasized on general grounds, an internal torque in a shearing medium inevitably leads to dissipation. 
In the case of MHD torques in an accretion disk, it has been argued that this happens through a hydromagnetic turbulence spectrum which ends up with the ions being heated by a magnetic variant on Landau damping called transit time damping (Quataert & Gruzinov 1998). This is not the only possibility. It is conceivable that magnetic reconnection or non-local dissipation in an active, accretion disk corona may also play a role. ## 4 Slow Accretion ### 4.1 ADAF solution There has been much attention in recent years to the problem of slow accretion. Observationally, this is motivated by the discovery that many local galactic nuclei are conspicuously under-luminous. A good example is our Galactic center, where the mass supply rate may be as high as $`10^{22}`$ g s<sup>-1</sup> and the bolometric luminosity may be as low as $`10^{36}`$ erg s<sup>-1</sup>, giving a net efficiency of $`10^{14}`$ erg g<sup>-1</sup> $`\sim 10^{-7}c^2`$, (and quite unlikely to exceed $`10^{-4}c^2`$), a far cry from the naive expectation of standard disk accretion. As discussed by Narayan & Yi (1994) and Kato et al (1998), and many references therein, one possible resolution of this paradox is that the gas flows into the hole as an “Advection-Dominated Accretion Flow”, or ADAF for short. In order for this flow to be established, it is necessary that the gas not be able to cool on the inflow timescale. This, in turn, requires that the viscosity be relatively high and that the hot ions, which can achieve temperatures as high as $`\sim 100`$ MeV, only heat the electrons by Coulomb interaction. (Ultrarelativistic electrons are very efficient radiators.) The basic idea and assumptions are set out most transparently in Narayan & Yi (1994; cf. also Ichimaru 1977, Abramowicz et al. 1995). In the simplest, limiting case, it is assumed that there is a stationary, one-dimensional, self-similar flow of gas with $`\mu \propto r^{1/2}`$, $`\mathrm{\Omega }\propto r^{-3/2}`$, and $`v,a\propto r^{-1/2}`$, where $`a=[(\gamma -1)h/\gamma ]^{1/2}`$ is the isothermal sound speed and the radial velocity $`v<<\mathrm{\Omega }r`$. The requirement that $`P\propto r^{-5/2}`$ transforms the radial equation of motion into $$\mathrm{\Omega }^2r^2-\frac{1}{r}+\frac{5a^2}{2}=0.$$ (15) Conservation of mass, momentum and energy gives $$\mu v\equiv \dot{M}=\mathrm{constant}$$ (16) $$\dot{M}r^2\mathrm{\Omega }-G=F_{\mathrm{\ell }}$$ (17) $$G\mathrm{\Omega }-\dot{M}Be=F_E$$ (18) where the inwardly directed angular momentum flux, $`F_{\mathrm{\ell }}`$, and the outwardly directed energy flux, $`F_E`$, are constant if there are no sources and sinks of angular momentum or energy. ($`Be=\mathrm{\Omega }^2r^2/2-1/r+h`$ is the Bernoulli constant.) Now, the terms on the left-hand side of Eq. 17 scale $`\propto r^{1/2}`$ and those of Eq. 18 scale $`\propto r^{-1}`$. Therefore, if we require the flow to be self-similar over several decades of radius, both constants must nearly vanish. In the limit, $`F_{\mathrm{\ell }}=F_E=0`$. Combining equations, we solve for the sound speed $`a`$ and the Bernoulli constant. $$a^2=\left[\frac{3(\gamma -1)}{5-3\gamma }\right]\mathrm{\Omega }^2r^2=\frac{6(\gamma -1)}{(9\gamma -5)r}$$ (19) $$Be=\mathrm{\Omega }^2r^2.$$ (20) The elementary ADAF solution is then completed by defining an $`\alpha `$ viscosity parameter through $`G=\dot{M}r^2\mathrm{\Omega }=\alpha \mu ra^2`$, which then implies $`v=\alpha a^2/\mathrm{\Omega }r`$, assuming that $`\alpha \lesssim (5/3-\gamma )^{1/2}`$. There are concerns with this solution, as identified by Narayan & Yi (1995).
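A short numerical check of these closed-form scalings may be helpful. The sketch below (an added illustration with an arbitrary $`\gamma =3/2`$ at $`r=100`$, in units $`m=1`$) confirms Eqs. 15, 19 and 20, and already displays the awkward feature behind the first concern: the Bernoulli constant comes out positive.

```python
import numpy as np

gamma, r = 1.5, 100.0
a2  = 6*(gamma - 1)/((9*gamma - 5)*r)        # Eq. 19, second equality
Or2 = (5 - 3*gamma)/(3*(gamma - 1)) * a2     # Omega**2 r**2 from Eq. 19
print(Or2 - 1/r + 2.5*a2)                    # Eq. 15: -> 0
Be = 0.5*Or2 - 1/r + gamma*a2/(gamma - 1)    # Bernoulli constant, h = gamma*a2/(gamma-1)
print(Be, Or2)                               # Be -> Omega**2 r**2 > 0  (Eq. 20)
```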
The most important of these is the worry that the accreting gas may not be bound to the black hole. This can be demonstrated by observing that the Bernoulli constant, $`Be`$, is generically positive due to the viscous transport of energy. This means that an element of gas has enough internal energy, (taking into account the capacity to perform $`PdV`$ work), to escape freely to infinity. In the particular case when the specific heat ratio is $`\gamma \rightarrow 5/3`$, as it will be if only the ions are effectively heated, note that the self-similar solution is nearly non-rotating. A lot of angular momentum and orbital kinetic energy must be lost at some outer radius, where the ADAF solution first becomes valid. (This is called the transition radius.) As the gas cannot cool here, by assumption, there seems nowhere for the energy to go except in driving gas away. Another precarious part of the ADAF solution is found close to the rotation axis. It is proposed that, when the viscous torque is relatively large, the flow extends all the way to the polar axis (Narayan & Yi 1995). This removes one exposed surface, but it does so at the expense of creating a stationary column of gas, which cannot be supported at its base. It is unlikely to persist. ### 4.2 ADIOS solution For these reasons, Blandford & Begelman (1998) have proposed a variant upon the ADAF solution called an “Advection-Dominated Inflow Outflow Solution”. Here the key notion is that the excess energy and angular momentum are removed by a wind at all radii. Again it is simplest to assume self-similarity. We follow the Narayan & Yi (1994) solution, but supplement it by allowing the mass accretion rate to vary with radius. $$\dot{M}\propto r^p;0\le p<1.$$ (21) The mass that is lost from the inflow escapes as a wind. If we adopt self-similar scalings, and use the above definitions of the flow of angular momentum and energy, we can write, $$F_{\mathrm{\ell }}=(\dot{M}r^2\mathrm{\Omega }-G)=\lambda \dot{M}r^{1/2};\lambda >0.$$ (22) and $$F_E=G\mathrm{\Omega }-\dot{M}\left(\frac{1}{2}\mathrm{\Omega }^2r^2-\frac{1}{r}+\frac{5a^2}{2}\right)=\frac{ϵ\dot{M}}{r};ϵ>0.$$ (23) where $`\lambda ,ϵ`$, like $`p`$, are constants that can be fixed. Equivalently, for the specific angular momentum and energy carried off by the wind, we have $$\frac{dF_{\mathrm{\ell }}}{d\dot{M}}=\frac{\lambda (p+1/2)r^{1/2}}{p};\frac{dF_E}{d\dot{M}}=\frac{ϵ(p-1)}{pr}$$ (24) With these modifications, the radial equation of motion becomes $$\mathrm{\Omega }^2r^2-\frac{1}{r}+(5/2-p)a^2=0.$$ (25) Similarly, the Bernoulli constant becomes $$Be=\frac{\mathrm{\Omega }^2r^2}{2}-\frac{1}{r}+\frac{5a^2}{2}=pa^2-\frac{1}{2}\mathrm{\Omega }^2r^2$$ (26) and it can now have either sign. (A limit must be taken to recover Eq. 20.) Combining these equations, we obtain $$\mathrm{\Omega }r^{3/2}=\frac{(5-2p)\lambda }{15-2p}$$ $$+\frac{[(5-2p)^2\lambda ^2+(15-2p)(10ϵ+4p-4ϵp)]^{1/2}}{15-2p}$$ (27) It is a matter of algebra to complete the solution and determine how the character of the solutions depends upon our three independent, adjustable parameters, $`p,\lambda ,ϵ`$. Let us consider some special cases. 1. $`p=\lambda =ϵ=0`$. There is no wind and the system reduces to the non-rotating Bondi solution. 2. $`p=\lambda =0`$, $`ϵ=3(1-f)/2`$. This corresponds to flow with no wind but with radiative loss, which carries away energy but not angular momentum. The parameter $`f`$, (Narayan & Yi 1994), is defined by the relation $`\dot{M}TdS/dr=fGd\mathrm{\Omega }/dr`$. 3. $`p=0`$, $`\lambda =1`$, $`ϵ=1/2`$.
This describes a magnetically-dominated wind with mass flow conserved in the disk. All of the angular momentum and energy is carried off by a wind with $`dF_E/dF_{\mathrm{\ell }}=\mathrm{\Omega }`$ (cf. Blandford & Payne 1982, Königl 1991). There is no dissipation in the disk, which is cold and thin. 4. $`\lambda =2p[(10ϵ+4p-4ϵp)/(2p+1)(4p^2+8p+15)]^{1/2}`$. This corresponds to a gas dynamical wind where $`dF_{\mathrm{\ell }}/d\dot{M}\equiv \mathrm{\ell }_W=r^2\mathrm{\Omega }\equiv \mathrm{\ell }`$. The wind carries off its own angular momentum at the point of launching and does not exert any reaction torque on the remaining gas in the disk. Any magnetic coupling to the disk implies $`\mathrm{\ell }_W>\mathrm{\ell }`$. 5. $`ra^2=r^3\mathrm{\Omega }^2/2p=1/(p+5/2)`$. This corresponds to a marginally bound flow with vanishing Bernoulli constant. In practice, it is expected that $`Be<0`$. In the limiting case, a single proton at the event horizon can, altruistically, sacrifice itself to allow up to a thousand of its fellow protons to escape to freedom from $`1000m`$. What this exercise demonstrates is that gas can accrete slowly onto a black hole without radiating, provided it loses enough mass, energy and angular momentum, and that the rate of mass accretion by the black hole may be very much less than the mass supply rate. (This has implications for the rate of black hole growth due to accretion in the early universe.) In order to go beyond this, we must introduce some additional physics into our discussion of the disk and the wind. ## 5 Fast Accretion The solution that I have just described is appropriate when the accretion rate is slow enough that the gas cannot cool radiatively. How slow this must be depends upon microphysical details that are still uncertain. However, it appears that the underlying physical principles are still appropriate in the opposite limiting case of fast accretion. In this limit, as the accretion rate is high, the density is sufficiently large to allow the gas to come into local thermal equilibrium and to emit radiation so that the photons dominate the gas pressure. However, the density is also large enough for the gas to become optically thick to Thomson scattering and for the radiation to be trapped. Under such circumstances, photons will random walk relative to the gas with a characteristic speed $`c/\tau _T`$, where $`\tau _T`$ is the optical depth to Thomson scattering. If the density is so large that this speed is less than the bulk speed of the gas, then the photons are essentially trapped and, once again, the gas is prevented from radiative cooling. Typically, this occurs if $`\dot{M}>\dot{M}_{\mathrm{Edd}}`$. It is therefore possible to develop a model of accretion onto a black hole in the limit when $`\dot{M}>>\dot{M}_{\mathrm{Edd}}`$ (Begelman & Blandford in preparation). We treat disk accretion in much the same way as we treat it in the ion-dominated case, with the unimportant difference that the effective specific heat ratio (not the true one) is $`\gamma =4/3`$. In order to define a vertical structure for the disk, we have to make some assumption about the angular velocity distribution. One possibility is that the angular velocity is constant on cylinders. This is equivalent to assuming that the equation of state is barotropic. A much better assumption, and one that has some physical support, is that the disk is marginally unstable to convective overturn (eg Begelman & Meier 1982).
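(As a parenthetical numerical aside — an illustration with assumed round numbers, not taken from the original — the trapping criterion quoted above has a compact consequence. Setting the diffusion speed $`c/\tau _T`$ equal to the inflow speed, with $`\tau _T\simeq \kappa \rho r`$ and a spherical estimate $`\rho =\dot{M}/4\pi r^2v`$, gives a trapping radius $`r_{tr}=\kappa \dot{M}/4\pi c`$; with $`\dot{M}_{\mathrm{Edd}}`$ defined as $`4\pi GM/\kappa c`$, this is $`r_{tr}/r_g=\frac{1}{2}\dot{M}/\dot{M}_{\mathrm{Edd}}`$, so the trapped region extends beyond the horizon precisely when the accretion rate is super-Eddington.)

```python
import numpy as np

# Trapping radius r_tr = kappa*Mdot/(4*pi*c) for a 10 Msun hole (illustrative).
G, c, kappa, Msun = 6.67e-8, 3.0e10, 0.4, 2.0e33    # cgs
M = 10*Msun
Mdot_Edd = 4*np.pi*G*M/(kappa*c)                    # Eddington rate, eta = 1
r_g = 2*G*M/c**2
for mdot in (0.1, 1.0, 10.0):                       # Mdot in Eddington units
    r_tr = kappa*(mdot*Mdot_Edd)/(4*np.pi*c)
    print(mdot, r_tr/r_g)                           # exceeds r_g only for mdot > 2
```

We now return to the vertical structure and the assumption of marginal convective instability.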
This implies that the entropy is constant on surfaces of constant angular momentum - the “gyrentropic hypothesis” (cf Abramowicz & Paczyński 1982, Blandford, Jaroszyński & Kumar 1985). This allows entropy and angular momentum to be freely transported along these surfaces; transport perpendicular to these surfaces requires additional (and presumably magnetic) viscous stress. The attached wind is also radiation-dominated and it is possible to find self-similar solutions that describe a wind that carries off the mass, angular momentum and energy released from the surface of the disk. Eventually, this outflow will become tenuous enough to allow the trapped radiation to escape. There may be a third region, where the flow is optically thin, and where the gas may start to recombine so that it can be accelerated by line radiation pressure. Super-critical accretion flows, with $`\dot{M}>>\dot{M}_{\mathrm{Edd}}`$, almost surely occur naturally in both Galactic sources like SS433 and GRS 1915+105 and in extragalactic sources like the radio-quiet quasars and the broad absorption line quasars. In both circumstances, it appears that the rate of mass outflow exceeds the Eddington rate by a large factor. Presumably, the same is true of the rate of mass supply. ## 6 The Importance of Spin As emphasized above, there are two potential power sources, the binding energy released by accreting gas and the spin energy of the central black hole. It is natural to associate the former with “thermal” emission and the latter with “non-thermal” emission and this separation has provided the basis for a variety of “unified”, (and “grand unified”) models of AGN over the past twenty years. It is apparent that the electromagnetic extraction of energy and angular momentum from the black hole can occur in principle. A question of recent interest is “How much does this occur in practice?”. Clearly, there are two requirements beyond the presence of the black hole, spin and magnetic flux, and on these we can only speculate. It is widely assumed that freshly formed holes, and those that have recently undergone major mergers, spin rapidly in the sense that $`\mathrm{\Omega }m\sim 0.2-0.5`$. However, subsequent addition of angular momentum (eg through rapid episodes of accretion or minor mergers) may be stochastic, as opposed to the mass, which increases secularly. This can lead to a spin down. Alternatively, a strong dynamical interaction with a surrounding warped disk (as proposed by Natarajan & Pringle, 1998, preprint) may lead to a very rapid de-spinning of the hole without the creation of much non-thermal energy. It has also been suggested (Ghosh & Abramowicz 1997, Livio, Ogilvie & Pringle 1998, cf Blandford & Znajek 1977) that the total electromagnetic power that derives from the hole is very small compared with that which derives from the disk. The basis of this argument is that the strength of the magnetic field that threads the hole is likely to be no larger than that threading the disk and the area of the disk is much larger than that of the hole. This, indeed, may be the case in the majority of active nuclei (in particular radio-quiet quasars and Seyfert galaxies) which are not non-thermal objects. However, it is not guaranteed that these limits always apply. For example, the strength of the magnetic field interior to the disk is really only limited by the Reynolds’ stress of the orbiting gas, just as is supposed to occur at the Alfvén surface surrounding an accreting, magnetized neutron star.
Alternatively, the magnetic field associated with the disk may be predominantly closed, with little radial component (as it is being strongly sheared), so that it does not extract much energy, but can provide pressure. Under either of these circumstances the non-thermal power extracted from the hole can be dominant. What actually occurs depends upon issues of stability and supersonic reconnection. There is a strong observational incentive to consider these processes. It has become clear that some sources are spectacularly non-thermal. Bulk Lorentz factors in excess of 10 are required to explain some superluminal motion and, perhaps, much larger relativistic speeds are indicated by the intraday variable sources. The $`\gamma `$-ray jets discovered by the EGRET instrument on the Compton Gamma Ray Observatory can be prodigiously energetic and, in some cases, seem to transport far more energy, even allowing for relativistic beaming, than is observed in the remainder of the electromagnetic spectrum (Hartmann et al 1996). The rapidly variable X-rays produced by the Galactic superluminal sources are far too energetic to be the result of black-body emission from the surface of an accretion disk. To this reviewer, at least, it is unlikely that this power can derive solely from an active disk corona. There is ample spin energy associated with the hole to account for the observations and an environment where thermalization will be very difficult. The association of the jet power and the high energy emission with the black hole is surely as suggestive as, historically, was the association of the Crab Nebula with PSR 0531+21. There is another interesting possibility (Blandford & Spruit 1998, in preparation, cf also Livio, these proceedings). This is that magnetic field attached to the inner disk may also connect to the black hole. Now if $`\mathrm{\Omega }>0.093/m`$, the hole rotates faster than the gas in the marginally stable circular orbit and even faster than all the gas beyond this. Therefore, unless the hole is very slowly rotating, a magnetic connection will transport angular momentum radially outward. As the hole has a much higher effective resistivity than the disk, we can regard the field lines as being effectively transported by the disk. Therefore, they will only do mechanical work on the disk, with no direct dissipation. If this interaction is strong enough, the increase in the angular momentum of the gas can be enough to reverse the accretion flow, driving some gas radially outward while a fraction falls inward. This can happen in a quasi-cyclical manner and it is tempting to associate some of the quasi-periodic behavior observed in Galactic black hole binaries, notably GRS 1915+105, with just this sort of process. ## 7 Conclusion I hope that this somewhat cursory description of recent developments is sufficient to persuade the reader that the study of black holes, both in our Galaxy, in the nuclei of nearby galaxies and in distant quasars, is in the ascendant. Measurements of mass and (possibly) spin are helping make astrophysical arguments quantitative. Observations of quasi-periodicity, notably by RXTE (Greiner, Morgan & Remillard 1996), are strongly suggestive of non-linear dynamical processes in the curved spacetime close to the black hole. Direct measurements of iron lines and their profiles explore the surfaces of inner accretion disks.
Jets are now observed to be commonplace in accreting systems and, in the case of black hole systems, must derive from close to the hole (as close as $`60m`$ in the case of M87, Junor & Biretta 1995). The best is yet to come. There is a suite of X-ray telescopes planned for launch in the coming years: AXAF, XMM and ASTRO E. There are ambitious plans for superior instruments, like GLAST and CONSTELLATION-X, to be launched during the coming decade and for space-based VLBI to be developed in earnest. On an even longer timescale, the space-based gravitational radiation detector LISA has the projected capability to detect merging black holes from cosmological distances and to provide direct quantitative tests of strong field general relativity for the first time. On the theoretical front, the numerical capability to perform large scale, three dimensional, hydromagnetic simulations is already here and the incorporation of radiative transfer and credible dissipative processes is a not so distant hope. This is a good time for a student to start research in black hole astrophysics. ## Acknowledgments I am indebted to Mitch Begelman and Henk Spruit for recent collaboration and to Charles Gammie, Andy Fabian, John Hawley, Mario Livio, Ramesh Narayan, Eve Ostriker, Jim Pringle and Martin Rees for stimulating discussions. Support under NSF contract AST 95-29170 and NASA contract 5-2837 and the Sloan Foundation is gratefully acknowledged. I thank John Bahcall and the Institute for Advanced Study for hospitality during the completion of this paper. ## References Abramowicz, M. A. & Paczyński, B. 1982 ApJ 253 897 Agol, E. & Krolik, J. 1998 ApJ in press Balbus, S. A. & Hawley, J. F. 1998 RMP 70 1 Begelman, M. C. & Meier, D. L. 1982 ApJ 253 873 Blandford, R. D. & Begelman, M. C. 1998 MNRAS in press Blandford, R. D., Jaroszyński, M. & Kumar, S. 1985 MNRAS 225 667 Blandford, R. D. & Payne, D. G. 1982 MNRAS 199 883 Blandford, R. D. & Znajek, R. L. 1977 MNRAS 179 433 Chandrasekhar, S. 1931 ApJ 74 81 Frank, J., King, A. & Raine, D. 1992 Accretion Power in Astrophysics Cambridge: Cambridge University Press Ghosh, P. & Abramowicz, M. A. 1997 MNRAS 292 887 Greiner, J., Morgan, E. H. & Remillard, R. A. 1996 ApJ 473 L107 Hartmann, R. C. et al. 1996 ApJ 461 698 Holt, S. S. & Kallman, T. R. ed. 1998 Accretion Processes in Astrophysical Systems: Some Like it Hot New York: AIP Junor, W. & Biretta, J. A. 1995 AJ 109 500 Kato, S., Fukue, J. & Mineshige, S. 1998 Black Hole Accretion Disks Kyoto: Kyoto University Press Kerr, R. P. 1963 PRL 11 237 Landau, L. D. & Lifshitz, E. M. 1987 Fluid Mechanics Oxford: Butterworth-Heinemann Lynden-Bell, D. & Pringle, J. E. 1974 MNRAS 168 603 Misner, C. W., Thorne, K. S. & Wheeler, J. A. 1973 Gravitation San Francisco: Freeman Narayan, R. & Yi, I. 1994 ApJ 428 L13 Narayan, R. & Yi, I. 1995 ApJ 444 231 Oppenheimer, J. R. & Volkoff, G. M. 1939 Phys Rev 55 374 Penrose, R. 1969 Riv Nuovo Cim 1 252 Pringle, J. E. 1981 ARAA 19 137 Pringle, J. E. & Rees, M. J. 1972 A & A 21 1 Quataert, E. & Gruzinov, A. 1998 ApJ in press Shakura, N. I. & Sunyaev, R. A. 1973 A & A 24 337 Shapiro, S. L. & Teukolsky, S. A. 1983 Black Holes, White Dwarfs and Neutron Stars New York: Wiley Thorne, K. S., Price, R. M. & Macdonald, D. A. 1986 Black Holes: The Membrane Paradigm New Haven: Yale University Press
# Stochastic Resonance in Washboard Potentials ## I Introduction Stochastic Resonance (SR) is an important phenomenon with considerable implications in all branches of science. The enhanced response of a nonlinear system to input signals at the expense of, and as a function of, noise is termed Stochastic Resonance. It is generally accepted that SR can be observed provided certain essential conditions are fulfilled. Attempts are being continually made to reduce the number and stringency of these constraints for the realization of the phenomenon. The simplest and the minimal (currently accepted) conditions under which conventional SR can be observed are the presence of a) a bistable system, b) a tunable Gaussian white noise and c) a time varying periodic signal (force). Recently, Hu suggested that the last ingredient may hopefully be replaced by a constant force and, by implication, SR may be observed, e.g., in the drift velocity (mobility) of an overdamped Brownian particle in a tilted periodic potential as a function of noise strength. Unfortunately the suggestion was proved to be incorrect. However, Marchesoni, by analyzing the work of Risken, observed that SR can be observed in the drift velocity of Brownian particles in a tilted periodic (washboard) potential only in the underdamped situation, where friction acts as surrogate to the external periodic drive. In the present work we show that SR can be observed in the mobility of even an overdamped Brownian particle in a tilted periodic potential (without the presence of an oscillating field), but in the presence of a space-dependent periodic friction coefficient (i.e., in an inhomogeneous medium). The space dependence of the friction coefficient does not affect the equilibrium properties such as the equilibrium probability distribution. However, it does affect the dynamical (nonequilibrium) properties of the system (such as the relaxation rates). The space dependence of friction $`\eta (q)`$ can be microscopically modeled through the nonlinear (space-dependent) coupling between the particle degrees of freedom and the thermal bath (characterized by its equilibrium temperature). In this work we find that the mobility of overdamped particles in a sinusoidal potential subject to a sinusoidal friction coefficient, but with a phase difference $`\varphi `$, shows a peak as a function of noise strength (temperature of the bath) in the presence of only a constant force field $`F`$. By properly choosing $`\varphi `$ we obtain this SR behaviour in the mobility (defined as drift velocity divided by $`F`$) even for a small constant external force field (when the barrier for the particle motion in the tilted periodic potential remains finite). The motion of an overdamped particle, in a periodic potential $`V(q)`$ and subjected to a space-dependent friction coefficient $`\eta (q)`$ and an additional constant force $`F`$, at temperature $`T`$, is described by the Fokker-Planck equation. $$\frac{\partial P}{\partial t}=\frac{\partial }{\partial q}\frac{1}{\eta (q)}[k_BT\frac{\partial P}{\partial q}+(V^{\prime }(q)-F)P]$$ (1) One can calculate the probability current $`j`$, for the potential function $`V(q)`$ with $`V(q+2\pi )=V(q)`$, as $$j=\frac{k_BT(1-exp(-\delta ))}{\int _0^{2\pi }exp(-\psi (y))𝑑y\int _y^{y+2\pi }\eta (x)exp(\psi (x))𝑑x},$$ (2) where $`\psi (q)={\displaystyle \int ^q}{\displaystyle \frac{V^{\prime }(x)-F}{k_BT}}𝑑x`$ $`={\displaystyle \frac{V(q)-Fq}{k_BT}}`$ and $`\delta =\psi (q)-\psi (q+2\pi )=\frac{2\pi F}{k_BT}`$.
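Since the double integral in the denominator of eq. (2) has no closed form for the choices of $`V`$ and $`\eta `$ adopted below, it must be evaluated numerically. A minimal sketch (an illustration added here, in units with $`k_B=1`$ and with illustrative parameter values) is:

```python
import numpy as np
from scipy import integrate

# Probability current of eq. (2) for V(q) = sin(q),
# eta(q) = eta0*(1 - lam*sin(q + phi)); units with k_B = 1.
def current(T, F, lam=0.9, phi=0.9*np.pi, eta0=1.0):
    psi = lambda q: (np.sin(q) - F*q)/T
    eta = lambda q: eta0*(1.0 - lam*np.sin(q + phi))
    inner = lambda y: integrate.quad(lambda x: eta(x)*np.exp(psi(x)),
                                     y, y + 2*np.pi)[0]
    denom = integrate.quad(lambda y: np.exp(-psi(y))*inner(y),
                           0.0, 2*np.pi)[0]
    return T*(1.0 - np.exp(-2*np.pi*F/T))/denom

for T in (0.1, 0.3, 1.0):                   # scan the noise strength
    print(T, 2*np.pi*current(T, 0.5)/0.5)   # dimensionless mobility eta0*mu
```

All quantities studied below derive from this current.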
The mobility (defined as the ratio of the drift velocity, $`\frac{dq}{dt}=2\pi j`$, divided by the applied force $`F`$) is $`\mu =\frac{\frac{dq}{dt}}{F}=\frac{2\pi j}{F}`$. We take $`V(q)=\mathrm{sin}(q)`$ and $`\eta (q)=\eta _0(1-\lambda \mathrm{sin}(q+\varphi ))`$, with $`0\le \lambda <1`$. Clearly, $`j\rightarrow 0`$ as $`F\rightarrow 0`$, but the mobility $`\mu `$ remains finite as $`F\rightarrow 0`$, for nonzero temperature $`T`$. However, for $`F\le 1`$, as $`T\rightarrow 0`$, $`\mu \rightarrow 0`$. Also, as $`F,T\rightarrow \mathrm{}`$, $`\mu \rightarrow \frac{1}{\eta _0}`$. However, for given $`\lambda `$ and $`\varphi `$, in order to find the variation of $`\mu `$ at intermediate $`T`$ and $`F`$, one needs to evaluate the double integral in the denominator of eqn. (2) numerically. ## II The Results The variation of mobility $`\mu `$ in the parameter space of $`(T,F,\lambda ,\varphi )`$ provides a very rich structure where the phase lag $`\varphi `$ plays an important role. However, in this paper we report only the variation of $`\mu `$ with temperature (noise strength) $`T`$ at a few carefully selected values of $`(F,\lambda ,\varphi )`$ in order to highlight the observability of SR. From eqn. (2) one can find that, even though $`j(-F)\ne -j(F)`$, except for phase lag $`\varphi =0`$ or $`\pi `$, yet $`\mu (F,\varphi )=\mu (-F,-\varphi )`$. Therefore, we need to explore only $`F>0`$. Also, in order to observe SR, the choice of larger values of $`\lambda `$ is found to be more appropriate. So we take $`\lambda =0.9`$ and explore the full range of $`\varphi \in [0,2\pi ]`$. We find that there is a range of values of $`\varphi `$ within which SR is observed. For example, for $`\varphi =0.8\pi ,\lambda =0.9`$, SR is observed but the peaks are very broad. And for this $`\varphi `$, SR is more prominent for $`F<1`$ than $`F>1`$. For smaller $`\varphi `$ it is harder to observe the peaks as they become still broader. However, as $`\varphi `$ is increased the peaks become sharper. Fig.1 shows the mobility $`\mu `$ (in dimensionless units $`\eta _0\mu `$) as a function of $`T`$ (in dimensionless units) for $`\varphi =0.9\pi `$ and $`\lambda =0.9`$. Here the peaks are larger for $`F<1`$ than for $`F>1`$. Fig. 2 shows the mobility $`\mu `$ as a function of $`T`$ for $`\varphi =1.44\pi `$, $`\lambda =0.9`$. It can be seen that the peaks are almost flat for $`0<F<1`$, but are prominent for the intermediate temperature range. From Fig. 3, we do not observe SR for any $`F>0`$. But the figure prominently shows that for $`F>1`$ and for small $`T`$, the mobility decreases instead of increasing with temperature. This in itself is a novel feature. The above mentioned features result from a subtle combined effect of the periodic space-dependent friction and the periodic potential in the presence of a constant applied force. The effect can be observed only when there is a phase difference between the potential and the frictional profile; the phase difference makes the mobility asymmetric with respect to the reversal of the field $`F`$. The mobility shows many other interesting features as a function of the other parameters, $`F,\lambda `$ and $`\varphi `$. ## III Conclusion We observe the occurrence of stochastic resonance in the mobility of an overdamped Brownian particle in a sinusoidal potential tilted by a constant force and subjected to a Gaussian white noise but, of course, in an inhomogeneous system with space-dependent friction coefficient.
Thus the time dependent external oscillating field, which is generally considered as an essential ingredient for the observability of SR, can be replaced by a constant force field concomitant with a space-dependent (periodic) friction coefficient of a spatially extended periodic system. We would like to point out here that the correct high friction Langevin equation in a space dependent frictional medium involves a multiplicative noise along with a temperature drift term. ## IV Acknowledgement M. C. M. thanks the Institute of Physics, Bhubaneswar, for financial assistance and hospitality. M. C. M. and A. M. J. acknowledge partial financial support from the B. R. N. S. project, D. A. E., India. FIGURE CAPTIONS Fig. 1. Mobility $`\eta _0\mu `$ as a function of temperature $`T`$ for $`\varphi =0.9\pi `$ and $`\lambda =0.9`$ for various values of F. The inset is inserted to highlight the peaks. Fig. 2. Mobility $`\eta _0\mu `$ versus $`T`$ for $`\varphi =1.44\pi `$, $`\lambda =0.9`$ for various values of F. The inset highlights the peaks. Fig. 3. Mobility $`\eta _0\mu `$ versus $`T`$ for $`\varphi =1.6\pi `$, $`\lambda =0.9`$ for various values of F. The inset highlights the minima.
# An electromagnetic shashlik calorimeter with longitudinal segmentation ## 1 Introduction In recent years the “shashlik” technology has been extensively studied to assess its performance at $`e^+e^{-}`$, $`ep`$ and $`pp`$ accelerator experiments (CMS, HERA-B, LHCb, STIC). Shashlik calorimeters are sampling calorimeters in which scintillation light is read out via wavelength shifting (WLS) fibers running perpendicularly to the converter/absorber plates (Fessler et al.; Atojan et al.). This technique offers the combination of an easy assembly, good hermeticity and fast time response. In many applications it also represents a cheap solution compared to crystals or cryogenic liquid calorimeters. Shashlik calorimeters are, in particular, considered to be good candidates for barrel electromagnetic calorimetry at future linear $`e^+e^{-}`$ colliders (TESLA). In this context, the physics requirements impose $`\sigma (E)/E\le 0.1/\sqrt{E(GeV)}+0.01`$, at least three longitudinal samplings, transversal segmentation of the order of $`0.9^o\times 0.9^o`$ ($`3\times 3`$ cm<sup>2</sup>) and the possibility of performing the read-out in a 3 T magnetic field. The present shashlik technology can satisfy these requirements, except for the optimization of longitudinal segmentation, which still needs development. The solution proposed in this paper consists of thin vacuum photodiodes inserted between adjacent towers in the front part of the calorimeter. They measure the energy deposited in the initial shower development, which allows for longitudinal sampling and $`e/\pi `$ separation. A prototype detector was exposed to a beam with the aim of measuring the sampling capability and demonstrating that the insertion of diodes neither deteriorates critically the energy response nor produces significant cracks in the tower structure. ## 2 The prototype detector The tested prototype had 25 Pb/scintillator towers, assembled in a $`5\times 5`$ matrix. Each tower consisted of 140 layers of 1 mm thick lead and 1 mm thick scintillator tiles, resulting in a total depth of $`25X_0`$. The sampling was the finest ever used with the shashlik technique. The transversal dimension of each tower was $`5\times 5`$ cm<sup>2</sup>. In the first $`8X_0`$ the tiles had a smaller transverse dimension to provide room for the housing of the diodes. Plastic scintillator consisting of polystyrene doped with 1.5% paraterphenyl and 0.05% POPOP was used. Optical insulation between the towers was provided by white Tyvek paper. As is customary in the shashlik technique, the blue light produced in the scintillator was carried to the photodetector at the back of the calorimeter by means of plastic optical fibers doped with green WLS. The 1 mm diameter fibers crossed the tiles in holes drilled in the lead and scintillator plates and they were uniformly distributed with a density of 1 fiber/cm<sup>2</sup>. In the scintillator tiles the holes were 2 mm larger (4 mm in the lead) than the fiber diameter. The light transmission between the plastic scintillator and the fibers was in air. All the fibers from the same tower were bundled together at the back and connected to photodetectors. Two types of fibers were tested: Bicron BCF20 fibers and Kuraray Y11. In both cases, the emission peak was at about 500 nm. Light collection was increased by aluminizing the fiber end opposite to the photodetector by sputtering. The light from the fibers was viewed, after a 5 mm air gap, by 1" Hamamatsu R2149-03 phototetrodes.
Each tetrode was placed inside an aluminium housing, containing a charge sensitive JFET preamplifier and a high voltage divider. The differential output signals were shaped with a shaping time of 1.500 $`\mu `$s and digitized. Four towers were read out with Hamamatsu Avalanche Photodiodes instead of tetrodes. A plexiglass light guide was used to match the smaller APD sensitive area to the fiber bundle. Preamplifiers and voltage dividers were housed in the same mechanical structure as the tetrodes. Two types of vacuum photodiodes with a bialkali photocathode were produced by EMI (prototype D437) and Hamamatsu (prototype SPTXC0046), with a rectangular front surface of $`9\times 5`$ cm<sup>2</sup> and a square one of $`5\times 5`$ cm<sup>2</sup>, respectively, and a thickness of 5 mm. The diodes were installed in the first part of the towers in order to sample the energy deposited in the first 8 $`X_0`$. They were in optical contact with the lateral side of the scintillator tiles and the light emitted in the first part of the detector was therefore read out twice, since the photons crossing the lateral scintillator surface were collected by the diode while those reaching the fibers, either directly or after reflections, were seen by the tetrodes. Due to the direct coupling, the light collection efficiency of the diodes was much larger than that of the tetrodes/APD’s and this compensated for the absence of gain in the diodes. Most of the cells were equipped with EMI vacuum photodiodes. One diode prototype from Hamamatsu, sampling only 4 $`X_0`$, was successfully tested during the last part of the data taking. Technical characteristics of these devices are listed in table 1. The Hamamatsu prototype dimensions are such that it is possible to house two diodes in the same tower in order to obtain three longitudinal samplings. For all diodes, the same front-end electronics and read-out chain as for the tetrodes were used. The read-out electronics was positioned above the tower stacks (cf. fig.1). ## 3 Testbeam setup The prototype was tested at the X5 beam in the CERN West Area. Electrons ranging from 5 to 75 GeV and pions of 20, 30 and 50 GeV were used. The prototype (CALO in fig.2) was installed on a moving platform whose position was controlled at the level of $`220`$ $`\mu `$m. In order to avoid particles from channeling through fibers or diodes, the calorimeter was tilted by 3 degrees in the horizontal plane with respect to the beam direction. The absolute impact position of the incoming particle was measured by means of two Delay Wire Chambers (DWC1 and DWC2) with a 2 mm wire pitch and a spatial resolution of 200 $`\mu `$m, positioned at 0.5 and 1 m from the calorimeter front face. External trigger was provided by a layer of scintillators installed near DWC2. A calibration of each tower was carried out by exposing the calorimeter to a 50 GeV electron beam at the beginning of each of the two data taking periods. The diode signals were calibrated with 50 GeV electrons as well. Pedestal runs were taken periodically to monitor the noise of the electronic amplification chain. ## 4 Results ### 4.1 Energy resolution The energy response is expected to depend on the impact point since the nearer the fiber, the higher the light collection efficiency. The high fiber density was used in order to reduce the non uniformity in light response to a level of a few percent.
This effect was however not achieved with BCF20 fibers, due to a small scintillating component deteriorating the energy resolution. KY11 fibers, on the other hand, had a non uniformity at the level of $`\pm 1.5\%`$. Fig.3 shows the energy response for 50 GeV electrons in towers equipped with Kuraray fibers and tetrode readout. The energy resolution achieved with KY11 fibers and tetrode read-out as a function of the beam energy is shown in fig.4 and can be parameterized as<sup>3</sup> $$\frac{\sigma (E)}{E}=\sqrt{\left(\frac{9.6\%}{\sqrt{E}}+0.5\%\right)^2+\left(\frac{0.130}{E}\right)^2}$$ (1) where $`E`$ is expressed in GeV. (<sup>3</sup>Alternatively, by adding the constant term in quadrature: $`\frac{\sigma (E)}{E}=\frac{10.1\%}{\sqrt{E}}\oplus 1.3\%\oplus \frac{0.130}{E}`$.) The last term corresponds to the electronic noise contribution and was measured from pedestal runs. A Geant Monte Carlo simulation of the shower development in a 1-mm-lead/1-mm-scintillator sampling gave a smaller value ($`6\%/\sqrt{E}`$) for the first term of the energy resolution. Therefore the dominant contribution to the measured resolution was attributed to the photoelectron statistics. The use of phototetrodes is not ideal for barrel calorimetry at $`e^+e^{-}`$ colliders. Tetrodes have a rather long longitudinal dimension and must be kept at a small angle with respect to the magnetic field in order to operate with a maximum gain. The installation of Avalanche Photodiodes has been proposed by the CMS collaboration as an alternative solution. Given their very good quantum efficiency ($`80\%`$), APD should also ensure a better energy resolution when the photoelectron statistics contribution dominates. Four APD’s were installed in the prototype, as described in section 2, but unfortunately no towers were equipped with both APD and KY11 fibers. In fig.5 the energy response for 50 GeV electrons impinging on a tower equipped with APD is shown. The high energy tail coming from events reconstructed near the BCF20 fibers is evident. ### 4.2 Linearity Fig. 6 shows the reconstructed energy versus the nominal electron beam energy when the beam was centered in towers equipped with KY11 fibers. No significant deviations from linearity were observed up to 75 GeV, which was the highest energy measured. ### 4.3 Spatial resolution A position scan along the towers was done using 50 GeV electrons to establish the precision in the impact point reconstruction. The shower position reconstruction was based on the center of gravity method corrected for the detector granularity with the algorithm suggested by Akopdjanov et al. The barycenter $$X_b=2\mathrm{\Delta }\underset{i}{\sum }iE_i/\underset{i}{\sum }E_i$$ (2) ($`\mathrm{\Delta }`$ is the half-width of the tower and $`E_i`$ the energy deposited in tower $`i`$), was modified according to $$X_c=b\text{arcsinh}\left(\frac{X_b}{\mathrm{\Delta }}\text{sinh}\delta \right)$$ (3) where $`b`$ is a parameter describing the transversal shower profile and $`\delta \equiv \mathrm{\Delta }/b`$. Since the shower profile was not described by a single exponential, a two-step procedure was followed: in the first step $`X_c^{\prime }`$ was determined with $`b=0.85`$ cm and in the second one the value of $`b`$ was recomputed in the interval $`0.45<b<0.85`$ according to $`X_c^{\prime }`$. $`X_c`$ was linear in most of the impact point range, showing non-linearities only near the diode housing, as depicted in fig.7. The non-linear behaviour around the diode was corrected for by using the diode signal itself.
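The inversion of Eqs. (2)-(3) is compact enough to state in code. The toy check below (added for illustration; it assumes a single-exponential transverse profile with $`b=0.85`$ cm sampled by 5 cm towers, which is only an idealization of the measured profile) shows how the arcsinh correction removes most of the granularity bias of the raw barycenter:

```python
import numpy as np
from scipy import integrate

def corrected_position(E, Delta=2.5, b=0.85):
    """Center of gravity (Eq. 2) plus arcsinh correction (Eq. 3), in cm."""
    i = np.arange(len(E)) - np.argmax(E)            # tower index about the seed
    xb = 2*Delta*np.sum(i*E)/np.sum(E)              # Eq. (2)
    delta = Delta/b
    return b*np.arcsinh((xb/Delta)*np.sinh(delta))  # Eq. (3)

true_x = 0.7                                        # cm from the seed-tower centre
edges = np.arange(-12.5, 12.6, 5.0)                 # five 5 cm towers
prof = lambda x: np.exp(-abs(x - true_x)/0.85)      # assumed exponential profile
E = np.array([integrate.quad(prof, lo, hi)[0]
              for lo, hi in zip(edges[:-1], edges[1:])])
print(corrected_position(E))                        # ~0.70, vs a biased barycenter
```

Near the diode housing, however, this simple inversion degrades, which motivates the diode-based refinement.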
In particular, in the range of $`X_c`$ close to the distortion region, a diode-based estimator was introduced so that $$X^{\prime }=X_c+X_d$$ (4) where $$X_d=b^{\prime }\mathrm{log}\frac{1}{2}\left(1+\frac{E_{diode}^{max}}{E^{max+1}}\right)+c^{\prime };$$ (5) here $`E_{diode}^{max}`$ is the diode energy in the tower with maximum signal, $`E^{max+1}`$ represents the energy (seen by tetrodes/APD’s) in the tower closest to the reconstructed impact position and the parameters $`b^{\prime }`$ and $`c^{\prime }`$ were determined with 50 GeV electrons and are $`b^{\prime }=0.2`$ cm and $`c^{\prime }=0.3`$ cm. The position resolution of the prototype at the cell center was 1.6 mm with 50 GeV electrons and had the following energy dependence: $$\sigma _X(E)=\sqrt{\left(\frac{0.9}{\sqrt{E}}\right)^2+\left(0.1\right)^2}\mathrm{cm}.$$ (6) ### 4.4 Energy leakage to the diode The dead zone between two adjacent towers due to the diode affected only a limited portion of the calorimeter and was always followed by a sufficiently long ($`>15X_0`$) part of active detector. Therefore no complete cracks existed in the calorimeter. Nevertheless an energy loss for showers developing near the diode was visible. It was easily corrected for by using the reconstructed shower impact point. The energy response as a function of the distance $`y`$ of the reconstructed position from the two-tower border was parametrized as $$E(y)=E_0(1-ae^{-\frac{y^2}{2\sigma _\pm ^2}})$$ (7) where $`a=0.075`$, $`\sigma _+=0.45`$ cm for $`y>0`$ and $`\sigma _{-}=1.19`$ cm for $`y<0`$. Fig.8 shows the energy response, before and after the correction, as a function of the reconstructed position for 50 GeV electrons. Once the correction was introduced, the remaining non uniformity in the energy response was due to the difference in light collection near fibers. ### 4.5 Diode response The EMI and the Hamamatsu diode responses to 50 GeV electrons and pions are shown in Fig. 9. The widths of both distributions were dominated by the fluctuations in the shower development. Due to the different sampling seen by the two detectors, the light signal was larger for the EMI and the fluctuations were more important in the case of the Hamamatsu prototype. On the other hand the smaller capacitance of the latter ensured a much lower electronic noise, giving a comparable energy equivalent contribution, as indicated in table 1. Since the showers were not contained in the part of the calorimeter read out by diodes and the longitudinal shower development depends on the energy, the response at different electron energies was not linear, as shown in Fig. 10. ### 4.6 $`e/\pi `$ separation Separation of electrons from pions was performed using discriminating variables based either on purely calorimetric data or involving also external information like the beam energy, known from the settings of the main deflection magnet, which would be replaced by the momentum estimation from the tracking in a collider experiment. The fraction $$\chi _E=\frac{E_{cal}}{E_{beam}}$$ (8) can be combined with pure calorimeter variables like the fraction of energy seen by the diodes $$\chi _D=\frac{E_{diode}}{E_{cal}}$$ (9) and the lateral development of the shower $$\chi _S=\frac{\underset{i=1}{\overset{N}{\sum }}E_ir_i^2}{\underset{i=1}{\overset{N}{\sum }}E_i}$$ (10) where $`N`$ is the number of towers with signal and $`r_i`$ the distance of the tower from the reconstructed impact position. Fig.11 shows $`E_{diode}`$ versus $`E_{cal}`$ for pions and electrons at 20 GeV.
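The three discriminating variables are straightforward to compute per event. A small sketch (illustrative only; the event quantities and the 20 GeV toy values below are hypothetical) is:

```python
import numpy as np

def discriminants(E_towers, r_towers, E_diode_max, E_beam):
    """chi_E, chi_D, chi_S of Eqs. (8)-(10) for one event."""
    E_cal = E_towers.sum()
    return (E_cal/E_beam,                          # Eq. (8)
            E_diode_max/E_cal,                     # Eq. (9)
            np.sum(E_towers*r_towers**2)/E_cal)    # Eq. (10)

r = np.array([1.0, 4.0, 6.0])                      # cm, towers with signal
ele = discriminants(np.array([18.0, 1.5, 0.5]), r, 1.2, 20.0)  # electron-like
pio = discriminants(np.array([4.0, 2.0, 1.0]), r, 0.1, 20.0)   # pion-like
print(ele, pio)   # pions: lower chi_E and chi_D, broader (larger) chi_S
```

Cuts in the $`(\chi _E,\chi _D,\chi _S)`$ space can then be tuned for a chosen electron efficiency.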
The discriminating power of the different variables in terms of pion contamination for 90% electron efficiency, at energies ranging from 20 to 50 GeV, is shown in Fig.12. In most of the cases, purely calorimetric variables improve the overall separation capability by a factor $`\sim 2`$ compared with $`\chi _E`$ by itself. At 50 GeV the pion contamination for 90% electron efficiency is $`(4.0\pm 1.5)\times 10^{-4}`$. ## 5 Conclusions The present test has demonstrated the technical feasibility of longitudinally segmented shashlik calorimeters in which lateral sampling is performed by vacuum photodiodes. Due to the small dimension of the diodes and to the tilt of fibers and diodes with respect to the incoming particles, no significant cracks or dead zones are introduced. Performance in terms of energy resolution, impact point reconstruction and $`e/\pi `$ separation seems to be adequate for applications at future $`e^+e^{-}`$ collider experiments. ## 6 Acknowledgements The IHEP workshop staff has been essential for the construction of the prototype: we are greatly indebted to A. Kleschov, P. Korobchuk and A. Tukhtarov. We wish to thank also C. Fanin for the mechanical project, V. Giordano and R. Cereseto for the careful work in the realization of the prototype, G. Rampazzo for the invaluable effort in the construction of the read-out chain and all the staff and technical support of the SL-EA group for the smooth operation of the accelerator during the testbeam. A special thank goes to C. Luci for providing the simulation code at the early stage of this work and to M. Pegoraro and G. Zumerle for useful suggestions and for granting the use of EMI diodes.
# On the Generalised Pauli–Villars Regularization of the Standard Model INRNE-TH-99/1 Michail Nicolov Stoilov<sup>*</sup> (<sup>*</sup> e-mail: mstoilov@inrne.bas.bg) Institute for Nuclear Research and Nuclear Energy Boul. Tzarigradsko Chaussee 72, 1784 Sofia, Bulgaria Abstract. We show that the regularization of the Standard Model proposed by Frolov and Slavnov describes a nonlocal theory with quite simple Lagrangian. 22 February 1999 The construction of a gauge invariant regularization for chiral theories (even for anomaly free ones) was an open problem for a long time. Only recently such a regularization was proposed for the Standard Model; some generalisations and important insights on the issue can be found in Refs.–. The new regularization (called generalised Pauli–Villars regularization) is a version of the standard gauge-invariant Pauli–Villars one where one regularizes entire loops, but not separate propagators. The main difference is that infinitely many regulator fields are used. Therefore, in order to specify the regularization completely, one needs for any divergent diagram a recipe for how to handle the infinite sum of the terms due to regulator fields. In this letter our aim is to show that the contribution of the regulator fields can be calculated on the Lagrangian level, so as to give a nonlocal theory. We begin with a short description of the generalised Pauli–Villars regularization of the Standard Model. In Ref. a construction is used, where all one-generation matter fields are combined into a single chiral $`SO(10)`$ spinor $`\psi _+`$ (which is also a chiral Lorentzian spinor) and all gauge fields — into an $`SO(10)`$ gauge field. The gauge field Lagrangian is regularized by the higher covariant derivative method and is not considered there (and neither is it here). In addition to the original fields an infinite set of commuting and anticommuting Pauli–Villars fields ($`\varphi _r`$ and $`\psi _r`$ respectively, $`r\ge 1`$) is added. These new fields are simultaneously chiral Lorentzian spinors and non-chiral $`SO(10)`$ ones. The explicit form of the mass terms for the regulator fields is determined by the requirement that they are nonzero, real, $`SO(10)`$ and Lorentzian scalars and by the chirality properties of the fields. As a result the Lorentz and $`SO(10)`$ charge conjugation matrices ($`C_D`$ and $`C`$) have to be used. (A basic feature of any charge conjugation matrix $`𝒞`$ is that $`𝒞\psi `$ transforms under the representation conjugate to that of $`\psi `$.) A list of properties of $`C_D`$ and $`C`$ used in this work is given in the Appendix. The one-generation matter field regularized Lagrangian of the Standard Model reads $$\begin{array}{cc}\hfill L_{reg}& =\overline{\psi }_+i\overline{)}D\psi _+\hfill \\ & +\overline{\psi }_ri\overline{)}D\psi _r+\frac{1}{2}M_r(\psi _r^TC_DC\mathrm{\Gamma }_{11}\psi _r+\overline{\psi }_rC_DC\mathrm{\Gamma }_{11}\overline{\psi }_r^T)\hfill \\ & +\overline{\varphi }_r\mathrm{\Gamma }_{11}i\overline{)}D\varphi _r-\frac{1}{2}M_r(\varphi _r^TC_DC\varphi _r-\overline{\varphi }_rC_DC\overline{\varphi }_r^T).\hfill \end{array}$$ Here $`\overline{)}D`$ is the covariant derivative with respect to the $`SO(10)`$ gauge field $`𝒜`$, $`\overline{)}D=\gamma ^\mu (\partial _\mu -ig𝒜_\mu ^{ij}\sigma _{ij})`$, $`\sigma _{ij}`$ are the $`SO(10)`$ generators; $`M_r=Mr`$, where $`M`$ is a (large) mass parameter (Pauli–Villars mass) and a summation over $`r\ge 1`$ is assumed.
This form of $`M_r`$ is crucial for the convergence of the diagrams in the model, while the concrete sign of $`M_r`$ does not matter. Introducing the projectors on the irreducible spinor representations: $$\mathrm{\Pi }_\pm =\frac{1}{2}(1\pm \gamma ^5)\mathrm{and}P_\pm =\frac{1}{2}(1\pm \mathrm{\Gamma }_{11})$$ the chirality properties of the fields read: $$\begin{array}{cc}\hfill \mathrm{\Pi }_{-}\psi _+& =\mathrm{\Pi }_{-}\psi _r=\mathrm{\Pi }_{-}\varphi _r=0,\hfill \\ \hfill P_{-}\psi _+& =P_\pm \psi _{r\mp }=P_\pm \varphi _{r\mp }=0,\hfill \end{array}$$ where $`\psi _r=\psi _{r+}+\psi _{r-}`$ and analogously for $`\varphi _r`$. Any $`SO(10)`$ gauge model is anomaly free, and so it is not a big surprise that (1) can be rewritten in a vector-like form. Following , we introduce the variables $$\begin{array}{cc}\hfill \mathrm{\Psi }_r& =\psi _{r+}+CC_D\overline{\psi }_{r-}^T,\hfill \\ \hfill \mathrm{\Phi }_r& =\varphi _{r+}+CC_D\overline{\varphi }_{r-}^T.\hfill \end{array}$$ Both these new fields are $`SO(10)`$-chiral and Lorentzian non-chiral spinors, contrary to the original ones. $$\begin{array}{cc}\hfill P_{-}\mathrm{\Psi }_r& =P_{-}\mathrm{\Phi }_r=0,\hfill \\ \hfill \mathrm{\Pi }_+\mathrm{\Psi }_r=\psi _{r+};& \mathrm{\Pi }_{-}\mathrm{\Psi }_r=CC_D\overline{\psi }_{r-}^T,\hfill \\ \hfill \mathrm{\Pi }_+\mathrm{\Phi }_r=\varphi _{r+};& \mathrm{\Pi }_{-}\mathrm{\Phi }_r=CC_D\overline{\varphi }_{r-}^T.\hfill \end{array}$$ Using definitions (1) and the ones following from them, $`\overline{\mathrm{\Psi }}_r=\overline{\psi }_{r+}-\psi _{r-}^TCC_D`$ and $`\overline{\mathrm{\Phi }}_r=\overline{\varphi }_{r+}-\varphi _{r-}^TCC_D`$, eq.(1) takes the form $$L_{reg}=\overline{\psi }_+i\overline{)}D\psi _++\overline{\mathrm{\Psi }}_r(i\overline{)}D-M_r)\mathrm{\Psi }_r+\overline{\mathrm{\Phi }}_r(i\overline{)}D+M_r)\mathrm{\Phi }_r.$$ The Berezian corresponding to the change of variables (1) is $`1`$, which guarantees that Lagrangians (1) and (1) describe one and the same theory. Now we want to reformulate (1) as a higher derivative theory. Following , our first step is to replace the commuting Pauli–Villars fields by anticommuting ones. The idea is to consider instead of $$L=\overline{\mathrm{\Phi }}(i\overline{)}D-M)\mathrm{\Phi }$$ the following one $$L=(\overline{\mathrm{\Phi }}+\overline{\chi })(i\overline{)}D-M)(\mathrm{\Phi }+\chi ),$$ where $`\chi `$ is an additional dynamical field. This Lagrangian has a very large Stuckelberg-type gauge symmetry. Its fixing produces Faddeev–Popov ghosts ($`\eta \mathrm{and}\overline{\eta }`$) which have statistics opposite to $`\mathrm{\Phi }`$, i.e. they are normal anticommuting spinors. A particular gauge choice brings (1) into (1) plus ghost terms trivially decoupled from the dynamics; another gauge choice leaves only $$L^{\prime }=-\overline{\eta }(i\overline{)}D-M)\eta $$ (plus decoupled $`\mathrm{\Phi }`$-terms). Thus eqs.(1) and (1) describe the same physics. This is true provided there are no sources for the field $`\mathrm{\Phi }`$ (and $`\eta `$) in the model, and this is exactly the situation with the Pauli–Villars fields. Applying the procedure described above to all $`\mathrm{\Phi }_r`$, the Lagrangian (1) can be rewritten as: $$L_{reg}=\overline{\psi }_+i\overline{)}D\psi _++\overline{\mathrm{\Psi }}_r(i\overline{)}D+M_r)\mathrm{\Psi }_r-\overline{\eta }_r(i\overline{)}D-M_r)\eta _r,$$ where all fields are anticommuting now. Our next step is to combine different terms in (1) into a single higher derivative Lagrangian.
It was shown in that the Lagrangian $$L=g\overline{\psi }(i\overline{)}D-m_1)(i\overline{)}D-m_2)\psi $$ after a suitable Legendre transformation can be put into the form $$L=\frac{g}{|g|}\left(\overline{\psi }_1(i\overline{)}D-m_1)\psi _1-\overline{\psi }_2(i\overline{)}D-m_2)\psi _2\right)$$ and vice-versa, provided $`m_1>m_2`$. Note that up to a sign the last Lagrangian is independent of the coupling constant $`g`$. Such a decomposition into a sum of first order Lagrangians holds for any higher derivative Lagrangian $`L=\overline{\psi }\prod _i(i\overline{)}D-m_i)\psi `$, if $`m_i\ne m_j\mathrm{}\forall i,j:i\ne j`$. Here we use these results to bring together different terms in (1). We combine Pauli–Villars terms with equal $`r`$ and obtain $$L_{reg}=\overline{\psi }_+i\overline{)}D\psi _+-\overline{\mathrm{\Psi }}_r(1-\frac{\overline{)}D^2}{M^2r^2})\mathrm{\Psi }_r,$$ where we have used the freedom to choose the coupling constant to be $`g_r=(Mr)^{-2}`$. In this way we have introduced a small parameter ($`1/M`$) which counts the order of the derivatives in the Lagrangian. Such a parameter is needed in any higher derivative theory. It automatically produces the so-called perturbative constraints which exclude the unwanted negative norm states (those of the Pauli–Villars fields in our case). Then we “add” one after the other the new second derivative terms to the matter field term and get $$L=\overline{\psi }i\overline{)}D\underset{r=1}{\overset{\mathrm{}}{\prod }}\left(1-\frac{\overline{)}D^2}{M^2r^2}\right)\psi .$$ The solution of the equation of motion following from the Lagrangian (1) is $`\psi =\sum _{r=-\mathrm{}}^{\mathrm{}}\psi _r`$ where $`(i\overline{)}D-Mr)\psi _r=0`$. It describes the full set of the matter and Pauli–Villars fields. The only remnant of the perturbative constraints is $$\mathrm{\Pi }_{-}|\mathrm{physical}>=0$$ which excludes the Pauli–Villars fields from the physical ones. Finally, using the formula $$\mathrm{sin}(x)=x\underset{r=1}{\overset{\mathrm{}}{\prod }}\left(1-\frac{x^2}{\pi ^2r^2}\right)$$ we obtain $$L_{reg}=M\overline{\psi }\mathrm{sin}\left(\frac{i\overline{)}D}{M}\right)\psi .$$ Eq.(1) together with the constraint (1) give us the nonlocal version of the regularized Lagrangian of the Standard Model. A final note: Lagrangian (1) can be rewritten as $`L_{reg}=\overline{\mathrm{\Psi }}i\overline{)}Df(𝒟^2/M^2)\mathrm{\Psi }`$, where $`f(x^2)=\mathrm{sin}x/x`$. In this form it is closely related to the interpretation of the generalised Pauli–Villars regularization discussed in . Appendix Lorentz charge conjugation matrix. In $`4`$-dimensional space-time with metric $`g^{\mu \nu }=diag(1,-1,-1,-1)`$ the gamma matrices $`\gamma ^\mu ,\mu =0,1,2,3`$ are such that $`(\gamma ^\mu )^{\dagger }=\gamma _\mu `$. The matrix $`\gamma ^5`$ is defined as $`\gamma ^5=i\gamma ^0\gamma ^1\gamma ^2\gamma ^3`$ and so $`(\gamma ^5)^{\dagger }=(\gamma ^5)^T=\gamma ^5`$. The charge conjugation matrix $`C_D`$ (the one used here is inverse to that in ) has the properties: $$\begin{array}{cc}\hfill C_D^{-1}(\gamma ^\mu )^TC_D& =-\gamma ^\mu ,\mu =0,\mathrm{},3\hfill \\ \hfill C_D^{-1}\gamma ^5C_D& =\gamma ^5,\hfill \\ \hfill C_D=-\overline{C}_D=C_D^{\dagger }& =-C_D^T=C_D^{-1},\hfill \\ \hfill detC_D^2& =1.\hfill \end{array}$$ $`SO(10)`$ charge conjugation matrix. Corresponding gamma matrices $`\mathrm{\Gamma }_i,i=1,\mathrm{},10`$ are $`32\times 32`$ matrices, such that $`\{\mathrm{\Gamma }_i,\mathrm{\Gamma }_j\}=\delta _{ij}`$ and $`\mathrm{\Gamma }_i^{\dagger }=\mathrm{\Gamma }_i\mathrm{}\forall i`$.
$`\mathrm{\Gamma }_{11}`$ matrix is defined as $`\mathrm{\Gamma }_{11}=i\mathrm{\Gamma }_1\mathrm{}\mathrm{\Gamma }_{10}`$, so that $`\mathrm{\Gamma }_{11}^{\dagger }=\mathrm{\Gamma }_{11}^T=\mathrm{\Gamma }_{11}`$. The generators of the $`SO(10)`$ algebra in the spinorial representation are $`\sigma _{ij}=\frac{1}{2}i[\mathrm{\Gamma }_i,\mathrm{\Gamma }_j],i\ne j,i,j=1,\mathrm{},10`$. The charge conjugation matrix $`C`$ has the following properties: $$\begin{array}{cc}\hfill C^{-1}\mathrm{\Gamma }_i^TC& =-\mathrm{\Gamma }_i,i=1,\mathrm{},11\hfill \\ \hfill C^{-1}\sigma _{ij}^TC& =-\sigma _{ij},\hfill \\ \hfill C=-\overline{C}=C^{\dagger }& =-C^T=C^{-1},\hfill \\ \hfill detC^2& =1.\hfill \end{array}$$ References 1. S.A. Frolov and A.A. Slavnov, Phys. Lett. B309 (1993) 344. 2. R. Narayanan and H. Neuberger, Phys. Lett. B302 (1993) 62. 3. S. Aoki and Y. Kikukawa, Mod. Phys. Lett. A8 (1993) 3517. 4. K. Fujikawa, Nucl. Phys. B428 (1994) 169. 5. F. Wilczek and A. Zee, Phys. Rev. D25 (1982) 553-565. 6. M.N. Stoilov, Ann. Physik 7 (1998) 1. 7. X. Jaén, J. Llosa and A. Molina, Phys. Rev. D34 (1986) 2302. 8. N.N. Bogoliubov and D.V. Shirkov, Introduction in Quantum Field Theory, Moscow, Nauka, 1973.
# The small $`x`$ nuclear shadowing at DIS ## Acknowledgments MBGD acknowledges F. Halzen for useful discussions. This work was partially financed by CNPq and by Programa de Apoio a Núcleos de Excelência (PRONEX), BRAZIL.
# Does the generalized second law hold in the form of time derivative expression? ## I Introduction One of the most interesting developments in black hole physics is the discovery of the analogy between certain laws of black hole mechanics and the ordinary laws of thermodynamics. According to this analogy, Bekenstein introduced the concept of the black hole entropy as a quantity proportional to the surface area of the black hole (the proportionality coefficient was fixed later by Hawking’s discovery of black hole radiance) and conjectured that the total entropy never decreases in any process, where the total entropy is the sum of the black hole entropy and the ordinary thermodynamic entropy of the matter outside the black hole. This is known as the generalized second law (GSL) of thermodynamics and it is important to check the validity of this conjecture because its validity strongly supports the idea that the ordinary laws of thermodynamics can apply to a self-gravitating quantum system containing a black hole. Especially, it strongly suggests the notion that $`A/4`$ ($`A`$ is the surface area of the black hole) truly represents the physical entropy of the black hole. In order to interpret $`A/4`$ as the black hole entropy, it would be necessary to derive $`S_{BH}=A/4`$ from a statistical mechanical calculation by counting the number of internal states of the black hole. The microscopic derivation of the black hole entropy along this line achieved some results in the recent progress in superstring theory. However, general arguments for the validity of the second law of thermodynamics for ordinary systems are based on notions of the “fraction of time” a system spends in a given macroscopic state. Since the nature of time in general relativity is drastically different from that in nongravitational physics, it is not clear how the GSL will arise even if $`A/4`$ represents a measure of the number of internal states of the black hole. Therefore, it is important to examine the validity of the GSL by itself, in order to understand the connection between quantum theory, gravitation and thermodynamics further. Historically, gedanken experiments have been done to test the validity of the GSL. The most famous one is that in which a box filled with matter is lowered to near the black hole and then dropped in. Classically, a violation of the GSL can be achieved if one lowers the box sufficiently close to the horizon. However, when the quantum effects are properly taken into account, it was shown by Unruh and Wald that the GSL always holds in this process. On the other hand, there are some people who tried to prove the GSL under several assumptions for more general situations. Frolov and Page proved the GSL for an eternal black hole by assuming that (i) the process in the investigation is quasistationary, which means that the changes in the black hole geometry are sufficiently small compared with the corresponding background quantities, (ii) the state of matter fields on the past horizon $`\mathcal{H}^{-}`$ is a thermal state with the Hawking temperature, (iii) the initial set of radiation modes on the past horizon $`\mathcal{H}^{-}`$ and that on the past null infinity $`\mathcal{I}^{-}`$ are quantum mechanically uncorrelated, and (iv) the Hilbert space and the Hamiltonian of modes at $`\mathcal{I}^+`$ are identical to those of modes at $`\mathcal{I}^{-}`$. But these assumptions are questionable for the black hole formed by a gravitational collapse.
That is, assumptions (iii) and (iv) break down due, respectively, to the correlation between the modes at $`\mathcal{H}^{-}`$ and the modes at $`\mathcal{I}^{-}`$ located after the horizon formation, and to the violation of time reversal symmetry. We therefore think that their proofs should be extended to realistic black holes formed by gravitational collapse. The GSL for a black hole formed by gravitational collapse was studied by Sorkin and Mukohyama, making use of a nondecreasing function in a Markov process. The proof finally comes down to showing that the matter fields in the black hole background have a stationary canonical distribution with temperature equal to that of the black hole and that the canonical partition function remains constant. But there are several problems in their proofs. Mukohyama showed that the canonical distribution with temperature equal to that of the black hole is stationary by calculating the transition matrix between states at the future null infinity $`\mathcal{I}^{+}`$ and states at the portion of the past null infinity $`\mathcal{I}^{-}`$ after the formation of the event horizon $`\mathcal{H}^{+}`$. But, in collapsing cases, assumption (i) cannot be justified in general. By contrast, since Sorkin considered a process occurring between two adjacent time slices, assumption (i) is valid there. He concluded that the canonical distribution of matter fields is stationary because the Hamiltonian does not change between the two adjacent time slices, thanks to the time translation invariance of the background. However, although he implicitly assumed the existence of Killing time slices that do not pass through the bifurcation point, and derived his result accordingly, such Killing time slices need not exist. Moreover, if we take Killing time slices, there is no energy flux across the event horizon, which means that we cannot see an evaporating black hole on Killing time slices. In order to satisfy assumption (i), we consider the infinitesimal time development of the total entropy in two dimensional theories of gravity. Although we work in two dimensional spacetime, it is worthwhile to investigate the GSL in a two dimensional black hole spacetime if it exhibits the same black hole physics as the four dimensional case (causal structure, Hawking radiation and so on), because then we can expect that the essential points of the four dimensional physics are not lost. Using the Russo-Susskind-Thorlacius (RST) model, Fiola et al. discussed the infinitesimal time development of the total entropy and showed that the GSL in that model is valid under suitable conditions. Although their investigation goes beyond the quasistationary approximation and takes account of quantum-mechanical back-reaction effects, their argument is restricted to the very special (RST) model, and it would be too hasty to conclude that the GSL generally holds even in two dimensional spacetime: if "black hole entropy" truly represents the physical entropy of a black hole, it would be necessary to confirm the validity of the GSL for more general models which possess the black hole mechanics. The purpose of this paper is to investigate the GSL in any two dimensional black hole spacetime with the first law of black hole mechanics, irrespective of models. In fact, the existence of the first law is guaranteed for a wide class of gravitational theories by the Noether charge method. First, we write the change in total entropy between two adjacent time slices in terms of quantities of the matter fields, using assumption (i) and the first law of black hole mechanics.
Thus, our task is to calculate the energy-momentum tensor and the entanglement entropy of the matter fields. These are obtained easily for conformal fields in two dimensional spacetime. After these calculations, we will demonstrate that the GSL does not always hold for conformal vacuum states in a two dimensional black hole, for two reasons. The first is that the GSL is violated by the decrease of the entanglement entropy of the field associated with the decrease of the size of the accessible region. It might, however, be possible to subtract this term by some physical procedure and define a new entropy. The second is that the GSL for this new entropy would still be violated for some class of vacuum states. This might suggest that even the GSL for the new entropy does not hold as long as there is no physical reason that excludes these vacuum states. In this sense, there seem to be two difficulties in rescuing the GSL. This paper is organized as follows. In Sec. II we present the two dimensional black holes that we consider in this paper and formulate a time-derivative form of the GSL. Since we express the change in total entropy in terms of physical quantities of the matter, our task reduces to calculating the time evolution of matter fields in a fixed black hole background. In Sec. III we calculate the change in energy of the matter fields for general conformal vacuum states. In Sec. IV the entanglement entropy is obtained. By way of illustration, these results are applied to two typical vacuum states: the Hartle-Hawking state and the Unruh one. Then, in Sec. V, it is shown that the time-derivative form of the GSL does not always hold in general situations. We also give a physical interpretation of our result. Sec. VI is devoted to a summary and discussion of our results. In particular, we propose a new entropy and discuss the validity of the GSL for this quantity.

## II Two dimensional black holes and the GSL

### A Two dimensional black holes

Four dimensional gravitational theories have many degrees of freedom and inherent complexity, so it is useful to consider a toy model in which greater analytic control is possible. In our analysis, we consider any two dimensional theory of gravity which satisfies the following two assumptions: (1) the theory allows a stationary black hole solution, and (2) there exists black hole physics similar to that in four dimensional gravitational theories. Since we want to examine the validity of the GSL in the two dimensional eternal black hole background, we assume by assumption (1) that the spacetime possesses an event horizon and a timelike Killing vector. Since we can always take a spacelike hypersurface which is orthogonal to the orbits of the isometry in two dimensional spacetime, this means that the theory has a static black hole solution,

$$ds^2=-\xi ^2(r)dt^2+\frac{dr^2}{\xi ^2(r)}=-\xi ^2dx^+dx^-,$$ (1, 2)

where $`\xi ^\mu =(\partial /\partial t)^\mu `$ is the timelike Killing vector, normalized such that $`\xi ^2\rightarrow 1`$ as $`r\rightarrow \mathrm{\infty }`$, and $`x^\pm =t\pm r_{*}`$ with $`r_{*}=\int dr/\xi ^2`$. The position of the horizon $`\mathcal{H}`$ is specified by the $`r`$ with $`\xi (r)=0`$, and the surface gravity of the black hole is given by $`\kappa =\partial _+\mathrm{ln}\,\xi ^2|_{\mathcal{H}}`$. By assumption (2), we require that the black hole satisfies the first law of black hole mechanics. This is necessary to formulate the GSL in the form given in the next subsection and to keep the essential features of four dimensional black hole physics in our toy model.
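As a concrete illustration of these definitions (our own sketch, not part of the paper's analysis), the following sympy fragment adopts a Schwarzschild-like choice $`\xi ^2(r)=1-r_H/r`$, an assumption made purely for the example, and computes the tortoise coordinate $`r_{*}=\int dr/\xi ^2`$ and the surface gravity; near the horizon the definition $`\kappa =\partial _+\mathrm{ln}\,\xi ^2|_{\mathcal{H}}`$ reduces to one half of $`d\xi ^2/dr`$ at $`r=r_H`$.

```python
import sympy as sp

r, rH = sp.symbols('r r_H', positive=True)
xi2 = 1 - rH/r                        # illustrative Schwarzschild-like xi^2(r)

rstar = sp.integrate(1/xi2, r)        # tortoise coordinate r_* = int dr / xi^2
# near the horizon, kappa = d_+ ln xi^2 |_H = (1/2) d(xi^2)/dr at r = r_H
kappa = sp.Rational(1, 2)*sp.diff(xi2, r).subs(r, rH)

print(rstar)   # r + r_H*log(r - r_H): diverges logarithmically at the horizon
print(kappa)   # 1/(2*r_H)
```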
In fact, Wald derived a first law of black hole mechanics for any diffeomorphism invariant gravitational theory in any dimension, relying on the Noether charge associated with the diffeomorphism invariance of the action. (In the Euclidean method, one obtains the Hawking temperature $`T_{BH}=\kappa /2\pi `$ by requiring nonsingularity of the Euclidean metric. Thus, we regard the quantity $`\kappa /2\pi `$ as the Hawking temperature in the Noether charge method.) His technique is a quite general approach for stationary black holes with a Killing horizon, and it reproduces the known result for Einstein gravity with ordinary matter actions. Since gravitational theories are generally defined by a diffeomorphism invariant action, this assumption seems to hold naturally for a very broad class of gravitational theories. One piece of evidence justifying these assumptions is the existence of an interesting toy model which satisfies them. It is known that the CGHS model has a static black hole solution which, semiclassically, evaporates by the Hawking effect. Moreover, the thermodynamical nature of this solution was investigated by Frolov, who showed that black holes in the CGHS model also satisfy three laws (including the first law of black hole mechanics) similar to "standard" four dimensional black hole physics. Therefore, it is quite reasonable to think that these two assumptions hold for a wide class of gravitational theories.

### B The GSL under the quasistationary approximation

Before examining whether the GSL holds, we formulate a precise statement of the GSL in the quasistationary approximation. As stated in the introduction, we foliate our black hole spacetime by spacelike time slices that cross the event horizon and do not intersect one another at the horizon. We take two adjacent time slices among them, so as to consider a quasistationary process (justifying assumption (i) of the introduction), and consider the change in total entropy between the two slices. In a situation satisfying assumption (i), making use of the first law of black hole mechanics ($`\mathrm{\Delta }S_{BH}=\mathrm{\Delta }E_{BH}/T_{BH}\equiv \beta _{BH}\mathrm{\Delta }E_{BH}`$), we can rewrite the change in total entropy in terms of quantities of the matter fields alone,

$$\mathrm{\Delta }S_{total}=\mathrm{\Delta }S_M+\mathrm{\Delta }S_{BH}=\mathrm{\Delta }S_M+\beta _{BH}\mathrm{\Delta }E_{BH}=\mathrm{\Delta }S_M-\beta _{BH}\mathrm{\Delta }E_M,$$ (3-5)

where we used the energy conservation law ($`\mathrm{\Delta }E_{BH}=-\mathrm{\Delta }E_M`$) in the last step. Thus, our task is to calculate the change in energy and entropy of the matter fields between two adjacent time slices in the black hole background. We will use the entanglement entropy of the matter fields outside the horizon as the quantity $`S_M`$. These are obtained easily for massless conformal fields in two dimensional spacetime. Further, if we define the free energy of the matter fields by $`F_M\equiv E_M-\beta _{BH}^{-1}S_M`$, we can write

$$\mathrm{\Delta }S_{total}=-\beta _{BH}\mathrm{\Delta }F_M,$$ (6)

so that proving the GSL in the quasistationary approximation is equivalent to showing that the free energy $`F_M`$ is a monotonically decreasing function of time. We will therefore be concerned with examining the change in free energy of matter fields immersed in the black hole background as a heat bath.
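The step from Eq. (5) to Eq. (6) is a one-line identity; the following sympy sketch (ours, purely as a sanity check) verifies it symbolically at fixed $`\beta _{BH}`$.

```python
import sympy as sp

dS_M, dE_M, beta = sp.symbols('Delta_S_M Delta_E_M beta_BH')

dS_total = dS_M - beta*dE_M          # Eq. (5): first law plus energy conservation
dF_M = dE_M - dS_M/beta              # change of F_M = E_M - S_M/beta_BH at fixed beta_BH

print(sp.simplify(dS_total + beta*dF_M))   # -> 0, i.e. Delta S_total = -beta_BH Delta F_M
```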
## III The calculation of the energy-momentum tensor

For two dimensional conformal fields, we can obtain the energy-momentum tensor $`T_{\mu \nu }`$ by using its transformation law under conformal transformations and the trace anomaly formula, which for a scalar field reads

$$T=\frac{R}{24\pi }.$$ (7)

When the spacetime metric of interest is given by

$$ds^2=-\widehat{\mathrm{\Omega }}^2d\widehat{U}d\widehat{V},$$ (8)

$`T_{\mu \nu }`$ for a conformal scalar field is written as

$$T_{\mu \nu }[g_{ab}]=T_{\mu \nu }[\eta _{ab}]+\theta _{\mu \nu }+\frac{T[g_{ab}]}{2}g_{\mu \nu },$$ (9)

$$\theta _{\widehat{U}\widehat{U}}=-\frac{1}{12\pi }\widehat{\mathrm{\Omega }}\,\partial _{\widehat{U}}^2\widehat{\mathrm{\Omega }}^{-1},\qquad \theta _{\widehat{V}\widehat{V}}=-\frac{1}{12\pi }\widehat{\mathrm{\Omega }}\,\partial _{\widehat{V}}^2\widehat{\mathrm{\Omega }}^{-1},\qquad \theta _{\widehat{U}\widehat{V}}=\theta _{\widehat{V}\widehat{U}}=0,$$ (10-12)

where the null coordinates $`(\widehat{U},\widehat{V})`$ are given by $`\widehat{U}=\widehat{U}(x^-)`$, $`\widehat{V}=\widehat{V}(x^+)`$, and $`g_{ab}=\widehat{\mathrm{\Omega }}^2\eta _{ab}`$. Note that if we take the conformal vacuum, the first term on the R.H.S. of Eq. (9) vanishes. Now we apply this result to the two dimensional black hole of the previous section. We assume that we take the conformal vacuum associated with $`\widehat{U}`$ and $`\widehat{V}`$. Due to the existence of the timelike Killing vector $`\xi ^\mu =(\partial /\partial t)^\mu `$, the quantity $`E_{\widehat{\mathrm{\Omega }}}=\int _\mathrm{\Sigma }d\mathrm{\Sigma }^\mu T_{\mu \nu }\xi ^\nu `$ depends only on the boundary of $`\mathrm{\Sigma }`$. We put the inner boundary $`P_0=(x_0^+,x_0^-)`$ of $`\mathrm{\Sigma }`$ on the future horizon $`\mathcal{H}^+`$ and fix the outer boundary $`P_1=(x_1^+,x_1^-)`$ far from the black hole. We then consider the change in $`E_{\widehat{\mathrm{\Omega }}}(x_0^+)`$ as the inner boundary $`P_0`$ moves along $`\mathcal{H}^+`$. This is given by

$$\frac{dE_{\widehat{\mathrm{\Omega }}}(x_0^+)}{dx_0^+}=-T_{++}|_{\mathcal{H}^+}.$$ (14)

These relations are sketched in Fig. 1. We can rewrite $`T_{++}`$ by using the relations of Eq. (8) as

$$T_{++}=(\partial _+\widehat{V})^2T_{\widehat{V}\widehat{V}}=-\frac{(\partial _+\widehat{V})^2}{12\pi }\widehat{\mathrm{\Omega }}\,\partial _{\widehat{V}}^2\widehat{\mathrm{\Omega }}^{-1}=\frac{1}{12\pi }\left[\frac{\partial _+^2\widehat{\mathrm{\Omega }}}{\widehat{\mathrm{\Omega }}}-\frac{\partial _+\xi ^2}{\xi ^2}\frac{\partial _+\widehat{\mathrm{\Omega }}}{\widehat{\mathrm{\Omega }}}\right],$$ (15-17)

and can thus express the change in $`E_{\widehat{\mathrm{\Omega }}}(x_0^+)`$ in terms of the conformal factor $`\widehat{\mathrm{\Omega }}`$:

$$\frac{dE_{\widehat{\mathrm{\Omega }}}(x_0^+)}{dx_0^+}=-T_{++}|_{\mathcal{H}^+}=-\frac{1}{12\pi }\left[\frac{\partial _+^2\widehat{\mathrm{\Omega }}}{\widehat{\mathrm{\Omega }}}\bigg|_{\mathcal{H}^+}-\kappa \frac{\partial _+\widehat{\mathrm{\Omega }}}{\widehat{\mathrm{\Omega }}}\bigg|_{\mathcal{H}^+}\right].$$ (18, 19)

Next, we apply the above result to two typical vacuum states: the Hartle-Hawking one ($`HH`$) and the Unruh one ($`U`$). For simplicity, we consider the case in which the horizon is nondegenerate. If there exist several horizons, we take the outermost one and proceed with our arguments.
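The chain-rule manipulation leading to Eq. (17) can be checked symbolically. The sketch below (our own verification, using the signs restored above) treats $`\widehat{\mathrm{\Omega }}`$ and $`\widehat{V}`$ as arbitrary functions of $`x^+`$ along a ray of fixed $`x^-`$, so that $`\xi ^2=\widehat{\mathrm{\Omega }}^2\,d\widehat{V}/dx^+`$ up to an $`x^+`$-independent factor which drops out of $`\partial _+\mathrm{ln}\,\xi ^2`$.

```python
import sympy as sp

xp = sp.symbols('x_plus')
Om = sp.Function('Omega')(xp)          # hat-Omega along the ray (x^- fixed)
V = sp.Function('V')(xp)               # hat-V(x^+)

# derivative with respect to hat-V, via d/dV = (1/V') d/dx^+
dV = lambda f: sp.diff(f, xp)/sp.diff(V, xp)

# theta_{VV} = -(1/12 pi) Omega d^2(Omega^{-1})/dV^2, Eq. (11)
theta_VV = -sp.Rational(1, 12)/sp.pi*Om*dV(dV(1/Om))

# T_{++} = (dV/dx^+)^2 theta_VV for the conformal vacuum, Eqs. (15)-(16)
T_pp = sp.diff(V, xp)**2*theta_VV

# claimed form, Eq. (17), with xi^2 = Omega^2 V'(x^+) (the x^- factor drops out)
xi2 = Om**2*sp.diff(V, xp)
claimed = sp.Rational(1, 12)/sp.pi*(sp.diff(Om, xp, 2)/Om
                                    - sp.diff(xi2, xp)/xi2*sp.diff(Om, xp)/Om)

print(sp.simplify(T_pp - claimed))     # -> 0
```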
In the nondegenerate case, we can rewrite the metric as follows:

$$ds^2=-\mathrm{\Omega }_{HH}^2dUdV=-\mathrm{\Omega }_U^2dUdx^+,$$ (20)

$$\mathrm{\Omega }_{HH}^2=\frac{\xi ^2}{\kappa ^2|UV|},$$ (21)

$$\mathrm{\Omega }_U^2\equiv \mathrm{\Omega }_{HH}^2\frac{dV}{dx^+}=\mathrm{\Omega }_{HH}^2e^{\kappa x^+},$$ (22)

where the null coordinates $`(U,V)`$ are Kruskal-like ones, regular on the horizons $`\mathcal{H}^\pm `$, given by $`|U|=e^{-\kappa x^-}/\kappa `$ and $`|V|=e^{\kappa x^+}/\kappa `$. The Hartle-Hawking state represents a state in equilibrium with the black hole and is uniquely characterized by its global nonsingularity and its invariance under the Killing time isometry. We substitute $`\widehat{\mathrm{\Omega }}=\mathrm{\Omega }_{HH}`$ in Eq. (19) and use $`(U,V)`$ for $`(\widehat{U},\widehat{V})`$. Using the relation $`dV_0=\mathrm{exp}(\kappa x_0^+)dx_0^+`$ and the fact that $`\mathcal{H}^+`$ is a Killing horizon ($`\partial _+\kappa =0`$), we obtain the change in energy for the Hartle-Hawking state,

$$\frac{dE_{HH}(x_0^+)}{dx_0^+}=-T_{++}|_{\mathcal{H}^+}=0.$$ (23)

This is the expected result, since the energy flow lines of the Hartle-Hawking state are along the orbits of the Killing vector: the Hartle-Hawking state is stationary with respect to the Killing time. On the other hand, the Unruh state represents, on the eternal black hole, a state appropriate to a gravitationally collapsing spacetime. Substituting $`\widehat{\mathrm{\Omega }}=\mathrm{\Omega }_U`$ and using $`(U,x^+)`$ for $`(\widehat{U},\widehat{V})`$, we get $`T_{++}|_{\mathcal{H}^+}=-\kappa ^2/(48\pi )`$ at the future event horizon, and obtain the result

$$\frac{dE_U(x_0^+)}{dx_0^+}=+\frac{\kappa ^2}{48\pi }=+\frac{\pi }{12}(T_{BH})^2,$$ (24)

where we used the fact that the Hawking temperature is given by $`T_{BH}=\frac{\kappa }{2\pi }`$. This represents the energy density of a right-moving one dimensional massless gas with temperature $`T_{BH}`$ and can be interpreted as the outgoing energy flux due to the Hawking radiation (Appendix A).

## IV The calculation of the entanglement entropy

In this section, we calculate the entanglement entropy $`S_M`$ of the state of the fields outside the event horizon. The concept of quantum entanglement entropy is based on the notion of coarse graining induced by a division of the Hilbert space of a composite system. The division may be introduced by splitting the whole set of degrees of freedom into accessible ones (the system of interest) and inaccessible ones (the environment). For instance, in a spacetime with black holes, it is natural to take the degrees of freedom outside the event horizons as the accessible ones. The density matrix appropriate to the system of interest is obtained by tracing the whole density matrix $`\widehat{\rho }_{whole}`$ over the environment,

$$\widehat{\rho }_{sys}=Tr_{env}\left(\widehat{\rho }_{whole}\right).$$ (25)

This reduced density matrix generally no longer describes a pure state, even when the whole system is pure. We then define the entropy of the system of interest as

$$S_{sys}=-Tr_{sys}[\widehat{\rho }_{sys}\mathrm{ln}\widehat{\rho }_{sys}].$$ (26)

The quantity $`S_{sys}`$ describes correlations between the system of interest and the environment, and measures the information which is lost by tracing over the environment.
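Equations (25) and (26) can be illustrated with the simplest possible composite system. The following numpy sketch (a toy example of ours, unrelated to the field-theoretic calculation) traces a two-qubit pure state over either factor and exhibits the pure-state exchange symmetry discussed next.

```python
import numpy as np

a = 0.6                                    # arbitrary mixing angle
psi = np.zeros(4)
psi[0], psi[3] = np.cos(a), np.sin(a)      # |psi> = cos(a)|00> + sin(a)|11>

rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # rho[i,k,j,l]: (sys, env) indices

rho_sys = np.einsum('ikjk->ij', rho)       # Eq. (25): trace over the environment
rho_env = np.einsum('kikj->ij', rho)       # trace over the system instead

def entropy(rho_red):
    """Eq. (26): S = -Tr[rho ln rho], computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho_red)
    w = w[w > 1e-12]
    return float(-(w*np.log(w)).sum())

print(entropy(rho_sys), entropy(rho_env))  # equal: the pure-state symmetry
```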
Note that when the whole system is in a pure state, there is a symmetry under exchange of the system of interest and the environment: the two density matrices obtained by tracing over the accessible degrees of freedom or the inaccessible ones give the same entropy. When the whole system is in a mixed state, this symmetry no longer holds in general. We are interested in the entanglement entropy of a local quantum field associated with the division of degrees of freedom obtained by partitioning a time slice $`\mathrm{\Sigma }`$ into an accessible region $`D`$ and an inaccessible one $`\mathrm{\Sigma }-D`$. The entanglement entropy is then invariant under local deformations of the time slice which keep the boundary $`\partial D`$ fixed in the spacetime. This fact follows from the unitary evolution of the whole system and local causality, because in this case the unitary evolution operator of the whole becomes a product of two commuting unitary operators, each of which is the evolution operator associated with the deformation of the time slice in the accessible or the inaccessible region. In this sense, the entanglement entropy of any local field is a quantity attached to the boundary of the accessible region.

### A Entanglement entropy in flat spacetime

First, we consider the Minkowski vacuum of a massless conformal scalar field in flat two dimensional spacetime and calculate the entanglement entropy. The metric of flat spacetime is written as

$$ds^2=-dUdV.$$ (27)

We want to compute the entanglement entropy of the Minkowski vacuum when the accessible region is the interval between the point (inner boundary) $`P_0=(U_0,V_0)`$ and the point (outer boundary) $`P_1=(U_1,V_1)`$. To carry out this calculation, we need to expand the Minkowski vacuum in states which live in the Hilbert space associated with the accessible region and states which live in the Hilbert space associated with the inaccessible region. Such a decomposition can be achieved by introducing a Rindler chart such that the point $`P_0`$ corresponds to the bifurcation point. This was done by Fiola et al., following Unruh's calculation. We can then derive the entanglement entropy by the standard procedure using Eqs. (25) and (26). The result is

$$S=\frac{1}{6}\left[\left(\mathrm{ln}D-\mathrm{ln}ϵ_0\right)+\left(\mathrm{ln}D-\mathrm{ln}ϵ_1\right)\right],$$ (28)

where $`D`$ is the size of the accessible region, defined by $`D^2=|(V_1-V_0)(U_1-U_0)|`$, and $`ϵ_0`$ and $`ϵ_1`$ are the short distance cutoffs at $`P_0`$ and $`P_1`$, respectively. Next, we examine the change of entropy between two adjacent time slices. We consider the case in which the outer boundary $`P_1`$ is fixed and the inner boundary $`P_0`$ moves along the null line $`U_0=\text{constant}`$. Then we get

$$\frac{dS}{dV_0}\bigg|_{U_0=const.}=-\frac{1}{6}\frac{1}{V_1-V_0}<0,$$ (29)

where the proper lengths $`ϵ_0`$ and $`ϵ_1`$ are held fixed. This decrease of the entanglement entropy can be understood as a result of the decrease of the size of the accessible region (see Sec. V for a more intuitive explanation of this result).

### B Entanglement entropy in curved spacetime

In this subsection, we generalize the result of the previous subsection to the case of curved spacetime.
Since any two dimensional spacetime is conformally flat, we can write the metric as

$$ds^2=-\widehat{\mathrm{\Omega }}^2d\widehat{U}d\widehat{V}=\widehat{\mathrm{\Omega }}^2d\widehat{s}^2.$$ (30)

When we take the conformal vacuum associated with $`\widehat{U}`$ and $`\widehat{V}`$ as the quantum state, the expression Eq. (28) can be used as it stands, because the spacetime with the metric $`d\widehat{s}^2`$ is flat. Assuming that the accessible region is the interval between $`\widehat{P}_0=(\widehat{U}_0,\widehat{V}_0)`$ and $`\widehat{P}_1=(\widehat{U}_1,\widehat{V}_1)`$, the entanglement entropy is given by

$$S_{\widehat{\mathrm{\Omega }}}=\frac{1}{6}\mathrm{ln}\frac{\left|\left(\widehat{V}_1-\widehat{V}_0\right)\left(\widehat{U}_1-\widehat{U}_0\right)\right|}{\widehat{ϵ}_0\widehat{ϵ}_1}=\frac{1}{6}\mathrm{ln}\widehat{\mathrm{\Omega }}_0+\frac{1}{6}\mathrm{ln}\widehat{\mathrm{\Omega }}_1+\frac{1}{6}\mathrm{ln}\left|\left(\widehat{V}_1-\widehat{V}_0\right)\left(\widehat{U}_1-\widehat{U}_0\right)\right|-\frac{1}{3}\mathrm{ln}ϵ,$$ (31, 32)

where in the second expression we rewrote the short distance cutoffs $`\widehat{ϵ}_0`$ and $`\widehat{ϵ}_1`$ of the unphysical spacetime with metric $`d\widehat{s}^2`$ in terms of the proper lengths, $`ϵ_i=\widehat{\mathrm{\Omega }}_i\widehat{ϵ}_i`$ ($`i=0,1`$), and set $`ϵ_0=ϵ_1=ϵ`$. Thus, the change in entropy as we move the inner boundary $`\widehat{P}_0`$ along the future horizon $`\mathcal{H}^+`$ with fixed proper length $`ϵ`$ and fixed outer boundary $`\widehat{P}_1`$ is

$$\frac{dS_{\widehat{\mathrm{\Omega }}}}{dx_0^+}=\frac{1}{6}\frac{\partial _+\widehat{\mathrm{\Omega }}}{\widehat{\mathrm{\Omega }}}\bigg|_{\mathcal{H}^+}-\frac{1}{6}\frac{\partial _+\widehat{V}_0}{\widehat{V}_1-\widehat{V}_0}.$$ (33)

We apply this result, Eq. (33), to the Hartle-Hawking state and the Unruh one. For the Hartle-Hawking state, substituting $`\widehat{\mathrm{\Omega }}=\mathrm{\Omega }_{HH}`$ in Eq. (33) and using the relation $`dV_0=\kappa V_0dx_0^+`$, we obtain

$$\frac{dS_{HH}}{dx_0^+}=-\frac{1}{6}\frac{\kappa V_0}{V_1-V_0}<0.$$ (34)

Against our intuition, this result shows that the entropy $`S_{HH}`$ decreases as time elapses, because of the decrease of the size of the accessible region. For the Unruh state, substituting $`\widehat{\mathrm{\Omega }}=\mathrm{\Omega }_U`$ in Eq. (33) and using $`x^+`$ for $`\widehat{V}`$, we obtain

$$\frac{dS_U}{dx_0^+}=\frac{\kappa }{12}-\frac{1}{6}\frac{1}{x_1^+-x_0^+}=\frac{\pi }{6}T_{BH}-\frac{1}{6}\frac{1}{x_1^+-x_0^+}.$$ (35, 36)

As with the interpretation of Eq. (24), the first term in Eq. (36) can be understood as the entropy density of a right-moving one dimensional massless gas with temperature $`T_{BH}`$, and can be interpreted as the entropy production rate due to the Hawking radiation (Appendix A). Since the second term gives a negative contribution, the R.H.S. of Eq. (36) does not have a definite sign. In fact, this term grows without bound as the time slice approaches a null one. Note that the first term in Eq. (36), which has a natural interpretation in terms of the Hawking radiation, appears because we fix the proper short distance cutoff $`ϵ`$, rather than the cutoff $`\widehat{ϵ}`$ of the unphysical spacetime.
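The horizon values entering Eqs. (34) and (36) follow from the near-horizon behavior of the conformal factors. In the sketch below (ours), we use the near-horizon form $`\xi ^2=e^{\kappa (x^+-x^-)}`$, an assumption valid only close to $`\mathcal{H}^+`$ where $`\partial _+\mathrm{ln}\,\xi ^2=\kappa `$; it gives $`\partial _+\mathrm{ln}\,\mathrm{\Omega }_{HH}=0`$ and $`\partial _+\mathrm{ln}\,\mathrm{\Omega }_U=\kappa /2`$, i.e. the first terms of Eqs. (34) and (36).

```python
import sympy as sp

xm, xp, kappa = sp.symbols('x_minus x_plus kappa', positive=True)

xi2 = sp.exp(kappa*(xp - xm))              # near-horizon form: d_+ ln xi^2 = kappa
UVabs = sp.exp(kappa*(xp - xm))/kappa**2   # |U V|, with |U| = e^{-kappa x^-}/kappa etc.

Om2_HH = xi2/(kappa**2*UVabs)              # Eq. (21): identically 1 near the horizon
Om2_U = Om2_HH*sp.exp(kappa*xp)            # Eq. (22)

for Om2 in (Om2_HH, Om2_U):
    print(sp.simplify(sp.diff(sp.log(Om2), xp)/2))   # d_+ ln Omega: 0, then kappa/2
```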
## V Physical interpretation of our result

We summarize the results for the two typical vacuum states in terms of the change in free energy. Using the relation $`\beta _{BH}=2\pi /\kappa `$ and the definition $`F_M\equiv E_M-\beta _{BH}^{-1}S_M`$, the change in free energy is given by

$$\frac{dF_{HH}}{dx_0^+}=\frac{dE_{HH}}{dx_0^+}-\frac{\kappa }{2\pi }\frac{dS_{HH}}{dx_0^+}=+\frac{\kappa ^2}{24\pi }\frac{V_0}{V_1-V_0}>0$$ (37)

for the Hartle-Hawking state, and

$$\frac{dF_U}{dx_0^+}=\frac{dE_U}{dx_0^+}-\frac{\kappa }{2\pi }\frac{dS_U}{dx_0^+}=-\frac{\kappa ^2}{48\pi }+\frac{\kappa }{24\pi }\frac{1}{x_1^+-x_0^+}$$ (38)

for the Unruh state. If we take the limit in which our accessible region extends to spatial infinity, i.e., $`V_1,x_1^+\rightarrow \mathrm{\infty }`$, we obtain the desired result for the GSL. However, provided that we keep the accessible region finite, the above results show that the free energy (total entropy) does not necessarily decrease (increase), and the time-derivative form of the GSL does not always hold in a two dimensional eternal black hole background. Note that the second term in Eq. (38) can grow unboundedly as time evolves. We can recognize that the violation of the GSL is caused by the decrease of the entanglement entropy of the fields associated with the decrease of the size of the accessible region. The change in the entanglement entropy comes from two parts: one is the ultraviolet divergent term, which represents short distance correlations between the modes near the horizon; the other is the term associated with the size of the accessible region, which contains long distance correlations between the modes far inside and far outside the horizon (hereafter we call this the infrared divergent term). The ultraviolet divergent term can be interpreted as the entropy production due to the Hawking radiation for the Unruh state, and it can be thought to give a non-negative contribution to the change in total entropy. On the other hand, the infrared divergent term gives a negative contribution, due to the decrease of the size of the region outside the event horizon, and it is just this behavior that causes the violation of the GSL. When the outer boundary lies at a finite distance, we do not observe the whole external region outside the horizon. One might therefore think that the violation of the GSL is brought about by interaction with the field degrees of freedom in the remaining external region that we do not observe; that is, one might suspect that the composite system composed of the black hole and the field degrees of freedom in the accessible region is not isolated. However, since we fix the outer boundary, there is no exchange of heat, work, and so on through the interaction between the composite system and the rest. Therefore, the composite system is essentially isolated, and our results, Eqs. (37) and (38), point to a violation of the GSL. Fiola et al. considered the GSL for a black hole formed by gravitational collapse using the RST model and concluded that the GSL holds in the RST model under suitable conditions. Note that our conclusion differs from theirs, in spite of the fact that the entanglement entropy of the fields is used as the quantity $`S_M`$ in both cases. Of course, there are various differences between these works. In particular, a main difference is the contribution of the third term in Eq. (32) (the last term in their Eq. (75)) to the expression $`dS_M/dx_0^+`$.
That is, the behavior of the aforementioned infrared divergent term: the contribution in their case is always positive, while in ours it is negative. This difference originates from the existence of a reflecting boundary, i.e., from the difference of the boundary conditions. In the RST model, they need to impose a reflecting boundary condition at the "central point", beyond which the dilaton becomes imaginary. Therefore, their quantum state of the scalar field has correlations between the right-moving modes and the left-moving ones. In the eternal black hole background, this corresponds to the case in which the initial state is prepared with correlations between the modes on $`\mathcal{H}^{-}`$ and those on $`\mathcal{I}^{-}`$, while the states in our investigation have no correlation between them. This brings about the difference in the dependence of $`dS_M/dx_0^+`$ on $`P_0`$ in each case. (So if we imposed a suitable boundary condition, we could reproduce a result corresponding to theirs.) Next, we give a more intuitive explanation of this difference by using the quantum correlation between two wave packets of the matter fields. We first consider the case of an eternal black hole, which is an example of a spacetime without a reflecting boundary. We take the conformal vacuum states, consider two adjacent time slices $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$, and focus on one of the most correlated pairs $`(A,\overline{A})`$ of the left-moving modes, specified by equal null distance $`\mathrm{\Delta }V`$ from $`V=0`$ (Fig. 2). (The same construction is familiar from Minkowski spacetime; it can be applied to the black hole by choosing a suitable basis: the most entangled pairs lie at equal null distance from $`v=v_0`$, or from $`V=0`$, for a black hole with, or without, a boundary.) While we can access one of them ($`A`$) on $`\mathrm{\Sigma }_1`$, we can access neither of them on $`\mathrm{\Sigma }_2`$ due to the existence of the horizon $`\mathcal{H}^+`$. Thus, the contribution of this pair to the entanglement entropy exists on $`\mathrm{\Sigma }_1`$ and vanishes on $`\mathrm{\Sigma }_2`$. The same argument applies to all the other pairs. Since we move the inner boundary $`P_0`$ of $`\mathrm{\Sigma }`$ along the future horizon $`\mathcal{H}^+`$, this is reflected as a decrease of the entanglement entropy as time evolves. Note that the right-moving modes do not influence the change in entanglement entropy. After all, the decrease of the entanglement entropy for an eternal black hole can also be understood through the decrease of the number of pairs which contribute to the entanglement entropy. Subsequently, we apply the above arguments to a black hole formed by gravitational collapse with a reflecting boundary. (One can also consider a black hole formed by gravitational collapse without a reflecting boundary: the shock wave solution in the CGHS model, for example.) Note that, in contrast to the no-boundary case, a correlation between the right-moving modes and the left-moving ones is induced by the existence of the boundary. This produces an important difference from the no-boundary case. We introduce the null coordinates $`(u,v)`$ and suppose that the formation of the event horizon $`\mathcal{H}^+`$ occurs at $`v=v_0`$.
We focus on one of the most correlated pairs $`(A,\overline{A})`$ of the ingoing modes, specified by equal null distance $`\mathrm{\Delta }v`$ from $`v=v_0`$, and consider two adjacent time slices $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$ (Fig. 3). Since we can access both modes ($`A`$, $`\overline{A}`$) on $`\mathrm{\Sigma }_1`$, the contribution of this pair to the entanglement entropy is zero. But on $`\mathrm{\Sigma }_2`$, since we can access only one of them ($`\overline{A}`$), a nonzero contribution is produced. As the same argument can be repeated for all the other pairs, the entropy increases as time evolves in the collapsing model with a boundary. After all, we can understand the behavior of the infrared divergent term intuitively: it gives a positive contribution to the change in entropy in the cases with a boundary, and a negative contribution in the ones without a boundary. In other words, our work exhibits situations in which the entanglement entropy increases (or decreases) in time.

## VI Summary and Discussion

In this paper, we examined the validity of the GSL for a black hole in two dimensional gravitational theories under the quasistationary approximation. In order to satisfy the quasistationary approximation, we considered the infinitesimal time development of the total entropy of the black hole and the field degrees of freedom outside the horizon. Our approach can be applied to test the validity of the GSL in any two dimensional stationary black hole spacetime which possesses the first law of black hole mechanics, irrespective of models. Making use of the fact that, under the quasistationary approximation, the change in total entropy equals minus the change in free energy of the fields outside the horizon, we calculated the change in free energy for the conformal vacuum states. In particular, we applied the result to the Hartle-Hawking state and the Unruh one in the eternal black hole background. We then showed the differential form of the GSL to be invalid when our accessible region is finite, and to be valid for an infinite accessible region. We recognized that the origin of the violation of the GSL is the decrease of the entanglement entropy of the fields associated with the decrease of the size of the accessible region. However, the behavior of this term is somewhat counterintuitive, because it is usually thought that the total entropy does not change for the Hartle-Hawking state in the eternal black hole background, which describes the thermal equilibrium state between the black hole and the surrounding matter field; it is thus natural to expect that no entropy production occurs for the Hartle-Hawking state. Myers and Hirata et al. examined the validity of the GSL by using the Noether charge method, taking into account the 1-loop quantum back-reaction, and showed that it is satisfied both for the RST model and for a wide class of CGHS-type models. In their analysis, the third term in Eq. (32) does not appear. This might suggest that the infrared divergent term could be dropped for some physical reason, although a further argument for excluding this term is necessary. (Since we consider a massless field, the term proportional to $`\mathrm{ln}\widehat{D}`$ appears in the result (32). If we considered a massive field, this term would not appear: the inverse of the fixed mass of the field, $`1/m`$, would enter the expression instead of $`\widehat{D}`$.)
In any case, it is necessary to subtract this term and redefine the total entropy in order to rescue the GSL. Therefore, from now on we suppose that this subtraction is performed systematically, and we examine the GSL for the new quantity. That is, we define the new entropy by

$$S_{\widehat{\mathrm{\Omega }}}^{}=S_{\widehat{\mathrm{\Omega }}}-\frac{1}{6}\mathrm{ln}\left(\frac{\widehat{D}}{ϵ}\right),$$ (39)

where $`\widehat{D}`$ is the size of the accessible region in the unphysical spacetime in which the conformal vacuum is defined, and $`ϵ`$ is the proper length of the short distance cutoff. Then

$$\frac{dS_{\widehat{\mathrm{\Omega }}}^{}}{dx_0^+}=\frac{1}{6}\frac{\partial _+\widehat{\mathrm{\Omega }}}{\widehat{\mathrm{\Omega }}}\bigg|_{\mathcal{H}^+},$$ (40)

and we obtain the final result

$$\frac{dF_{\widehat{\mathrm{\Omega }}}^{}}{dx_0^+}=-\frac{\kappa }{2\pi }\frac{dS_{\widehat{\mathrm{\Omega }}}^{}}{dx_0^+}+\frac{dE_{\widehat{\mathrm{\Omega }}}}{dx_0^+}=-\frac{1}{12\pi }\frac{\partial _+^2\widehat{\mathrm{\Omega }}}{\widehat{\mathrm{\Omega }}}\bigg|_{\mathcal{H}^+}.$$ (41, 42)

Therefore, the validity of the GSL depends only on the sign of $`\partial _+^2\widehat{\mathrm{\Omega }}`$ at the future horizon $`\mathcal{H}^+`$. If we apply this result to the two typical states above, we obtain the desired results: $`dF_{HH}^{}=0`$ for the Hartle-Hawking state and $`dF_U^{}<0`$ for the Unruh state. However, since we can choose the vacuum state as we like (in other words, we can perform conformal transformations freely), we can violate the GSL by choosing a suitable conformal vacuum satisfying $`\partial _+^2\widehat{\mathrm{\Omega }}|_{\mathcal{H}^+}<0`$, even if we can successfully subtract the infrared divergent term by some physical procedure. Now we consider whether the violation of the GSL occurs over a wide range, in the following sense: the states that violate the GSL are not special ones, and the violation lasts a sufficiently long time. In our case, there seem to exist many states which satisfy both of these conditions. Of course, since the vacuum states that we prepare should be physically reasonable, some requirements follow from physical considerations; for example, the expectation value of the energy-momentum tensor should be finite at $`\mathcal{H}^+`$. But, in practice, this condition does not constrain $`\partial _+^2\widehat{\mathrm{\Omega }}|_{\mathcal{H}^+}`$, and we cannot remove the GSL-violating cases by this criterion. Further, since $`\widehat{\mathrm{\Omega }}`$ is a function of the spacetime point, it is possible to violate the GSL during a sufficiently long time interval by choosing a suitable form of the function. One such example is the case $`\widehat{\mathrm{\Omega }}^2=\xi ^2/\left[\kappa |U|\mathrm{cosh}(\kappa x^+)\right]`$. In this case, the behavior of the energy-momentum tensor is regular at $`\mathcal{H}^+`$ and $`\mathcal{I}^\pm `$, so it seems to be a physically reasonable state. It approaches the Hartle-Hawking state asymptotically at $`\mathcal{H}^+`$ (as $`x^+\rightarrow \mathrm{\infty }`$) while keeping $`\partial _+^2\widehat{\mathrm{\Omega }}|_{\mathcal{H}^+}<0`$. Thus, the time duration during which the violation of the GSL occurs can be made arbitrarily long. This does not seem to be a special case, because one can find many examples with similar behavior.
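Using the near-horizon form of this example, $`\widehat{\mathrm{\Omega }}^2=e^{\kappa x^+}/\mathrm{cosh}(\kappa x^+)`$ (obtained from $`\xi ^2/(\kappa |U|)\rightarrow e^{\kappa x^+}`$ close to $`\mathcal{H}^+`$, as in the earlier near-horizon sketch), one can check numerically that $`\partial _+^2\widehat{\mathrm{\Omega }}<0`$ while $`\widehat{\mathrm{\Omega }}^2\rightarrow 2`$, a constant, so the state indeed becomes Hartle-Hawking-like. A small sympy sketch (ours):

```python
import sympy as sp

xp = sp.symbols('x_plus', positive=True)
kappa = 1                                   # set kappa = 1 for the numerics

# near-horizon conformal factor of the proposed GSL-violating state
Om = sp.sqrt(sp.exp(kappa*xp)/sp.cosh(kappa*xp))

d2Om = sp.diff(Om, xp, 2)
print(d2Om.subs(xp, 1).evalf())             # about -0.26 < 0: dF'/dx_0^+ > 0
print(sp.limit(Om**2, xp, sp.oo))           # -> 2: Omega tends to a constant (HH-like)
```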
Therefore, it seems that the violation of the GSL occurs for a rather wide range of vacuum states unless there exists a "selection rule" requiring all physically acceptable states to satisfy the condition $`\partial _+^2\widehat{\mathrm{\Omega }}|_{\mathcal{H}^+}>0`$. Then, considering that our analysis is independent of models, we have only two choices for resolving the violation of the GSL. One is that the states which satisfy $`\partial _+^2\widehat{\mathrm{\Omega }}|_{\mathcal{H}^+}<0`$, like the above example, should be excluded by some physical requirement that we have not yet identified; this would mean that all physically reasonable states are restricted to those satisfying $`\partial _+^2\widehat{\mathrm{\Omega }}|_{\mathcal{H}^+}>0`$. The other is to improve the entropy formula $`S^{}`$ further; that is, the violation of the GSL would then be attributed to a wrong definition of the entropy. In either case, further investigation is necessary, and the resolution of this difficulty must be left to future work.

###### Acknowledgements.

The authors would like to thank M. Shibao for valuable comments and stimulating discussions. We are also grateful to Professor A. Hosoya for continuous encouragement.

## Appendix A

We can interpret the results, Eqs. (24) and (36), as the energy and entropy production rates due to the Hawking radiation. An observer at null infinity observes quanta distributed per mode and per unit time as

$$<n_\omega >=\frac{\mathrm{\Gamma }_\omega }{\mathrm{exp}(\omega /T_{BH})-1},$$ (A1)

in a two dimensional black hole background with temperature $`T_{BH}`$. The graybody factor for a massless conformal scalar field is $`\mathrm{\Gamma }_\omega =1`$ because there is no scattering. Therefore, the energy production rate is given by

$$\frac{dE_{rad}}{dt}=\frac{1}{2\pi }\int _0^{\mathrm{\infty }}d\omega \,\omega \frac{\mathrm{\Gamma }_\omega }{\mathrm{exp}(\omega /T_{BH})-1}.$$ (A2)

Assuming the canonical distribution, the entropy per mode is given by

$$S_\omega =(1+<n_\omega >)\mathrm{ln}(1+<n_\omega >)-<n_\omega >\mathrm{ln}<n_\omega >,$$ (A3)

and the entropy production rate of the emitted radiation is

$$\frac{dS_{rad}}{dt}=\frac{1}{2\pi }\int _0^{\mathrm{\infty }}d\omega \left[(1+<n_\omega >)\mathrm{ln}(1+<n_\omega >)-<n_\omega >\mathrm{ln}<n_\omega >\right].$$ (A4)

For a massless conformal scalar field ($`\mathrm{\Gamma }_\omega =1`$) these quantities become

$$\frac{dE_{rad}}{dt}=\frac{T_{BH}^2}{2\pi }\int _0^{\mathrm{\infty }}dx\frac{x}{e^x-1}=\frac{\pi }{12}T_{BH}^2,$$ (A5)

$$\frac{dS_{rad}}{dt}=\frac{T_{BH}}{2\pi }\int _0^{\mathrm{\infty }}dx\left[\frac{x}{e^x-1}-\mathrm{ln}(1-e^{-x})\right]=\frac{\pi }{6}T_{BH}.$$ (A6)

These coincide with Eq. (24) and the first term in Eq. (36).
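The two integrals in Eqs. (A5) and (A6) evaluate to $`\pi ^2/6`$ and $`\pi ^2/3`$, respectively. The following sympy sketch (our verification) reproduces both production rates.

```python
import sympy as sp

x, T = sp.symbols('x T_BH', positive=True)

E_int = sp.integrate(x/(sp.exp(x) - 1), (x, 0, sp.oo))            # -> pi**2/6
log_int = sp.integrate(sp.log(1 - sp.exp(-x)), (x, 0, sp.oo))     # -> -pi**2/6
S_int = E_int - log_int                                           # -> pi**2/3

print(sp.simplify(T**2/(2*sp.pi)*E_int))   # pi*T_BH**2/12, Eq. (A5)
print(sp.simplify(T/(2*sp.pi)*S_int))      # pi*T_BH/6,     Eq. (A6)
```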
# Relativistic Outflows in Gamma Ray Bursts

## 1 Introduction

The approximate consensus on gamma ray bursts (GRBs) can be reduced to a few brief statements:

– GRBs are cataclysmic events with an energy release $`10^{51}`$ erg in $`\gamma `$-rays (assuming isotropic emission) at cosmological distances.
– The primary event is a coalescence of two compact objects of stellar origin (neutron stars and black holes, Blinnikov et al. 1984; Paczyński 1992) or an exotic explosion of a single stellar object (hypernova, Paczyński 1998).
– All we see are effects associated with an expanding blast wave (fireball), or a propagating jet, or multiple colliding shocks, with dimensions and time scales a few orders of magnitude larger than those of the primary event (which is itself invisible at the present level of sensitivity; see Mészáros & Rees 1993).

There exist other points of view, of course, e.g., GRBs as sporadic microblazars (Shaviv & Dar 1995a; for criticism of fireball models, see Dar 1998). I will use the fireball paradigm, keeping in mind a jet geometry as an alternative. The principal problems arising in the inhomogeneous fireball and the jet scenarios, as well as the possible underlying physical processes, are similar. There are two classes of GRBs (which could be different phenomena or different modes of the same phenomenon) – short ($`<1`$ s) and long ($`>1`$ s). The discussion below concerns the long GRBs. Due to the large intensity of many bursts we have very rich hard X-ray/soft $`\gamma `$-ray GRB data: excellent light curves and good spectra. However, the data are so diverse and sometimes puzzling that new good data usually complicate the problem rather than clarify it. I will start with the time variability data and try to review the conclusions that can be inferred from temporal properties. Then I will review the spectral properties and discuss possible regimes of emission.

## 2 Time Variability, Phenomenology

The GRB temporal properties worth emphasizing are the following:

2.1. Bimodality. The duration distribution of GRBs, extending over 5 orders of magnitude, has two humps (Meegan et al. 1998), which are believed to correspond to different classes of GRBs, short ($`<1`$ s) and long ($`>1`$ s), separated by a minimum around 2 s. These could also just be different modes of the same phenomenon (sometimes a precursor looking like a short burst is followed by a long burst, Fig. 1e). Our discussion concerns mainly long bursts, which constitute 70-75% of GRBs.

2.2. Diversity. Some events consist of a single smooth pulse of almost standard shape (Fig. 1a), others are very complex and chaotic (Fig. 1f), and some are a combination of smooth pulses and chaotic intervals (Fig. 1c). No distinct morphological classes are found. At first sight, GRB light curves obey no rules.

2.3. Large amplitude of variations. There are strong events with emission episodes separated by quiet intervals. The upper limit for emission between episodes is below $`10^{-3}`$ of the peak flux in some events (see Figs. 1h, 2h). In other terms, the emission can turn off to a very low level and then turn on again.

2.4. Composite structure. Any event is the sum of elementary pulses which are additive and can overlap. This statement is difficult to prove; it is, however, a robust impression. It is more or less obvious for events consisting of a few pulses (Fig. 1b) and seems a reasonable generalization for chaotic events. The most erratic events could consist of $`\sim 1000`$ pulses (Stern & Svensson 1996).
For attempts to decompose bursts into single pulses, see Norris et al. (1996).

2.5. Absence of a starting mark. There is no typical feature in the light curves that could be associated with the primary event. A burst can begin in very different ways: a slow smooth rise (Fig. 1a,b,f), a sharp abrupt rise (Fig. 1h), a weak precursor separated by tens of seconds from the main event (Fig. 1d), etc.

2.6. Weak and slow time evolution. The direct time evolution of complex bursts from beginning to end is slow and weak (Fig. 1c,f). There are only slow statistical trends: hard-to-soft evolution is more frequent than soft-to-hard (Ford et al. 1995), and the highest peak of a burst has a statistical tendency to appear at the beginning. A direct time dependence should exist, but it is not easy to extract, and its characteristic scale exceeds 100 s. Summarizing 2.5 and 2.6, we can state that the primary event leaves no mark and we cannot define a "zero time" for the event. These are phenomenological facts which one can derive just by looking at many time profiles or plotting the simplest distributions. In the next section, I will consider a more quantitative description of the time variability.

## 3 Time Variability: Wide Range of Time-scales and Self-Similarity

Stern (1996) found that the average peak-aligned profile of all BATSE GRBs has a stretched exponential (SE) shape:

$$I=I_0\mathrm{exp}\left[-(t/t_0)^{1/3}\right],$$ (1)

where $`t`$ is the time since the highest peak of the event, and $`t_0`$ is a time constant $`\approx 0.5`$ s. This dependence extends over 3 orders of magnitude in time (0.2-200 s) and over 2.5 orders in the amplitude of the average signal (Fig. 3). It is worth noting that the similar distribution for solar flares is not such a good SE: if one tries to describe the average time profile of solar flares with an SE, one obtains an index close to 1/2 instead of 1/3 (Stern 1996). Stretched exponentials are quite common in complex dynamical systems with a wide spectrum of variations. An example which will be discussed below is turbulence, where some distributions have an SE shape (Ching 1991; Jensen, Paladin, & Vulpiani 1992). We can speculate that SEs are associated with near-critical systems where the criticality is not complete. In the case of exact criticality, the characteristic scale (e.g., the time constant) disappears and all distributions should convert into power laws. The SE does contain the time constant $`t_0`$. This does not mean that we have found a characteristic time scale: an SE can be associated with a truncated power-law spectrum, and then $`t_0`$ is some function of $`t_{\mathrm{min}}`$ and $`t_{\mathrm{max}}`$ at which the system changes its behavior. Indeed, the average power density spectrum (PDS) of long bursts is a truncated power law, $`P(f)\propto f^{-1.67}`$ (Beloborodov, Stern & Svensson 1999), extending between 0.02 and 2 Hz (Fig. 4). The amusing fact is that the average PDS has exactly the same slope ($`-5/3`$) as the Kolmogorov spectrum describing the energy distribution in developed turbulence. This will be discussed below. The low frequency turnover is associated with the well known turnover in the duration distribution of GRBs ($`\sim 30`$ s), which in turn is associated with some global property of the phenomenon. The high frequency turnover is something new. It should be associated with some nonlinearity in the physical processes appearing at a certain scale of the emitting systems.
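A minimal numerical transcription of Eq. (1) and of the truncated power-law PDS (our own sketch; the band edges and the behavior outside them are schematic assumptions, not fitted values):

```python
import numpy as np

def average_profile(t, I0=1.0, t0=0.5):
    """Stretched-exponential average peak-aligned profile, Eq. (1); t in seconds."""
    return I0*np.exp(-(t/t0)**(1.0/3.0))

def average_pds(f, f_lo=0.02, f_hi=2.0):
    """Truncated power law P(f) ~ f^(-5/3) between the two turnover frequencies."""
    f = np.asarray(f, dtype=float)
    return np.where((f >= f_lo) & (f <= f_hi), f**(-5.0/3.0), np.nan)

t = np.logspace(np.log10(0.2), np.log10(200.0), 5)   # the range where Eq. (1) was measured
print(average_profile(t))
print(average_pds([0.01, 0.1, 1.0]))
```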
The high frequency turnover may be related to the compactness parameter of the local events associated with the emission of separate pulses. What can we conclude from all these facts?

* We probably deal with a complex dynamical system which generates a wide spectrum of features exhibiting some scaling invariance (self-similarity) over at least 2 orders of magnitude.
* Regular extended distributions indicate that all bursts, despite their diversity, can be considered as different random realizations of the same stochastic process.
* The underlying stochastic process is close to a near-critical regime. This is what we need in order to observe a huge diversity of GRBs; otherwise we would have to assume very different conditions in different bursts. Near-criticality provides large fluctuations under stable conditions.

These conclusions are speculative, of course, and can hardly be formulated quantitatively. Nevertheless, a toy pulse avalanche model of Stern & Svensson (1996), constructed on the basis of these assumptions, gives a successful quantitative statistical description of GRBs (including the stretched exponential average profile and the power-law average PDS). The model in a near-critical regime reproduces the diversity of GRBs for the same set of parameters (a minimal toy sketch in this spirit appears at the end of the next section). The success of the model does not mean that the pulse avalanche model is valid and is the only possibility; it rather means that the approach based on the above conclusions is reasonable.

## 4 Underlying Scenario: A Recurrent Central Engine versus a Turbulent or Inhomogeneous Fireball

What physics can be behind the stochastic process discussed in the previous section? The best studied scenario of GRB emission is based on a relativistic expanding fireball (Cavallo & Rees 1978) energized by the merging of two compact objects. If the baryon loading of the fireball is small, then it must be ultrarelativistic (Paczyński 1990). In the early stage, the fireball cannot emit efficiently just because the radiation is trapped due to the very large optical depth, and almost all energy goes into kinetic energy (see Mészáros & Rees 1993). This agrees with fact 2.5: we do not see any marker of the primary event. Later, when the fireball becomes optically thin and interacts with the interstellar medium, it emits a GRB through shock particle acceleration. This scenario satisfies the energy requirements and can reproduce the proper time scale of tens of seconds at Lorentz factors $`\mathrm{\Gamma }\sim 100`$-300. What is missing in this scenario, in its straightforward version, is the complex stochastic time behavior with the properties summarized above. Fenimore, Madras, & Nayakshin (1996) and Fenimore, Ramirez, & Summers (1998) found an evident controversy in the simplest model of a single expanding shell. If the expanding relativistic homogeneous shell emits an instantaneous flash, the observer will see an extended pulse with a characteristic width $`\sim t-t_0`$, where $`t`$ is the observation time of the beginning of the pulse and $`t_0`$ is the observation time of the primary event (if it were observed) producing the expanding shell. Therefore, if we associate a pulse in a GRB with a flash of a single relativistic shell, we should see nothing earlier than $`t\sim \mathrm{\Delta }t`$, where $`\mathrm{\Delta }t`$ is the characteristic time scale of the pulse. This is apparently not the case in many complex GRBs. Another argument against a single explosion is the low filling factor (i.e., the ratio of the area of the emitting regions to the total fireball surface) derived from the time variability (Fenimore et al.
1998). The low filling factor leads to a low efficiency (Piran & Sari 1997). These problems gave support to "recurrent central engine" models, which have become very popular (e.g., Rees & Mészáros 1994; Sari & Piran 1997). The recurrent central engine is usually described as a long-lived (up to hundreds of seconds) accreting system where an accretion disc is formed by a disrupted neutron star. The system emits relativistic shocks that collide, producing pulses of gamma-ray emission (Kobayashi, Piran, & Sari 1997; Daigne & Mochkovich 1998). How can we then reproduce a wide power-law PDS from the central engine? Probably there is no way to do this with a straightforward internal shock model. Light curves simulated with internal shocks have nothing in common with real bursts: they have an intrinsic time constant and a very different Fourier PDS, with a power-law asymptote of the wrong slope, $`P(f)=\mathrm{const}`$ instead of $`P(f)\propto f^{-5/3}`$. In principle, an accreting system can provide a power-law PDS; e.g., the Cyg X-1 PDS is a power law, $`P(f)\propto f^{-1}`$, over 1.5 decades (from 0.03 Hz to 1 Hz, see Belloni & Hasinger 1990). However, we cannot see the time profile produced by the central engine itself, as the history of accretion will be reprocessed by internal shocks. The Kolmogorov PDS can hardly be obtained straightforwardly with internal shocks because too much power has to be transferred to low frequencies. Maybe one can invent some rule for the ejection of internal shocks that reproduces the Kolmogorov slope; however, this would be somewhat farfetched. The long-lived central engine helps to solve some problems, such as the very slow (if existing) evolution of temporal and spectral properties in complex bursts. Nevertheless, we need something else, more complicated than shock collisions, to produce the self-similar behavior over 2 decades of time scales. As was emphasized in the previous section, we need a complex dynamical process for this. I would suggest that we should search for such a process in the shock evolution rather than in collisions of internal shocks. We can suggest at least two suitable dynamical processes: MHD turbulence, which is very natural in a relativistic outflow, and dynamical instabilities, most probably the Rayleigh-Taylor instability. Both can generate a wide range of irregularities with a high energy density contrast. Reconnection of the magnetic field generated by a turbulent dynamo is certainly a very efficient way to dissipate the energy into gamma rays. Some arguments in favor of this scenario can be borrowed from solar flares. Their time behavior resembles GRBs (while it still differs from GRBs at a quantitative level: solar flares have a different average time profile and a different average PDS), and we do know that solar flares result from reconnection of a magnetic field with a complex structure. Lu and Hamilton (1991) described power-law distributions of flare energy release with a cellular automaton model which is also a kind of near-critical pulse avalanche. Summarizing the issue: the time variability of GRBs can hardly appear straightforwardly as a result of internal shock collisions or as a consequence of variations of the external medium. The time behavior should be associated with a dynamical process that makes the outflow strongly inhomogeneous over a wide range of scales, giving rise to a kind of fractal pattern. The inhomogeneous structure of the outflow removes the main objections against a single explosion scenario.
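To make the "near-critical pulse avalanche" idea concrete, here is a deliberately simplified toy generator in the spirit of Stern & Svensson (1996). All numbers (branching mean, delay and width distributions, pulse shape) are illustrative assumptions of ours, not the published parameters; the point is only that a branching process with mean number of children $`\mu `$ close to 1 yields light curves ranging from single pulses to long chaotic events under fixed conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_avalanche(mu=0.95, tau=1.0, t_max=200.0, dt=0.01):
    """Toy near-critical pulse avalanche: each pulse spawns a Poisson(mu)
    number of delayed child pulses; mu -> 1 is the critical point."""
    t = np.arange(0.0, t_max, dt)
    flux = np.zeros_like(t)
    pending = [(0.0, tau)]                  # (start time, time constant) of the seed
    n_pulses = 0
    while pending and n_pulses < 5000:      # cap to keep the toy bounded
        t0, w = pending.pop()
        n_pulses += 1
        if t0 > t_max:
            continue
        rise = np.clip(t - t0, 0.0, None)
        flux += (t >= t0)*np.exp(-rise/w)   # FRED-like (fast rise, exp decay) pulse
        for _ in range(rng.poisson(mu)):
            # children are delayed and get log-normally rescaled time constants
            pending.append((t0 + w*rng.exponential(2.0), w*rng.lognormal(0.0, 0.5)))
    return t, flux

t, flux = toy_avalanche()
print(flux.max(), (flux > 1e-3*flux.max()).mean())   # peak and duty cycle of one realization
```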
An argument in favor of the recurrent central engine is the absence of evident evolution in long events (Fenimore 1999). However, some evolution probably exists (e.g., Ford et al. 1995), and this argument can hardly be used as a proof.

## 5 Lorentz Factor, Compactness and Emission Regime

The cosmological origin of GRBs unavoidably implies relativistic motion of the emitting region towards the observer. Let us consider an emission episode with a luminosity $`L=10^{50}`$ erg/s and a characteristic variability time scale of 1 s. The size of the emitting region should not be greater than 1 light second, i.e., $`r\lesssim 3\times 10^9`$ cm. Then, assuming no relativistic motion, we obtain a compactness parameter

$$\mathrm{}=\frac{L\sigma _T}{m_ec^3r}\sim 10^{12}$$

and an equilibrium (blackbody) temperature $`T=(L/4\pi r^2\sigma )^{1/4}\approx 30`$ keV. We can hardly see anything except the 30 keV Planck spectrum under this assumption, and such a system cannot be stable – it should explode. Now let us describe the emission region as a blob, quasi-spherical in the comoving frame, moving towards the observer with a Lorentz factor $`\mathrm{\Gamma }`$. Then the comoving luminosity is $`L_c=L\mathrm{\Gamma }^{-4}`$ ($`\mathrm{\Gamma }^{-2}`$ from angular collimation, $`\mathrm{\Gamma }^{-1}`$ from time transformation and $`\mathrm{\Gamma }^{-1}`$ from blueshift), where $`L`$ is the apparent luminosity (assuming isotropy). For the size of the emission region we take $`r_c=r\mathrm{\Gamma }`$. Then the comoving compactness is

$$\mathrm{}_c=\mathrm{}\mathrm{\Gamma }^{-5}=10^{12}\mathrm{\Gamma }^{-5}.$$ (2)

If we want to deal with simple linear physics describing the gamma-ray emission, we should take $`\mathrm{\Gamma }>100`$. Then we have no problem with intense pair production and can apply optically thin synchrotron-self-Compton models (see, e.g., Panaitescu & Mészáros 1998). This is the most popular approach, and the constraint $`\mathrm{\Gamma }>100`$ is generally accepted. The other comoving values are: the energy density at the emitting surface,

$$ϵ_c\approx 3\times 10^{19}\mathrm{\Gamma }^{-6}\mathrm{erg}/\mathrm{cm}^3;$$ (3)

the equipartition magnetic field,

$$H_c\approx 3\times 10^{10}\mathrm{\Gamma }^{-3}\mathrm{G};$$ (4)

the equilibrium temperature,

$$T_c\approx 30\mathrm{\Gamma }^{-3/2}\mathrm{keV};$$ (5)

and the temperature blueshifted to the observer frame,

$$T\approx 30\mathrm{\Gamma }^{-1/2}\mathrm{keV}.$$ (6)

The global size of the relativistic fireball (or the distance from the source, having in mind a jet geometry), for a characteristic emission time of 300 s, is

$$R\approx 10^{13}\mathrm{\Gamma }^2\mathrm{cm}.$$ (7)

One can obtain a large variety of physical conditions depending on the Lorentz factor. On the other hand, there are arguments for a small dispersion of the Lorentz factor in different GRBs (e.g., the sharp break in the average PDS, Beloborodov, Stern & Svensson 1999). What is the typical Lorentz factor? This is one of the most important issues in the whole GRB problem. At a huge Lorentz factor, $`\mathrm{\Gamma }\sim 1000`$, the blast wave passes a distance of order a parsec during the emission phase. This value was assumed in the model of Shaviv & Dar (1995b) describing GRB emission as upscattering of starlight by a $`\mathrm{\Gamma }\sim 1000`$ blast wave crossing a globular star cluster. The model gives a wrong description of the GRB time variability (e.g., fact 2.3 cannot be explained).
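The scalings of this section are easy to tabulate. The helper below (ours; constants in CGS, the prefactors being the rounded values of Eqs. (2)-(7)) shows how strongly the comoving conditions depend on $`\mathrm{\Gamma }`$.

```python
SIGMA_T = 6.652e-25    # Thomson cross section, cm^2
M_E = 9.109e-28        # electron mass, g
C = 2.998e10           # speed of light, cm/s

def comoving_conditions(gamma, L=1e50, r=3e9):
    """Comoving quantities of Eqs. (2)-(7) for apparent luminosity L [erg/s]
    and emitting-region size r [cm] set by ~1 s variability."""
    ell = L*SIGMA_T/(M_E*C**3*r)            # lab-frame compactness, ~1e12
    return {
        'ell_c':           ell*gamma**-5,   # Eq. (2)
        'eps_c [erg/cm3]': 3e19*gamma**-6,  # Eq. (3)
        'H_c [G]':         3e10*gamma**-3,  # Eq. (4)
        'T_c [keV]':       30.0*gamma**-1.5,  # Eq. (5)
        'T_obs [keV]':     30.0*gamma**-0.5,  # Eq. (6)
        'R [cm]':          1e13*gamma**2,   # Eq. (7), for a 300 s emission time
    }

for g in (30, 100, 300):
    print(g, comoving_conditions(g))
```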
It seems that we do not need such a Lorentz factor for any other purposes and, taking into account some other problems (e.g., the requirement of a good vacuum, $`n<10^{-5}\mathrm{cm}^{-3}`$, for long events), we will not consider this possibility seriously. The main choice is between large ($`\mathrm{\Gamma }\sim 100`$–300) and moderate ($`\mathrm{\Gamma }\sim 10`$–50) Lorentz factors. This choice will define the emission regime: in the first case this should be optically thin synchrotron (synchrotron–self-Compton) emission; in the second case, intensive pair production should take place and we have a much more complicated nonlinear, optically thick emitting system. ## 6 $`\mathrm{\Gamma }\sim 100`$–300 versus $`\mathrm{\Gamma }\sim 10`$–50, or Optically Thin versus Optically Thick Emission A large Lorentz factor ($`\mathrm{\Gamma }\sim 300`$) is attractive because it can explain the GRB emission as a result of the interaction of the blast wave with the interstellar medium. Indeed, the kinetic energy of the interstellar gas swept up by the fireball with a Lorentz factor $`\mathrm{\Gamma }`$ at the observer time $`t`$ is $$E_{KE}=6\times 10^{49}t_{100}^3\mathrm{\Gamma }_{100}^7n\mathrm{erg},$$ (8) where $`n`$ is the gas density in $`\mathrm{cm}^{-3}`$, $`t_{100}=t/100`$ s, and $`\mathrm{\Gamma }_{100}=\mathrm{\Gamma }/100`$. Accepting the value $`t=100`$ s for the emission phase (for a recurrent central engine model one can afford a slightly smaller $`t`$; for a single explosion model one must take $`t>100`$ s in some cases) and $`n=0.1`$ $`\mathrm{cm}^{-3}`$ we obtain $`E_{KE}\sim 10^{52}`$ erg for $`\mathrm{\Gamma }=300`$. Under such conditions we should see a strong energy dissipation from the interaction between the fireball and the interstellar medium within the first 100 s. If $`n=10^{-4}`$ $`\mathrm{cm}^{-3}`$ (a GRB in a galactic halo) then one can slightly adjust $`t`$ and $`\mathrm{\Gamma }`$ to obtain a considerable deceleration of the fireball in a reasonable time. The interaction between the fireball and the external medium solves the free energy problem: the free energy source for the gamma-ray emission is just the bulk kinetic energy of the fireball. The most popular scheme of the emission is shock particle acceleration and synchrotron–self-Compton radiation (e.g., Tavani 1996; Panaitescu & Mészáros 1998; Dermer 1998). One can see from Eq. (2) that pair production is negligible at such high $`\mathrm{\Gamma }`$ and the electron scattering optical depth is small. Therefore we deal with optically thin, linear emission. The physics involved is well studied and easy to work with. However, we have a number of very serious problems with this simple linear physics. The first difficult question is “what causes the specific time variability of GRBs?”. Is it inhomogeneities of the external medium? How can one then explain the stretched exponential average time profile and the power-law PDS? With a fractal structure of the interstellar medium? How can we then explain the huge amplitude of variations (Fig. 2)? We certainly need some essentially nonlinear system to produce rapid variations by 3 orders of magnitude. A process which can produce both a large dynamical range of variations and a wide range of time scales is magnetic reconnection (we note that it works in a similar way in solar flares). An equipartition magnetic field with a complex geometry can be generated by a turbulent dynamo. Then such a field can gain additional energy with compression in the deceleration stage and reconnect.
This could be a solution of the problem of time variability for $`\mathrm{\Gamma }\sim 100`$–300 (we are unfortunately not able to solve this problem at a quantitative level). The next problem arises from the GRB spectra. In both variants of energy release at $`\mathrm{\Gamma }\sim 100`$–300, shock acceleration and magnetic reconnection, the gamma-ray emission is blueshifted optically thin synchrotron radiation. Real GRB spectra are well approximated by the Band expression (Band et al. 1993) consisting of two asymptotic power laws: $$dN/dE\propto E^\alpha $$ at small $`E`$, $$dN/dE\propto E^\beta $$ at large $`E`$ and $$dN/dE\propto E^\alpha e^{-E/E_p}$$ in the intermediate range. $`E_p`$ parameterizes the break energy. At large negative $`\beta `$, this expression resembles the hard X-ray spectra of AGNs, especially if one subtracts the reflection hump (see Zdziarski et al. 1997). $`E_p`$ is associated with the pair temperature in that case. In AGNs, we have $`\alpha `$ close to $`-2`$ ($`-1.9`$ is the most typical value, and $`E_p`$ is 60–150 keV). The high energy spectrum, $`E\gg E_p`$, in AGNs cannot be reconstructed because of poor photon statistics. In GRBs the soft part is considerably harder: $`\alpha `$ varies between $`-2`$ and $`+1`$ (Band et al. 1993). There are some fits of spectra with $`\alpha \approx +1`$ but they have large errors, a short fitting interval, and a low $`E_p`$ (Crider et al. 1997). The largest $`\alpha `$ that one can trust is near 0 (Preece 1998, private communication). $`E_p`$ is also larger than that for AGNs and variable within a single burst. The highest values of $`E_p`$ are above the BATSE range ($`\sim 1.5`$ MeV), the lowest are below the BATSE range ($`\sim 30`$ keV), and for the main fraction of spectra, 100 keV $`<E_p<`$ 500 keV (Band et al. 1993). The typical hard energy slope is $`-2.8<\beta <-1.7`$ (clustering around $`\beta \approx -2.1`$, Preece et al. 1996), sometimes much steeper, consistent with a pure exponential cutoff ($`\beta \to -\infty `$). Summarizing the GRB spectral phenomenology: (i) the GRB low energy (hard X-ray) spectra are considerably harder than the AGN spectra and have a break at a higher energy; (ii) the GRB spectra are much more diverse than the AGN spectra, nevertheless they have a typical shape: a harder low energy power law, an exponential break, and a softer high energy power law; (iii) the spectra evolve during a single pulse. A pulse starts with a maximum $`E_p`$; then $`E_p`$ decreases, sometimes by a factor of a few (Ford et al. 1995). How do optically thin synchrotron models fit this spectral pattern? The first problem appears with the low energy spectra. A synchrotron model cannot give a spectrum with $`\alpha >-2/3`$, while there are considerably harder spectra, $`\alpha =0`$ at least. This issue is studied by Preece et al. (1998). The second problem is the spectral break, sharp enough to be fitted with an exponential (Band et al. 1993). It implies a very sharp electron energy distribution, which remains sharp during rapid evolution (note that the synchrotron photon energy is proportional to the square of the electron Lorentz factor). From my point of view these problems are fatal for the synchrotron shock models. We should search for less linear and less trivial physics to explain the GRB emission (especially if we want to explain the nontrivial time variability at the same time). I started the discussion with a comparison between GRB and AGN spectra, and this is more motivated than it could seem at first sight.
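For reference, a minimal implementation of the Band et al. (1993) shape summarized above (the parameter values are typical ones quoted in the text, not a fit; I use the standard smoothly joined form with $`E_p=(2+\alpha )E_0`$):

```python
import numpy as np

def band(E, alpha=-1.0, beta=-2.1, Ep=300.0, A=1.0):
    """Band et al. (1993) photon spectrum dN/dE; E and Ep in keV.
    Ep is the nu F_nu peak energy, Ep = (2 + alpha) * E0."""
    E = np.asarray(E, dtype=float)
    E0 = Ep / (2.0 + alpha)
    Eb = (alpha - beta) * E0                 # low/high transition energy
    low = A * (E / 100.0)**alpha * np.exp(-E / E0)
    high = (A * ((alpha - beta) * E0 / 100.0)**(alpha - beta)
            * np.exp(beta - alpha) * (E / 100.0)**beta)
    return np.where(E < Eb, low, high)

E = np.logspace(0, 4, 2000)                  # 1 keV to 10 MeV
peak = E[np.argmax(E**2 * band(E))]
print(f"nu F_nu peaks at ~{peak:.0f} keV (input Ep = 300 keV)")
```

Spectra harder than $`\alpha =-2/3`$ cannot be produced by optically thin synchrotron emission, which is the first problem mentioned above.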
There are a number of arguments that in the case of GRBs, as well as in the case of AGNs, we deal with an equilibrium $`e^+e^-`$ pair plasma. It is surprising that while there exists a large number of works on synchrotron shock models, we know of very few attempts to describe GRB spectra with a Comptonizing pair plasma. I can only mention the works of Ch. Thompson (see Thompson 1998 and references therein) and Ghisellini & Celotti (1999). Liang (1997) and Liang et al. (1997) studied optically thick thermal Comptonization in application to GRBs, taking the temperature and the optical depth as external parameters. It is a well known fact that the pair plasma is a good thermostat and is able to produce spectra with a stable break (which also can be sharp) in the X-ray range (Svensson 1984). The break results from quasi-thermal Comptonization. Its position is defined by the pair equilibrium and depends on the compactness parameter and on the type of the energy supply: pure thermal (direct heating of Maxwellian electrons), nonthermal (heating of the relativistic tail of the electron energy distribution), or hybrid (both). In the pure thermal case, the pair temperature is self-adjusted in a way to support pair production at the tail of the photon energy distribution. The resulting temperature decreases logarithmically with increasing compactness; at $`\ell \sim 1000`$ the pair temperature is $`\sim 40`$ keV and the peak in the $`\nu F_\nu `$ distribution appears at $`\sim 80`$ keV. This is the energy in the comoving frame, and it implies too small a Lorentz factor, as the average observable $`E_p`$ is 300–400 keV. A smaller pair temperature can be achieved in a nonthermal or hybrid model. To demonstrate that it is in principle possible to reproduce GRB spectra with an optically thick pair plasma, I made a series of simulations with a large particle nonlinear Monte-Carlo code (Stern et al. 1995) for high compactnesses ($`\ell \sim 1000`$–2000). Figure 5 demonstrates the result of one attempt that can be considered as more or less successful. The break position at 20–30 keV is consistent with a Lorentz factor 10–20, which implies a higher compactness (see Eq. 2), which in turn would give cooler pairs. The simulation at $`\ell \gg 1000`$ is technically difficult. The impression is that consistency with data can be achieved at $`\ell =10^4`$ and $`\mathrm{\Gamma }\sim 20`$–30. The recipe for obtaining a proper spectral shape can be formulated as follows: The main condition to obtain a hard spectrum below $`E_p`$ is photon starvation, i.e., only a small number of soft photons enters the Compton upscattering process (see Zdziarski, Coppi, & Lamb 1990). The main source of soft photons is synchrotron radiation of the nonthermal pair component. To get rid of it one should restrict the nonthermal tail to the energy range for which the synchrotron radiation is reabsorbed by thermal pairs. In the presented example, this condition is fulfilled and the resulting low energy spectral slopes, $`\alpha =-1.1`$ for the steady-state spectrum and $`\alpha =-0.95`$ for the decaying state, are typical for GRBs. To obtain a harder spectrum one should take a larger magnetic field and energy density, to have a higher reabsorption energy. A rising pair optical depth and energy density will eventually lead to a Planck spectrum with the temperature estimated by Eq. (6). An optically thick pair plasma provides another advantage: a nonlinearity which can give rapid variations with a large amplitude.
A large pair optical depth can be generated during a few $`r/c`$ and can annihilate on the same time scale, i.e., when the emitting system turns off, it does not just cool down – it disappears. The only objection against a moderate Lorentz factor and a large compactness is associated with the GeV photon emission detected in some bursts. At a high compactness, high energy photons should be absorbed through photon-photon pair production. This constraint can be easily avoided by assuming that the hundred keV – MeV emission and the GeV emission originate from different processes in different places; e.g., the latter could result from shock acceleration, the former from magnetic reconnection behind the shock. Summarizing the issue: A large Lorentz factor ($`\mathrm{\Gamma }\sim 100`$–300) naturally enables the conversion of the fireball kinetic energy into radiation through interaction with external matter. It implies a simple, linear mechanism of gamma-ray emission which does not seem to satisfy the data. At a moderate Lorentz factor ($`\mathrm{\Gamma }\sim 10`$–30), more interesting physics appears: a nonlinear system of an optically thick pair plasma and radiation at a high compactness. This regime is much more difficult to study. Nevertheless, this case can hopefully provide a wealth of nonlinear phenomena that could explain many puzzling properties of GRBs. ###### Acknowledgements. The author is grateful to Juri Poutanen and Roland Svensson for useful discussions. I thank Jana Tikhomirova for assistance. This work is supported by the Wennergren Foundation for Scientific Research, a Nordita Nordic Project, and the Royal Swedish Academy of Sciences.
no-problem/9902/astro-ph9902003.html
ar5iv
text
# EGRET Gamma-Ray Blazars: Luminosity Function and Contribution to the Extragalactic Gamma-Ray Background ## 1 Introduction EGRET has detected a total of 66 active galactic nuclei (AGN) in high energy ($`>100`$ MeV) gamma rays since the launch of CGRO in April 1991 (Hartman et al. 1999). These sources all appear to be members of the blazar class of AGN (BL Lac objects, highly polarized ($`>`$ 3%) quasars (HPQ), and optically violently variable (OVV) quasars) and are radio-loud sources with flat spectra at radio bands. Many of the blazars exhibit variability in their $`\gamma `$-ray flux on timescales of several days to months (McLaughlin et al. 1996, Mukherjee et al. 1997). The photon spectra of the blazars in the energy range 30 MeV to 30 GeV are generally well represented by power laws in energy, with photon spectral indices in the range 1.4 to 3.0. The sources have non-thermal continuum spectra, with the $`\gamma `$-ray luminosity exceeding that at other frequencies in most cases. The high $`\gamma `$-ray luminosities of the blazars suggest that the emission is likely to be beamed and, therefore, Doppler-boosted along the line of sight. The spectral energy distribution of blazars can be modeled as follows: the radio to UV emission can be explained as synchrotron emission from relativistic electrons in a uniform relativistically moving plasma. The high energy emission is due to the inverse Compton scattering of seed photons off the relativistic electrons, although the source of the soft photons still remains unresolved (see Hartman et al. 1997 for a review). In this article we summarize the luminosity function and evolution properties of the EGRET blazars and use the results to examine the contribution of the $`\gamma `$-ray-loud AGN to the diffuse extragalactic background. ## 2 Luminosity function of EGRET blazars The evolution and luminosity function of the EGRET blazars was calculated by Chiang & Mukherjee (1998) using data from the Phase 1 through Cycle 4 CGRO observations. Inclusion of the EGRET blazars in the 1 Jy catalog of Kühr et al. (1981) was used to account for possible biases introduced by missing optical identifications. A $`V/V_{\mathrm{max}}`$ test was used to show evidence of evolution. Here $`V`$ is the minimum volume that contains an object with redshift $`z`$; $`V_{\mathrm{max}}`$ is the largest volume that could contain an object with the same luminosity and still be detected at the given flux limit. For a limiting significance of detection of $`4\sigma `$, an average value of $`V/V_{\mathrm{max}}=0.7`$ was obtained, which means that we are preferentially detecting more sources at larger redshifts. No evidence of a density evolution of EGRET blazars was found. The evolution is consistent with pure luminosity evolution (that is, the luminosity of the object changes with time, i.e. redshift, while the co-moving number density remains the same). The luminosity of a given object as a function of redshift $`z`$ can be described by $`L(z)=L_0f(z)`$ where $`L_0=L(z=0)`$. Chiang & Mukherjee (1998) have discussed several different forms for the luminosity evolution function, including the power-law and exponential forms. The redshift distribution of EGRET blazars was used to characterize the low end of the luminosity function better. The high end of the luminosity function was fixed by the non-parametric estimate mentioned above. The redshift distribution of the EGRET data was used to fit both the break luminosity and the power-law index of the low end of the luminosity function.
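As a toy illustration of the $`V/V_{\mathrm{max}}`$ statistic used above (a Euclidean, non-cosmological sketch of my own, not the redshift-based analysis of Chiang & Mukherjee): for a flux-limited sample $`V/V_{\mathrm{max}}=(F_{\mathrm{lim}}/F)^{3/2}`$, and a uniform, non-evolving population gives a mean of 0.5:

```python
import numpy as np

def mean_v_over_vmax(flux, flux_limit):
    """Euclidean <V/Vmax>: 0.5 for a uniform, non-evolving population;
    values above 0.5 indicate preferential detection at large distances."""
    return ((flux_limit / np.asarray(flux))**1.5).mean()

rng = np.random.default_rng(1)
F_lim = 1.0
# Draw a flux-limited sample from a Euclidean logN-logS, N(>F) ~ F^(-3/2).
F = F_lim * rng.uniform(size=5000)**(-2.0 / 3.0)
print(f"<V/Vmax> = {mean_v_over_vmax(F, F_lim):.3f}  (no evolution: 0.5)")
```

The measured value of 0.7 for the EGRET blazars lies well above 0.5, which is the evidence for evolution quoted above.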
A likelihood function of the redshift distribution was constructed. The probability density for the redshift of a given blazar was computed and normalized assuming the flux limit derived for that blazar. The data were best fit with a single power law at high luminosities and a luminosity cutoff of $`1.1\times 10^{46}`$ erg s<sup>-1</sup>. Using the lower limit of the de-evolved luminosity function, the $`\gamma `$-ray loud AGN contribution to the extragalactic $`\gamma `$-ray flux is estimated to be $`4.0_{-0.9}^{+1.0}\times 10^{-6}`$ photons cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>. The sky-averaged flux contribution of identified EGRET blazars is $`1\times 10^{-6}`$ photons cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>. The contribution to the diffuse background by unresolved blazars, therefore, is $`3.0_{-0.9}^{+1.0}\times 10^{-6}`$ photons cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>. The extragalactic diffuse flux for $`E>100`$ MeV estimated by Sreekumar et al. (1998) is $`1.36\times 10^{-5}`$ photons cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>. We therefore find that blazars cannot account for all of the diffuse extragalactic $`\gamma `$-ray background at the energies considered. ## 3 Summary The luminosity function and evolution properties of $`\gamma `$-ray-loud blazars imply that only $`\sim `$ 25% of the diffuse extragalactic emission measured by SAS-2 and EGRET can be attributed to unresolved $`\gamma `$-ray blazars. This is contrary to other estimates which assume a linear correlation between the measured radio and $`\gamma `$-ray fluxes (e.g., Stecker & Salamon 1996). However, we note that our result is consistent with recent work by Mücke & Pohl (1998), where the extragalactic diffuse contribution from blazars is synthesized using a specific blazar emission model (Dermer & Schlickeiser 1993) and an extrapolation of the observed log$`N`$–log$`S`$ distribution of EGRET blazars. As in our study, Mücke & Pohl make no assumptions regarding supposed correlations between the $`\gamma `$-ray flux and any other spectral band. Our results lead to the exciting conclusion that other sources of diffuse extragalactic $`\gamma `$-ray emission must exist. The spectrum of the measured extragalactic emission implies that the average quiescent energy spectra of these sources extend to at least 50 GeV and maybe up to 100 GeV, without a significant change in the slope. If gamma-ray blazars continue to make a significant contribution to the diffuse emission at these energies, then the spectra of the parent relativistic particles in blazars which produce the gamma rays must also remain hard to even higher energies.
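The bottom-line arithmetic of the comparison above, with the quoted uncertainty propagated naively (a quick check of mine, not an error analysis from the paper):

```python
diffuse = 1.36e-5        # Sreekumar et al. (1998), photons cm^-2 s^-1 sr^-1
unresolved = {"low": 2.1e-6, "central": 3.0e-6, "high": 4.0e-6}
for tag, flux in unresolved.items():
    print(f"{tag:8s}: {flux / diffuse:5.1%} of the diffuse background")
```

The central value is close to the $`\sim `$ 25% quoted in the Summary.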
no-problem/9902/hep-ph9902425.html
ar5iv
text
# Models of inflation and their predictions ## Abstract Taking field theory seriously, inflation model-building is difficult but not impossible. The observed value of the spectral index of the adiabatic density perturbation is starting to discriminate between models, and may well pick out a unique one in the foreseeable future. I shall summarise the present status of inflation model-building, and its comparison with observation. This is already a substantial area of research, and with the advent of new observations in the next few years it will become a major industry. For an extensive review with references, see review99. I focus on the simplest paradigm, which following the usual scientific practice should be tested to destruction before complications are entertained. There is a slowly-rolling, single-component inflaton field, which experiences Einstein gravity and drives the observable Universe into a spatially flat condition. The gaussian adiabatic density perturbation, generated by the vacuum fluctuation of the inflaton field $`\varphi `$, is solely responsible for the origin of structure. During inflation, the potential $`V(\varphi )`$ of the inflaton field $`\varphi `$ satisfies the flatness conditions $$ϵ\ll 1,|\eta |\ll 1,$$ (1) where $`ϵ\equiv \frac{1}{2}M_\mathrm{P}^2(V^{}/V)^2`$ and $`\eta \equiv M_\mathrm{P}^2V^{\prime \prime }/V`$. The inflaton field satisfies the slow-roll approximation $`3H\dot{\varphi }=-V^{}`$, where $`H`$ is the Hubble parameter given by $`3H^2=V/M_\mathrm{P}^2`$. To work out the predictions one needs the number of $`e`$-folds $`N`$ between a given epoch and the end of slow-roll inflation (with the inflaton $`\varphi `$). Its small change is defined by $`dN\equiv -Hdt(=-d\mathrm{ln}a)`$, which with the slow-roll approximation leads to $$N(\varphi )=\int _{\varphi _{\mathrm{end}}}^\varphi M_\mathrm{P}^{-2}\frac{V}{V^{}}d\varphi .$$ (2) Here $`\varphi _{\mathrm{end}}`$ marks the end of slow-roll inflation, caused by the failure of the flatness conditions or by the destabilization of a non-inflaton field. Often, the integral is dominated by the other limit $`\varphi `$, in which case the predictions are independent of $`\varphi _{\mathrm{end}}`$. The vacuum fluctuation of the inflaton field generates a gaussian adiabatic primordial density perturbation, whose conventionally-defined spectrum is given by $$\delta _H^2(k)=\frac{1}{75\pi ^2M_\mathrm{P}^6}\frac{V^3}{V^{\prime 2}}.$$ (3) The right hand side is evaluated at the value $`\varphi (k)`$ which corresponds to horizon exit $`k=aH`$. It satisfies $`d\mathrm{ln}k=-dN(\varphi )`$ and therefore $`\mathrm{ln}(k_{\mathrm{end}}/k)=N(\varphi )`$, where $`k_{\mathrm{end}}`$ is the scale leaving the horizon at the end of slow-roll inflation. With Eq. (2), this determines $`\varphi (k)`$ provided that we know the value of $`N(\varphi )`$ when some reference scale leaves the horizon. This scale is conveniently taken to be the central scale probed by COBE, $`k_{\mathrm{COBE}}\simeq 7.5H_0`$, where $`H_0`$ is the present Hubble parameter. Depending on the history of the Universe, one has $$N_{\mathrm{COBE}}\simeq 60-\mathrm{ln}(10^{16}\text{GeV}/V^{1/4})-\frac{1}{3}\mathrm{ln}(V^{1/4}/T_{\mathrm{reh}})-\mathrm{\Delta }N,$$ (4) where $`T_{\mathrm{reh}}`$ is the reheat temperature and $`\mathrm{\Delta }N>0`$ allows for matter domination and thermal inflation between reheating and nucleosynthesis (and any continuation of inflation after the epoch $`\varphi _{\mathrm{end}}`$).
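As a worked example of Eqs. (1)–(3), the following sketch (my own, in reduced-Planck units; the quadratic potential is the standard textbook case, not one of the models discussed in the Tables) evaluates the flatness parameters, the number of $`e`$-folds, and the mass required to match the COBE value $`\delta _H=1.91\times 10^{-5}`$ quoted below:

```python
import numpy as np

M_P = 1.0                                     # reduced Planck units

def eps_eta(V, dV, d2V, phi):
    """Flatness parameters of Eq. (1)."""
    return 0.5 * M_P**2 * (dV(phi) / V(phi))**2, M_P**2 * d2V(phi) / V(phi)

def efolds(V, dV, phi, phi_end):
    """N(phi) of Eq. (2) by numerical quadrature."""
    x = np.linspace(phi_end, phi, 200_000)
    return np.trapz(V(x) / dV(x), x) / M_P**2

m = 1.0                                       # m scales out of eps, eta and N
V, dV, d2V = (lambda p: 0.5 * m**2 * p**2,
              lambda p: m**2 * p,
              lambda p: m**2 + 0.0 * p)
phi_end = np.sqrt(2.0)                        # eps = 1 marks the end of slow roll
phi = np.sqrt(4 * 60 + 2)                     # analytic solution of N = 60
eps, eta = eps_eta(V, dV, d2V, phi)
print(f"N = {efolds(V, dV, phi, phi_end):.1f}, eps = {eps:.4f}, eta = {eta:.4f}")

# COBE normalization via Eq. (3): delta_H = V^(3/2) / (sqrt(75) pi M_P^3 V')
delta_H = V(phi)**1.5 / (np.sqrt(75.0) * np.pi * M_P**3 * dV(phi))
print(f"m = {1.91e-5 / delta_H:.2e} M_P reproduces delta_H = 1.91e-5")
```

The output, $`m\sim 6\times 10^{-6}M_\mathrm{P}`$, is the familiar normalization of the quadratic chaotic model.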
Differentiating Eq. (3) with the aid of Eq. (2), the spectral index is $$\frac{n(k)-1}{2}\equiv \frac{d\mathrm{ln}\delta _H}{d\mathrm{ln}k}=\eta -3ϵ.$$ (5) If $`n`$ is constant then $`\delta _H^2\propto k^{n-1}`$. Inflation also generates gravitational waves with primordial spectrum $`𝒫_{\mathrm{grav}}(k)=\frac{2}{M_\mathrm{P}^2}\left(\frac{H}{2\pi }\right)^2`$. No gravitational wave signal is seen in the cmb anisotropy, which translates into a bound $`ϵ\lesssim 0.1`$. The signal will probably never be seen unless $`ϵ\gtrsim 10^{-3}`$. At $`k_{\mathrm{COBE}}`$, the COBE observations give the accurate normalization (ignoring gravitational waves) $`\delta _H=1.91\times 10^{-5}`$, which corresponds to $$V^{1/4}/ϵ^{1/4}=0.027M_\mathrm{P}=6.7\times 10^{16}\text{GeV}.$$ (6) The present bound $`ϵ\lesssim 0.1`$ on gravitational waves implies $`V^{1/4}\lesssim 3.6\times 10^{16}\text{GeV}`$. Gravitational waves will never be detectable if $`V^{1/4}\lesssim 1\times 10^{16}\text{GeV}`$, and most inflation models give a lower value when normalized to satisfy Eq. (6). Over the range of cosmological scales, say $`H_0<k<10^4H_0`$, there is an observational bound on the scale-dependence of $`\delta _H`$. Until recently uncertainties in the cosmological parameters allowed only the weak result $`|n-1|\lesssim 0.2`$, but new data give a preliminary result $`|n-1|<0.05`$ dick. After Planck flies we shall probably know $`n(k)`$ with an uncertainty of $`\pm 0.01`$. At the most primitive level, a model of inflation consists of a form for $`V(\varphi )`$, plus a prescription for $`\varphi _{\mathrm{end}}`$ if the latter is not determined by $`V`$, as happens in some hybrid inflation models. From a field theory viewpoint, one expects $`V(\varphi )`$ to be schematically of the following form (the form is more restrictive if $`\varphi `$ is a pseudo-Goldstone boson, but that hypothesis has not so far led to an attractive model of inflation): $$V=V_0+\frac{1}{2}m^2\varphi ^2+M\varphi ^3+\frac{1}{4}\lambda \varphi ^4+(\stackrel{~}{m}^4+2g\stackrel{~}{m}^2\varphi ^2+g^2\varphi ^4)\mathrm{ln}(g\varphi /Q)+\sum _{d=5}^{\infty }\lambda _dM_\mathrm{P}^{4-d}\varphi ^d+[\mathrm{\Lambda }^{4+\alpha }\varphi ^{-\alpha }-\stackrel{~}{\mathrm{\Lambda }}^{4\mp \beta }\varphi ^{\pm \beta }].$$ (7) In the first line are renormalizable tree-level terms, with the origin chosen so that $`V^{}=0`$; the coefficients can have either sign. In the second line is the one-loop correction due to a particle with mass $`\stackrel{~}{m}`$ and coupling $`g`$, valid if $`g\varphi \gtrsim \stackrel{~}{m}`$. (It is suppressed at smaller $`\varphi `$.) The renormalization scale $`Q`$ should be fixed at a typical relevant value of $`g\varphi `$ to minimize higher loop contributions. One sums over particles, with a plus/minus sign for bosons/fermions, and unbroken global supersymmetry would make the total vanish. During inflation susy is broken, but the $`\varphi ^4`$ term still vanishes, and the $`\varphi ^2`$ term may vanish, but one expects no cancellations between the contributions to the constant term. The third line contains the non-renormalizable terms which summarise unknown Planck scale physics; the coefficients $`\lambda _d`$ are generically of order 1, but supersymmetry can make a finite number of them tiny. (By appealing to a continuous global symmetry it can make them all tiny, but no such symmetry comes out of string theory.)
The fourth line contains a $`\varphi ^{-\alpha }`$ term that might come from dynamical symmetry breaking, and a $`\varphi ^{\pm \beta }`$ term that might come from mutated hybrid inflation. These terms will be present only in exceptional cases, unlike the others, which are generic. One who presumes to use field theory ought to take this expression seriously, and when that is done the flatness conditions Eq. (1) turn out to be extraordinarily difficult to satisfy. The non-renormalizable terms are obviously dangerous. So are the loop corrections, especially in the context of hybrid inflation, where some coupling has to be substantial. Less obviously, a generic supergravity theory gives a prediction of the form $`M_\mathrm{P}^2V^{\prime \prime }/V=1+\mathrm{}`$, in which case there has to be some cancellation whose origin is at present obscure. The simplest proposed model, usually called chaotic inflation, is a monomial $`V\propto \varphi ^p`$ with $`p`$ usually 2 or 4. Inflation takes place at $`\varphi >\varphi _{\mathrm{end}}\sim pM_\mathrm{P}`$, giving $`n-1=-(2+p)/(2N)`$ and significant gravitational waves ($`ϵ=p/(5N)`$). If non-renormalizable terms are there, they kill the above model. To live with them one needs $`\varphi \lesssim M_\mathrm{P}`$ or $`\varphi \ll M_\mathrm{P}`$. (The latter case is preferable, but one has to watch the loop correction, which generates a $`\varphi ^2\mathrm{ln}\varphi `$ term.) Making the reasonable assumption that only one term of Eq. (7) is relevant, Eq. (1) then requires $`V\simeq V_0`$. Predictions for the spectral index are given in the Tables. An inflaton field with negligible interaction ($`V=V_0\pm \frac{1}{2}m^2\varphi ^2`$) gives a constant $`n-1`$, which can be positive or negative and is typically not extremely small, or interactions would be significant. One significant interaction term typically gives $`n`$ close to 1, with weak scale-dependence. A dramatic exception is the case ewanloop where a $`\varphi ^2\mathrm{ln}\varphi `$ loop correction dominates the mass term, as shown in the second line of Table 1. The correct COBE normalization Eq. (6) is obtained with a reasonable value $`c\sim 10^{-1}`$ to $`10^{-2}`$ of the coupling $`c`$. (To calculate this normalization one has to take into account higher loop corrections by using a renormalization-group-improved potential, but the order of magnitude is unaffected.) Furthermore, such a coupling allows $`n`$ to pass through 1 on cosmologically interesting scales! The observed value of $`n(k)`$ will become an increasingly powerful discriminator in the future. If one were to take it seriously, the preliminary result $`|n-1|<0.05`$ would already rule out the cubic self-interaction in Table 2. It would also strongly constrain the parameters $`c`$ and $`\sigma `$ in the case just mentioned, perhaps demanding physically unreasonable values for them.
no-problem/9902/cond-mat9902141.html
ar5iv
text
# Directed Percolation and Generalized Friendly Walkers ## Abstract We show that the problem of directed percolation on an arbitrary lattice is equivalent to the problem of $`m`$ directed random walkers with rather general attractive interactions, when suitably continued to $`m=0`$. In 1+1 dimensions, this is dual to a model of interacting steps on a vicinal surface. A similar correspondence with interacting self-avoiding walks is constructed for isotropic percolation. The problem of directed percolation (DP), first introduced by Broadbent and Hammersley, continues to attract interest even though it has so far defied all attempts at an exact solution, even in two dimensions. Although the problem was originally formulated statically on a lattice with a preferred direction, when the latter is interpreted as time the universal behavior close to the percolation threshold is also believed to describe the transition from a noiseless absorbing state to a noisy, active one, which occurs in a wide class of stochastic processes. It also maps onto reggeon field theory, which describes high-energy diffraction scattering in particle physics. Some time ago, Arrowsmith, Mason and Essam argued that the pair connectedness probability $`G(r,r^{\prime })`$ for directed bond percolation on a two-dimensional diagonal square lattice can be related to the partition function for the weighted paths of $`m`$ ‘friendly’ walkers which all begin at $`r`$ and end at $`r^{\prime }`$, when suitably continued to $`m=0`$. These are directed random walks which may share bonds of the lattice but do not cross each other (see Fig. 1). In fact, Arrowsmith et al. represented these configurations in other ways: either as vicious walkers, which never intersect, by moving the friendly walkers each one lattice spacing apart horizontally; or as integer flows on the directed lattice, to be defined explicitly below. Arrowsmith and Essam showed that $`G(r,r^{\prime })`$ is also related to a partition function for a $`\lambda `$-state chiral Potts model on the dual lattice, on setting $`\lambda =1`$, thus generalizing the well-known result of Fortuin and Kasteleyn for ordinary percolation. In a more recent paper, Tsuchiya and Katori have considered instead the order parameter of the DP problem, and have shown that in $`d=2`$ it is related to a certain partition function of the same $`\lambda =1`$ chiral Potts model, and also that, for arbitrary $`\lambda `$, the latter is equivalent to a partition function for $`m=(\lambda -1)/2`$ friendly walkers. It is the purpose of this Letter to describe a broad generalization of these results. We demonstrate, in particular, a direct connection between a general connectedness function of DP and a corresponding partition function for $`m`$ friendly walkers when continued to $`m=0`$. We show that this holds on an arbitrary directed lattice in any number of dimensions, and for all variant models of DP, whether bond, site or correlated. Moreover, the weights for a given number of walkers passing along a given bond or through a given site may be chosen in a remarkably arbitrary fashion, still yielding the same result at $`m=0`$. We now describe the correspondence between these two problems in detail. A directed lattice is composed of a set of points in $`𝐑^d`$ with a privileged coordinate $`t`$, which we may think of as time. Pairs of these sites $`(r_i,r_j)`$ are connected by fixed bonds, oriented in the direction of increasing $`t`$, to form a directed lattice.
In the directed bond problem, each bond is open with a probability $`p`$ and closed with a probability $`1-p`$, and in the site problem it is the sites which have this property. In principle the probabilities $`p`$ could be inhomogeneous, and we could also consider site-bond percolation and situations in which different bonds and sites are correlated. Our general result applies to all these cases, but for clarity we shall restrict the argument to independent homogeneous directed bond percolation. The pair connectedness $`G(r,r^{\prime })`$ is the probability that the points $`r`$ and $`r^{\prime }`$ (with $`t<t^{\prime }`$) are connected by a continuous path of bonds, always following the direction of increasing $`t`$. On the same lattice, let us define the corresponding integer flow problem. Assign a non-negative integer-valued current $`n(r_i,r_j)`$ to each bond, in such a way that it always flows in the direction of increasing $`t`$, and is conserved at the vertices. At the point $`r`$ there is a source of strength $`m\ge 1`$, and at $`r^{\prime }`$ a sink of the same strength. There is no flow at times earlier than that of $`r`$ or later than that of $`r^{\prime }`$. Such a configuration may be thought of as representing the worldlines of $`m`$ particles, or walkers, where more than one walker may share the same bond. The configurations are labeled by distinct allowed values of the $`n(r_i,r_j)`$, so that they are counted in the same way as are those of identical bosons. Alternatively, in $`1+1`$ dimensions, we may regard the walkers as distinct but with worldlines which are not allowed to cross. In the partition sum, each bond is counted with a weight $`p(n(r_i,r_j))`$. In the simplest case we take $`p(0)=1`$ and $`p(n)=p`$ for $`n\ge 1`$ (although we shall show later that this may be generalized). Since $`p>p^n`$ for $`n>1`$, there is an effective attraction between the walkers, leading to the description ‘friendly’. The partition function is then $$Z(r;r^{\prime };m)\equiv \underset{\mathrm{allowed}\mathrm{configs}}{\sum }\underset{(r_i,r_j)}{\prod }p(n(r_i,r_j)).$$ This expression is a polynomial in $`m`$ and so may be evaluated at $`m=0`$. The statement of the correspondence between DP and the integer flow problem for the case of the pair connectedness is then $$G(r;r^{\prime })=Z(r;r^{\prime };0).$$ Note that since the weights $`p(n)`$ behave non-uniformly as $`n\to 0`$, the continuation of $`Z(r;r^{\prime };m)`$ to $`m=0`$ is not simply the result of taking zero walkers (which would be $`Z=1`$): rather, it is the non-trivial answer $`G`$. Similar results hold for more generalized connectivities. For example, if we have points $`(r_1^{\prime },r_2^{\prime },\dots ,r_l^{\prime })`$ all at the same time $`t^{\prime }>t`$, we may consider the probability $`G(r;r_1^{\prime },r_2^{\prime },\dots ,r_l^{\prime })`$ that all these points, irrespective of any others, are connected to $`r`$. The corresponding integer flow problem has a source of strength $`m\ge l`$ at $`r`$, and sinks of arbitrary (but non-zero) strength at each point $`r_j^{\prime }`$. In this case $$G(r;r_1^{\prime },r_2^{\prime },\dots ,r_l^{\prime })=(-1)^{l-1}Z(r;r_1^{\prime },r_2^{\prime },\dots ,r_l^{\prime };m=0)$$ where the partition function is defined with the same weights as before.
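The simplest directed lattice on which the correspondence $`G(r;r^{\prime })=Z(r;r^{\prime };0)`$ can be checked by hand is a “diamond” of four bonds: two length-2 directed paths from $`r`$ to $`r^{\prime }`$. The sketch below (my own check, not code from the Letter) enumerates the $`2^4`$ bond configurations to get $`G`$ exactly, builds $`Z(m)`$ from the bosonic walker counting with the simple weights $`p(0)=1`$, $`p(n\ge 1)=p`$, and continues the resulting (linear) polynomial to $`m=0`$:

```python
import itertools
from math import prod

p = 0.3                                    # bond occupation probability

# Diamond graph: path A = bonds (0, 1), path B = bonds (2, 3).
G = 0.0
for cfg in itertools.product((0, 1), repeat=4):
    if (cfg[0] and cfg[1]) or (cfg[2] and cfg[3]):     # r reaches r'
        G += prod(p if c else 1 - p for c in cfg)

def Z(m):
    """Friendly walkers: k on path A, m - k on path B (bosonic counting);
    any bond carrying current has weight p, an empty bond weight 1."""
    return sum((p**2 if k > 0 else 1.0) * (p**2 if m - k > 0 else 1.0)
               for k in range(m + 1))

# Z(m) = 2 p^2 + (m - 1) p^4 is linear in m, so extrapolate to m = 0.
Z0 = 2 * Z(1) - Z(2)
print(f"G = {G:.6f}, Z(m -> 0) = {Z0:.6f}")   # both equal 2 p^2 - p^4
```

The $`(m-1)p^4`$ term, continued to $`m=0`$, supplies exactly the $`-p^4`$ inclusion-exclusion correction.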
Since the order parameter for DP may be defined as the limit as $`t^{\prime }-t\to \infty `$ of $`P(t^{\prime }-t)`$, the probability that any site at time $`t^{\prime }`$ is connected to $`r`$, and this may be written using an inclusion-exclusion argument as $$P(t^{\prime }-t)=\underset{r^{\prime }}{\sum }G(r;r^{\prime })-\underset{r_1^{\prime },r_2^{\prime }}{\sum }G(r;r_1^{\prime },r_2^{\prime })+\mathrm{}$$ (where the sums over the $`r_j^{\prime }`$ are all restricted to the fixed time $`t^{\prime }`$), we see that it is in fact given by the $`m=0`$ evaluation of the partition function for all configurations of $`m`$ walkers which begin at $`r`$ and end at time $`t^{\prime }`$. This generalizes the result of Tsuchiya and Katori to an arbitrary lattice. Although this continuation to $`m=0`$ is reminiscent of the replica trick, it is in fact quite different. Moreover, it is mathematically well-defined since, as we argue below, $`Z`$ is a finite sum of terms, each of which, with the simple weights given above, is a polynomial in $`m`$. We now give a summary of the proof, which is elementary. The connectedness function $`G(r;r_1^{\prime },r_2^{\prime },\dots ,r_l^{\prime })`$ is given by the weighted sum of all graphs $`𝒢`$ which have the property that each vertex may be connected backwards to $`r`$ and forwards to at least one of the $`r_j^{\prime }`$. (Alternatively, $`𝒢`$ is a union of directed paths from $`r`$ to one of the $`r_j^{\prime }`$.) Each such graph is weighted by a factor $`p`$ for each bond and $`(-1)`$ for each closed loop. A simple example is shown in Fig. 2. A given graph corresponds to summing over all configurations in which the bonds in $`𝒢`$ are open, irrespective of all other bonds in the lattice. The factors of $`(-1)`$ are needed to eliminate double-counting. It is useful to decompose vertices in $`𝒢`$ with coordination number $`>3`$ by inserting permanently open bonds into them in such a way that the only vertices are those in which two directed bonds merge to form one ($`2\to 1`$), and vice versa. This does not affect the connectedness properties. We may then associate the factors of $`(-1)`$ with each $`1\to 2`$ vertex in $`𝒢`$, as long as we incorporate an overall factor $`(-1)^{l-1}`$ in $`G`$. With each graph $`𝒢`$ we associate a restricted set of integer flows, called proper flows, such that $`n\ge 1`$ for each bond in $`𝒢`$, and $`n=0`$ on each bond not in $`𝒢`$. Those corresponding to the graphs in Fig. 2 are shown in Fig. 3. Note that the last graph corresponds to $`m-1`$ configurations of integer flows, which gives precisely the required factor of $`(-1)`$ when we set $`m=0`$. In general, summing over all allowed integer flows will generate the sum over all allowed $`𝒢`$, with correct weights $`p`$: the non-trivial part is to show that we recover the correct factors of $`(-1)`$ when we set $`m=0`$. This follows from the following simple lemma: if $`A(n)`$ is a polynomial in $`n`$, and we define the polynomial $`B(m)\equiv \sum _{n=1}^{m-1}A(n)`$, then $`B(0)=-A(0)`$. We give a proof which shows that the result may be generalized to other functions: write $`A(n)`$ as a Laplace transform $`A(n)=\int _C(ds/2\pi i)e^{ns}\stackrel{~}{A}(s)`$. Then $`B(m)=\int _C(ds/2\pi i)\left((e^s-e^{ms})/(1-e^s)\right)\stackrel{~}{A}(s)`$, so that $`B(0)=-\int _C(ds/2\pi i)\stackrel{~}{A}(s)=-A(0)`$. An immediate corollary is that if $`A(n_1,n_2,\dots )`$ is a polynomial in several variables, and $`B(m)\equiv \sum _{n=1}^{m-1}A(n,m-n,\dots )`$, then $`B(0)=-A(0,0,\dots )`$. We use this to proceed by induction on the number of $`1\to 2`$ vertices in $`𝒢`$.
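Before continuing with the induction, a quick numerical check of the lemma (an illustration of mine: any polynomial works, and since $`B`$ has degree one higher than $`A`$ it is pinned down by its values at enough integer points):

```python
import numpy as np

A = lambda n: 3 * n**2 - n + 7           # any polynomial; here A(0) = 7

def B(m):
    """B(m) = sum_{n=1}^{m-1} A(n), evaluated at integer m >= 1."""
    return sum(A(n) for n in range(1, m))

ms = np.arange(1, 8)                     # deg B = deg A + 1 = 3; 7 points suffice
coeffs = np.polyfit(ms, [B(m) for m in ms], deg=3)
print(f"B(0) = {np.polyval(coeffs, 0.0):+.4f}, -A(0) = {-A(0)}")   # equal
```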
Beginning with the vertex which occurs at the earliest time, the contribution to $`Z`$ from the proper flow on $`𝒢`$, when evaluated at $`m=0`$, is, apart from a factor $`(-1)`$, equal to that for another graph $`𝒢^{\prime }`$ which will have one fewer $`1\to 2`$ vertex. However, $`𝒢^{\prime }`$ differs from the previously allowed set of graphs $`𝒢`$ in that it may have more than one vertex at which current may flow into the graph. For this reason we extend the definition of the allowed set of graphs to include those in which every vertex is connected to at least one ‘input’ point $`(r_1,r_2,\dots )`$ and at least one ‘output’ point $`(r_1^{\prime },r_2^{\prime },\dots )`$. In the corresponding integer flow problem, currents $`(m_1,m_2,\dots )`$ flow in at the inputs, whereas the only restriction on the outputs is that non-zero current should flow out. The partition function is then the weighted sum over all such allowed integer flows. Induction on the number of $`1\to 2`$ vertices then shows that this partition function, evaluated at $`m_j=0`$, gives the corresponding DP graph correctly weighted. (The induction starts from graphs with no $`1\to 2`$ vertices, which involve no summations and for which the result is trivial.) Since our main result relies only on the lemma, it follows also for rather general weights $`p(n)`$. The only requirement is that $`p(n)`$ grow no faster than an exponential at large $`n`$, and that, when continued to $`n=0`$, it give the value $`p\ne 1`$. In this case, $`Z`$ will no longer be a polynomial in $`m`$, but since, by the inductive argument above, it is given by a sum of convolutions of $`p(n)`$, its continuation to $`m=0`$ will be well-defined through its Laplace transform representation. For example, we could take $`p(n)=p^{1-n}`$ for $`n\ge 1`$. This raises the possibility of choosing some suitable set of weights for which the integer flow problem, at least in $`1+1`$ dimensions, is integrable, for example by Bethe ansatz methods. Unfortunately our results in this direction are, so far, negative. In the case of bond percolation on a diagonal square lattice, let $`Z(x_1,x_2,\dots ,x_m;t)`$ be the partition function under the constraint that the walkers arrive at $`\{x_1,x_2,\dots ,x_m\}`$ at time $`t`$, the physical region being $`\{x_1\le x_2\le \dots \le x_m\}`$. Turning the master equation for $`Z`$ into an eigenvalue problem and writing the eigenfunction $`\psi _m(x_1,x_2,\dots ,x_m)`$ in the usual Bethe ansatz form, one gets for $`\psi _2(x_1,x_2)=A_{12}e^{i(x_1k_1+x_2k_2)}+A_{21}e^{i(x_1k_2+x_2k_1)}`$ the following condition on the amplitudes: $$\frac{A_{21}}{A_{12}}=\frac{e^{i(k_1-k_2)}-ϵ\left(e^{i(k_1+k_2)}+e^{-i(k_1+k_2)}\right)}{e^{-i(k_1-k_2)}-ϵ\left(e^{i(k_1+k_2)}+e^{-i(k_1+k_2)}\right)}$$ (the same as that which appears in the XXZ spin chain), where $`ϵ=p(2)/p(1)^2-1`$. Requiring that the $`m`$-particle scattering should factorise into a product of these two-body $`S`$-matrices places constraints on the weights $`p(n)`$. In general these equations appear too difficult to solve, except in the weak interaction limit ($`ϵ\approx 1/2`$), where we find $$2^n=2q(n)+\underset{s=1}{\overset{n-1}{\sum }}q(n-s)q(s)(1-\lambda s(n-s))+O(\lambda ^2)$$ where $`q(s)=p(s)/(p(1))^s`$, $`\lambda =2ϵ-1`$. This may be solved for successive $`q(n)`$, but it is easy to see by applying the above lemma that, when continued to $`n=0`$, it will always yield the value 1, rather than $`p`$ as required. We conclude that the $`m=0`$ continuation of this integrable case does not correspond to DP.
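The recursion for the weights is easy to iterate. At $`\lambda =0`$ the successive values coincide with binomial(2n, n)/2^n (an observation from the generating function of the recursion, offered here as my own check rather than a statement from the Letter), whose natural continuation to $`n=0`$ is indeed 1:

```python
from math import comb

def solve_q(N, lam=0.0):
    """Iterate 2^n = 2 q(n) + sum_{s=1}^{n-1} q(n-s) q(s) (1 - lam*s*(n-s))."""
    q = {}
    for n in range(1, N + 1):
        conv = sum(q[n - s] * q[s] * (1 - lam * s * (n - s))
                   for s in range(1, n))
        q[n] = (2**n - conv) / 2.0
    return q

q = solve_q(8)
print([q[n] for n in range(1, 9)])                   # 1.0, 1.5, 2.5, 4.375, ...
print([comb(2 * n, n) / 2**n for n in range(1, 9)])  # the same numbers
```

Continuing this closed form (e.g. through Gamma functions) to $`n=0`$ gives 1, which is the failure, noted above, of this integrable point to reproduce DP.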
It is nevertheless interesting that integrable models of such interacting walkers can be formulated. In 1+1 dimensions, our generalized friendly walker model maps naturally onto a model of a step of total height $`m`$ on a vicinal surface, by assigning integer height variables $`h(R)`$ to the sites $`R`$ of the dual lattice, such that $`h=0`$ for $`x\to -\infty `$, $`h=m`$ for $`x\to +\infty `$, and $`h`$ increases by unity every time the path of a walker is crossed. The weights for neighboring dual sites $`R`$ and $`R^{\prime }`$ are $`p(h(R^{\prime })-h(R))`$. This is slightly different from, and simpler than, the chiral Potts model studied in . A similar correspondence between percolation and interacting random walks is valid also for the isotropic case. The pair connectedness $`G(r,r^{\prime })`$ may be represented by a sum of graphs $`𝒢`$, just as in DP. Each graph consists of a union of oriented paths from $`r`$ to $`r^{\prime }`$. As before, each bond is counted with weight $`p`$ and each loop carries a factor $`(-1)`$. Note that graphs which contain a closed loop of oriented bonds are excluded. Such contributions cannot occur in DP because of the time-ordering. The correspondence with integer flows or friendly walkers follows as before. The latter picture is particularly simple: $`m`$ walkers begin at $`r`$ and end at $`r^{\prime }`$. When two or more walkers occupy the same bond, they must flow parallel to each other. Since they cannot form closed loops, they are self-avoiding. Moreover, walkers other than those which begin and end at $`r`$ and $`r^{\prime }`$, which could also form closed loops, are not allowed. Each occupied bond has weight $`p(n)`$ as before, and the separate configurations are counted using Bose statistics. $`G(r,r^{\prime })`$ is then given by the continuation to $`m=0`$ of the partition function. We conclude that ordinary percolation is equivalent to the continuation to $`m=0`$ of a problem of $`m`$ oriented self-avoiding walks, with infinite repulsive interactions between anti-parallel segments on the same bond, but attractive parallel interactions. In two dimensions, this is again dual to an interesting height model, in which neighboring heights satisfy $`|h(R^{\prime })-h(R)|\le m`$, but local maxima or minima of $`h(R)`$ are excluded. For example, the order parameter of percolation is given by the continuation to $`m=0`$ of the partition function for a screw dislocation of strength $`m`$ in this model. To summarize, we have shown that the DP problem is simply related to the integer flow problem, or equivalently that of $`m`$ bosonic ‘friendly’ walkers, when suitably continued to $`m=0`$. This holds on an arbitrary directed lattice in any number of dimensions, and with rather general weights. It is to be hoped that this correspondence might provide a new avenue of attack on the unsolved problem of directed percolation. The authors acknowledge useful discussions with F. Essler and A. J. Guttmann, and thank T. Tsuchiya and M. Katori for sending a copy of their paper prior to publication. This research was supported in part by the Engineering and Physical Sciences Research Council under Grant GR/J78327.
no-problem/9902/cond-mat9902010.html
ar5iv
text
# Ising pyrochlore magnets: Low temperature properties, ice rules and beyond ## Abstract Pyrochlore magnets are candidates for spin-ice behavior. We present theoretical simulations of relevance for the pyrochlore family $`R_2`$Ti<sub>2</sub>O<sub>7</sub> ($`R=`$ rare earth) supported by magnetothermal measurements on selected systems. By considering long-ranged dipole-dipole as well as short-ranged superexchange interactions we get three distinct behaviours: (i) an ordered doubly degenerate state, (ii) a highly disordered state with a broad transition to paramagnetism, (iii) a partially ordered state with a sharp transition to paramagnetism. Thus these competing interactions can induce behaviour very different from conventional “spin ice”. Closely corresponding behaviour is seen in the real compounds—in particular Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> corresponds to case (iii), which has not been discussed before, rather than (ii) as suggested earlier. The pyrochlore rare earth titanates have attracted great attention recently because their unusual structure (the “pyrochlore lattice”) of corner-sharing tetrahedra can lead to geometric frustration and interesting low-temperature properties. Our interest in these particular titanates was sparked by the observation (confirmed by our crystal field calculations) that some of them are nearly ideal Ising systems. Some intriguing experimental data presented below can only be explained by assuming a competition between classical dipole-dipole interactions and quantum superexchange. Depending on their relative magnitudes, the ground states of the Ising-like systems can be “ice-like”, ordered, or partially ordered. “Ice models” get their name because real (water) ice has a large ground-state degeneracy arising from local rules for the ordering of protons in water ice. Several related models have been studied since, but as far as we know this is the first time that two competing interactions have been included in such a model, with the physics changing significantly depending on their relative strengths. Pyrochlores of the form $`A_2B_2`$O<sub>7</sub> have been extensively studied, where $`A`$ are rare earth ions and $`B`$ are transition metal ions, each forming interpenetrating pyrochlore lattices. Often these can be modelled by isotropic Heisenberg antiferromagnets because the $`B`$ atom is magnetic ($`B`$ = Mn, Mo) with a small dipole moment, and the $`A`$ atom is nonmagnetic (e.g. $`A`$ = Y), so the dominant interaction between the $`B`$ atoms is superexchange. The lattice is a three-dimensional version of the kagomé lattice, with a parallelepiped as the unit cell and a tetrahedron as the basis (figure 1). Typically the magnetic ions sit at the corners of these tetrahedra. The tetrahedra form a face centred cubic lattice, so the structure can be viewed as four interpenetrating fcc lattices, and the unit cell is often pictured as a cube, but we used the smaller parallelepiped unit cell in our simulations. The lattice can exhibit frustration; in the isotropic Heisenberg case, this happens for the antiferromagnet, but in an Ising limit it can happen even with ferromagnetic interactions. In our systems, the Ti<sup>4+</sup>, like the O<sup>2-</sup> ions, are nonmagnetic and play no role apart from holding the lattice together. However, typically the rare earth ion carries a large magnetic moment (from its unfilled $`f`$-electron shells), so that the dipolar interaction is as significant as the superexchange.
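For concreteness, the pyrochlore geometry just described can be generated as an fcc lattice with a four-site tetrahedral basis. The sketch below (an illustration using conventional coordinates, not the authors' simulation code; the cluster size is arbitrary) builds a periodic cluster and verifies the coordination number of six, i.e. each spin is shared by two tetrahedra:

```python
import numpy as np
from itertools import product

fcc = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
basis = np.array([[0, 0, 0], [0, .25, .25], [.25, 0, .25], [.25, .25, 0]])

L = 2                                          # cubic cells per side
cells = np.array(list(product(range(L), repeat=3)), dtype=float)
sites = (cells[:, None, None, :] + fcc[None, :, None, :]
         + basis[None, None, :, :]).reshape(-1, 3)

# Nearest-neighbour distances with the minimum-image convention:
d = sites[:, None, :] - sites[None, :, :]
d -= L * np.round(d / L)
r = np.linalg.norm(d, axis=-1)
r_nn = r[r > 1e-9].min()                       # sqrt(2)/4 in cell units
nn_counts = (np.abs(r - r_nn) < 1e-9).sum(axis=1)
print(f"{len(sites)} sites, coordination numbers: {sorted(set(nn_counts))}")
```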
Another important aspect is the single-ion anisotropy imposed by the crystal field (CF) interaction of D<sub>3d</sub> symmetry at the rare earth site, since a strong easy-axis anisotropy results in the Ising limit, even for isotropic exchange interactions. Previous investigations of the low-temperature properties of these systems assumed a strong single-ion anisotropy along the $`\langle 111\rangle `$ direction, i.e. along the line pointing from the center of the tetrahedron to the corner where the rare earth is located. However, there is so far no direct experimental evidence to support this assumption. We have therefore investigated in detail the CF interaction in the Ho compound using inelastic neutron scattering. From the energies and intensities of the observed CF transitions, we could unambiguously determine the CF parameters and energy levels of Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> (top inset to fig. 1). Because the crystal structure varies very little on replacing one rare-earth ion by another, these CF parameters will give good estimates of the splitting and single-ion anisotropy in the other compounds as well. So we find a strong easy-axis anisotropy along the line joining the tetrahedra centres for Ho and Dy, but not for Yb, Er or Tb. Although it has been suggested earlier that Yb<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> is also Ising-like, we find that there is in fact an easy plane here, rather than an easy axis: $`J=`$ 7/2, $`J_z`$ = $`\pm 1/2`$ for the ground states, so the spin points mainly in the $`x`$-$`y`$ plane. The same seems to be true for Er, while Tb may be Ising-like but only at very low temperatures ($`<`$ 0.1 K). The nearest-neighbour Ising model on this lattice (considered in ) can show at most two kinds of behaviour, depending on the sign of the interaction. If the interaction is “antiferromagnetic”, the ground state is doubly degenerate and each tetrahedron has alternately all spins pointing out or all spins pointing in. If the interaction is “ferromagnetic”, the ground state of a tetrahedron is given by an “ice rule” where two spins point out of and two into the tetrahedron, and is sixfold degenerate. Any state with all tetrahedra satisfying this is a ground state. It is highly degenerate, with a finite entropy per spin, which our simulations suggest is around 0.22 $`k_B`$, in agreement with Pauling’s prediction. In both cases, the specific heat vanishes at small as well as large temperatures, with a peak in the middle. Simulations show that in the ferromagnetic case (ice rule) the peak is broad and occurs at the temperature scale of the interaction, suggesting a typical broad crossover from a glassy low-temperature phase with macroscopic entropy to a paramagnetic phase. In the antiferromagnetic case, the peak is very sharp and is at a temperature around 4 times the interaction energy, suggesting a phase transition from an ordered ground state to the paramagnetic phase. The energy scale of the peak here may be higher because the energy cost of a single spin flip from the ground state is 12 times the interaction energy of a pair of dipoles, as opposed to four times this energy in the ferromagnetic case. Experiments were done on polycrystalline samples of these compounds, which were synthesized from stoichiometric mixtures of the lanthanide oxides (99.99%) and TiO<sub>2</sub> (99.995%) heated at 1200 °C in air for 1 week with intermediate grindings. All materials were found to be phase pure by conventional powder X-ray diffraction.
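Stepping back to the nearest-neighbour model above: the ice-rule degeneracy has a one-line Pauling estimate. Each tetrahedron allows 6 of its $`2^4`$ Ising states, and with $`N/2`$ tetrahedra per $`N`$ spins the residual entropy per spin is $`\mathrm{ln}2+\frac{1}{2}\mathrm{ln}(6/16)`$. A quick check (mine, not the authors' Monte Carlo):

```python
from itertools import product
from math import log

# Ice rule on one tetrahedron: two spins point in and two point out.
ice_states = sum(1 for s in product((-1, 1), repeat=4) if sum(s) == 0)

# Pauling estimate: N spins, N/2 tetrahedra, fraction ice/16 allowed each.
S_per_spin = log(2) + 0.5 * log(ice_states / 16)
print(f"{ice_states} ice states per tetrahedron; S/N = {S_per_spin:.4f} k_B")
```

This gives $`S/N\approx 0.20`$ $`k_B`$, close to the simulation value of about 0.22 $`k_B`$ quoted above.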
The specific heat was determined using a standard semiadiabatic technique, and the susceptibility was measured with a commercial magnetometer. All susceptibility data were taken at 0.1 Tesla. While simulations of the specific heat of the nearest-neighbour model suggest a broad crossover (ferromagnetic interaction) or a sharp narrow peak (antiferromagnetic), for Ho something entirely different occurs: at around 0.6 K a transition seems to occur, below which the spins seem to decouple thermally from the system and freeze out into a low-temperature metastable glassy phase. Moreover, the data for Ho suggest a peak at substantially smaller energies than the dipolar interaction (2.3 K). To explain this we need to go beyond the nearest-neighbour model, by (a) considering the long-ranged dipole-dipole interaction between the spins, (b) including an antiferromagnetic superexchange to reduce the dipolar coupling. It turns out that the Dy compound is very well described by a purely dipole-dipole interaction, but with a reduced effective dipole moment (around 75% of the full value). This could be explained by a superexchange which falls off for the nearest few neighbours in roughly the same way as the dipolar interaction. This compound is very interesting in its own right, being a good realization of the “ice models” which have interested physicists for a long time, and we have discussed it extensively elsewhere. The fact that Ho has significantly different behaviour from Dy means that the superexchange behaves differently. It is not possible to account for this cleanly, so we merely assume that the superexchange is nearest-neighbour only: this still gives us excellent agreement with the observations and highlights why these compounds are different from “spin ice”. We calculate the dipole-dipole interaction, assume a nearest-neighbour superexchange which we estimate from the experimental data, do a simulation for the specific heat and susceptibility with these values, and compare with experiment. Our simulations are on systems with 2048 spins ($`8\times 8\times 8`$ tetrahedra, each with 4 sites) and around 10000 Monte Carlo steps per spin. We use a long-ranged dipole-dipole interaction (up to 5 nearest-neighbour distances, but the results don’t change significantly beyond the third neighbour). The convergence is good despite the long range of the interaction, probably because there is no global Ising axis and no net magnetization, so beyond the third neighbour the large numbers of spins in different directions tend to cancel one another. We obtain the superexchange for Ho from the experimental high-temperature zero-field susceptibility. The high-temperature expansion of the susceptibility is readily obtained from elementary statistical mechanics. First we fix the notation: we use scalar Ising spins, $`S_i=\pm 1`$, with $`S_i=+1`$ if it points out of an “upward” tetrahedron (or, equivalently, into a “downward” tetrahedron) and $`S_i=-1`$ otherwise. We write the first two terms in the expansion as $`\chi (T)=\frac{C_1}{T}\left(1+\frac{C_2}{T}\right)`$ and try to evaluate these coefficients using $`M=\frac{1}{N}g_s\mu _B\langle \sum _iS_i\mathrm{cos}\theta _i\rangle `$ where $`g_s`$ is the Lande factor, $`S_i`$ is the effective spin of rare earth atom $`i`$ ($`=\pm |J_z|`$ for that atom), $`\mu _B`$ is the Bohr magneton, and $`\theta `$ is the angle made by the direction of the spin with the (arbitrarily chosen) direction of the external magnetic field.
Our results turn out to be independent of the direction, at least to this order. The angle brackets denote the thermodynamic average. From the fluctuation-dissipation theorem, $`\chi (T)=\frac{1}{N}\beta (g_s\mu _B)^2\sum _{i,j}\mathrm{\Gamma }_{ij}`$ where $`\mathrm{\Gamma }_{ij}=\langle S_i\mathrm{cos}\theta _iS_j\mathrm{cos}\theta _j\rangle _{H=0}`$. Using standard methods (expanding to order $`\beta `$), we arrive at $$\chi (T)=\frac{N(g_s\mu _B)^2}{k_BT}\frac{S^2}{3}\left[1-\frac{6S^2}{k_BT}\frac{1}{4}\underset{i\mathrm{one}\mathrm{tetrahedron}}{\sum }\underset{j}{\sum }J_{ij}\mathrm{cos}\theta _i\mathrm{cos}\theta _j\right]$$ (1) The sum over $`i`$ runs over one tetrahedron, and the sum over $`j`$ is over all sites in the lattice excluding $`i`$. If the nearest-neighbour dipolar interaction is $`J_D`$, the superexchange is $`J_S`$, and we include the long-ranged dipolar interaction but only nearest-neighbour superexchange, we get $`\chi (T)=\frac{N(g_s\mu _B)^2}{k_BT}\frac{S^2}{3}\left[1+\frac{6S^2}{k_BT}\frac{1}{4}\left(2.18J_D+2.67J_S\right)\right]`$ and from here we can extract the coefficients $`C_1`$ and $`C_2`$. This is valid for an ideal Ising model at sufficiently high temperatures. When we plot the experimental $`\chi T`$ against $`1/T`$ (Fig. 2), we find a marked linear region at low temperatures (2–10 K). This is the region we want: if we pull out $`C_2`$ from this region, we find it is much less than 1 K, so things are consistent. At higher temperatures, where the Ising approximation should fail, the graph is no longer linear. We equate this value of $`C_2`$ to $`(6S^2/4)(2.18J_D+2.67J_S)`$ with $`J_D`$ known, pull out $`J_S`$, and do the simulation. In fact, since the slope is so small, the error is not too great if we simply put $`C_2=0`$. But using the measured slope for $`C_2`$ (and using the calculated value of $`C_1`$, for consistency, rather than the fitted value) we get $`J_D=2.35`$ K (calculated) and $`J_S=-1.92`$ K (measured), both for the Ho and the Dy compounds. (Note that when we use scalar Ising spins rather than fixed vector spins, the superexchange is negative and the dipolar $`J`$ is positive—and the former favours ordering, the latter frustration, as is usually the case in Ising systems.) We now simulate with these values of $`J_D`$ and $`J_S`$. In the case of Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> (Fig. 3), the simulated susceptibility agrees well with the experimental data at all temperatures, while the specific heat has a sharp peak at very nearly the point where the experimental Ho system falls out of thermal equilibrium. Moreover, there is a large energy difference at this point, suggesting a first-order phase transition. The Yb compound has earlier been believed to be an Ising model, and we initially tried modelling it in this way, with a nearest-neighbour antiferromagnetic superexchange. The experimental data show a sharp peak, which is as we expect for an antiferromagnetic Ising model, and matching the position of the peak in the simulation to the observed position leads to fair agreement with experiment (fig. 3). This should be regarded as fortuitous. The neutron data suggest that the Yb and Er compounds are easy-plane (“XY models”), not Ising. Earlier work by Bramwell et al. suggests that the XY Heisenberg model on this lattice shows a first-order phase transition from an ordered ground state; we believe that, as with Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>, it may be necessary to include a dipole-dipole interaction, and a preliminary simulation of a pure dipole model correctly predicts the position and approximate shape of the peak. More work on this is in progress.
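Returning to the high-temperature fit above: the near-cancellation that makes the $`\chi T`$ slope small follows directly from the quoted couplings. A minimal check (with the spin factor set to 1, i.e. absorbed into the couplings, which is an assumption of this sketch, not a statement from the paper):

```python
def C2(J_D, J_S, S=1.0):
    """Slope coefficient in chi*T = C1 (1 + C2/T), from the expansion above."""
    return (6 * S**2 / 4) * (2.18 * J_D + 2.67 * J_S)

J_D, J_S = 2.35, -1.92     # kelvin: dipolar (calculated), superexchange (fitted)
print(f"C2 = {C2(J_D, J_S):+.3f} K  (near-cancellation: chi*T is almost flat)")
```

The two terms nearly cancel, consistent with the statement that setting $`C_2=0`$ would not introduce much error.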
suggests that the XY Heisenberg model on this lattice shows a first order phase transition from an ordered ground state; we believe that, as with Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>, it may be necessary to include a dipole-dipole interaction, and a preliminary simulation of a pure dipole model correctly predicts the position and approximate shape of the peak. More work on this is in progress. Our specific heat measurements on Er and Yb agree with previous data. The remaining compound, Tb<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>, is probably Ising-like at very low temperatures. It has been suggested that it remains paramagnetic down to 0.07 K. The gap to the excited CF states is only a few kelvin. The data for this and Er are shown in Fig. 4, but no simulations were done for these. The ground states of nearest-neighbour ferromagnetic or antiferromagnetic Ising pyrochlores are well known; we now consider the more complicated case of Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>. In the nearest neighbour ferromagnetic model any state in which all tetrahedra satisfy the “ice rule” will be a ground state. With long-ranged interactions (Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>), the ice rule remains but there are further restrictions on the allowed ground states. One way to deal with these restrictions is to consider the system as a set of interacting “upward” tetrahedra, assign ice rules to the configurations of each, and determine what configuration of neighbouring tetrahedra will lower the energy. There should be an ice rule for the “downward” tetrahedra, but also other constraints. This will map onto a new interacting lattice system, with a six-valued variable at each lattice site representing the configuration of the upward tetrahedron represented by that lattice site. This approach for Ho, and the simulation, suggest a partial ordering in the ground state. That is, the upward tetrahedra could have one of two allowed configurations; the configuration varies randomly along one lattice direction, but alternates perfectly along the other two. So the number of ground states is large but not macroscopic (it is exponential in $`L`$, the system length, rather than in $`L^3`$), and the entropy per particle vanishes. Our calculation here ignored long-ranged superexchange, and thus may not be valid for the experimental system, but the experimental data for Ho do suggest a vanishing entropy for the ground state (on integrating $`C/T`$). In the simulation, the system remains in a disordered “paramagnetic” state until the transition temperature, but below this temperature it freezes out rapidly into such a partially ordered state. From then on further cooling leaves it stuck in this state, with the other ground states inaccessible. This seems to agree with the observation that the spins freeze in Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> below a temperature of around 0.6 K. Below this temperature the inability to establish thermal equilibrium leads to unreliable data, which have not been plotted here. The freezing of spins is an interesting phenomenon in Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>. We believe it occurs because at the transition temperature (0.8 K) the single spin flip energy is around 4 K, so the Boltzmann factor for a flip is very small (around 0.006). A single spin flip from the ordered ground state for Yb has a much larger Boltzmann factor of 0.05 (assuming only near-neighbour interaction), so the fall in specific heat is not so sharp. 
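The freezing argument can be checked with two lines of arithmetic. The sketch below (Python; ours, for illustration — the flip energy of 4 K and the 0.8 K transition temperature are the values quoted above, while the Yb energy scale is inferred from the quoted factor of 0.05 rather than stated independently):

```python
import math

# Ho2Ti2O7: a single spin flip costs ~4 K at the ~0.8 K transition
p_ho = math.exp(-4.0 / 0.8)
print(f"Ho single-flip Boltzmann factor ~ {p_ho:.4f}")  # ~0.0067, i.e. "around 0.006"

# Yb: the quoted factor of 0.05 corresponds to a flip energy of only ~3 k_B T,
# so the dynamics are far less abruptly frozen than in the Ho compound
print(f"Yb flip energy / k_B T ~ {-math.log(0.05):.1f}")
```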
This “spin freezing” in Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> has been commented on by Harris et al. It seems the next-neighbour interactions must be fairly strong for this. In the absence of next-neighbour interactions there is a very large number of ground states. As we turn on and increase the next-neighbour interaction, the new constraints substantially reduce the ground state entropy. For a pure dipole-dipole interaction the new ground state entropy is reduced but still finite, but with a large superexchange, as in Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>, it actually vanishes: there are few true ground states, and these are separated by large energy barriers. In summary, we perform simulations based on a theoretical calculation of dipole-dipole interactions and an estimated superexchange obtained from the experimental data. The relative strengths of these interactions have a drastic effect on the ground state properties when compared to a nearest-neighbour Ising model, and we observe three different kinds of ground states: highly disordered and ice-like, partially ordered, or fully ordered, with broad crossovers or sharp phase transitions to the high temperature phases. We use only one adjustable parameter, fitted from the experimental data, as input for the simulations, which agree well with experiment. Thus these systems look like excellent testing grounds for studying the behaviour of disordered spin systems, glassy dynamics, and phase transitions, with the opportunity to tune the interactions to some extent, and should richly repay future study. We thank C. Dasgupta for helpful discussions. SR’s work was supported by US DOE BES-DMS W-31-109-ENG-38.
# Old and new advances in black hole accretion disc theory ## 1 Why Iceland? This is the very first conference in astrophysics that has ever taken place on Iceland, a country with only about 2-3 astronomers. It is natural to ask the question: why was it organized on Iceland? This question fortunately has several reasonable answers: First, the astronomy population on Iceland is not as small as it may seem at first sight. It is approximately similar to that in other Western countries, i.e., about 1 astronomer per 100 000 in population. Iceland’s 250 000 citizens imply a total of 2.5$`\pm \sqrt{2.5}`$ astronomers, in good agreement with the actual number. Second, one of these 2.5 astronomers works on accretion discs, which makes Iceland probably the country with the world’s largest fraction of astronomers working in this field. Third, in the past, at least according to classic literature, there have been at least two historic studies of “black hole interiors” originating on Iceland. In Jules Verne’s Voyage au Centre de la Terre (1864), the research team led by Professor Otto Lidenbrock from Hamburg entered a hole (most likely black) at the bottom of the crater of Snaefell in 1863. Here, they followed in the footsteps of the 16th century Icelandic scientist Arne Saknussemm, who had explored this black hole before them, as described in Jules Verne’s novel. Fourth, and maybe the most important reason: the research area of accretion discs has grown rapidly, not only worldwide, but also in most of the Nordic countries (Denmark, Finland, Iceland, Norway, Sweden) over the last 10 years. Because of this, all the support for this conference originated from these countries directly or indirectly, with the requirement, of course, that the conference had to take place in one of these countries. The summary talk summarized most of the talks at the Midsummer Symposium. However, as the chapters in this monograph were written up to a year after the symposium, their content is often different from that of the symposium talks, occasionally being influenced by the summary talk. This summary chapter still summarizes the symposium talks, but also includes some comments on the material in the written reviews. ## 2 Accretion discs One of the main topics of this Midsummer Symposium is the recent research bandwagon on advection dominated accretion flows (ADAFs), starting in 1994, with probably of the order of 100 papers since then. Many of the results are covered in Abramowicz (this volume), Björnsson (this volume), Narayan, Mahadevan, & Quataert (this volume), and Lasota (this volume). The problem is that in all the papers written before the summary talk of this symposium, the history of research on ADAFs had not been accurately presented. In my summary talk, I wanted to take the opportunity to set the record straight. I therefore here quote a few paragraphs from the section on the history of theoretical accretion disc research in my chapter (Svensson 1997) in Relativistic Astrophysics: A Conference in Honour of Prof. I.D. Novikov’s 60th birthday: ### 2.1 Theoretical history (from Svensson 1997) The underlying framework for almost all efforts to understand active galactic nuclei (AGN) is the accretion disc picture described in the classical papers by Novikov & Thorne (1973) and \[Shakura & Sunyaev (1973)\]. This original picture was partly inspired by the extreme optical AGN luminosities. 
Here, effectively optically thick, rather cold matter forms a geometrically thin, differentially rotating Keplerian disc around a supermassive black hole. The differential motion causes viscous dissipation of gravitational binding energy, resulting in outward transportation of angular momentum and inward transportation of matter. The dissipated energy diffuses vertically and emerges as black body radiation, mostly in the optical-UV spectral range for the case of AGN. Observations have also been the driving force in the discovery of two other solution branches. a) Hard X-rays from the galactic black hole candidate, Cyg X-1, as well as the discovery of strong X-ray emission from most AGN, led to the need for a hot accretion disc solution. \[Shapiro, Lightman & Eardley (1976)\] (SLE) found a hot, effectively optically thin, rather geometrically thin solution branch, where the ions and the electrons are in energy balance, with the ions being heated by dissipation and cooled through Coulomb exchange, leading to ion temperatures of order $`10^{11}`$-$`10^{12}`$ K. The efficient cooling of electrons through a variety of mechanisms above $`10^9`$ K leads to electron temperatures being locked around $`10^9`$-$`10^{10}`$ K. The SLE-solution is thermally unstable. b) Observations of Cyg X-1 and of radio galaxies led to two independent discoveries of the third solution branch. \[Ichimaru (1977)\] developed a model to explain the soft high and hard low state of Cyg X-1. The soft high state is due to the disc being in the optically thick cold state, and the hard low state occurs when the disc develops into a very hot, optically thin state. This solution branch is similar to the SLE-solution, except that now the ions are not in local energy balance. It was found that if the ions were sufficiently hot they would not cool on an inflow time scale, but rather the ions would heat up both by adiabatic compression and viscous dissipation and would carry most of that energy with them into the black hole. Only a small fraction would be transferred to the electrons, so the efficiency of the accretion is much less than the normal $`10\%`$. As the ion temperature was found to be close to virial, these discs are geometrically thick. Just as for the SLE-case above, the electrons decouple to be locked at $`10^9`$-$`10^{10}`$ K. The flows resemble the quasi-spherical dissipative flows studied by, e.g., \[Mészáros (1975)\] and \[Maraschi, Roasio & Treves (1982)\]. Independently, \[Rees et al. (1982)\] and \[Phinney (1983)\] proposed that a similar inefficient accretion disc solution is responsible for the low nuclear luminosities in radio galaxies with large radio lobes and thus quite massive black holes. These geometrically thick discs were named ion tori. \[Rees et al. (1982)\] specified more clearly than \[Ichimaru (1977)\] the critical accretion rate above which the flow is dense enough for the ions to cool on an inflow time scale and the ion tori-branch would not exist. Further considerations of ion tori were made by \[Begelman, Sikora & Rees (1987)\]. \[Ichimaru (1977)\] emphasized that his version of ion tori is thermally stable. The ion-tori branch has been extensively studied and applied over the last two years (1995-1996), with more than 30 papers by Narayan and co-workers, Abramowicz and co-workers, as well as many others. These studies confirm and extend the original results, although some papers do not quote or recognize them. 
New terminology has been introduced, based on an Eulerian viewpoint rather than a Lagrangian one. Instead of the ions not cooling, it is said that the local volume is “advectively cooled” due to the ions carrying away their energy. The ion tori are therefore renamed advection-dominated discs. ### 2.2 Why did history go wrong? From the above, it is clear that just 4 years after the start of research on modern accretion disc theory in 1973, the pioneering work on three of the four major accretion disc branches had been done (see Narayan et al., this volume, for a description of the four branches). As a graduate student with a side interest in accretion discs, I myself read the paper by Ichimaru back when it was published and a few times since then. When the ADAF research bandwagon got going in 1995, I noted that Ichimaru’s pioneering work was not quoted and that the importance of the work by \[Rees et al. (1982)\] and \[Phinney (1983)\] was initially downplayed. Instead, the ADAF-solution was presented as a new discovery. Even the issue of whether the ADAF-solution was new or not was discussed in some papers. Let me give a few quotations: “Recently, a new class of two-temperature advection-dominated solutions has been discovered”; “We have found new types of optically thin disc solutions where cooling is dominated by the radial advection of heat”; “Is our new solution really new? Certainly, to the extent that we have for the first time included advection and treated the dynamics of the flow consistently, the advection-dominated solution… is new. But even more fundamentally, it is our impression that the existence of two hot solutions was not appreciated until now”. In order to correct the history of ADAF research, I wrote the paragraphs above in June 1996 as part of a review on X-rays and gamma-rays from AGNs (Svensson 1997), submitted it to the preprint archives, sent preprints to about 150 astronomical libraries, and spread hundreds of copies at conferences. But even so, the information did not penetrate the research community. So, as a summarizer of this accretion disc symposium, I took the liberty to show that the most important results of recent ADAF-research had already been obtained by Ichimaru in 1977. Although Ichimaru’s work was not mentioned in any of the talks at this symposium, it is gratifying to notice that he is quoted in three of the chapters. Furthermore, Ichimaru is now regularly quoted in ADAF-papers appearing since this symposium. Why was Ichimaru’s 1977 paper forgotten by almost everybody? It may depend on the way literature searches are done. One searches a few years back, and assumes that the major reviews cover the most important old papers. And once the ADAF history was written, it became adopted by the research community. But this does not explain why the Ichimaru-paper was already ignored in the late 1970s. It was quoted in the Cygnus X-1 review by Liang & Nolan (1984), but not for providing a new accretion disc solution, nor for its explanation of the two spectral states of Cyg X-1. ### 2.3 ADAF principles Much of the recent work on different accretion disc solutions and on ADAFs, in particular, is summarized in Abramowicz (this volume), Björnsson (this volume), and Narayan, Mahadevan, & Quataert (this volume). The recent work has, among other things, developed self-consistent solutions for the dynamics of ADAFs, elucidated the connections between the different solution branches, and calculated detailed spectra from such flows. 
One of the outstanding questions is which of the solution branches (see Figure 1 in Abramowicz or Björnsson, this volume) a disc chooses. Narayan & Yi (1995) discussed three possibilities. In the summary talk, I provided names for these three options (and thank Andy Fabian for proposing the “strong” and the “weak” names). * The Strong ADAF Principle: The disc always chooses the ADAF branch if it exists. * The Weak ADAF Principle: The disc chooses an ADAF branch when no other stable branch is available. * The Initial Condition Principle: The initial conditions determine which branch is chosen. One should note here that Ichimaru (1977) already employed the weak and the initial condition principles in his scenario for the two states of Cyg X-1. Depending on the initial conditions at large radii, the disc gas either goes to the ADAF branch (hard state) or cools down to the gas-pressure dominated standard Shakura-Sunyaev branch (soft state). At some smaller radius in the latter case, radiation pressure starts dominating, the standard branch becomes unstable, and the weak principle says that the disc then makes a transition to the ADAF-branch. Furthermore, Ichimaru (1977) also obtained the result that there is a maximum accretion rate above which the optically thin ADAF branch does not exist, a fact not recognized in recent ADAF-papers, where this critical accretion rate was rediscovered (for this accretion rate the cooling time scale becomes equal to the inflow time scale and the disc cannot remain in an ADAF state; see §3.2.2 in Narayan et al., this volume). How unique are the ADAF-models and scenarios that are proposed for various phenomena? Depending on which ADAF-principle one uses, one gets different models and scenarios, which gives some latitude to the model-builder. It is therefore important to develop a physical understanding of the transitions (with changing radius or accretion rate) between the branches and to determine which principle applies. Furthermore, the whole solution structure with its different branches (see Figure 1 in Abramowicz, this volume, or in Björnsson, this volume) depends strongly on the type of viscosity prescription that is used. The prescription mostly used is $`t_{r\varphi }=\alpha P_{\mathrm{tot}}`$. Other prescriptions, such as $`t_{r\varphi }=\alpha P_{\mathrm{gas}}`$ or $`t_{r\varphi }=\alpha \sqrt{P_{\mathrm{rad}}P_{\mathrm{gas}}}`$, are likely to give very different solution structures and scenarios. This is rarely discussed in the ADAF-literature. Note that the solution structure in the Figure quoted only includes bremsstrahlung as a soft photon source for Comptonization. The more realistic case of unsaturated thermal Comptonization (without specifying the soft photon source) was considered by Zdziarski (1998). ### 2.4 Similarities between quasi-spherical accretion and ADAFs One of the ADAF-results is that the accretion is almost spherical. As the proton temperature is close to virial, the vertical scale height is close to the radius, $`R`$. All physical parameters then scale with radius approximately as for free fall spherical accretion (see eqs. 3.15 in Narayan et al., this volume). The coefficients may be different depending upon the viscosity parameter, $`\alpha `$, but for $`\alpha `$ of order unity, even the coefficients are approximately the same. 
This means that the scenarios of dissipative quasi-spherical accretion of the late 1970s (e.g., \[Mészáros (1975)\] and \[Maraschi, Roasio & Treves (1982)\]) and ADAFs are essentially identical, the difference being that the workers of the late 1970s did not consider the dynamics in detail but rather used intuition to conclude that the flow scales as free fall flows. Many of the resulting conclusions are the same. It was, e.g., noted in some early (quasi-)spherical accretion papers that the solutions depend on the black hole mass mainly through the combination $`\dot{m}\equiv \dot{M}/\dot{M}_{\mathrm{Edd}}\propto \dot{M}/M`$, and that therefore the solutions are similar for both galactic black holes and for supermassive AGN black holes. Other results were that a two-temperature structure develops in the inner, say, 100 Schwarzschild radii; that the proton temperature remains close to virial, while the electron temperature saturates at about a few times 10<sup>9</sup> K; and that the luminosity scales as $`\dot{M}^2`$. Some of the radiation processes that were included were: self-absorbed cyclo-synchrotron emission, Compton scattering, and sometimes pion production in proton-proton collisions. The accretion rates, $`\dot{m}`$, considered were of order unity, as the purpose was to explain the luminous AGNs. These results again appear in the ADAF literature (see, e.g., Figures 4, 6, and 7 in Narayan et al., this volume). One important difference now is that also small $`\dot{m}`$ are considered, giving rise to very different spectra (see Figure 6 in the chapter of Narayan et al., this volume). ### 2.5 Applications of ADAFs The ADAFs have been included in scenarios and models for several astrophysical phenomena, as described by Narayan et al. (this volume) and by Lasota (this volume). Such applications include explaining the quiescent state of three black hole X-ray transients, the different spectral states and spectral transitions of soft X-ray transients, low-luminosity galactic nuclei such as Sgr A in the Galactic Center, the LINER NGC 4258, and possible “dead quasars” such as M87 and M60. Lasota (this volume) discusses in greater detail how the spectral properties of the outbursts of the soft X-ray transient GRO J1655-40 can be explained within a scenario with an inner ADAF + an outer cold disc. As mentioned above, applying different ADAF principles gives rise to different scenarios. One such case is the effort to explain the different spectral states of soft X-ray transients. Chen & Taam (1996) apply the weak principle and find the transition radius between the outer cold disc and the ADAF to increase with accretion rate. Esin, McClintock, & Narayan (1997), on the other hand, in their detailed work apply the strong ADAF principle and find the transition radius to decrease with increasing accretion rate (see Figure 11 in Narayan et al., this volume). Another case is the LINER NGC 4258 (Narayan et al., this volume), where the strong ADAF principle gives rise to an ADAF inside the masering molecular disc, while the weak ADAF principle gives rise to a standard thin disc (Neufeld & Maloney 1995). Narayan et al. (this volume) argue that the latter scenario has too low an accretion rate to explain the observed emission. ## 3 Evidence for the existence of black holes and surrounding accretion discs These topics were covered by Andy Fabian in his talk. 
In this volume, most of the evidence is, however, discussed in the two observational chapters by Charles (Galactic black holes) and by Madejski (supermassive black holes), while the review chapter by Fabian is limited to the broad Fe emission lines generated by the inner parts of a cold thin accretion disc. The evidence for supermassive black holes in galactic nuclei has in the past been indirect. X-ray variability has set an upper limit to the size of the emitting region and thus to the mass. The luminosity has provided an estimate of the mass, assuming the object to be radiating at the Eddington luminosity. Measuring the velocity fields close to the nuclei of nearby galaxies has also provided an estimate of the mass enclosed within that region. Recently, there has, however, been dramatic progress, as described by Madejski (this volume). The VLBA mega-masers in the LINER NGC 4258 showed a Keplerian velocity profile indicating a “point” mass of $`3.6\times 10^7M_{\odot }`$. And ASCA observations showed Fe-line profiles broadened and distorted in precisely the way expected for emission from a rotating disc just outside a black hole (see Figure 4 in Madejski, this volume, or Figures 5 and 6 in Fabian, this volume). Some observations even set constraints on the rotation of the black hole. Future space missions with the capability of observing the Fe-line will provide ample opportunities to explore the strong gravitational field close to the event horizon of black holes. The Fe-line at the same time provides evidence for the existence and the properties of a cold reflecting disc close to the black hole. There has been similar progress in determining the dynamical mass of the compact object in several galactic soft X-ray transients (see Table 3 in Charles, this volume). Again, the progress is mainly observational, depending on the X-ray satellites providing the discovery of the transients, and the very large ground-based telescopes providing the dynamical mass determinations. Present and future X-ray missions with all-sky monitors will discover new transients, increasing the statistics and possibly broadening the range of properties of galactic black holes. In this context, one should note the exotic but qualitative contribution by Novikov (this volume) on the physics just outside and inside the event horizon. Of particular interest is the qualitative discussion of the tremendous growth of the internal mass (mass inflation) of the black hole interior during the formation process. ## 4 Radiation processes While the classical rates for bremsstrahlung, cyclo/synchrotron radiation, and Compton scattering in the nonrelativistic and relativistic limits were sufficient in the early disc models, the electron temperatures of $`10^9`$-$`10^{10}`$ K indicated by both observations and theory required the calculation of transrelativistic rates of the above processes, as well as of the pair processes that become important at these temperatures. Rate calculations, as well as explorations of the properties of pair and energy balance in hot plasma clouds, were done in the 1980s by Lightman, Svensson, Zdziarski and others. The Compton scattering kernel probably received its definitive treatment in \[Nagirner & Poutanen (1994)\]. Recent improvements in some rates were obtained by Mahadevan, Quataert, and others. Much of this microphysics has been included both into the SLE-solutions and into the ADAF-models. One problem has been to determine the importance of electron-positron pairs in hot accretion flows. 
The most detailed considerations so far show that pairs have at most a moderate influence in a very limited region of parameter space (see Björnsson, this volume). To obtain approximate spectra, the Kompaneets equation with relativistic corrections and with a simple escape probability replacing the radiative transfer is sufficient (Lightman & Zdziarski 1987). However, if one wants to obtain constraints on the geometry from detailed observed spectra, then methods to obtain exact radiative transfer/Comptonization solutions in different accretion geometries must be developed. Such methods were developed by \[Haardt (1993)\] (approximate treatment of the Compton scattering) and \[Poutanen & Svensson (1996)\] (exact treatment), who solved the radiative transfer for each scattering order separately (the iterative scattering method). These codes are fast enough to be implemented in XSPEC, the standard X-ray spectral fitting package, and exactly computed model spectra can now be used when interpreting the observations. The codes, furthermore, include the reprocessing (absorption, reflection, and transmission) of X-rays by the cold matter. These methods have mostly been used to study radiative transfer in two-phase media consisting of cold and hot gas with simple geometries, but they have also recently been integrated into the ADAF models by Narayan and co-workers. Poutanen (this volume) concludes that the galactic sources with their smaller reflection are best fit with a hot inner disc surrounded by a cold outer disc, while the Seyfert galaxies with their larger reflection are best fit with a geometry where the X-ray emission originates from active regions (magnetic flares?) atop a cold disc (extending all the way in to the black hole, as the broad, distorted Fe-lines indicate). The spectral predictions of ADAF models have not been tested against the detailed spectral observations using $`\chi ^2`$ fittings. One should therefore, at the present moment, have greater confidence in the conclusions of the work based on detailed radiative transfer and spectral modelling and fitting in simple geometries. Poutanen also describes how the detailed spectra of the spectral transition between the hard and the soft states of Cyg X-1 can be described by a simple hybrid pair model (see Figure 9 of Poutanen, this volume) in a geometry with a hot inner disc surrounded by an outer cold disc, where the transition radius changes during the transition. The geometry is similar to the ADAF scenario suggested by Esin et al. (see Figure 14 in Narayan et al., this volume). It is clear that the natural evolution is to merge the detailed spectral models with various accretion disc scenarios. One weakness of the ADAF-literature is that the spectral predictions of the (broad band) ADAF model are normally only compared with the standard (narrow band) disc model of Shakura & Sunyaev (1973). Other alternative scenarios, such as the two-phase scenario where cold clouds are submerged in a hot medium (see, e.g., Krolik 1998 for a recent work on this scenario), are normally not discussed. Another problem is the determination of the detailed spectral predictions from the cold disc, where the expected spectrum is certainly not that of a black body (Krolik, this volume). ## 5 The effects of magnetic fields Magnetic fields are important in several respects in accretion discs around black holes. 
Several of these were discussed during the symposium, and some of them are listed here: * The most important influence of magnetic fields in an accretion disc is probably as the agent generating self-sustained MHD-turbulence in a differentially rotating disc, thereby providing the anomalous viscosity needed in black hole accretion discs. Brandenburg (this volume) describes the results of 3-D numerical simulations schematically in Figure 2. The Keplerian shear gives rise to large scale magnetic fields that in turn generate turbulence through the Balbus-Hawley and Parker instabilities. Finally, the turbulence regenerates the magnetic field through the dynamo effect. The energy flow is shown in Figure 5 (Brandenburg, this volume). Approximately the same power is released in Joule heating (of electrons and ions) as in viscous heating (mainly of ions). One of the most important results is the magnitude of the Shakura-Sunyaev viscosity parameter, $`\alpha `$, and the realization that $`\alpha `$ is not a constant. * The magnetic field generates vertical stratification. There will be two different scale heights for the magnetic and the gaseous pressures. The magnetic field is more uniformly distributed vertically than the matter. * The magnetic field plays a crucial role in the generation of a corona around the disc. The hope is to use a short time sequence from a simulation run of MHD-turbulence in the disc itself. This time sequence will form the driving background for a corona simulation. Nordlund speculated that spontaneous magnetic dissipation may generate self-organized criticality in the corona, similar to what has been found in simulations of the solar corona. Here, one should also note the partly unsuccessful efforts described by Wiita (this volume) to obtain a self-organized critical state of the accretion disc itself. * The magnetic field may act as a confining agent for cold gas (in the scenario where cold gas clouds coexist with a hot tenuous medium), as discussed by Celotti & Rees (this volume). The field may also be responsible for confining the matter or pairs in the active regions discussed by Poutanen (this volume). * The magnetic field plays an important role, as electrons (and positrons) in its presence generate cyclo-synchrotron photons in the radio to IR range. These photons serve as seed photons in the Comptonization process, and are a crucial component in the ADAF models (Narayan et al., this volume). * Under certain conditions, synchrotron self-absorption dominates over Coulomb scattering as a thermalizing mechanism. The electrons emit and absorb cyclo-synchrotron photons, resulting in a Rayleigh-Jeans self-absorbed photon spectrum and a Maxwellian distribution for the electrons (Ghisellini, Guilbert, & Svensson 1988; Ghisellini & Svensson 1989; Ghisellini, Haardt, & Svensson 1998). The thermalization time scale is just a few synchrotron cooling times. ## 6 Larger scale phenomena Some phenomena on larger scales were discussed during the symposium. In an interesting talk, Pringle showed how the radiation from the central source may cause radiation-driven warping of the disc on larger scales. In certain cases, the inner disc may even turn upside down relative to the outer disc, resulting in self-shadowing of the radiation from the central source. The resulting ionization cone may not necessarily be aligned with the central source. Merging of galaxies containing supermassive black holes may lead to a supermassive binary black hole in the resulting galaxy. 
The question is: How does the binary black hole evolve? Artymowicz (this volume) discusses the history of the efforts to solve this problem. After some 20 years of research it now seems that dynamical friction does not cause the eccentricity of the black hole orbits to grow. The interaction of a binary black hole with a common disc may resolve the difficulty of getting black hole-black hole merging to occur in less than a Hubble time. In the process, the black holes are fed, and a periodic light curve may be observed. One such example is OJ 287, with a period of about 13 years. A black hole circling a primary black hole with an accretion disc may cause warping of the disc, as shown by Papaloizou et al. (this volume). The obvious application is the observed warping of the masering disc in the LINER NGC 4258. Another issue discussed was the survival of vortices in accretion discs. The common notion is that Keplerian shear would kill such vortices on a few rotation time scales. In Spiegel’s talk (and in Bracco et al., this volume) it was shown that coherent structures indeed form. On the other hand, in the local simulations by Brandenburg (this volume) including magnetic fields, vortices do not form. The issue of vortices, their survival and influence, is not yet settled. ## 7 Conclusion This was a most rewarding symposium, where leading scientists presented the most recent dramatic developments regarding several of the physics areas needed for realistic modelling of black hole accretion flows. These areas include radiative processes in hot plasmas, radiation transfer in hot and cold plasmas, MHD in differentially rotating gas, the origin of disc viscosity, magnetic flares, gas or MHD-simulations of flows, and so on. Some of these research lines have already merged. As each subproblem is understood, further merging provides the potential for dramatic progress in coming years. ###### Acknowledgements. The author acknowledges support from the Swedish Natural Science Research Council and the Swedish National Space Board.
# Dephasing Effects by Ferromagnetic Boundary on Resistivity in Disordered Metallic Layer ## Abstract The resistivity of a disordered metallic layer sandwiched between two ferromagnetic layers is investigated theoretically at low temperature. It is shown that the magnetic field acting at the interface does not affect the classical Boltzmann resistivity, but causes dephasing among the electrons in the presence of the spin-orbit interaction, suppressing the anti-localization due to the spin-orbit interaction. The dephasing turns out to be stronger in the case where the magnetizations of the two layers are parallel, contributing to a positive magnetoresistance close to a switching field at low temperature. 75.70.-i, 73.50.-h, 72.15.Rn, 75.70.Pa In metallic magnets the electronic transport properties can be strongly affected by the configuration of the magnetization. In particular, the resistivity close to the coercive field varies with a small magnetic field due to a rearrangement of the magnetization. This effect, called magnetoresistance (MR), has long been observed in bulk magnets. The MR in bulk magnets is anisotropic, in the sense that it depends on the mutual angle between the current and the magnetization. The change of resistivity is of the order of a few % of the total resistivity. This anisotropic MR is explained by the spin-orbit interaction in the $`d`$-band. In 1988 an MR of about 50% was found in multilayer structures of Fe and Cr. Such MR seen in multilayers, which is called giant MR (GMR), is believed to be mainly due to the spin-dependent scattering of the electron at the interface. Quite recently the MR in mesoscopic magnetic structures has been studied intensively, for instance, in sub-micron wires of ferromagnetic metals and in multilayer structures of hard and soft magnets. So far transport properties in such magnetic structures have mostly been discussed in terms of classical theories. For instance, the Boltzmann resistivity due to reflection by a domain wall has been calculated, and the result indicated a negligibly small contribution in 3$`d`$ transition metals, since walls there are thick compared with the inverse of the Fermi wavelength $`k_F^{-1}`$. However, the most significant feature of a mesoscopic system is the effect of quantum coherence among the electrons, which substantially affects the low energy transport properties in disordered systems. The interesting point in such a weakly localized case is that even a small perturbation can result in a measurable change in the resistivity of the entire sample by disturbing the coherence. Thus it is natural to expect that in mesoscopic magnets the rearrangement of the magnetization affects the quantum transport strongly. In fact it has been predicted that in a disordered wire of a metallic ferromagnet a domain wall causes dephasing among the electrons and thus decreases the quantum correction to the resistivity, in contrast to its contribution to the classical resistivity. In this paper we study theoretically the transport properties of a non-magnetic conduction layer sandwiched between two ferromagnetic layers, as shown in Fig. 3, where the $`z`$-axis is chosen perpendicular to the layer. Both the magnetization and the current are assumed to lie in the plane, in the $`x`$-direction. (Even if the magnetization is perpendicular to the current, the following result is not changed.) The calculation is based on the linear response theory. 
We assume that the metallic layer is disordered and that the resistivity is dominated by normal impurities, thus treating the effect of the magnetic layers perturbatively. We assume $`d\gtrsim \ell \gg k_F^{-1}`$, where $`d`$ is the thickness of the conduction layer and $`\ell `$ is the elastic mean free path. (We neglect the spin dependence of $`k_F`$.) The conduction electron feels a magnetic field at the interface with the ferromagnetic layers. Within the classical argument for the resistivity this magnetic field does not affect the in-plane resistivity in the case of the ideally flat interface we consider. It turns out, however, that it affects the quantum correction to the resistivity if the two spin channels are mixed by spin flip scattering. As a source of spin flip scattering we include the spin-orbit (SO) interaction, which is known to affect the quantum correction at low temperature in, for example, Cu films. The case of an isotropic SO interaction is considered. We consider a thin layer (typically $`d\sim 2`$–$`300`$ Å) and thus neglect the effect of the orbital motion due to the internal field. The effect of the magnetic layers is represented by the interaction $$H_{\mathrm{int}}=\mu _B\int d^3x\,h(z)c^{\dagger }\sigma _zc,$$ (1) where $`h(z)`$ is the magnetic field, intended to simulate the local field at the interface with the ferromagnetic layers, and $`\mu _B`$ is the Bohr magneton. The quantization ($`z`$-) axis of the electron spin is chosen along the direction of the magnetic field (i.e., the spatial $`x`$-axis). The conductivity is evaluated from the current-current correlation function, and the interaction eq. (1) is treated perturbatively to second order. In the classical transport theory the contribution is made up of self-energy (SE) and vertex correction (VC) type processes, but these two processes cancel each other in the case of a flat interface because of the symmetry. This interaction, however, has a finite effect on the quantum correction to the conductivity, since it modifies the coherence of the electron wave function. The effect on the quantum correction is discussed most conveniently in terms of the Cooperon (particle-particle ladder), which represents the enhancement of the backscattering amplitude due to the coherence. The conductivity correction is expressed in terms of the correction to the Cooperon, $`\delta \mathrm{\Gamma }`$, diagrammatically as in Fig. 3 (a) (see also eq. (9)). In the calculation of the quantum correction we neglect quantities of $`O((k_F\ell )^{-1})`$. First we consider the case without the SO interaction. In this case only $`\delta \mathrm{\Gamma }`$ with $`\sigma ^{\prime }=\sigma `$ (Fig. 3(a)) contributes. This contribution is made up of two processes, of the SE and VC type (Fig. 3(b)) (note that $`h(z)`$ is static). The bare Cooperon here (denoted by the shaded line), with the momenta of the two incoming electrons $`k`$ and $`k+q`$ ($`|k|\simeq k_F`$), behaves at $`q\ll \ell ^{-1}`$ as $`\mathrm{\Gamma }_0(q)=(Dq^2\tau +\kappa )^{-1}`$, where $`D\equiv \hbar ^2k_F^2\tau /3m^2`$ is the diffusion constant, $`\tau `$ being the elastic lifetime ($`\ell =\hbar k_F\tau /m`$), $`\kappa \equiv \tau /\tau _\phi \ll 1`$, $`\tau _\phi `$ being the inelastic lifetime, and $`N(0)`$ is the density of states at the Fermi energy. We consider below the most important contribution, which comes from the region $`|q|,|Q|\ll \ell ^{-1}`$ ($`Q`$ being the momentum transfer due to the interaction (1), which has only a $`z`$-component). It is easy to see then that the two processes cancel each other. 
In fact the summation over the electron momenta $`𝒌^{\prime }`$ and $`𝒌^{\prime \prime }`$ in the SE and VC-type processes gives rise to $`(I_{qQ})^2`$ and $`|I_{qQ}|^2`$, respectively, where $`I_{qQ}\equiv \sum _𝒌G_𝒌^+G_{𝒌+Q}^+G_{𝒌+𝒒}^{-}`$. Here $`G_𝒌^\pm `$ is the electron Green function, $`G_𝒌^{(\pm )}\equiv 1/(\pm i\hbar /2\tau -ϵ_𝒌)`$ ($`ϵ_𝒌\equiv \hbar ^2𝒌^2/2m-ϵ_F`$). The sign $`\pm `$ corresponds to the sign of the Matsubara frequency, and the difference between the SE and VC contributions is due to the difference of this sign. Since $`I_{qQ}\simeq 2\pi iN(0)\tau ^2/\hbar ^2`$ is pure imaginary, the contributions from the two processes cancel each other. Now we include the SO interaction. The spin conserving process considered in Fig. 3(b) vanishes for the same reason as before. In contrast, the correction to a Cooperon with a spin flip ($`\sigma ^{\prime }=-\sigma `$) ($`\mathrm{\Gamma }_+`$ in Fig. 3(a)) has a finite effect. This correction ($`\delta \mathrm{\Gamma }_+`$) is shown in Fig. 3(b). Other processes with two or fewer Cooperons give smaller contributions for small $`\kappa `$. Here the bare Cooperon $`\mathrm{\Gamma }_+`$ is proportional to the strength of the SO interaction, and is calculated as $$\mathrm{\Gamma }_+(q)=\frac{2\alpha }{(Dq^2\tau +\kappa )(Dq^2\tau +\kappa +4\alpha )},$$ (2) where $`\alpha \equiv \tau /3\tau _{\mathrm{so}}\ll 1`$, $`\tau _{\mathrm{so}}`$ being the inelastic lifetime due to the SO interaction. The other Cooperon in Fig. 3(a), $`\mathrm{\Gamma }_0^{\prime }`$, is $`\mathrm{\Gamma }_0`$ modified by the SO interaction; $$\mathrm{\Gamma }_0^{\prime }(q)=\frac{1}{(Dq^2\tau +\kappa )}\frac{Dq^2\tau +\kappa +2\alpha }{(Dq^2\tau +\kappa +4\alpha )}.$$ (3) In the processes in Fig. 3(b) the cancellation between SE and VC does not occur, because of the different signs arising from the two interaction vertices ($`\sigma _z`$), and they give equal contributions. In fact the sum of the first and second processes ($`\delta \mathrm{\Gamma }_{\mathrm{i},\mathrm{ii}}`$) is calculated as (the factor of 2 is from the complex conjugate process) $`\delta \mathrm{\Gamma }_{\mathrm{i},\mathrm{ii}}(q)=-2\underset{Q}{\sum }\left(\frac{\hbar }{2\pi N(0)\tau }\right)^3(\mathrm{\Gamma }_+(q))^2\mathrm{\Gamma }_+(q+Q)|h(Q)|^2\left[(I_{qQ})^2-|I_{qQ}|^2\right]`$ (4) $`=4\underset{Q}{\sum }\frac{\tau }{2\pi N(0)\hbar }(\mathrm{\Gamma }_+(q))^2\mathrm{\Gamma }_+(q+Q)|h(Q)|^2,`$ (5) where $`h(Q)`$ is the Fourier transform of $`h(z)`$, and $`[2\pi N(0)\tau /\hbar ]^{-1}`$ stands for the strength of the impurity scattering. In this way we obtain the correction to $`\mathrm{\Gamma }_+`$ as $$\delta \mathrm{\Gamma }_+(q)=4\underset{Q}{\sum }\frac{\tau }{2\pi N(0)\hbar }|h(Q)|^2C(𝒒,Q),$$ (6) where $$C(𝒒,Q)\equiv (\mathrm{\Gamma }_+(q))^2\mathrm{\Gamma }_+(q+Q)+\mathrm{\Gamma }_0^{\prime }(q)\left\{\mathrm{\Gamma }_0^{\prime }(q)\mathrm{\Gamma }_+(q+Q)-2\mathrm{\Gamma }_+(q)\mathrm{\Gamma }_0^{\prime }(q+Q)\right\}.$$ (7) Each term here corresponds to the diagrams i)-ii), iv), and iii),v), respectively. By use of this expression the quantum correction to the conductivity induced by the magnetic layers is obtained as (see Fig. 
3(a)) (including a factor of 2 coming from spin) $`\delta \sigma =\frac{2\hbar }{2\pi }\left(\frac{e\hbar \mu _B}{m}\right)^2\frac{1}{V}\underset{𝒌}{\sum }\underset{q\ll \ell ^{-1}}{\sum }k_x(k+q)_xG_𝒌^+G_𝒌^{-}G_{𝒌+𝒒}^+G_{𝒌+𝒒}^{-}\delta \mathrm{\Gamma }_+(q)`$ (8) $`\simeq \frac{8e^2}{3\pi \hbar }\left(\frac{\mu _B\tau \ell }{\hbar }\right)^2\underset{Q\parallel \widehat{z},Q\ll \ell ^{-1}}{\sum }|h(Q)|^2\frac{1}{V}\underset{q\ll \ell ^{-1}}{\sum }C(𝒒,Q).`$ (9) The effect of the magnetic layers becomes most significant when the electron coherence is kept throughout the layer thickness, i.e., $`d\lesssim \ell _\phi `$ ($`\ell _\phi \equiv \ell /\sqrt{3\kappa }`$ being the inelastic mean free path). In this case the system behaves as two-dimensional from the point of view of the coherence; namely, only the $`Q=0`$ and $`q_z=0`$ components become important, and thus $`h(Q)`$ becomes essentially a uniform magnetic field. In this case eq. (9) reduces to $`\delta \sigma =-\frac{16e^2}{3\pi \hbar }\left(\frac{\mu _Bh(Q=0)\tau }{\hbar }\right)^2\ell ^2\alpha \frac{1}{V}\underset{q_x,q_y}{\sum }\frac{1}{(Dq^2\tau +\kappa )^2(Dq^2\tau +\kappa +4\alpha )^2}`$ (10) $`=-\frac{4e^2}{3\pi ^2\hbar }\left(\frac{\mu _Bh(0)\tau }{\hbar }\right)^2\frac{\alpha F(\kappa ,\alpha )}{d\kappa ^3},`$ (11) where $$F(\kappa ,\alpha )\equiv \frac{3}{16}\frac{\kappa ^3}{\alpha ^2}\left[\frac{1}{\kappa }+\frac{1}{\kappa +4\alpha }-\frac{1}{2\alpha }\mathrm{ln}\left(1+\frac{4\alpha }{\kappa }\right)\right].$$ (12) In the case of a weak SO interaction ($`\alpha \ll \kappa `$), $`F(\kappa ,\alpha )=1+O(\alpha /\kappa )`$. To proceed further we need the explicit profile of $`h(Q)`$. We consider two cases, in which the magnetizations of the two ferromagnetic layers are parallel (P) or anti-parallel (AP) to each other (Fig. 3). Choosing $`z=0`$ as the center of the conduction layer, we assume the effective magnetic field at the interface is written as $$h^\pm (z)=h_0a(\delta (z-d/2)\pm \delta (z+d/2)),$$ (13) for the P and AP cases (denoted by $`h^+`$ and $`h^{-}`$, respectively) ($`h_0`$ is a constant which represents the strength of the field and $`a`$ is the scale of the penetration of the effective field, which is of the order of a lattice constant). The Fourier transforms of $`h^\pm (z)`$ are obtained as $`h^+(Q)\equiv (1/d)\int _{-d/2}^{d/2}h^+(z)e^{iQz}dz=(h_0a/d)\mathrm{cos}(Qd/2)`$ and $`h^{-}(Q)=i(h_0a/d)\mathrm{sin}(Qd/2)`$, where $`Q`$ takes the values $`Q=\pi n/d`$, $`n`$ being an integer from $`-N/2`$ to $`N/2`$ ($`N\equiv d/a\gg 1`$ is the number of atomic layers). The magnitude of the MR due to the flip of the magnetization is then written as $`\mathrm{\Delta }\rho /\rho _0=-(\delta \sigma ^+-\delta \sigma ^{-})/\sigma _0`$, where $`\delta \sigma ^\pm `$ denotes the quantum correction for the configuration $`h^\pm `$ and $`\rho _0=\sigma _0^{-1}\simeq 6\pi ^2(\hbar /e^2k_F^2\ell )`$ is the resistivity due to impurities. 
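Eq. (12) is simple to evaluate numerically. The sketch below (Python; ours, not from the original paper) checks the weak-SO limit $`F=1+O(\alpha /\kappa )`$ quoted above and evaluates $`F`$ at $`\alpha /\kappa =0.5`$, the ratio used in the estimates that follow:

```python
import math

def F(kappa: float, alpha: float) -> float:
    """The dimensionless function F(kappa, alpha) of eq. (12)."""
    return (3.0 / 16.0) * (kappa**3 / alpha**2) * (
        1.0 / kappa
        + 1.0 / (kappa + 4.0 * alpha)
        - math.log(1.0 + 4.0 * alpha / kappa) / (2.0 * alpha)
    )

kappa = 1e-2
for ratio in (1e-3, 1e-2, 1e-1, 0.5):
    print(f"alpha/kappa = {ratio:6.3f}  ->  F = {F(kappa, ratio * kappa):.4f}")
# F -> 1 as alpha/kappa -> 0, while F(kappa, kappa/2) ~ 0.18, so at
# alpha/kappa = 0.5 the spin-orbit term reduces the correction several-fold.
```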
In the case of $`d\lesssim \ell _\phi `$ considered in eq. (11), the MR close to the switching field is obtained as $$\frac{\mathrm{\Delta }\rho }{\rho _0}=2\left(\frac{\mathrm{\Delta }_0}{ϵ_F}\right)^2\left(\frac{a}{d}\right)^2\frac{\ell }{d}\frac{\alpha }{\kappa ^3}F(\kappa ,\alpha ),$$ (14) where $`\mathrm{\Delta }_0\equiv \mu _Bh_0`$ is the Zeeman splitting at the interface, and $`\mathrm{\Delta }_0(a/d)`$ is a measure of the averaged splitting. If the magnetizations of the two ferromagnetic layers are AP to each other in the absence of an external magnetic field (as can be realized by controlling $`d`$), the positive $`\mathrm{\Delta }\rho `$ obtained here contributes to a positive MR close to a switching field, where the magnetization flips. The MR in real experiments is affected also by interface roughness and spin-dependent scattering there. The effect of the dephasing considered here can be separated from such effects by looking at the temperature dependence of $`\mathrm{\Delta }\rho `$. In fact, as the temperature is lowered $`\kappa `$ decreases, since dephasing due to phonons and the electron-electron interaction is suppressed (by a power law, $`\kappa \propto T^p`$, $`p\sim O(1)`$). Then $`\mathrm{\Delta }\rho `$ due to the dephasing will become large according to eq. (14), while the resistivity change due to other, classical origins varies little at very low temperature. Consider a layer with $`d=100`$ Å and $`\ell =30`$ Å, and take the effective exchange splitting of the $`s`$ electron at the interface as $`\mathrm{\Delta }_0/ϵ_F\simeq 3\times 10^{-2}`$. Then if $`\kappa =10^{-2}`$, which means that the inelastic diffusion length is $`\ell _\phi \simeq 5.7\ell `$, the system behaves as two-dimensional. If $`\alpha /\kappa \simeq 0.5`$ we then obtain $`\mathrm{\Delta }\rho /\rho _0=0.5\times 10^{-3}`$. For a material with a larger induced Zeeman splitting, $`\mathrm{\Delta }_0`$, we expect a bigger effect. Let us discuss the case where the conduction layer is a soft ferromagnet in contact with a hard ferromagnet, as realized by use of NiFe/CoSm. In such a system the magnetization at the interface is fixed by the hard magnet at small fields. Thus an artificial twisted structure of the magnetization, similar to a domain wall, can be formed inside the soft magnetic layer by applying a magnetic field. Although the configuration of the magnetization differs from the case of a non-magnetic layer in contact with ferromagnets considered above, the argument goes through in parallel. In fact, by use of a local gauge transformation, the correction to the in-plane conductivity turns out to be given by eq. (9) with the replacement $`|\mu _Bh(Q)|^2\to (1/12)(\hbar ^2k_F/m)^2|a(Q)|^2`$, where $`a(Q)\equiv (1/d)\int dz\,e^{iQz}\partial _z\theta (z)`$, $`\theta (z)`$ being the angle which describes the direction of the magnetization. (Here we assumed $`\mathrm{\Delta }\tau /\hbar \ll 1`$, $`\mathrm{\Delta }`$ being the Zeeman splitting in the soft magnetic layer.) If $`\theta `$ changes uniformly from $`0`$ to $`\theta _0`$ inside the conduction layer, i.e., $`\partial _z\theta =\theta _0/d`$, then the quantum correction in the case of $`d\lesssim \ell _\phi `$ is obtained (for $`\alpha \ll \kappa `$) as $$\frac{\mathrm{\Delta }\rho }{\rho _0}=\frac{2\theta _0^2}{3}\frac{\ell }{k_F^2d^3}\frac{\alpha }{\kappa ^3}.$$ (15) This is positive, and increases for a larger twist angle, $`\theta _0`$. A measurement on NiFe(300Å)/CoSm indicated an increase of the resistivity, after subtracting the anisotropic MR, as $`\theta _0`$ increases. 
The change was about 0.08% of the total resistivity ($`9.6\mu \mathrm{\Omega }`$cm) at 5 K. In Ref. , the effect of a strongly spin-dependent lifetime in ferromagnets is suggested as a possible origin of the observed $`\mathrm{\Delta }\rho `$. However, as eq. (15) indicates, the dephasing due to the twisted magnetization may contribute to a positive $`\mathrm{\Delta }\rho `$ at low temperature. In fact, if the inelastic mean free path is not very long (e.g., $`\kappa \simeq 0.1`$) (the mean free path is estimated as $`\ell \simeq 30`$ Å), the expected effect due to dephasing is $`\mathrm{\Delta }\rho /\rho _0=0.08`$% for $`\alpha /\kappa \simeq 0.5`$. In Ref. an increase of $`\mathrm{\Delta }\rho `$ was observed as the temperature is lowered to 2 K, although $`\mathrm{\Delta }\rho `$ itself still exists at a high temperature of 100 K. This enhancement at low temperature might be due to the dephasing. In conclusion, we have studied, on the basis of the linear response theory, the effects of the magnetic layers on the in-plane resistivity of a disordered conduction layer sandwiched between two ferromagnetic layers with an ideally flat interface. It has been shown that while the magnetization at the interface with the magnetic layers does not affect the classical resistivity, it affects the quantum correction to the resistivity at low temperature in the presence of the spin-orbit interaction, resulting in a larger resistivity for the parallel configuration of the magnetizations of the two ferromagnetic layers. The authors thank K. Mibu for valuable discussions. This work was partially supported by a Grant-in-Aid for Scientific Research on the Priority Area “Nanoscale Magnetism and Transport” (No.10130219) and “Spin Controlled Semiconductor Nanostructures” (No.10138211) from the Ministry of Education, Science, Sports and Culture.
# Folding Lennard-Jones proteins by a contact potential ## I Introduction One of the most challenging open problems in computational molecular biology is the prediction of a protein’s structure from its amino acid sequence – the so-called protein folding problem. Assuming that the folded state is thermodynamically stable, the problem can be formulated as follows: for a given sequence of amino acids and interactions between them, find the conformation of minimal energy. A successful prediction is marked by the experimental validation of the structure. Such a prediction is still unfeasible. In order to tackle the protein folding problem from a theoretical point of view, in practice one has to choose: 1) a representation of protein structure; 2) an approximation for the energy; 3) an algorithm to minimize the energy (i.e. to find the lowest energy structure for a given amino acid sequence). A conceptually straightforward approach is to perform Molecular Dynamics simulations of fully detailed all-atom models. However, numerical integration of Newton’s equations is unrealistic with present computers on the time-scale of the folding process. Even more problematic is the choice of the energy function. An imperfect parameterization of the classical effective interactions between atoms could correspond to an energy landscape with minima unrelated to the desired ones. The true energy function which dictates the folding of proteins is unknown, and one must resort to simple approximations. A fairly commonly used approximation, that of contact energies, was recently proved to be too crude to successfully fold real proteins, whose native fold is stabilized by the “true” potential function. It is not known whether this failure is caused by the extreme complexity of the true potential, or by the oversimplification inherent in the contact energy approximation. Resolving this issue is the main purpose of this work. To do this, we pose and study in detail a very simple question: > Can artificial “proteins”, whose constituent residues interact by a Lennard-Jones pairwise potential, be folded successfully by a pairwise contact potential? The aim of such a study is to uncover some of the general problems which arise when one is forced to resort to energy approximations. The Weizmann group has developed the contact map approach to protein folding. The contact map of a protein with $`N`$ residues is an $`N\times N`$ matrix $`𝐒`$, whose elements are defined as $`S_{ij}`$=1 if residues $`i`$ and $`j`$ are in contact, and 0 otherwise. Contact between two residues can be defined in different ways; one is to consider two amino acids in contact when their two $`C_\alpha `$ atoms are closer than some threshold (they used 8.5 Å). Another definition is based on the minimal distance between two ions that belong to the two residues. The central premise of this approach is that the contact map representation has an important computational advantage: changing a few contacts in a map induces rather significant large-scale coherent moves of the corresponding polypeptide chain. In order to carry out this program they had to solve two problems: 1. finding an efficient procedure to execute a search in contact map space; 2. developing a test to determine whether the resulting maps correspond to physically realizable conformations. Using these techniques, they have demonstrated that the simple contact energy approximation is unsuitable for assigning the lowest energy to the native state, even for a single protein. 
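To make the contact map representation concrete, here is a minimal sketch (Python; ours, not the Weizmann group’s actual implementation) that builds $`𝐒`$ from a set of $`C_\alpha `$ coordinates using the 8.5 Å threshold; excluding pairs with $`|i-j|\le 2`$ as trivial is our convention here, not something fixed by the text:

```python
import numpy as np

def contact_map(coords: np.ndarray, threshold: float = 8.5) -> np.ndarray:
    """Contact map S from an (N, 3) array of C-alpha coordinates (Angstrom).

    S[i, j] = 1 if residues i and j are closer than `threshold`, else 0.
    Pairs with |i - j| <= 2 are excluded as trivial (our convention).
    """
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    s = (dist < threshold).astype(int)
    idx = np.arange(len(coords))
    s[np.abs(idx[:, None] - idx[None, :]) <= 2] = 0
    return s

# A fully stretched chain with 3.8 A virtual bonds has no such contacts:
chain = np.array([[3.8 * i, 0.0, 0.0] for i in range(30)])
assert contact_map(chain).sum() == 0
```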
This last result highlights the fact that the bottleneck in the successful prediction of protein structure is the choice of the approximation for the energy. The Sissa group has pursued a different strategy, developing minimalist off-lattice models of proteins. By studying toy protein models they were able to reproduce the essential features of the real folding process in terms of stability and accessibility (first of all the typical “funnel” structure of the energy landscape, which has been shown to play a critical role). They found that the case of a $`C_\alpha `$ chain with 4 species of amino acids, equipped with a suitable design procedure, is effective in representing global features of the energy landscape. In their model, the potential is the simplest generic off-lattice potential that maintains chain connectivity, provides an attractive interaction between monomers and ensures self-avoidance: a polynomial potential between beads that are nearest neighbors along the chain (to represent the $`C_\alpha `$-$`C_\alpha `$ virtual bond) and a Lennard-Jones (LJ) potential among all other beads. This off-lattice toy model of proteins has been a suitable tool to test potential extraction techniques; on the other hand, the study of its dynamical properties can provide important insights into the mechanisms acting in real proteins. In this work, we investigate whether the contact map representation is a suitable approximation to the LJ model of proteins; that is, whether there exists a suitable choice of contact energy parameters for which the ground states of a pairwise contact potential constitute a good approximation to the lowest energy states of the LJ potential. We also discuss whether dynamics in contact map space, used in combination with Molecular Dynamics, can, at least in the simple case presented here, be successfully used to perform energy minimization.

## II Protein Design with Lennard-Jones Potentials

### A Definition of the Off-Lattice Model

In this first part of the study we represent a protein by a chain of $`N`$ monomers or beads, representing the $`C_\alpha `$ atoms of the amino acids. A configuration $`𝐫`$ of the chain is defined by the coordinates $`𝐫_1,\dots ,𝐫_N`$ of each $`C_\alpha `$ atom in three-dimensional continuous space. We consider effective pairwise energies between amino acids. The interaction potential $`V_{i,j}`$ between the monomers $`i`$ and $`j`$ is the sum of a covalent bond term plus a LJ term:

$$V_{i,j}=\delta _{i,j-1}f(r_{ij})+(1-\delta _{i,j-1})V_{ij}^{LJ}$$ (1)

where

$$V_{ij}^{LJ}=\eta _{ij}\left[(\frac{\sigma _{ij}}{r_{ij}})^{12}-(\frac{\sigma _{ij}}{r_{ij}})^6\right],$$ (2)

and $`r_{ij}=|𝐫_i-𝐫_j|`$ are the interparticle distances. For the bond energy function $`f(r_{ij})`$ we choose the expression:

$$f(x)=a(x-d_0)^2+b(x-d_0)^4,$$ (3)

that is, the fourth-order expansion of a generic and symmetric function of the distance $`x`$. The parameters $`a`$ and $`b`$ are taken to be $`1`$ and $`100`$ respectively. We add a quartic term to the usual quadratic one because a plain harmonic potential could induce energy localization in some specific modes, significantly increasing the time needed for equilibration. The parameter $`d_0`$ represents the equilibrium distance of the nearest neighbors along the chain. We set $`d_0=3.8`$, as this is the experimental value for the mean distance between nearest neighbor $`C_\alpha `$ atoms along the chain in real proteins, as taken from the Protein Data Bank.
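For reference, a direct transcription of the potential of Eqs. (1)–(3) might look as follows (a sketch using the stated parameter values $`a=1`$, $`b=100`$, $`d_0=3.8`$; storing $`\eta _{ij}`$ and $`\sigma _{ij}`$ as full $`N\times N`$ arrays is our own layout choice, not prescribed by the paper):

```python
import numpy as np

def bond_energy(r, a=1.0, b=100.0, d0=3.8):
    # Quartic bond potential f(x) = a (x - d0)^2 + b (x - d0)^4 of Eq. (3)
    x = r - d0
    return a * x**2 + b * x**4

def lj_energy(r, eta, sigma):
    # Lennard-Jones term of Eq. (2) for one pair at distance r
    sr6 = (sigma / r) ** 6
    return eta * (sr6**2 - sr6)

def chain_potential(coords, eta, sigma):
    """Potential part of Eq. (1): bond term for chain neighbors,
    LJ term for all other pairs; eta, sigma are (N, N) parameter arrays."""
    N = len(coords)
    E = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            r = np.linalg.norm(coords[i] - coords[j])
            E += bond_energy(r) if j == i + 1 else lj_energy(r, eta[i, j], sigma[i, j])
    return E
```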
The total energy of a chain in conformation $`𝐫`$ is defined as:

$$E=\sum _{i=1}^{N}\frac{𝐩_i^2}{2}+\sum _{i=1}^{N}\sum _{j>i}V_{ij}.$$ (4)

The first term is the classical kinetic energy of the chain, where the $`𝐩_i`$’s are the canonical momenta conjugate to the $`𝐫_i`$’s.

### B First step: generation of off-lattice native-like structures

Off-lattice, there is an infinite number of conformations, almost all of which are not designable (i.e. there is no sequence which has the conformation as its ground state). We aim at working with dynamically accessible, designable and compact structures. We present here an outline of the procedure; further details can be found in Ref. . The simplest way to obtain such configurations is to initially deal with homopolymer chains, setting the parameters $`\eta _{ij}=\eta `$ and $`\sigma _{ij}=\sigma `$ of Eq. (2) for all the beads in the chain. By cooling such a chain below the $`\theta `$ temperature (it is known that, as the temperature is varied, a homopolymer chain presents two different phases according to the dominance of the attractive or repulsive interaction energy, as is the case with the LJ potential; the transition temperature is usually called the $`\theta `$ temperature) we obtain maximally compact configurations. We fix the parameters to the constant values $`\eta =40`$ and $`\sigma =6.5`$. The parameter $`\eta `$ determines the energy scale, while $`\sigma `$ determines the interaction length between monomers. Such a value for $`\sigma `$ ensures that, in practice, two monomers significantly attract each other for distances smaller than $`8`$ Å, as we show in §IV. Such a distance threshold is usually used for the inter-amino acid bond, and is determined by the requirement that the average number of $`C_\alpha C_\alpha `$ contacts for each amino acid is roughly equal to the respective numbers obtained with the all-atom definition of contacts . We used Molecular Dynamics (MD) (integrating Newton’s laws of motion on a computer) to simulate the kinetics of the chains. We employed an efficient and precise symplectic algorithm , in which one varies the energy density $`ϵ=E/N`$, which is related to the temperature . In order to find low energy structures of the chain, we used a combination of MD and slow cooling. Starting with different, randomly selected, initial conditions we find different low energy configurations for the chain, since the minimum energy state of a homopolymer is largely degenerate. For each MD simulation the chain is equilibrated at its initial energy for 8000 integration steps. We estimate the temperature of the system by computing the average value of the kinetic energy of the chain . Then, after these 8000 steps, we slowly decrease the temperature by rescaling each component of the momenta by the same factor ($`<1`$); we use the value $`0.8`$ as the cooling factor. The chain is then again equilibrated at the new temperature and the procedure is iterated until very low temperatures are reached. At low temperature the chain is “trapped” in the compact configuration of minimum energy. We ran this algorithm $`6`$ times and obtained $`6`$ different native-like structures for chains of length $`N=30`$, whose coordinates will be denoted by $`𝐫_\alpha ^{*}`$ ($`\alpha =1,\dots ,6`$).
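The cooling protocol itself is simple enough to be summarized in a few lines. The sketch below is our illustration: it uses a generic velocity-Verlet step with unit masses (as implied by the kinetic term of Eq. (4)) in place of the symplectic integrator of the reference, and `force_fn`, the time step and the number of cooling blocks are assumed placeholders; only the 8000-step equilibration and the 0.8 momentum-rescaling factor are taken from the text:

```python
import numpy as np

def slow_cool(coords, momenta, force_fn, n_blocks=40, block_steps=8000,
              dt=1e-3, cooling_factor=0.8):
    """Equilibrate for block_steps velocity-Verlet steps, rescale all
    momenta by cooling_factor (i.e. lower the temperature), and repeat."""
    f = force_fn(coords)
    for _ in range(n_blocks):
        for _ in range(block_steps):        # equilibration at the current energy
            momenta = momenta + 0.5 * dt * f
            coords = coords + dt * momenta  # unit masses: velocity = momentum
            f = force_fn(coords)
            momenta = momenta + 0.5 * dt * f
        momenta = momenta * cooling_factor  # slow cooling step
    return coords, momenta
```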
We measure the difference between two structures of equal length $`N`$ using the root mean square (RMS) distance $`D`$:

$$D(𝐫,𝐫^{\prime })=\sqrt{\frac{1}{N}\sum _{i=1}^{N}(𝐫_i-𝐫_i^{\prime })^2}$$ (5)

where one structure is translated and rotated to get a minimal $`D`$. The standard procedure, described in Ref. , was used. Two structures are considered different if their distance $`D`$ is larger than $`1`$ Å. The mean distance among our 6 different native structures is about $`5`$ Å.

### C Second step: protein design

At this point we need to select, for each chosen compact structure, a sequence having its energy minimum on this structure. Furthermore, it should be able to reach the selected structure in an accessible time. In order to generate such sequences we use the design procedure of Ref. . We use $`4`$ types of amino acids, thus we deal with $`10`$ different parameters $`\eta _{ij}`$ ($`i,j=1,\dots ,4`$). We set these parameters by hand, with the constraint that the inequalities $`2\eta _{\alpha \beta }<\eta _{\alpha \alpha }+\eta _{\beta \beta }`$ ($`\alpha \ne \beta `$) are satisfied. The following values have been used: $`\eta _{1,1}=40`$, $`\eta _{1,2}=\eta _{2,1}=30`$, $`\eta _{1,3}=\eta _{3,1}=20`$, $`\eta _{1,4}=\eta _{4,1}=17`$, $`\eta _{2,2}=25`$, $`\eta _{2,3}=\eta _{3,2}=13`$, $`\eta _{2,4}=\eta _{4,2}=10`$, $`\eta _{3,3}=5`$, $`\eta _{3,4}=\eta _{4,3}=2`$, $`\eta _{4,4}=1`$. We fix an identical amino acid composition for all the sequences, as well as the values of each matrix element $`\eta _{ij}`$. Then we select the optimal sequence for a target structure using a Monte Carlo algorithm in sequence space. Starting with a random sequence, at fixed composition, we perform random permutations, accepting or rejecting new sequences according to the Boltzmann factor $`P=\mathrm{exp}(-\mathrm{\Delta }E/k_BT)`$, where $`\mathrm{\Delta }E`$ is the energy variation due to the permutation and $`T`$ is the “temperature” parameter of the Monte Carlo optimization scheme ($`k_B`$ is set equal to 1). By slowly lowering the parameter $`T`$ we select the sequence that has the lowest energy in the target conformation. In such a way we modify the energy landscape: out of the multidegenerate minimum energy scenario of the homopolymer we select a single minimum configuration, lowering its energy by designing an appropriate sequence. This method of selecting the sequence cannot guarantee that the target structures are indeed global energy minima. To check the obtained sequences we slowly cool (as in §II B) each selected sequence about 50 times (each time starting from a different initial condition). The cooling simulations find, as the lowest energy configurations, the same structures as before. Hence, these target structures are probably the lowest energy structures for the selected sequences, at least among the dynamically accessible ones, and this result is confirmed by the simulation in contact map space presented below. This repeated cooling process also generates a set of alternative, higher energy, metastable configurations (typically 2–3 for each sequence) that, if perturbed, “decay” to the corresponding global minimum configuration.

## III Derivation of a set of pairwise contact energy parameters

In the pairwise contact approximation the energy is written as

$$E^{pair}(𝐚,𝐒,𝐰)=\sum _{i<j}^{N}𝐒_{ij}w(a_i,a_j).$$ (6)

We denote by $`𝐚`$ the sequence of amino acids, by $`𝐒`$ the conformation (represented by its contact map) and by $`𝐰`$ the set of energy parameters.
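In code, the approximation of Eq. (6) is a single sum over the contacts of a map. A sketch (our illustration; representing the 10 distinct pair energies as a symmetric 4 x 4 matrix `w` is a layout choice, not prescribed by the paper):

```python
def pairwise_contact_energy(seq, S, w):
    """E^pair of Eq. (6): sum of w[a_i, a_j] over all contacts with j > i.

    seq: sequence of species indices (0..3); S: binary contact map;
    w: symmetric 4 x 4 matrix of contact energy parameters.
    """
    N = len(seq)
    E = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            if S[i][j]:
                E += w[seq[i]][seq[j]]
    return E
```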
If there is a contact between residues $`i`$ and $`j`$, the parameter $`w(a_i,a_j)`$, which represents the energy gained by bringing amino acids $`a_i`$ and $`a_j`$ into contact, is added to the energy. Two amino acids are said to be in contact if their $`C_\alpha `$ atoms are closer than an upper threshold distance $`R_U`$ whose value will be discussed below. We now explain in detail the approximation involved in Eq. (6). Denote by $`𝒞`$ a micro-state of the system. In general the micro-state is specified by the coordinates of all the atoms of the protein and of the water molecules of the solvent. In the present case, we are considering a Lennard-Jones model of a $`C_\alpha `$ chain, so a micro-state is completely specified by the coordinates of the $`N`$ monomers comprising the chain. Since many microscopic conformations share the same contact map $`𝐒`$, it is appropriate to define a free energy $`\mathcal{F}(𝐚,𝐒)`$ associated with this sequence and map:

$$\mathrm{Prob}(𝐒)\propto e^{-\mathcal{F}(𝐚,𝐒)}=\sum _{𝒞}e^{-E(𝒞)/k_BT}\mathrm{\Delta }(𝒞,𝐒)$$ (7)

where $`\mathrm{\Delta }(𝒞,𝐒)`$=1 if $`𝐒`$ is consistent with $`𝒞`$ and $`\mathrm{\Delta }=0`$ otherwise; i.e. $`\mathrm{\Delta }`$ is a “projection operator” which ensures that only those configurations whose contact map is $`𝐒`$ contribute to the sum (7). For real proteins $`E(𝒞)`$ is the unknown “true” microscopic energy; here it is the Lennard-Jones energy of Eq. (1). Since it is impossible to evaluate this exact definition of the free energy of a map, we resort to a phenomenological approach, guessing the form of $`\mathcal{F}(𝐚,𝐒)`$ that would have been obtained had the sum (7) been carried out. $`E^{pair}(𝐚,𝐒,𝐰)`$ of Eq. (6) is the simplest approximation to the true free energy. To test the extent to which this approximate form is capable of stabilizing the native map of a protein against other non-native maps, we must specify the parameters $`𝐰`$. To derive the set $`𝐰`$ of pairwise contact energy parameters we require that the following set of conditions be satisfied for all the proteins in the database:

$$E^{pair}(𝐚,𝐒^{*},𝐰)<E^{pair}(𝐚,𝐒_\mu ,𝐰).$$ (8)

Here $`𝐒^{*}`$ is the contact map of the native conformation of sequence $`𝐚`$ and $`𝐒_\mu `$ ($`\mu =1,\dots ,N_D`$) are contact maps taken from a set of $`N_D`$ alternative conformations. The problem is to find a set $`𝐰`$ such that for each decoy $`\mu `$ the inequality (8) is satisfied. The perceptron learning technique is used to search for contact energy parameters $`𝐰`$ for which the set of conditions (8) is satisfied. The set $`𝐰`$ that gives the optimal solution of this problem maximizes the gap between the ground state and the first excited state, as shown in the Appendix.

### A Generation of alternative conformations

To obtain a large ensemble of decoys we performed dynamics in contact map space, collecting maps along the trajectory. The method is presented elsewhere ; here we outline only the details that are relevant for the present implementation. Our algorithm is divided into four steps.

1. We start from an existing map and perform large scale “cluster” moves. At this stage, no attempt is made to preserve physicality. The contact map which is obtained by this procedure is typically uncorrelated to the starting one.
2. The resulting map is refined by using local moves.
3. We use the reconstruction algorithm to restore physicality by projecting the map obtained from the second step onto the physical subspace.
4. We perform a further optimization by an energy minimization in real space using a standard Metropolis crankshaft technique .
Using this algorithm, after the choice of a suitable definition of contact (as detailed in § IV B and § IV C), we generated a set of $`N_D`$=60000 alternative conformations, 10000 for each of the 6 sequences in the database. As discussed in Ref. , the contact maps that are obtained by contact map dynamics are uncorrelated, and the $`N_D`$ decoys form a representative set of low energy competitors for the native state. The important requirement is to generate a large set of uncorrelated decoys, since a good energy function must stabilize the correct structure and destabilize all the others. This selection of candidates was performed using the set of LJ parameters $`\eta _{ij}`$ as a first guess for the pairwise contact energy parameters $`𝐰`$.

## IV Square well approximation of the LJ potential

### A Truncation of the LJ potential

In the definition of contact one has to specify the upper threshold $`R_U`$ of the distance between two beads, below which they are considered to be in contact. To get an estimate of a reasonable value of $`R_U`$ we studied the truncation of the LJ potential. Using MD, we analyzed the stability of the folded conformations against a cut in the tail of the LJ potential. We used a potential equal to the Lennard-Jones function for $`r`$ smaller than $`R_C`$, and equal to zero for $`r`$ larger than $`R_C`$:

$$V_{LJ}^{\prime }(r_{ij})=\{\begin{array}{cc}\eta _{ij}((\frac{\sigma }{r_{ij}})^{12}-(\frac{\sigma }{r_{ij}})^6)\hfill & \mathrm{if}\;r_{ij}<R_C\hfill \\ & \\ 0\hfill & \mathrm{if}\;r_{ij}>R_C\hfill \end{array}$$ (9)

For a given $`R_C`$, we determined the ground state conformations of the 6 proteins in the database. This was done by performing $`n`$ runs of Molecular Dynamics minimization (by slow cooling, as described in §II B, with typically $`n=50`$). For each protein, each minimization ended in a slightly different structure $`𝐫_\alpha ^\nu (R_C)`$ ($`\alpha =1,\dots ,6`$, $`\nu =1,\dots ,n`$). We measured the RMS distance $`D(R_C)`$, averaged over the 6 proteins in the database, between the native conformations and the conformations found as minima of the potential of Eq. (9):

$$D(R_C)=\frac{1}{6n}\sum _{\alpha =1}^{6}\sum _{\nu =1}^{n}D(𝐫_\alpha ^{*},𝐫_\alpha ^\nu (R_C)).$$ (10)

In Fig. 1 we show the results for each protein, for several choices of the cutting length $`R_C`$. In the inset, the average distance $`D(R_C)`$, with its standard deviation, is shown as a function of the cutting length $`R_C`$. Remarkably, it is possible to cut the LJ potential inside the attractive well down to $`R_C`$=8 Å, still keeping the average distance below 1 Å.

### B Derivation of the region of physicality

For any value of $`R_U`$ the contact map $`𝐒`$ of any chain conformation can be easily determined. In order to construct decoys, however, we must deal with the inverse problem: to get a chain conformation starting from some contact map $`𝐒`$. In order to apply the reconstruction procedure introduced in Ref. to a possibly non-physical contact map, one has to specify another length: the hard core distance $`R_L`$, defined as the minimal allowed distance between two amino acids. This is essential since without such a lower cutoff we would be able to construct chain configurations for maps with much higher numbers of contacts than the native maps and hence would identify such dense decoys as physical.
Since the contact energies tend to be negative, such decoys would have much lower energies than the native map, and we would never be able to find a solution to the inequalities of Eq. (12). In Fig. 2 we show the histogram of all the pairwise distances for the six native conformations considered in this study. Apart from the peak at 3.7 Å, due to consecutive residues along the chain, the distances range from 6.1 to 18.8 Å. The vertical line indicates 6.9 Å, a distance whose role will be discussed below. There are 2436 distances between non-consecutive beads, of which 49 are below 6.9 Å and 7 below 6.5 Å. We now consider the plane $`(R_U,R_L)`$ and set about determining the “region of physicality”, which is that part of the plane whose points allow reconstruction of the contact maps that correspond to the six native folds. To understand the problem we are facing, one should bear in mind that once we choose a value for $`R_U`$, the contact map associated with a conformation is determined. When, however, we try to reconstruct one of these maps, we must also specify a value for $`R_L`$, and if the chosen value is too high, we may find that no corresponding chain configuration exists. In this case, according to our definition, the native contact maps are non-physical. The gross features of such a region can be outlined by some simple arguments. First, since we must have $`R_U>R_L`$, the region of physicality must clearly lie below the line $`R_U=R_L`$. Second, if $`R_L`$ is too high, the contact maps are non-physical, and there must be a (possibly horizontal) line that bounds the physical region from above. The exact location of this line must be determined numerically. For $`R_L<6.1`$ Å it is clear that all the native contact maps are physical, but for $`R_L>6.1`$ Å the situation is not clear. It may be possible to “catch” the few “close” contacts (those with $`d<6.9`$ Å) by locally stretching the chain. We proved numerically that this is indeed possible for $`R_L`$ up to about 6.7 Å, as shown in Fig. 3a and in more detail in Fig. 3b. The physical region is the shaded trapezoid open on its right side; for points in this region we are able to reconstruct chain conformations for all the six native maps (derived from their “true” chain conformations by the choice of $`R_U`$).

### C Derivation of the region of learnability

We turn now to determine a second region in the $`(R_U,R_L)`$ plane, that of “learnability”. At a point $`(R_U,R_L)`$ in this region it is possible to find a set of energy parameters $`𝐰`$ such that each of the six native maps has lower energy than all of its respective decoys. As was done above for the region of physicality, it is possible to predict the general shape of the region of learnability as well. First, it is limited by the same bisecting line $`R_U=R_L`$. Second, there is a vertical line at $`R_U=6.1`$ Å, to the left of which all the contact maps are empty; we are not interested in such a case. Third, there is a second vertical line, at $`R_U=19`$ Å, to the right of which all the contact maps are filled. This case is not interesting either. Fourth, there must be a line (approximately horizontal) below which the energy parameters are unlearnable. We expect this since for small values of $`R_L`$ amino acids are allowed to be very close to each other. Very compact conformations are possible and, since the interaction parameters are on the average attractive, such compact conformations have very low energy. The exact location of this line must be determined numerically. The result is shown in Figs.
3a and 3b. For each choice of $`R_U`$ and $`R_L`$, we generated by contact map dynamics a set of $`N_D=60000`$ decoys, 10000 for each sequence in the database. Using the perceptron algorithm, we determined whether such a set of decoys was learnable or not. The outcome of the procedure is the triangular shaded region in Fig. 3a, whose lower edge is shown in detail in Fig. 3b. The most interesting result is that the region of physicality and the region of learnability do not overlap. It is not possible to choose $`(R_U,R_L)`$ such that the contact maps of the native LJ conformations in the database are physical and, at the same time, can be stabilized by a pairwise contact potential. Two observations must be made. First, the points labeled as “unlearnable” are rigorous, whereas those labeled as “learnable” are tentative: it is possible that, by increasing the number of decoys, those points will also become unlearnable. The lower edge of the region of learnability could therefore actually move upwards. The same effect should be expected when the number of proteins in the database is increased. Second, the points labeled as “physical” are rigorous, whereas those labeled as “non-physical” are tentative: we could not reconstruct the corresponding contact maps, but this failure could be due to our reconstruction algorithm. Thus, the upper edge of the region of physicality could also move upwards. However, by increasing the number of proteins in the database we expect this upper edge to move downwards. In the limit of a large number of proteins the latter effect could possibly dominate. In conclusion, we argue that the gap between the region of learnability and the region of physicality is expected to widen when many proteins are taken into account.

## V Folding in contact map space

Since we have proved that the two regions, of learnability and of physicality of the native LJ contact maps, do not overlap, we can conclude that the answer to the question posed in the Introduction is negative: the contact energy approximation is unable to stabilize the native folds of LJ chains. The contact map approximation is too crude even for an extremely simplified potential such as the LJ potential used in this work. The situation is, however, more subtle. We can look for contact energies that give rise to approximate solutions of the folding problem: it is impossible to recover the LJ native configurations using the contact map approximation, but we want to investigate how “close” to the solution the contact map approximation can lead. To tackle this point, we selected a working point in the learnable region. We chose $`R_U`$=8 Å, according to the minimal value of $`R_C`$ determined in Sec. IV A, and $`R_L=6.9`$ Å. As shown above, for this value of $`R_L`$, native maps are non-physical. In other words, for a given native map there is no chain which can realize that contact map. We found, however, that for these $`R_L`$ and $`R_U`$ it is possible to reconstruct conformations whose maps are physical and typically differ by only a few contacts from the native map. Moreover, the corresponding 3D structures are, on the average, at an RMS distance of less than 2 Å from the native LJ conformations. We decided on $`R_L=6.9`$ Å as the best possible compromise between solving Eq. (8) and our aim that some physical maps of low energy be close to the native maps. Now we turn to answer the following important question: is the set $`𝐰`$ obtained this way useful to fold LJ proteins?
The first aspect of this general question concerns the feasibility of using contact map dynamics. That is, we ask:

> Is it possible to fold the sequences in the database using contact map dynamics?

Fully successful folding in contact map space corresponds to exact recovery of the native map. Since for $`R_L=6.9`$ Å and $`R_U=8.0`$ Å the native contact maps are non-physical, such an exact solution is not within our reach. Nevertheless, the contact map dynamics could select configurations slightly different from the native ones. For each of the 6 sequences we started from some random contact map and ran the contact map dynamics procedure for 1000 steps; this took about 3 cpu hours on our HP812 computer. We now analyze our results in detail. As already observed in Ref. , the correlation between energy and distance from the native map (the “funnel”) is not always very strong when using $`E^{pair}`$. A case of quite good correlation, obtained by contact map dynamics for sequence N<sup>o</sup> 5 (among the six used in this work), is shown in Fig. 4. Since the lowest energy contact map found in the simulation might or might not coincide with the one closest in distance to the native map, in general we can obtain only a short list of candidates for the native state. For sequence N<sup>o</sup> 5 the physical map of lowest energy is, in fact, the closest in distance to the native one. Both maps are shown in Fig. 5. The energy $`E^{pair}`$ of the native map is -35.75 and the energy of the predicted one is -33.33. The RMS distance between the two corresponding conformations, shown in Fig. 6, is 2.1 Å. We emphasize again the obvious fact that since our $`𝐰`$ is a solution of (12), the native map has lower energy than all the low-energy decoys that were used for learning. Indeed, in no case have we found by contact map dynamics maps lower in energy than the native one. We summarize in Table I the results obtained for the six sequences. The Hamming distance $`D_H`$ between two contact maps $`𝐒`$ and $`𝐒^{\prime }`$ is defined by

$$D_H=\sum _{j>i}\frac{|S_{ij}-S_{ij}^{\prime }|}{N_c(𝐒)N_c(𝐒^{\prime })},$$ (11)

which counts the number of mismatches between maps $`𝐒`$ and $`𝐒^{\prime }`$, normalized by $`N_c(𝐒)`$ and $`N_c(𝐒^{\prime })`$, the numbers of contacts in the two maps. The second and complementary question we pose is:

> Are the low energy conformations generated by contact map dynamics good starting points for Molecular Dynamics minimization?

We analyze again sequence N<sup>o</sup> 5. We randomly selected 10 maps among the 100 of lowest $`E^{pair}`$ energy that were generated by contact map dynamics and reconstructed their corresponding conformations. These were then used as starting points for a Molecular Dynamics minimization of the $`E^{LJ}`$ energy. In Fig. 7 we present the energy and the distance to the native conformation for the conformations obtained by MD. As can be seen, 8 trajectories ended up very close to the native state and 2 in conformations at RMS distances of 6 and 8 Å respectively. We note that these last two initial conformations had the largest RMS distance from the native state. We conclude that the predictions performed in contact map space using the approximate energy $`E^{pair}`$ are, on the average, suitable starting points for a Molecular Dynamics minimization of the LJ energy. Moreover, by using Molecular Dynamics it is possible to correctly rank the predictions that were obtained by contact map dynamics.
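For completeness, a literal transcription of Eq. (11) (our sketch; it follows the normalization exactly as written above):

```python
import numpy as np

def hamming_distance(S1, S2):
    """Normalized Hamming distance D_H of Eq. (11) between two contact maps."""
    mismatches = np.triu(np.abs(S1 - S2), k=1).sum()  # contacts present in only one map
    Nc1 = np.triu(S1, k=1).sum()                      # number of contacts in S1
    Nc2 = np.triu(S2, k=1).sum()                      # number of contacts in S2
    return mismatches / (Nc1 * Nc2)
```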
## VI Conclusions

There are four different situations for which the contact pairwise approximation has been used:

1) When the true potential is a contact pairwise one, such as for lattice models with nearest-neighbor interactions, the energy $`E^{pair}`$ of Eq. (6) is, by definition, the exact (free) energy. Models of this type were investigated in two recent studies . In both works, a database of proteins was designed using $`E^{pair}`$, with a given choice of energy parameters $`𝐰^{true}`$. The database was then used to recover a set $`𝐰`$, which ideally could be identical to $`𝐰^{true}`$. Since the native states of the proteins in the database are the lowest energy conformations of $`E^{pair}(𝐰^{true})`$, Eq. (8) can be satisfied for all the possible conformations for all the sequences in the database.

2) When the true potential is unknown, such as for real proteins, Eq. (6) is only an approximation to the true free energy (7), and there is no guarantee that such an approximation would lead to good results. The conformations stabilized by the true energy function are not necessarily stabilized against all decoys by a pairwise contact energy function. For a particular protein, crambin, it has been shown that, in fact, such a solution does not exist and the contact maps of lowest pairwise energy are not close to the native maps.

3) The “true” potential is a contact pairwise one, and it is used to design protein sequences on the structures of existing proteins. This case was considered in Ref. , where it was shown that it is unfeasible to design sequences, using a pairwise contact potential, on structures of existing proteins. This result implies that the native structures of existing proteins are “designed” in a very peculiar way by the possibly very complex potential used by Nature. A simple pairwise contact potential can neither be tuned to stabilize the true sequences (see point 2) nor be used to perform protein design on the true structures.

4) The true potential is a Lennard-Jones pairwise one and it is used to design sequences on structures typically arising from Lennard-Jones interactions. The resemblance of such structures to those of real proteins appears to be reasonable ; however, as discussed in this paper, the results obtained for approximate contact potentials are remarkably different. Here we have shown that it is not possible to stabilize the exact native folds. This result is somewhat surprising: the approximation seems to be quite reasonable, since both the contact potential and the LJ potential are characterized by a single length scale. However, we also showed that in this case an approximate way to solve the folding problem can be found. We showed that a dynamics in contact map space is a suitable tool to perform energy minimization, when a correct parameterization of the energy is possible. Contact map dynamics provides, for a good choice of contact parameters, not just a unique structure corresponding to the lowest energy state, but a set of candidates for it. When this list of predictions is used as starting configurations for an MD energy minimization, which uses the original “true” LJ form of the energy, the correct ground states are recovered. It remains to be investigated whether a hybrid method, based on a “fusion” of contact map dynamics and molecular dynamics that takes advantage of their respective strengths (which are in some sense complementary), may be a powerful tool to fold real proteins as well.
## Acknowledgments

C.C. expresses her gratitude to the Weizmann Institute, where part of this work was carried out, for its hospitality. We thank Devarajan Thirumalai for discussions and Ron Elber for discussing with us a similar approach, based on Eq. 8 (unpublished). This research was supported by grants from the Minerva Foundation, the Germany-Israel Science Foundation (GIF) and by a grant from the Israeli Ministry of Science. During the last three months C.C. has been supported by the NSF (Grant # 96-03839) and by the La Jolla Interfaces in Science program (sponsored by the Burroughs Wellcome Fund).

## Perceptron learning

We describe here the perceptron learning technique that was used. We first show that for any conformation the condition of Eq. (8) can be trivially expressed as

$$𝐰\cdot 𝐱^\mu >0$$ (12)

To see this, just note that for any map $`𝐒_\mu `$ the energy (6) is a linear function of the 10 contact energies that can appear, and it can be written as

$$E^{pair}(𝐚,𝐒_\mu ,𝐰)=\sum _{c=1}^{10}N_c(𝐚,𝐒_\mu )w_c$$ (13)

Here the index $`c=1,\dots ,10`$ labels the different contacts that can occur and $`N_c(𝐚,𝐒_\mu )`$ is the total number of contacts of type $`c`$ that actually appear for the sequence $`𝐚`$ in the map $`𝐒_\mu `$. The difference between the energy of this map and that of the native $`𝐒^{*}`$ is, therefore,

$$\mathrm{\Delta }E_\mu =\sum _{c=1}^{10}x_c^\mu w_c=𝐰\cdot 𝐱_\mu $$ (14)

where we used the notation

$$x_c^\mu =N_c(𝐚,𝐒_\mu )-N_c(𝐚,𝐒^{*})$$ (15)

and $`𝐒^{*}`$ is the native map. Each candidate map $`𝐒_\mu `$ is represented by a vector $`𝐱^\mu `$ and hence the question raised above regarding stabilization of $`𝐒^{*}`$ for a sequence $`𝐚`$ becomes:

> Can one find a vector $`𝐰`$ such that condition (12) holds for all $`𝐱^\mu `$?

If such a $`𝐰`$ exists, it can be found by perceptron learning. A perceptron is the simplest neural network . It aims to solve the following task. Given a set of $`P`$ patterns (also called input vectors, or examples) $`𝐱^\mu `$, $`\mu =1,\dots ,P`$, find a vector $`𝐰`$ of weights, such that the condition

$$h_\mu =𝐰\cdot 𝐱^\mu >0$$ (16)

is satisfied for every example from the set. If such a $`𝐰`$ exists for the training set, the problem is learnable; if not, it is unlearnable. We assume that the vector of “weights” $`𝐰`$ is normalized,

$$𝐰\cdot 𝐰=1$$ (17)

The vector $`𝐰`$ is “learned” in the course of a training session. The $`P`$ patterns are presented cyclically; after presentation of pattern $`\mu `$ the weights $`𝐰`$ are updated according to the following learning rule:

$$𝐰^{\prime }=\{\begin{array}{cc}\frac{𝐰+\eta 𝐱_\mu }{|𝐰+\eta 𝐱_\mu |}\hfill & \mathrm{if}\;𝐰\cdot 𝐱_\mu <0\hfill \\ & \\ 𝐰\hfill & \mathrm{otherwise}\hfill \end{array}$$ (18)

This procedure is called learning since, when the present $`𝐰`$ misses the correct “answer” $`h_\mu >0`$ for example $`\mu `$, all weights are modified in a manner that reduces the error. No matter what initial guess for $`𝐰`$ one takes, a convergence theorem guarantees that if a solution $`𝐰`$ exists, it will be found in a finite number of training steps . If the region of parameter space whose points are solutions is large, one is interested in the optimal solution. To obtain the optimal perceptron solution, called the maximal stability perceptron, we used the Krauth and Mezard algorithm . In this algorithm the condition (16) is replaced by

$$h_\mu =𝐰\cdot 𝐱^\mu >c$$ (19)

where $`c`$ is a positive number that should be made as large as possible.
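A compact sketch of such a training loop (our illustration, assuming the examples are collected in a $`P\times 10`$ array; it already uses the worst-example selection that is spelled out next):

```python
import numpy as np

def perceptron_learn(X, eta=0.05, c=0.0, max_iter=100000, seed=0):
    """Find a normalized w with w . x > c for all rows x of X, Eqs. (16)-(19).

    At each step the 'worst' example (minimal field h = w . x) is used to
    update w with the rule of Eq. (18); returns None if no solution is
    found within max_iter steps (the problem is then presumably unlearnable).
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)              # normalization of Eq. (17)
    for _ in range(max_iter):
        h = X @ w                       # fields h_mu for all examples
        worst = np.argmin(h)
        if h[worst] > c:
            return w                    # all inequalities satisfied
        w = w + eta * X[worst]          # learning step on the worst example
        w /= np.linalg.norm(w)
    return None
```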
At each time step the “worst” example $`𝐱^\nu `$ is identified, namely the one such that

$$h_\nu =𝐰\cdot 𝐱^\nu =\underset{\mu }{\mathrm{min}}\;𝐰\cdot 𝐱^\mu $$ (20)

and the example $`\nu `$ is used to update the weights according to the rule (18). The field $`h_\nu (t)`$ keeps changing at each time step $`t`$ and the procedure is iterated until it levels off to its asymptote.

### Captions to the figures

Fig. 1. Upper part: Approximation of the Lennard-Jones potential by a contact potential. Lower part: Distance of the conformation of lowest energy found in an MD simulation, done with a potential truncated at $`R_C`$, to the true native state of the non-truncated LJ interaction. Each dot represents a different run. In the inset the average $`D(R_C)`$ is shown. Down to $`R_C=8`$ Å the distance to the true native state is below 1 Å.

Fig. 2. Distance probability distribution between amino acids in the six native conformations of this study.

Fig. 3. Regions of physicality and of learnability in the ($`R_U`$,$`R_L`$) plane.

Fig. 4. Correlation between the energy $`E^{pair}`$ and the RMS distance $`D`$ to the native state, and between the energy and the Hamming distance $`D_H`$ (see Eq. (11)). Results refer to contact map dynamics.

Fig. 5. Contact maps of sequence N<sup>o</sup> 5: native (below diagonal) and of lowest energy found during the simulation (above diagonal).

Fig. 6. Native conformation of sequence N<sup>o</sup> 5 and its lowest energy conformation found during the simulation. The RMS distance is 2.1 Å.

Fig. 7. MD folding trajectories of sequence N<sup>o</sup> 5. Each trajectory (small connected full dots) is obtained starting from one of 10 conformations (large full dots) randomly chosen from the 100 conformations of lowest energy obtained by contact map dynamics. Conformations (empty large dots) collected during other MD simulations (see §II C) are also shown for reference.
no-problem/9902/astro-ph9902179.html
ar5iv
text
# What Could the Machos Be?

Based on an invited talk at COSMO-98 in Asilomar, AIP Proc. “Particle Physics and the Early Universe”, ed. David Caldwell

## Abstract

If the Universe has a significant baryonic dark component in the form of compact objects in galaxy halos (machos), then there is a minute chance (about $`10^{-7}`$) that one of the Galactic machos passes sufficiently close to our line of sight to a star out of some $`10^7`$ monitored stars in the Magellanic Clouds (MCs) that it brightens by more than $`0.3`$ magnitude due to gravitational focusing. After a brief discussion of the current controversy over the interpretation of the observed events, i.e., whether the lensing is caused by halo white dwarfs or machos in general or by stars in various observed or hypothesized structures of the Clouds and the Galaxy, I propose a few observations to put the ideas of the pro-macho camp and the pro-star camp to the test. In particular, I propose a radial velocity survey towards the MCs.

Current Debate. Experimental searches for microlenses in the line of sight to the MCs by MACHO, OGLE, EROS and a number of follow-up surveys have found more than $`16`$ candidates to the LMC and two to the SMC. Most of them are clustered within $`2^{\circ}`$ of the center of the LMC (cf. upper left panel of Fig. 1). The observed rate falls short of explaining the rotation curve of the Galaxy with a smooth halo of machos (Alcock et al. 1997a). Whether this is yet another major puzzle in astronomy which requires explanation by fine tuning is an open question (Adams & Laughlin 1996, Chabrier et al. 1996, Charlot & Silk 1995, Flynn et al. 1996, Graff & Freese 1996, Gates et al. 1998, Gibson & Mould 1997, Honma & Kan-Ya 1998). Another complication to the otherwise plausible conversion of the event rate to the $`\mathrm{\Omega }_{macho}`$ of the Universe is the inevitable background of events, on top of any macho signal, coming from the lensing of one star in the LMC disc by another (Sahu 1994). Since disc-disc lensing is not very efficient if the LMC disc is cold and thin (Gould 1994, Wu 1994), more speculative models have been put forward by the pro-star camp to boost the star-star lensing by hypothesizing a variety of special structures for the MCs. In particular, a connection is drawn between the unexpected rate of microlensing and the Milky Way-MCs and SMC-LMC interactions (Zhao 1998a, b, Weinberg 1998); the latter is yet another long-standing issue, rooted in the tidal vs. ram-stripping formation of the Magellanic stream, which has finally been resolved by recent observations (Putman et al. 1998 and references therein). Several observations and theoretical arguments suggest that our line of sight to the LMC passes through a 3-dimensional stellar distribution more extended in the line of sight than a simple thin disc of the LMC (Evans et al. 1998, Kunkel et al. 1997, Zaritsky et al. 1997, 1999, Zhao 1996). Others argue against additional structures other than the thin disc of the LMC (Alcock et al. 1997b, Gallart 1998, Beaulieu & Sackett 1998, Bennett 1998, Gould 1998, Johnston 1998, Ibata et al. 1999). The related issue of the efficiency of self-lensing of a tidally stretched SMC bar has also been brought up a few times (Alcock et al. 1997c, Palanque-Delabrouille et al. 1998, Zhao 1998a), but was highlighted soon after the discovery of the 98-SMC-1 caustic binary event (Afonso et al. 1998, Alcock et al. 1999, Albrow et al. 1999, Becker et al. 1998, Sahu & Sahu 1998, Udalski et al. 1998).
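As an order-of-magnitude check on the numbers quoted in the abstract, the microlensing optical depth of a lens population with density $`\rho (x)`$ along a line of sight of length $`L`$ is the standard integral $`\tau =(4\pi G/c^2)\int _0^L\rho (x)x(L-x)/L\,dx`$, independent of the individual macho masses. The toy evaluation below (our sketch; the cored-isothermal profile and all numerical values in it are illustrative assumptions, not results from this paper) gives $`\tau `$ of order $`10^{-7}`$ for a standard dark halo towards the LMC:

```python
import numpy as np

G_KPC = 4.302e-6            # G in kpc (km/s)^2 / M_sun
C2 = 299792.458 ** 2        # c^2 in (km/s)^2

def optical_depth(rho, L=50.0, n=4000):
    """tau = (4 pi G / c^2) * integral_0^L rho(x) x (L - x) / L dx,
    with rho(x) in M_sun/kpc^3 along the sightline and L in kpc."""
    x = np.linspace(0.0, L, n)
    return 4.0 * np.pi * G_KPC / C2 * np.trapz(rho(x) * x * (L - x) / L, x)

# Toy cored-isothermal halo evaluated along the LMC sightline, (l, b) ~ (280, -33) deg;
# the local density rho0 and core radius a are illustrative placeholder values.
R0, a, rho0 = 8.0, 5.0, 0.0079e9
cosg = np.cos(np.radians(-33.0)) * np.cos(np.radians(280.0))
rho = lambda x: rho0 * (R0**2 + a**2) / (R0**2 + a**2 + x**2 - 2.0 * R0 * x * cosg)
print("tau ~ %.1e" % optical_depth(rho))   # a few times 1e-7 for these numbers
```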
The observations are clearly in support of the idea that the SMC has a non-equilibrium structure extended in the line of sight (Caldwell & Coulson 1986, Mathewson, Ford & Visvanathan 1986, Welch et al. 1987, Westerlund 1997). While there is still some dispute about the interpretation of the two binary lens candidates LMC-9 (Bennett et al. 1996, Zhao 1998a) and 98-SMC-1 (Honma 1999, Kerins & Evans 1999), the consensus seems to be that self-lensing in the LMC and SMC is significant if not dominant. In general, models of the lens and source distribution towards the LMC fall into two broad classes.

Objects with motion decoupled from the Clouds. These are dark objects or stars of origin independent of the MCs. They generally move with a velocity quite different from the MCs. Besides the Galactic dark halo, they include the thick disc, and maybe a segment of the warp of our Galaxy (Evans et al. 1998) or even a Sagittarius-like undigested substructure in the halo which by chance is in the foreground or background of the MCs (Zhao 1996). Recent observations, including the RR Lyrae stars in the MACHO sample, are not in favor of chance alignment models. Stars in the warp would have small heliocentric velocities as they participate in the Galactic rotation just like the Sun. The hypothesized substructure should be a cold feature in the velocity range between $`-300\text{ km\hspace{0.17em}s}^{-1}`$ and $`300\text{ km\hspace{0.17em}s}^{-1}`$.

Objects co-moving with the Clouds. These are generally stars which are one way or another generated by dynamical processes in the formation and evolution of the MCs or their progenitor. A predicted orbital and tidal disruption history of the Ancient Magellanic Galaxy (AMG) is shown (the two right panels of Fig. 2) for a model of the progenitor galaxy (Zhao 1998b). The lower left panel is a plot of the end result of an N-body simulation (kindly made available by Lance Gardiner) of the LMC-SMC-Milky Way interaction, which creates the rich variety of stellar and gaseous substructures shown here. Zhao (1996, 1998a,b) suggested lensing by several such substructures, loosely bound or completely unbound to the MCs. These include a completely unbound grand tidal arm of the MCs which extends to Ursa Minor and Draco, a localized common halo bridging the MCs, a strongly tidally deformed SMC, and a polar ring structure or a tidal halo of the LMC (cf. Fig. 2). The LMC disc might also not be perfectly thin and flat. Similar to the Galactic disc, which has a strong warp at its edge due perhaps to tidal interaction with the LMC or the Sagittarius dwarf galaxy, the gas-rich LMC disc might sport a thickened disc (Weinberg 1998) due to repeated harassment by the SMC and the Galaxy. All these models imply tidal interaction and could match observations, such as the Magellanic Stream and its leading arm, the proper motions of Ursa Minor and Draco, the distributions of gas and stellar associations in the Magellanic Bridge, a possible large line of sight extent of the SMC, and a polar ring structure seen in the velocity distribution of LMC carbon stars (Kunkel et al. 1997). Weinberg’s model is also motivated by a diffuse structure seen in the USNO2 star count data which seems to end near the tidal radius of the LMC. These extra stars surrounding the Clouds are still highly speculative, with highly uncertain geometries. It is debatable whether their structure is better described as a thickened disc, a warp, a flare, a polar ring, a tidal halo, etc.
But the common feature is that they circulate around the Galaxy together with the Clouds. In this sense they can be called co-moving objects, with a proper motion and radial velocity typically within, say, $`100\text{ km\hspace{0.17em}s}^{-1}`$ of the MCs. For example, the Vertical Red Clump feature identified by Zaritsky et al. (1997, 1999) in their color-magnitude diagram of the LMC might be stars a few kpc in front of the LMC disc. If this interpretation holds up against counter-arguments, the VRC objects would belong to the co-moving class rather than an independent stream, since they move with a velocity very similar to that of the LMC. These models typically predict a lens-source separation $`>1`$ kpc and events of diameter crossing time $`>30`$ days for a $`0.1M_{\odot}`$ stellar lens, and require a total mass in tidal substructures above $`10\%`$ of that of the LMC, or comparable to that of the SMC, to produce significant lensing. The challenge is to keep a large amount of unbound material near the Clouds, unless it was born recently (say due to a recent SMC-LMC encounter) or remains loosely bound. For example, part of the common halo or bridge in the simulation shown in Fig. 2 is loosely bound to the MCs. It contains about $`10^9M_{\odot}`$ in a mix of gas and stars with a velocity dispersion of about $`60\text{ km\hspace{0.17em}s}^{-1}`$ (Gardiner et al. 1994, 1996).

Smoking Guns for the two broad classes of models. Star-star lensing leaves detectable traces in the kinematical and spatial distribution of the lensed sources. This is illustrated schematically by the right two panels of Fig. 1. The narrow sinusoidal band of the PA vs. $`V_r`$ plot is indicative of the rotation of the kinematically cold (dispersion less than $`20\text{ km\hspace{0.17em}s}^{-1}`$) young thin disc component; an older, thicker disc would also rotate but with a larger dispersion. Overplotted are the predicted radial velocities of extra stars coming from either a decoupled component (triangles), which is clearly offset from the LMC, or a substructure comoving with the LMC (asterisks), which would generally show up as a thicker band with either little dependence on the PA, or a dependence different from that of the rotating disc (e.g., stars on the polar ring of Kunkel et al.). The co-moving material can be identified after subtracting the rotation (the lower right panel); it is markedly offset from the narrow distribution of the LMC thin disc stars. Here I propose a number of ways to resolve the substructures in the line of sight.

I. Self-lensing of stars comoving with the Clouds would induce a gradient of the event rate per survey star per year across the LMC disc, since it is modulated by the structure of the LMC, whose width and density vary with the line of sight. The typical event time scale varies similarly. On the other hand, if the lenses come from a smooth macho halo or any extended smooth structure decoupled from the MCs, it should be nearly the same for all lines of sight to the LMC disc. A tricky point here is that the dark halo could have fine structures, maybe a Sgr-like stream made of dark machos, which could induce a gradient as well.

II. Self-lensing prefers sources at the backside of or behind the LMC disc. This is because lensing is most efficient if the source is located a few kpc behind a dense screen of stars, here the LMC disc. As a result, one should find a slight bias of lensed sources towards fainter magnitudes than the average stars in the survey, which are mostly LMC disc stars.
Together with the spatial bias in (I), self-lensing should induce a bias in the magnitude vs. radius relation (maybe in the direction of the arrow in the upper left panel of Fig. 1). No such bias is expected for macho models. Again there could be ambiguity if the stars at the backside of the LMC disc are systematically younger or older, hence intrinsically brighter or fainter.

III. Furthermore, these lensed sources behind the LMC disc are kinematically different from average stars in the LMC. This is perhaps the strongest signal of self-lensing since radial velocities can be measured so accurately that we can look for a small offset. There are two complementary effects here. First, a radial velocity survey should pick up a small fraction of outliers of the rotation curve of the LMC disc (cf. right panels of Fig. 1), which may belong to some puffed-up distribution surrounding the LMC disc. Second, the outliers at the backside of the LMC are more likely to be picked as lensed sources, with a velocity set apart from the rotation speed of the LMC disc in the same field by typically more than $`20\text{ km\hspace{0.17em}s}^{-1}`$ (cf. the lower left panel of Fig. 1). In contrast, if all events come from LMC disc stars being lensed by foreground Milky Way machos or disc stars, then the lensed sources would follow the motions of average stars in the cold, rotating disc.

IV. A continuation and expansion of the MACHO survey can hopefully yield a small sample of exotic events (e.g., when a source star passes the caustics of a binary lens) for which we can often tease out the relative lens-source parallax or proper motion. Their distribution would be a direct probe of the dynamics and structure of the lens population. This effect can be integrated with the radial velocity bias to set apart various lens populations (cf. lower left panel of Fig. 1). The potential has clearly been demonstrated by LMC-9 and 98-SMC-1, where the observed small relative proper motion also seems to favor lensing by stars co-moving with the LMC (asterisks and p.m. histogram) over lensing by foreground populations in the Milky Way (triangles and the broad Gaussian curve with a p.m. dispersion of $`3`$ mas/yr, typical for halo machos or Galactic thick disc stars). I expect a handful of such events in the next five years from an extrapolation of current surveys, while the Next Generation Microlensing Survey (NGMS, Stubbs 1998) holds the promise of a similar number of events each year. Again, whether a sample of caustic crossing binary events is a fair sample depends on whether we expect the fraction of machos in close binaries to be the same as that of stars.

Detection or non-detection of these subtle effects would be the key to resolving the controversy over the Galactic dark matter. The radial velocity survey is by far the most promising approach since it is not limited to the rare high amplification (caustic) events, it is not essential to take spectra of the source star during lensing, and the effect is big and easy to detect. While the strength of each of the “smoking guns” is still subject to the details of our assumptions about, e.g., the star formation history of the LMC and the intrinsic properties of machos, complementary studies of lensing-induced systematic biases in the radial velocity, distance, proper motion and spatial distributions are likely to give a robust conclusion on the nature of Galactic dark matter.
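A minimal toy version of the velocity signature underlying test III can be mocked up in a few lines (our illustration; every parameter value below is a placeholder, not a fitted LMC number):

```python
import numpy as np

rng = np.random.default_rng(1)

def mock_vr(pa_deg, v_sys=260.0, v_amp=35.0, pa_node=170.0, sigma=15.0):
    """Toy heliocentric radial velocity vs. position angle PA for a rotating
    disc: a sinusoidal band of projected amplitude v_amp (km/s) around the
    systemic velocity v_sys, broadened by a Gaussian dispersion sigma."""
    pa = np.asarray(pa_deg, dtype=float)
    band = v_sys + v_amp * np.cos(np.radians(pa - pa_node))
    return band + rng.normal(0.0, sigma, size=pa.shape)

pa = rng.uniform(0.0, 360.0, 2000)
v_disc = mock_vr(pa)                              # cold, rotating thin-disc stars
v_comoving = mock_vr(pa, v_amp=0.0, sigma=50.0)   # hot co-moving substructure
# Subtracting the disc rotation leaves the co-moving stars as a broad,
# PA-independent residual (cf. the lower right panel of Fig. 1).
residual = v_comoving - (260.0 + 35.0 * np.cos(np.radians(pa - 170.0)))
```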
More detailed simulations of the velocity distributions of various structures will be reported elsewhere, together with discussions of optimal strategies for carrying out the proposed observations to lift current degeneracies in interpreting the microlensing data. Finally, studies of these effects can all be integrated into the NGMS. Tests of these effects could first be done towards the Galactic bulge, where self-lensing of the Galactic bar surely plays an important role. New constraints will also come from microlensing surveys towards M31 (Crotts & Tomaney 1996, Ansari et al. 1997). A more distant but exciting prospect is to complement ground-based microlensing light curves with astrometric follow-up observations with the SIM satellite (2005-2010). Furthermore, GAIA (under study for launch in 2009-2014) could in principle detect microlensing on the basis of the astrometric shift alone, without relying on the alerting system from the ground. The relative proper motion and parallax of the lens and the source will be the definitive test of the lens location and kinematics. The author thanks Wyn Evans, Ken Freeman, Puragra Guhathakurta, Rodrigo Ibata, Konrad Kuijken, Joel Primack, Penny Sackett and Chris Stubbs for helpful discussions, and Dennis Zaritsky for a critical reading.
no-problem/9902/cond-mat9902274.html
ar5iv
text
# Structure Optimization and Frozen Phonons in LiNbO3

## Introduction

Due to its various applications in non-linear optics and electro-optics, the ferroelectric material LiNbO<sub>3</sub> has been studied extensively for decades. Its ferroelectric transition temperature of 1480 K is among the highest known to date. The mechanism of the structural phase transition from the paraelectric to the ferroelectric phase is still an open question. Temperature dependent measurements of Raman scattering and far infrared reflectivity in LiNbO<sub>3</sub> suggest a displacive type phase transition. In contradiction with this picture, the absence of the $`A_1`$ mode softening reported in certain papers is an indication towards an order-disorder type phase transition. The study of the electronic structure and lattice dynamics from first principles has so far been hindered by the relative complexity of the structure. In the ferroelectric phase, LiNbO<sub>3</sub> has 10 atoms in the unit cell; the space group is $`R3c`$. The atomic arrangement is given by oxygen octahedra stacked along the polar trigonal axis. Each Nb atom is displaced from the center of its oxygen octahedron along the polar axis; the next octahedron (along the polar axis) is empty, and the following one contains a Li atom, displaced from the oxygen face along the trigonal axis. In the paraelectric phase, the space group of LiNbO<sub>3</sub> is $`R\overline{3}c`$. In this case, the Nb atoms are centered inside the oxygen octahedra, and the Li atoms lie inside the common face of two adjacent oxygen octahedra . The primitive cell and its internal parameters are shown in Fig. 1. Inbar and Cohen calculated from first principles the total energy profile associated with the ferroelectric instability. They have shown that a widespread assumption that the instability is primarily related to the Li displacement out of the oxygen layers was not justified. In addition to the Li displacement, the shift (with respect to Nb) and distortion of the oxygen octahedra play quite a crucial role and lower the total energy much more efficiently. These results have been essentially reproduced by Yu and Park . In these previous studies, the ferroelectric distortion was simulated by a uniform scaling of the crystal structure along the linear path between the experimentally determined paraelectric and ferroelectric structures. In the present work, we fully optimize the structure of the paraelectric and ferroelectric phases from first principles, adjusting the values of all internal parameters constrained only by the crystal symmetry. In the course of this, the energetics of individual atomic displacements can be analyzed and applied to the simulation of zone-center lattice dynamics within the frozen-phonon scheme. With such a simulation so far missing, the attribution of the different experimentally measured zone-center phonon modes (with respect to atomic displacement patterns) was problematic. The extraction of normal vibration coordinates in LiNbO<sub>3</sub>, done in the present work, may be useful for the development of reliable lattice dynamics models in this system, including, e.g., the effects of doping.

## Method

The calculations were performed using the full potential Linear Augmented Plane Wave method (see, e.g., ) with the addition of local orbital basis functions, as implemented in the WIEN97 FLAPW code . The exchange-correlation was treated within the local density approximation (LDA), using the parametrization by Perdew and Wang .
The core states were treated fully relativistically, and the semicore and valence states were computed in a scalar relativistic approximation. The structure optimization in the para- and ferroelectric phases and the frozen phonon calculations were performed using a $`4\times 4\times 4`$ special $`k`$-point mesh which generated 20 $`k`$ points in the irreducible Brillouin zone. We tested the convergence of the $`k`$-space integration using a $`6\times 6\times 6`$ $`k`$-mesh (28 irreducible $`k`$-points) and found the difference in the total energy trends, as compared with the results on the sparser $`k`$-mesh, negligible for the analysis of lattice dynamics and structure optimization. The muffin tin radii chosen were 1.9 a.u. for Nb and 1.6 a.u. for Li and O, close to the values used by Inbar and Cohen in their FLAPW calculation. The convergence of the results with respect to the number of augmented plane waves used was also controlled; we used, on the average, 980 basis functions for each $`k`$-point.

## Ground-state structure

The simultaneous optimization of the volume and the $`c/a`$ ratio for the paraelectric phase of LiNbO<sub>3</sub> resulted in the lattice parameters (in the hexagonal setting) $`a_H`$=5.1378 Å and $`c`$=13.4987 Å. As compared to the experimental data ($`a_H`$=5.1483 Å and $`c`$=13.8631 Å, see Ref. ), that corresponds to a volume underestimated by $`3\%`$ and a $`c/a`$ ratio deviating by about 2% from experiment, i.e. quite good agreement by the standards of first-principles calculations based on density functional theory. In the subsequent optimization of atomic positions, we kept the lattice parameters fixed. The fully optimized paraelectric structure was found energetically unstable with respect to symmetry-lowering atomic displacements. The paraelectric phase has one internal coordinate whereas the ferroelectric phase has four. These four parameters are actually related to the four symmetry coordinates which can be introduced to describe the $`A_1`$-TO phonons. Therefore, in the course of accumulating total energy data for our frozen phonon calculations (see next section), we were able simultaneously to optimize the ferroelectric ground-state structure to quite good accuracy. Moreover, the calculated forces were used in the process of structure optimization. The experimental and calculated atomic positions (in the hexagonal coordinates, following ) for both the paraelectric and ferroelectric phases are given in Table 1. The agreement between theory and experiment in all internal parameters is quite good, indicating that the LDA is presumably unproblematic for the study of lattice dynamics in LiNbO<sub>3</sub>. The energy difference we found between the paraelectric and ferroelectric phases is essentially the same as that determined by Inbar and Cohen .

## Frozen phonons

The $`\mathrm{\Gamma }`$-TO frequencies in the ferroelectric structure are split by symmetry into four $`A_1`$, five $`A_2`$ and nine $`E`$ modes. The frequencies of the $`A_1`$ modes have been determined in a number of Raman spectroscopy measurements, with satisfactory agreement between results . The displacement patterns corresponding to the different modes have not yet been unambiguously attributed, to the best of our knowledge. Some information on this point has, however, been attained from a study of the isotope effect in Ref. , which is addressed below. The $`A_2`$ modes are both Raman and infrared silent. Our calculated data are therefore predictive with respect to these vibrations.
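The frozen-phonon procedure used below amounts to fitting a second-order expansion of the total energy in the symmetry coordinates and diagonalizing the resulting mass-weighted force-constant matrix. A generic sketch of that machinery (our illustration; the units and the reduction of symmetry coordinates to mass-weighted atomic displacements are assumptions that a real calculation must handle explicitly):

```python
import numpy as np

def fit_force_constants(U, E):
    """Least-squares fit of E = E0 + 0.5 u^T K u to total-energy data.

    U: (m, n) array of displacement patterns u in the n symmetry coordinates;
    E: (m,) total energies. Returns the symmetric n x n matrix K.
    """
    m, n = U.shape
    cols = [np.ones(m)]
    for a in range(n):
        for b in range(a, n):
            fac = 0.5 if a == b else 1.0   # 0.5 K_aa u_a^2 + K_ab u_a u_b (a < b)
            cols.append(fac * U[:, a] * U[:, b])
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), E, rcond=None)
    K = np.zeros((n, n))
    k = 1
    for a in range(n):
        for b in range(a, n):
            K[a, b] = K[b, a] = coef[k]
            k += 1
    return K

def frequencies_cm1(K, masses):
    """Harmonic frequencies from K (eV/Angstrom^2) and effective masses (amu).

    Negative eigenvalues (unstable modes) are returned as negative numbers,
    signalling imaginary frequencies.
    """
    Minv = np.diag(1.0 / np.sqrt(masses))
    w2 = np.linalg.eigvalsh(Minv @ K @ Minv)   # omega^2 in eV/(amu Angstrom^2)
    conv = 521.47                              # sqrt(eV/(amu A^2)) -> cm^-1
    return np.sign(w2) * conv * np.sqrt(np.abs(w2))
```

A shifted Li mass in `masses` reproduces the kind of isotope-effect check discussed below for <sup>6</sup>Li.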
The nine $`E`$ modes are the source of the largest controversy in the experimental study of vibrations in LiNbO<sub>3</sub>. They are attributed differently in a number of publications. A discussion of this controversy can be found, e.g., in Ref. . The first-principles description of the $`E`$ modes remains beyond the scope of the present study.

The calculation of the force constants to be used in the frozen-phonon calculation typically involves a second-order fit over a number of data points, corresponding to different displacements within a given symmetry constraint. For the $`A_1`$ modes, we had to consider more than ninety different geometries in order to obtain a satisfactory total-energy fit, in the sense that the results remain relatively unaffected by the addition of extra total energy data. The latter applies at least to two (TO<sub>2</sub> and TO<sub>3</sub>) of the four modes. The calculated frequencies of the $`A_1`$ modes are shown in Table 2 in comparison with the experimental data. The agreement is very good for the TO<sub>2</sub> and TO<sub>3</sub> modes. Taken together with the above-mentioned stability of the calculated frequencies with respect to improving the total-energy fit, this indicates that these two vibration modes are harmonic to high accuracy. The corresponding eigenvectors are shown in Table 3. (Note that the displacements of both Nb atoms and of both Li atoms in the unit cell are identical within the $`A_1`$ modes.) One can see that TO<sub>2</sub> is essentially the $`z`$-vibration of the Li ions with respect to the (relatively rigid) rest of the crystal. Actually, this is the only $`A_1`$ mode with a substantial amount of Li movement. It is clearly seen in the experimentally measured frequencies of <sup>6</sup>Li-doped LiNbO<sub>3</sub> that only the TO<sub>2</sub> mode exhibits an isotope effect, increasing its frequency by 14 cm<sup>-1</sup> (see Table 2). Our calculation of vibration frequencies with the decreased mass of the Li ion likewise indicates that only the TO<sub>2</sub> mode is affected, its frequency being increased by 19 cm<sup>-1</sup>.

The displacement pattern of the TO<sub>1</sub> mode has some resemblance to that of the soft mode in cubic perovskites, like e.g. KNbO<sub>3</sub>: essentially, Nb vibrates in antiphase with the oxygen sublattice along the trigonal axis, leaving Li relatively static. As follows from the frozen-phonon treatment of cubic KNbO<sub>3</sub> (see, e.g., Ref. ), this mode is unstable against off-center displacements and hence exhibits an imaginary frequency in the harmonic approximation. But even when stabilized by an appropriate symmetry lowering, as was calculated for example for tetragonal or orthorhombic KNbO<sub>3</sub>, the mode in question roughly maintains its original displacement pattern. In LiNbO<sub>3</sub>, the TO<sub>1</sub> is the ultimately stabilized soft mode of the paraelectric phase. We analyzed the total energy as a function of atomic displacements consistent with the TO<sub>1</sub> eigenvector and found noticeable deviations from parabolic behaviour. Because of this, the calculated harmonic frequency differs strongly from the experimental numbers. Another example of a strongly anharmonic mode is the TO<sub>4</sub> mode. Along with TO<sub>3</sub>, it is visualized in Fig. 2 in a top view (along the $`z`$-axis). The $`z`$-displacements are negligible in these two modes, and practically only oxygen ions participate in them.
The difference between these modes is the following: in TO<sub>3</sub>, the whole oxygen octahedra are tilted as essentially rigid objects, which is a relatively soft and harmonic vibration. In the TO<sub>4</sub> mode, a torsion of the individual octahedra takes place, which costs much higher energy and has a strong anharmonic contribution.

For the $`A_2`$ modes, no experimental information is available by means of either Raman or infrared spectroscopy, so our results are actually a theoretical prediction. We used 127 different geometries to obtain an accurate second-order total energy fit in the 5-dimensional space of symmetry coordinates. Calculated frequencies and eigenvectors are shown in Table 4. Note that, in contrast to the $`A_1`$ modes, both Nb and both Li atoms now move in antiphase. Apart from the softest mode, which consists essentially of the antiphase $`z`$-movement of Nb vs. Li, and the hardest one, which is again a distortion of the oxygen octahedra (but now including the $`z`$-stretching as well), the three intermediate modes have contributions from $`z`$- as well as $`xy`$-displacements of all three atomic constituents.

Summarizing, we performed a first-principles, LDA-based optimization of the ground-state structure of ferroelectric LiNbO<sub>3</sub> and calculated the frequencies and eigenvectors of the $`A_1`$ and $`A_2`$ $`\mathrm{\Gamma }`$-TO modes. Large anharmonic contributions were found for two of the four $`A_1`$ modes. These results may provide a basis for a subsequent treatment of lattice dynamics models in related systems and/or of anharmonic effects. The study of the $`E`$ modes is now in progress.

## Acknowledgements

The work was supported by the German Research Society (SFB 225; graduate college). A. P. is grateful to R. Cohen for useful discussions. The authors appreciate the usefulness of the crystal structure visualization software written and provided by M. Methfessel.
# A combinatorial proof of the log-concavity of the numbers of permutations with $`k`$ runs

## 1 Introduction

Let $`p=p_1p_2\cdots p_n`$ be a permutation of the set $`\{1,2,\ldots ,n\}`$ written in the one-line notation. We say that $`p`$ changes direction at position $`i`$ if either $`p_{i-1}<p_i>p_{i+1}`$, or $`p_{i-1}>p_i<p_{i+1}`$; in other words, when $`p_i`$ is either a peak or a valley. We say that $`p`$ has $`k`$ runs if there are $`k-1`$ indices $`i`$ so that $`p`$ changes direction at these positions. So, for example, $`p=3561247`$ has 3 runs as $`p`$ changes direction when $`i=3`$ and when $`i=4`$. A geometric way to represent a permutation and its runs by a diagram is shown on Figure 1. The runs are the line segments (or edges) between two consecutive entries where $`p`$ changes direction. So a permutation has $`k`$ runs if it can be represented by $`k`$ line segments so that the segments go “up” and “down” exactly when the entries of the permutation do. The theory of runs has been studied in [3, Section 5.1.3] in connection with sorting and searching.

In this paper, we are going to study the numbers $`R(n,k)`$ of permutations of length $`n`$ or, in what follows, $`n`$-permutations, with $`k`$ runs. We will show that for any fixed $`n`$, the sequence $`R(n,k)`$, $`k=0,1,\ldots ,n-1`$, is log-concave, that is, $`R(n,k-1)R(n,k+1)\le R(n,k)^2`$. In particular, this implies that this same sequence is unimodal, that is, there exists an $`m`$ so that $`R(n,1)\le R(n,2)\le \cdots \le R(n,m)\ge R(n,m+1)\ge \cdots \ge R(n,n-1)`$. We will also show that roughly half of the roots of the generating function $`R_n(x)=\sum _{k=1}^{n-1}R(n,k)x^k`$ are equal to $`-1`$, and give a combinatorial interpretation for the term which remains after one divides $`R_n(x)`$ by all the $`(x+1)`$ factors. While doing that, we will also give a new proof of the well-known fact that the Eulerian numbers are log-concave.

## 2 The Factorization of $`R_n(x)`$

Let $`p=p_1p_2\cdots p_n`$ be a permutation. We say that $`i`$ is a descent of $`p`$ if $`p_i>p_{i+1}`$, while we say that $`i`$ is an ascent of $`p`$ if $`p_i<p_{i+1}`$. In our study of $`n`$-permutations with a given number of runs, we can clearly assume that 1 is an ascent of $`p`$. Indeed, taking the permutation $`q=q_1q_2\cdots q_n`$, where $`q_i=n+1-p_i`$, we get the complement of $`p`$, which has the same number of runs as $`p`$. This implies in particular that for any given $`i`$, there are as many $`n`$-permutations with $`k`$ runs in which $`p_i<p_{i+1}`$ as there are such permutations in which $`p_i>p_{i+1}`$.

Let $`Q_n(x)`$ be any generating function enumerating $`n`$-permutations according to some statistics. We say that $`Q_n(x)`$ is invariant to $`(i,i+1)`$ if the set of $`n`$-permutations with $`p_i>p_{i+1}`$ contributes $`Q_n(x)/2`$ to $`Q_n(x)`$. Certainly, in this case the set of $`n`$-permutations with $`p_i<p_{i+1}`$ accounts for $`Q_n(x)/2`$ as well.

Let $`R_n(x)=\sum _{k=1}^{n-1}R(n,k)x^k`$ be the ordinary generating function of $`n`$-permutations with $`k`$ runs, where $`1\le k\le n-1`$. So we have $`R_2(x)=2x`$, $`R_3(x)=2x+4x^2`$, and $`R_4(x)=10x^3+12x^2+2x`$. One sees that all coefficients of $`R_n(x)`$ are even, which is explained by the symmetry described above. The following proposition is our initial step in factoring $`R_n(x)`$. It will lead us to the definition of an important version of this polynomial.

###### Proposition 2.1

For all $`n\ge 4`$, the polynomial $`R_n(x)`$ is divisible by $`(x+1)`$.
Proof: It is straightforward to verify (by considering all possible patterns of the last four entries of $`p`$) that the involution $`I_1`$ interchanging $`p_{n-1}`$ and $`p_n`$ increases the number of runs by 1 in half of all permutations and, being an involution, decreases the number of runs by 1 in the other half of the permutations. In particular, there are as many permutations with an odd number of runs as there are with an even number of runs, so $`(x+1)`$ is indeed a divisor of $`R_n(x)`$. ∎

###### Example 2.2

If $`n=4`$, then there is 1 permutation with 1 run, 6 permutations with 2 runs and 5 permutations with 3 runs. So $`R_4(x)=2(5x^3+6x^2+x)=2(x+1)(5x^2+x)`$.

We want to extend the result of Proposition 2.1 by proving that $`R_n(x)`$ has $`(x+1)`$ as a factor with a large multiplicity, and also we want to find a combinatorial interpretation for the polynomial obtained after dividing $`R_n(x)`$ by the highest possible power of $`(x+1)`$. For that purpose, we introduce the following definition.

###### Definition 2.3

For $`j\le m=\lfloor (n-2)/2\rfloor `$, we say that $`p`$ is a $`j`$-half-ascending permutation if, for all positive integers $`i\le j`$, we have $`p_{n+1-2i}<p_{n+2-2i}`$. If $`j=m`$, then we will simply say that $`p`$ is a half-ascending permutation.

So $`p`$ is a 1-half-ascending permutation if $`p_{n-1}<p_n`$. In a $`j`$-half-ascending permutation, we have $`j`$ relations, and they involve the rightmost $`j`$ disjoint pairs of entries. The term half-ascending refers to the fact that at least half of the involved positions are ascents. There are $`n!/2^j`$ $`j`$-half-ascending permutations. Now we define a modified version of the polynomials $`R_n(x)`$ for $`j`$-half-ascending permutations.

###### Definition 2.4

Let $`p`$ be a $`j`$-half-ascending permutation. Let $`r_j(p)`$ be the number of runs of the substring $`p_1,p_2,\ldots ,p_{n-2j}`$, and let $`s_j(p)`$ be the number of descents of the substring $`p_{n-2j},p_{n+1-2j},\ldots ,p_n`$. Denote $`t_j(p)=r_j(p)+s_j(p)`$, and define

$$R_{n,j}(x)=\sum _px^{t_j(p)},$$

where the sum ranges over all $`j`$-half-ascending $`n`$-permutations $`p`$.

In particular, we will denote $`R_{n,m}(x)`$ by $`T_n(x)`$; that is, $`T_n(x)`$ is the generating function for half-ascending permutations. So, in other words, we count the runs in the non-half-ascending part and we count the descents in the half-ascending part (and on that part, as will be discussed, the number of descents determines that of runs).

###### Corollary 2.5

For all $`n\ge 4`$ we have

$$\frac{R_n(x)}{x+1}=R_{n,1}(x).$$

Moreover, $`R_{n,1}(x)`$ is invariant to $`(i,i+1)`$ for all $`i\le n-3`$.

Proof: Recall from the proof of Proposition 2.1 that the involution $`I_1`$ arranges the permutations in pairs, each pair containing two elements whose numbers of runs differ by 1. Note that half of these pairs consist of two elements with $`p_{n-3}<p_{n-2}`$ and the other half consist of two elements with $`p_{n-3}>p_{n-2}`$. As $`R_n(x)`$ is invariant to $`(n-3,n-2)`$, it suffices to consider the first case. Dividing $`R_n(x)`$ by $`(x+1)`$, we obtain the run-generating function for the set of permutations which contains one element of each of these pairs, namely the one having the smaller number of runs. Observe that for these permutations, the number of runs is equal to the value of $`t_1(p)`$ for the permutation in that pair in which $`p_{n-1}<p_n`$ (by checking both possibilities $`p_{n-2}<p_{n-1}`$ and $`p_{n-2}>p_{n-1}`$), so $`R_n(x)/(x+1)=R_{n,1}(x)`$.
Note that our argument also proves that those permutations with $`p_i<p_{i+1}`$ contribute exactly $`R_{n,1}(x)/2`$ to $`R_{n,1}(x)`$, as they represent half of $`R_n(x)`$ divided by $`(x+1)`$, so our second claim is proved, too. ∎

We point out that it is not true in general that in each pair made by $`I_1`$, the permutation having the smaller number of runs is the one with $`p_{n-1}<p_n`$. What is true is that we can suppose that $`p_{n-1}<p_n`$ if we count permutations by the defined parameter $`t_1(p)`$ instead of the number of runs. The latter could be viewed as the $`t_0(p)`$ parameter.

###### Example 2.6

If $`n=4`$, then we have 6 permutations in which $`p_3<p_4`$ and $`p_1<p_2`$: 1234, 1324, 1423, 2314, 2413, 3412. We have $`t_1(1234)=1`$ and $`t_1(p)=2`$ for all the other five permutations, showing that indeed $`R_{4,1}(x)=2(5x^2+x)`$.

For $`1\le j\le m`$, let $`I_j`$ be the involution interchanging $`p_{n+1-2j}`$ and $`p_{n+2-2j}`$. Then the following strong result generalizes Proposition 2.1.

###### Lemma 2.7

For all $`n\ge 4`$ and $`1\le j\le m`$, we have

$$\frac{R_n(x)}{(x+1)^j}=R_{n,j}(x),$$

where $`m=\lfloor (n-2)/2\rfloor `$. Moreover, $`R_{n,j}(x)`$ is invariant to $`(i,i+1)`$ for $`i\le n-2j-1`$.

Proof: By induction on $`j`$. For $`j=1`$, the statement is true by Proposition 2.1. Now suppose we know the statement for $`j-1`$. To prove that $`R_{n,j-1}(x)/R_{n,j}(x)=x+1`$, we need to group all $`(j-1)`$-half-ascending permutations in pairs, so that the $`t_{j-1}`$ values of the two elements of any given pair differ by one, and show that the set of permutations consisting of the element of each pair having the smaller $`t_{j-1}`$ value yields the generating function $`R_{n,j}(x)`$. However, $`I_j`$ does just that, as can be checked by verifying both possibilities $`p_{n-2j}<p_{n+1-2j}`$ and $`p_{n-2j}>p_{n+1-2j}`$. These are the only cases to consider, as we can assume by our induction hypothesis that $`p_{n-1-2j}<p_{n-2j}`$. Moreover, permutations with $`p_i<p_{i+1}`$ contribute exactly $`R_{n,j}(x)/2`$ to $`R_{n,j}(x)`$ if $`i\le n-2j-1`$, as they represent half of $`R_{n,j-1}(x)`$ divided by $`(x+1)`$. ∎

Note that we have just repeated the proof of Proposition 2.1 with general $`j`$ instead of $`j=1`$.

###### Corollary 2.8

We have

$$\frac{R_n(x)}{(x+1)^m}=T_n(x).$$

So we have proved that $`m=\lfloor (n-2)/2\rfloor `$ of the roots of $`R_n(x)`$ are equal to $`-1`$, and certainly one other root is equal to 0, as all permutations have at least one run. It is possible to prove analytically that the other half of the roots of $`R_n(x)`$, that is, the roots of $`T_n(x)`$, are all real, negative, and distinct. That implies that the coefficients of $`R_n(x)`$ and $`T_n(x)`$ are log-concave. However, in the next section we will combinatorially prove that the coefficients of $`T_n(x)`$ form a log-concave sequence.

Let $`U(n,k)`$ be the coefficient of $`x^k`$ in $`T_n(x)`$. Let $`𝒰(n,k)`$ be the set of half-ascending permutations with $`k`$ descents, so $`|𝒰(n,k)|=U(n,k)`$. Now suppose for shortness that $`n`$ is even, and assume that $`p`$ is a half-ascending permutation, that is, $`p_{2i-1}<p_{2i}`$ for all $`i`$, $`1\le i\le n/2`$. The following proposition summarizes the different ways we can describe the same parameter of $`p`$.

###### Proposition 2.9

Let $`p`$ be a half-ascending permutation. Then $`p`$ has $`2k+1`$ runs if and only if $`p`$ has $`k`$ descents, or, in other words, when $`t(p)=k+1`$.

If $`n`$ is odd, then the rest of our argument is a little more tedious, though conceptually not more difficult.
We do not want to break the course of our proof here, so we will go on with the assumption that $`n`$ is even; then, in the second part of the proof of Theorem 4.2, we will indicate what modifications are necessary to include the case of odd $`n`$. So in order to prove that the sequence $`R(n,k)`$ is log-concave in $`k`$, we need to prove that the sequence $`U(n,k)`$ enumerating half-ascending $`n`$-permutations with $`k`$ descents is log-concave. That would be sufficient, as the convolution of two log-concave sequences is log-concave .

## 3 A lattice path interpretation

Following , we will set up a bijection from the set $`𝒜(n,k)`$ of $`n`$-permutations with $`k`$ descents onto that of labeled northeastern lattice paths with $`n`$ edges, exactly $`k`$ of which are vertical. However, our lattice paths will be different from those in ; in particular, they will preserve the information whether the position $`i`$ is an ascent or a descent. Let $`𝒫(n)`$ be the set of labeled northeastern lattice paths with the $`n`$ edges $`a_1,a_2,\ldots ,a_n`$ and the corresponding positive integers as labels $`e_1,e_2,\ldots ,e_n`$, so that the following hold:

1. the edge $`a_1`$ is horizontal and $`e_1=1`$,
2. if the edges $`a_i`$ and $`a_{i+1}`$ are both vertical, or both horizontal, then $`e_i\ge e_{i+1}`$,
3. if $`a_i`$ and $`a_{i+1}`$ are perpendicular to each other, then $`e_i+e_{i+1}\le i+1`$.

We will not distinguish between paths which can be obtained from each other by translations. Let $`𝒫(n,k)`$ be the set of all such labeled lattice paths which have $`k`$ vertical edges, and let $`P(n,k)=|𝒫(n,k)|`$.

###### Proposition 3.1

The following two properties of paths in $`𝒫(n)`$ are immediate from the definitions.

* For all $`i\ge 2`$, we have $`e_i\le i-1`$.
* Fix the label $`e_i`$. Then if $`e_{i+1}`$ can take the value $`v`$, it can take all positive integer values $`w\le v`$. Also note that all restrictions on $`e_{i+1}`$ are given by $`e_i`$, independently of the preceding $`e_j`$, $`j<i`$.

The following bijection is the main result of this section.

###### Theorem 3.2

The following description defines a bijection from $`𝒜(n)`$ onto $`𝒫(n)`$. Let $`p`$ be a permutation on $`n`$ elements. To obtain the edge $`a_i`$ and the label $`e_i`$ for $`2\le i\le n`$, restrict the permutation $`p`$ to its first $`i`$ entries and relabel the entries to obtain the permutation $`q=q_1\cdots q_i`$.

* If the position $`i-1`$ is a descent of the permutation $`p`$ (equivalently, of the permutation $`q`$), let the edge $`a_i`$ be vertical and the label $`e_i`$ be equal to $`q_i`$.
* If the position $`i-1`$ is an ascent of the permutation $`p`$, let the edge $`a_i`$ be horizontal and the label $`e_i`$ be $`i+1-q_i`$.

Moreover, this bijection restricts naturally to a bijection between $`𝒜(n,k)`$ and $`𝒫(n,k)`$ for $`0\le k\le n-1`$.

Proof: It is straightforward to see that the map described is injective into the set of labeled lattice paths not necessarily satisfying conditions (2) and (3). Assume that $`i`$ and $`i+1`$ are both descents of the permutation $`p`$. Let $`q`$, respectively $`r`$, be the permutation obtained when restricting to the first $`i`$, respectively $`i+1`$, elements. Observe that $`q_i`$ is either $`r_i`$ or $`r_i-1`$. Since $`r_i>r_{i+1}`$, we have $`q_i\ge r_{i+1}`$, and condition (2) is satisfied in this case. By similar reasoning the three remaining cases are shown; hence the map is into the set $`𝒫(n)`$. To see that this is a bijection, we show that we can recover the permutation $`p`$ from its image.
It is sufficient to show that we can recover $`p_n`$, and then use induction on $`n`$ for the rest of $`p`$. To recover $`p_n`$ from its image, simply recall that $`p_n`$ is equal to the label $`l`$ of the last edge if that edge is vertical, and to $`n+1-l`$ if that edge is horizontal. ∎

The lattice path corresponding to the permutation $`243165`$ is shown on Figure 2. The difference between our bijection and that of is that in ours, the direction of $`a_i`$ tells us whether $`i-1`$ is a descent of $`p`$. This is why we can use this bijection to gain information about the class of half-ascending permutations.

###### Corollary 3.3

The bijection of Theorem 3.2 restricts to a bijection from $`𝒰(n,k)`$ onto the set of lattice paths in $`𝒫(n,k)`$ in which $`a_i`$ is horizontal for all even indices $`i`$.

## 4 The log-concavity of $`U(n,k)`$

In this section we are going to give a new proof of the fact that the numbers $`A(n,k)=|𝒜(n,k)|`$ are unimodal in $`k`$, for any fixed $`n`$. This fact is already known and has elegant proofs . However, our proof will also yield the unimodality of the $`U(n,k)`$.

###### Theorem 4.1

For all positive integers $`n`$ and all positive integers $`k\le n`$ we have

$$A(n,k-1)A(n,k+1)\le A(n,k)^2$$

and also

$$U(n,k-1)U(n,k+1)\le U(n,k)^2.$$

Proof: To prove the theorem combinatorially, we construct a quasi-injection

$$\mathrm{\Phi }:𝒫(n,k-1)\times 𝒫(n,k+1)\to 𝒫(n,k)\times 𝒫(n,k).$$

By quasi-injection we mean that there will be some elements of $`𝒫(n,k-1)\times 𝒫(n,k+1)`$ for which $`\mathrm{\Phi }`$ will not be defined, but the number of these elements will be less than the number of elements of $`𝒫(n,k)\times 𝒫(n,k)`$ which are not in the image of $`\mathrm{\Phi }`$. In particular, the restriction of $`\mathrm{\Phi }`$ to $`𝒱(n,k-1)\times 𝒱(n,k+1)`$ will map into $`𝒱(n,k)\times 𝒱(n,k)`$, where $`𝒱(n,k)`$ is the subset of $`𝒫(n,k)`$ consisting of the lattice paths in which $`a_i`$ is horizontal for all even $`i`$.

Let $`(P,Q)\in 𝒫(n,k-1)\times 𝒫(n,k+1)`$. Place the initial points of $`P`$ and $`Q`$ at $`(0,0)`$ and $`(1,-1)`$, respectively. Then the endpoints of $`P`$ and $`Q`$ are $`(n-k+1,k-1)`$ and $`(n-k,k)`$, respectively, so $`P`$ and $`Q`$ intersect. Let $`X`$ be their first intersection point (we order intersection points from southwest to northeast), and decompose $`P=P_1P_2`$ and $`Q=Q_1Q_2`$, where $`P_1`$ is a path from $`(0,0)`$ to $`X`$, $`P_2`$ is a path from $`X`$ to $`(n-k+1,k-1)`$, $`Q_1`$ is a path from $`(1,-1)`$ to $`X`$, and $`Q_2`$ is a path from $`X`$ to $`(n-k,k)`$. Let $`P^{\prime }=P_1Q_2`$ and let $`Q^{\prime }=Q_1P_2`$. If $`P^{\prime }`$ and $`Q^{\prime }`$ are valid paths, that is, if their labeling fulfills conditions (1)–(3), then we set $`\mathrm{\Phi }(P,Q)=(P^{\prime },Q^{\prime })`$. See Figure 3 for this construction. It is clear that $`\mathrm{\Phi }(P,Q)=(P^{\prime },Q^{\prime })\in 𝒫(n,k)\times 𝒫(n,k)`$ (in particular, $`(P^{\prime },Q^{\prime })`$ belongs to the subset of $`𝒫(n,k)\times 𝒫(n,k)`$ consisting of intersecting pairs of paths), and that $`\mathrm{\Phi }`$ is one-to-one. What remains to show is that the number of pairs $`(P,Q)\in 𝒫(n,k-1)\times 𝒫(n,k+1)`$ for which $`\mathrm{\Phi }`$ cannot be defined this way is less than the number of pairs $`(P^{\prime },Q^{\prime })\in 𝒫(n,k)\times 𝒫(n,k)`$ which are not obtained as images of $`\mathrm{\Phi }`$. In fact, we will show that this is true even if we restrict ourselves to pairs $`(P^{\prime },Q^{\prime })\in 𝒫(n,k)\times 𝒫(n,k)`$ which do intersect.
Let $`a,b,c,d`$ be the labels of the four edges adjacent to $`X`$ as shown in Figure 4, the edges $`AX`$ and $`XB`$ originally belonging to $`P`$ and the edges $`CX`$ and $`XD`$ originally belonging to $`Q`$. (It is possible that these four edges are not all distinct; $`A`$ and $`C`$ are always distinct, as $`X`$ is the first intersection point, but it could be that $`B=D`$ and so $`BX=DX`$; this singular case can be treated very similarly to the generic case we describe below and is hence omitted.) Then a configuration shown on Figure 4 can be part of a pair $`(P,Q)`$ in the domain of $`\mathrm{\Phi }`$ exactly when $`a\ge b`$ and $`c\ge d`$. On the other hand, such a configuration can be part of a pair of paths $`(P^{\prime },Q^{\prime })`$ in the image of $`\mathrm{\Phi }`$ exactly when $`a+d\le i`$ and $`b+c\le i`$, where $`i-1`$ is the sum of the two coordinates of $`X`$.

Let us keep $`b`$ fixed, and see what that means for $`a`$ and $`c`$. The value of $`a`$ can be $`b,b+1,\ldots ,i-1`$, so $`a`$ can take $`i-b`$ different values, whereas the value of $`c`$ can be $`1,2,\ldots ,i-b`$, which is again $`i-b`$ different possibilities. Note in particular that the second set of values can be obtained from the first by simply subtracting each value from $`i`$. Then the set of all labeled paths from $`(0,0)`$ to $`A`$ is identical to that of paths from $`(1,-1)`$ to $`C`$. In particular, the distributions of the labels of the edges ending in $`A`$, respectively $`C`$, are identical, even if we also require that they end in a horizontal, or in a vertical, edge. Let $`H(X)`$ be the set of all pairs of labeled paths $`((0,0),X)\times ((1,-1),X)`$.

Now it is easy to see that if any labeled path $`G`$ from $`(0,0)`$ to $`A`$ allows $`a`$ to be in the interval $`b,b+1,\ldots ,i-1`$, then the path from $`(1,-1)`$ to $`C`$ identical to $`G`$ allows $`c`$ to be in the interval $`1,2,\ldots ,i-b`$. Indeed, the edge preceding $`AX`$ is either horizontal, and then it must have a label between $`b`$ and $`i-2`$, or it is vertical, and then its label must be between 1 and $`i-b`$, to make it possible for $`a`$ to be in the interval $`b,b+1,\ldots ,i-1`$. Similarly, if the edge preceding $`CX`$ is horizontal and has a label between $`b`$ and $`i-2`$, or if it is vertical and has a label between 1 and $`i-b`$, then it makes it possible for $`c`$ to be in the interval $`1,2,\ldots ,i-b`$. (And certainly, if the edge preceding $`CX`$ is horizontal and has a label smaller than $`b`$, that is good, too.) As the distributions of the labels of the edges ending in $`A`$, respectively $`C`$, are identical, this implies that for any fixed value of $`b`$, there are at least as many pairs of paths in $`H(X)`$ with $`b+c\le i`$ as there are pairs of paths in $`H(X)`$ with $`a\ge b`$. (Recall that $`i-1`$ is the sum of the coordinates of $`X`$.) In other words, if the pair $`(\alpha ,\beta )\in ((0,0),A)\times ((1,-1),C)`$ allows $`a\ge b`$, then the pair $`(\beta ,\alpha )\in ((0,0),A)\times ((1,-1),C)`$ allows $`b+c\le i`$, so we can flip $`\alpha `$ and $`\beta `$. We point out that this is intuitively not surprising: $`a`$ has to be at least a certain value, while $`c`$ has to be at most a certain value, and it is clear that this second requirement is easier to meet in our labeling.

By symmetry, if we fix $`d`$ instead of $`b`$, the same holds: the number of pairs of paths in $`H(X)`$ with $`a+d\le i`$ is at least as large as that of pairs of paths in $`H(X)`$ with $`c\ge d`$, and that can be seen again by flipping $`\alpha `$ and $`\beta `$.
Finally, this same argument certainly applies if we want both conditions to be satisfied: if the pair $`(\alpha ,\beta )\in ((0,0),A)\times ((1,-1),C)`$ allows $`a\ge b`$ and $`c\ge d`$, then the pair $`(\beta ,\alpha )\in ((0,0),A)\times ((1,-1),C)`$ allows $`b+c\le i`$ and $`a+d\le i`$. And this is what we wanted to prove: there are at least as many pairs of paths in $`𝒫(n,k)\times 𝒫(n,k)`$ which are not images of $`\mathrm{\Phi }`$ as there are pairs of paths in $`𝒫(n,k-1)\times 𝒫(n,k+1)`$ for which $`\mathrm{\Phi }`$ is not defined. As $`\mathrm{\Phi }`$ is one-to-one, this proves that $`A(n,k-1)A(n,k+1)\le A(n,k)^2`$, so the sequence $`\{A(n,k)\}_k`$ is log-concave for all $`n`$.

To prove that the sequence $`\{U(n,k)\}`$ is log-concave, recall that half-ascending permutations in $`𝒰(n,k)`$ correspond to elements of $`𝒱(n,k)`$, that is, elements of $`𝒫(n,k)`$ in which all edges $`a_i`$ with even $`i`$ are horizontal. We point out that this implies $`B=D`$. Then note that $`\mathrm{\Phi }`$ does not change the indices of the edges; in other words, if $`\mathrm{\Phi }(P,Q)=(P^{\prime },Q^{\prime })`$, and a given edge northeast of $`X`$ was the $`i`$th edge of path $`P`$, then it will be the $`i`$th edge of path $`Q^{\prime }`$. Therefore, $`\mathrm{\Phi }`$ preserves the property that all even-indexed edges are horizontal, so the restriction of $`\mathrm{\Phi }`$ to $`𝒱(n,k-1)\times 𝒱(n,k+1)`$ maps into $`𝒱(n,k)\times 𝒱(n,k)`$. Finally, we need to show that there are more pairs of paths in $`𝒱(n,k)\times 𝒱(n,k)`$ which are not images of $`\mathrm{\Phi }`$ than there are pairs of paths in $`𝒱(n,k-1)\times 𝒱(n,k+1)`$ for which $`\mathrm{\Phi }`$ is not defined. Note that the corresponding fact in the general case was a direct consequence of the fact that any labeled path from $`(0,0)`$ to $`A`$ is identical to a unique labeled path from $`(1,-1)`$ to $`C`$, and that therefore the distributions of the labels $`a`$ and $`c`$ are identical. This certainly remains true if we restrict ourselves to paths in which all edges with even indices are horizontal. As any restriction of $`\mathrm{\Phi }`$ is certainly one-to-one, this proves that $`U(n,k-1)U(n,k+1)\le U(n,k)^2`$. ∎

Now we are in a position to prove the main result of this paper.

###### Theorem 4.2

The polynomial $`R_n(x)`$ has log-concave coefficients, for all positive integers $`n`$.

Proof: First suppose that $`n`$ is even. For $`n\le 3`$, the statement is true. If $`n\ge 4`$, then Lemma 2.7 shows that $`R_n(x)=(x+1)^mT_n(x)`$. The coefficients of $`(x+1)^m`$ are just the binomial coefficients, which are certainly log-concave , while the coefficients of $`T_n(x)`$ are the $`U(n,k)`$, which are log-concave by Theorem 4.1 and the remark thereafter. As the product of two polynomials with log-concave coefficients has log-concave coefficients , the proof is complete for $`n`$ even.

If $`n`$ is odd, then the equivalent of Proposition 2.9 is a bit more cumbersome. Again we make use of symmetry by taking complements, but instead of assuming $`p_1<p_2`$, let us assume that $`p_2<p_3`$. Taking $`R_{n,m}(x)`$ then adds the restrictions $`p_4<p_5`$, $`p_6<p_7`$, $`\ldots `$, $`p_{n-1}<p_n`$. Then it is straightforward from the definition of $`t_m(p)`$ that $`t_m(p)=d(p)`$, where $`d(p)`$ is the number of descents of $`p`$, and we say, for shortness, that the singleton $`p_1`$ has 0 runs.
So for odd $`n`$ we have

$$T_n^{odd}(x)=2\sum _{\substack{p\in S_n\\ p_2<p_3}}x^{t_m(p)}=2\sum _{\substack{p\in S_n\\ p_2<p_3}}x^{d(p)},$$

and then, in order to see that the coefficients of $`T_n^{odd}(x)`$ are log-concave, we can repeat the argument of Theorem 4.1. Indeed, the coefficient of $`x^k`$ in $`T_n^{odd}(x)`$ equals the cardinality of $`𝒱^{\prime }(n,k)`$, the subset of $`𝒫(n,k)`$ in which the edges $`a_3,a_5,a_7,\ldots ,a_n`$ are horizontal. And the fact that the $`|𝒱^{\prime }(n,k)|`$ are log-concave can be proved exactly as the corresponding statement for the $`|𝒱(n,k)|=U(n,k)`$, that is, by taking the relevant restriction of $`\mathrm{\Phi }`$. This completes the proof of the theorem for all $`n`$. ∎
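The statements proved above are easy to check by brute force for small $`n`$. The following Python sketch (an illustrative check, not part of the proof) enumerates all $`n`$-permutations, builds the coefficients of $`R_n(x)`$, divides out $`(x+1)^m`$ as in Corollary 2.8, and verifies the log-concavity asserted in Theorem 4.2.

```python
from itertools import permutations

def runs(p):
    """Number of runs: one more than the number of direction changes."""
    return 1 + sum(1 for i in range(1, len(p) - 1)
                   if (p[i-1] < p[i] > p[i+1]) or (p[i-1] > p[i] < p[i+1]))

def run_coeffs(n):
    """Coefficient list of R_n(x): entry k counts n-permutations with k runs."""
    R = [0] * n
    for p in permutations(range(1, n + 1)):
        R[runs(p)] += 1
    return R

def divide_by_x_plus_1(c):
    """Synthetic division of the coefficient list c by (x + 1)."""
    b, carry = [0] * (len(c) - 1), 0
    for k in range(len(c) - 1, 0, -1):
        b[k - 1] = c[k] - carry
        carry = b[k - 1]
    assert c[0] == carry, "polynomial is not divisible by (x+1)"
    return b

for n in range(4, 9):
    R = run_coeffs(n)
    T = R
    for _ in range((n - 2) // 2):      # divide by (x+1)^m, m = floor((n-2)/2)
        T = divide_by_x_plus_1(T)
    assert all(R[k-1] * R[k+1] <= R[k]**2 for k in range(2, n - 1))
    assert all(T[k-1] * T[k+1] <= T[k]**2 for k in range(2, len(T) - 1))
    print(n, "R_n:", R, " T_n:", T)
```

For $`n=4`$ this prints the coefficient lists $`[0,2,12,10]`$ and $`[0,2,10]`$, matching $`R_4(x)=2(x+1)(5x^2+x)`$ from Example 2.2.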
# The Fractional Quantum Hall Effect

Sumathi Rao<sup>1</sup>

Mehta Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India.

<sup>1</sup> e-mail address: srao@thwgs.cern.ch, sumathi@mri.ernet.in

Abstract: We give a brief introduction to the phenomenon of the fractional quantum Hall effect, whose discovery was awarded the Nobel prize in 1998. We also explain the composite fermion picture, which describes the fractional quantum Hall effect as the integer quantum Hall effect of composite fermions.

I would like to start my talk<sup>2</sup> by mentioning that the 1998 Nobel Prize in Physics has been awarded for the discovery of the fractional quantum Hall effect to

<sup>2</sup> Talk presented at the Prof. K. S. Krishnan Birth Centenary Conference on Condensed Matter Physics, held at Allahabad University, Dec 7, 1998.

* Robert Laughlin - a theorist from Stanford University,
* Horst Stormer - an experimentalist from Lucent Technologies (formerly Bell Labs), and
* Daniel Tsui - an experimentalist from Princeton University.

Their citation reads: “for their discovery of a new form of quantum fluid with fractionally charged excitations”. In this talk, I will try to describe this new form of quantum fluid and its fractionally charged excitations. However, since I am speaking to a general audience and the phenomenon of the fractional quantum Hall effect may not be familiar to all, I will start my talk with a brief introduction to the classical Hall effect, before I come to the quantum Hall effect.

The Hall effect, discovered in 1879, is simply the phenomenon that when a plate carrying an electric current is placed in a transverse magnetic field, the Lorentz force causes a potential drop perpendicular to the flow of current.

* Fig. 1 The Hall geometry.

This experiment is performed at room temperature and with moderate magnetic fields (∼1 Tesla). If we measure the Hall resistance and plot it as a function of the magnetic field, we get a straight line - i.e., the Hall resistance varies linearly with the magnetic field.

* Fig. 2 The linear Hall resistance at moderate fields and room temperature.

Much later, in the early seventies, it was found that under certain conditions, electrons could be made to effectively move only in two dimensions. This is achieved by forming an inversion layer at the interface between a semiconductor and an insulator (Si-SiO<sub>2</sub>) or between two semiconductors (GaAs-Al<sub>x</sub>Ga<sub>1-x</sub>As). In such a layer, at very low temperatures (around −272 °C), by applying an electric field perpendicular to the interface, the electrons can be made to sit in a deep quantum well, which quantises the motion of the electrons perpendicular to the interface. Thus, the electrons are essentially constrained to move only in two dimensions.

In 1980, at very low temperatures (∼1 degree Kelvin) and at high magnetic fields (3-10 Tesla), Klaus von Klitzing discovered that the Hall resistance does not vary linearly with magnetic field, but varies in a ‘stepwise’ fashion with the strength of the magnetic field. Even more surprisingly, the value of the resistance at these plateaux was completely independent of the material, temperature, and other variables of the experiment, and depended only on a combination of physical constants divided by an integer - $`\frac{h/e^2}{n}`$! This was the first example of quantisation of the resistivity. (Note that for the Hall geometry, Hall resistance = Hall resistivity.)
In fact, the accuracy of this quantisation is so high that it has led to a new international standard of resistance, represented by the unit 1 klitzing = $`h/4e^2`$ ≈ 6.45 kilo-ohms, defined as the Hall resistance at the fourth step. Note also that where the Hall resistance was flat, the longitudinal resistance was found to vanish. In effect, the system was dissipationless and thus related to superconductivity and superfluidity. For this discovery of the integer quantum Hall effect (IQHE), Klaus von Klitzing was awarded the Nobel prize in physics in 1985.

* Fig. 3 The Hall resistance varies stepwise with changes in magnetic field at high magnetic fields and low temperatures. The steps are quantised at integer values of the filling fraction. (Kosmos 1986)

The IQHE can be easily explained using simple quantum mechanics of non-interacting electrons in an external magnetic field. The Hamiltonian for the system is given by

$$H=\sum _{i=1}^{N}\frac{(\mathbf{p}_i-e\mathbf{A}(\mathbf{x}_i))^2}{2m},\quad \mathrm{with}\ \mathbf{A}(\mathbf{x}_i)=\frac{B}{2}(-y_i,x_i),\ \mathbf{B}=B\widehat{z}.\qquad (1)$$

Solving this Hamiltonian, we find the energy eigenvalues $`E_{n,k_y}=(n+1/2)\hbar \omega `$, which are called Landau levels (LL), in terms of the cyclotron frequency $`\omega =eB/mc`$. The Landau levels are degenerate, since they do not depend on the $`k_y`$ quantum number. The degeneracy of the Landau levels $`\rho _B`$ (the number of states per unit area) can be explicitly computed and is given by $`\rho _B=\frac{eB}{hc}`$. Let us define a filling factor $`\nu =\frac{\rho }{\rho _B}`$ as the number of electrons per Landau level. The filling factor can be thought of as a measure of the magnetic field. Theoretical analyses are often presented with the resistances as functions of the filling fraction, rather than the magnetic field. In terms of the filling fraction, plateaux occur whenever $`\nu =\mathrm{integer}`$, or whenever an integer number of Landau levels is fully occupied.

Why does this happen? Let us see what happens as we increase the density of electrons. As long as states are available in the LL, we can put more electrons into the level and the conductivity goes on increasing (resistance decreases), but when a LL is full, there exists an energy gap to the next available state in the next Landau level. But there exist localised states in the gap, due to impurities in the sample. Hence, as the Fermi level passes through the gap, the localised states get occupied by the electrons and so do not contribute to the conductivity. This is what causes the plateaux in the transverse conductivity, until the next Landau level is reached and the same story is repeated. To understand the extraordinary accuracy of the quantisation of the resistance, one has to also realise the more subtle point that even when some of the states in each LL get localised due to impurities, the conductance carried by the remaining states in that level is as if the entire Landau level were fully occupied! In other words, the electrons in the extended states move faster to compensate for the loss of the electrons in the localised states.

Another, simpler, hand-waving way to explain the IQHE is to say that the system is particularly stable when an integer number of LLs is filled. When we now add more particles, the system prefers to keep the average density fixed and accommodate the extra particles as local fluctuations pinned by disorder.
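Before moving on, it may help to put numbers to these statements. The following small Python sketch (SI units; the field and the carrier density are typical illustrative values, not data from any particular experiment - note that the Gaussian-units expression $`eB/hc`$ above becomes $`eB/h`$ in SI) evaluates the Landau-level degeneracy per unit area, the filling factor, and the quantized Hall resistance $`\frac{h/e^2}{n}`$.

```python
h = 6.626e-34    # Planck constant (J s)
e = 1.602e-19    # electron charge (C)

B = 10.0         # magnetic field (tesla), illustrative
n_s = 2.4e15     # 2D electron density (m^-2), illustrative

rho_B = e * B / h            # Landau-level degeneracy per unit area (SI)
nu = n_s / rho_B             # filling factor
print(f"states per Landau level per m^2: {rho_B:.3e}")
print(f"filling factor nu = {nu:.2f}")

# Quantized Hall resistance on the n-th integer plateau:
for n in (1, 2, 4):
    print(f"n = {n}:  R_H = h/(n e^2) = {h / (n * e**2) / 1e3:6.2f} kilo-ohm")
```

At $`B=10`$ Tesla, one flux quantum per electron corresponds to a density of about $`2.4\times 10^{15}`$ m<sup>-2</sup>, and the $`n=4`$ plateau reproduces the 6.45 kilo-ohm value of the klitzing quoted above.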
Thus, the IQHE is easily explained with just quantum mechanics of non-interacting electrons and the pinning of some of the states due to disorder.

In 1982, Horst Stormer and Dan Tsui repeated the experiment with cleaner samples, lower temperatures and higher magnetic fields (up to 30 Tesla). They found that the integers at which the resistivity is quantized can now be replaced by fractions - 1/3, 1/5, 2/5, 3/7, ….

* Fig. 4 The Hall resistance varies stepwise with changes in magnetic field at even higher magnetic fields, lower temperatures and cleaner samples. The steps are now quantised at fractional values of the filling fraction. (Science 1990)

The QHE at these fractions could not be explained by simple non-interacting quantum mechanics, which says that at these fractions, the Fermi level is within the lowest LL, so that the system is expected to be highly degenerate with no gap. Without the gap, there is no stability and no possible explanation for the plateaux. But this degeneracy is lifted because the electrons are interacting. The Hamiltonian for interacting electrons is given by

$$H=\sum _{i=1}^{N}\frac{(\mathbf{p}_i-e\mathbf{A}(\mathbf{x}_i))^2}{2m}+\sum _{i<j}^{N}\frac{e^2}{|\mathbf{x}_i-\mathbf{x}_j|}.\qquad (2)$$

Moreover, for fractions less than one, all the electrons are in the lowest LL. Hence, the kinetic energy is completely quenched, and the only relevant term in the Hamiltonian is the inter-electron Coulomb repulsion. But the quenching of the kinetic term means that $`e^2`$ is not small compared to anything. (For the IQHE, on the other hand, the potential energy $`e^2/r_{av}`$, where $`r_{av}`$ is the average inter-electron spacing, was small compared to the cyclotron energy and could be neglected.) Hence, we cannot use perturbation theory, and the problem is intrinsically one of strong correlations - a very hard problem.

Laughlin in 1983 used a mixture of physical insight and numerical checks to write down a wave-function - the by-now celebrated Laughlin wave-function

$$\psi _L=\prod _{i<j}(z_i-z_j)^{2p+1}e^{-\sum _i\frac{|z_i|^2}{4l^2}}\qquad (3)$$

\- as a possible variational wave-function (with no variational parameters!) as an ansatz solution for the interacting Hamiltonian. Here $`z_i=x_i+iy_i`$ is the complex position of the $`i^{\mathrm{th}}`$ particle and $`l^2=\hbar c/eB`$ is the squared magnetic length. Using this wave-function, Laughlin could demonstrate the following properties:

* The wave-function describes a uniform distribution of electrons (not random), $`i.e.`$, the number of particles within any patch remains the same.
  + Fig. 5 Comparison between a random distribution of particles (on the left) and a uniform distribution of particles (on the right).
  Clearly, the uniform distribution (fluid-like) minimises the Coulomb energy much better than the random distribution, which can have patches with a large number of particles costing a large energy. In fact, $`\psi _L`$ was found to be very close to the exact ground state wave-function (calculated numerically) for small systems.
* The state described by the wave-function is incompressible. There exists a finite energy gap for all excitations. This is a non-trivial point, since naively one expects a large degeneracy; instead, one now finds that there is a unique wave-function at these fractions with lowest energy, and all other possible wave-functions are less efficient in minimising the Coulomb energy and hence have higher energies.
This is related to the fact that the Laughlin wave-function has multiple zeroes when two particles approach each other, whereas Fermi statistics only requires a single zero. These multiple zeroes are responsible for uniformising the distribution which, in turn, as seen in point 1), minimises the Coulomb energy.

* Quasi-particle excitations over the ground state have fractional charges. This explains the citation, which honours the scientists for their discovery of “a new quantum fluid with fractionally charged excitations”.

Why is the Laughlin wave-function ansatz so celebrated? Its fame lies in the fact that it is a correlated wave-function. Normally, many-body wave-functions are Slater determinants of one-particle wave-functions - $`i.e.`$, products of one-particle wave-functions appropriately anti-symmetrised. For instance, the wave-function for one filled Landau level is given by

$$\chi _1=\left|\begin{array}{cccc}1& 1& \cdots & 1\\ z_1& z_2& \cdots & z_N\\ z_1^2& z_2^2& \cdots & z_N^2\\ \vdots & \vdots & & \vdots \\ z_1^{N-1}& z_2^{N-1}& \cdots & z_N^{N-1}\end{array}\right|$$

which is a Slater determinant of single-particle wave-functions. Similarly, the two-filled-Landau-level state involves $`\bar{z}_i`$’s as it involves the second Landau level, but it can still be written as a Slater determinant of one-particle levels in each of the two Landau levels. But $`\psi _L=\prod _{i<j}(z_i-z_j)^{2p+1}e^{-\sum _i|z_i|^2/4l^2}`$ cannot be written as a sum of products of one-body wave-functions - it intrinsically describes a correlated many-particle state at a filling fraction $`\nu =1/(2p+1)`$.

Once we have the result that at these fractions the system is gapped, just like in the IQHE, it is easy to understand the plateau formation by now having localised states in the intra-LL gap. Thus, using his wave-function, Laughlin could explain the odd-denominator rule, which simply comes from fermion statistics, and the FQHE at the fractions 1/3, 1/5, …, 1/(2p+1). But more contrived scenarios (called the hierarchy picture) were needed to explain fractions like 2/5, 3/7, ….

In 1989, the next step in understanding the problem was taken by Jainendra Jain. He identified the right quasi-particles of the system and called them composite fermions. (There is no guarantee that appropriate quasi-particles, in terms of which any complicated strongly interacting system appears weakly interacting, always exist, but the challenge is to try and find them, if they do exist. Phonons and magnons in lattices and spin models, Landau quasiparticles in metals, Cooper pairs in superconductors and Luttinger bosons (holons) in one-dimensional fermion models are some examples.) In terms of these quasi-particles, the FQHE of strongly interacting fermions is like the IQHE of composite fermions.

The easiest way to understand his quasi-particles is pictorially. Let us measure the magnetic field in terms of flux quanta per electron. IQHE at filling fraction $`\nu =1`$ occurs when there is precisely one flux quantum per electron. FQHE, which occurs at higher magnetic fields, has more flux quanta per electron - $`e.g.`$, filling fraction $`\nu =1/3`$ corresponds to three flux quanta per electron. Hence, we can depict the IQHE as

* Fig. 6 IQHE at $`\nu =1`$. Electrons (depicted as balls) and flux quanta (depicted as tubes). On the average, there is one flux quantum per electron.

and the FQHE at $`\nu =1/3`$ as

* Fig. 7 FQHE at $`\nu =1/3`$. Electrons ‘holding hands’ implying strong interactions.
On the average, there are three flux quanta per electron. Strongly interacting electrons are depicted as ‘holding hands’! Jain identified composite fermions as fermions with an even number of flux quanta attached - in this (simplest) case, two flux quanta are attached. Hence, in Jain’s picture, FQHE at $`\nu =1/3`$ is depicted as

* Fig. 8 FQHE at $`\nu =1/3`$. Composite electrons - electrons with two flux quanta attached - see, on the average, one flux quantum per composite electron.

Observe that composite electrons are no longer interacting! They see one flux quantum per composite electron - similar to the IQHE at $`\nu =1`$, where electrons see one flux quantum per electron. Hence, the FQHE is analogous to the IQHE of composite fermions. The explanation now for unique incompressible wave-functions at the fractions is simply that they occur when composite fermion Landau levels are filled. Whenever an integer number of composite fermion Landau levels is completely filled, there exists a unique ground state and a gap to the next level. This picture could accommodate not only all the Laughlin fractions $`\nu =1/(2p+1)`$, but also all the hierarchy fractions $`\nu =2/5,3/7,\mathrm{\ldots }`$ at the same level. For instance, FQHE at $`\nu =2/5`$ is just the IQHE of composite fermions at $`\nu =2`$, and so on.

Jain used this mean field picture to propose more general wave-functions than the Laughlin wave-function. He first rewrote the Laughlin wave-function as

$$\psi _L=\prod _{i<j}(z_i-z_j)^{2p+1}e^{-\sum _i\frac{|z_i|^2}{4l^2}}=\prod _{i<j}(z_i-z_j)^{2p}\,\chi _1\,e^{-\sum _i\frac{|z_i|^2}{4l^2}},$$

where $`\chi _1`$ is the wave-function of one filled Landau level. Then he wrote the wave-functions for the other fractions of the form $`\nu =n/(2pn+1)`$ (which are all the experimentally observed fractions) as

$$\psi _{\mathrm{Jain}}=\prod _{i<j}(z_i-z_j)^{2p}\,\chi _n\,e^{-\sum _i\frac{|z_i|^2}{4l^2}},$$

where $`\chi _n`$ is the wave-function of $`n`$ filled Landau levels. $`n`$ is the filling fraction of the composite fermion Landau levels, and the Jastrow factor $`\prod _{i<j}(z_i-z_j)^{2p}`$ turns composite fermions into fermions. These wave-functions have been tested numerically and found to be very close to the exact ground state. More interestingly, there now exists experimental evidence for composite fermions - the cyclotron orbit of the charge carrier in the FQHE has been shown to be determined by the effective magnetic field seen by the composite fermion.

There is yet another way to understand incompressibility at the odd-denominator fractions. For the fraction $`\nu =1/3`$, consider fermions with three flux quanta attached - a fermion with three ‘hands’ holding three flux quanta, depicted as

* Fig. 9 FQHE at $`\nu =1/3`$. Composite bosons ‘holding’ three flux quanta in zero field Bose condense.

These composite particles are now bosons in zero field and Bose condense. Hence, incompressibility of the fermion system is equivalent to Bose condensation of the composite bosons. Various explanations are possible because in two dimensions one can have statistical transmutation and describe the same system in terms of fermions, bosons or even anyons (particles with ‘any’ statistics). However, composite fermions are really the appropriate quasi-particles because they are the ones which are ‘weakly interacting’.

Finally, I will conclude by mentioning a few directions in which the subject is currently progressing and give some examples of open problems.

* Edge states at the edge of a sample of quantum Hall fluid.
Edge states form a chiral Luttinger liquid, and there have been several recent experiments to probe edge physics.

* Double-layer or multi-layer FQHE. If the distance between layers is small, one can get new correlated electron states (with correlations between electrons in different layers) as ground states.
* FQHE with unpolarised and partially polarised spins. The usual FQHE assumes that the spin is completely polarised, so that one is justified in working with spinless electrons, but there are experimental situations where this is not true and one needs to explicitly include the spin degree of freedom.
* The $`\nu =1/2`$ state. The composite fermion picture yields ‘free’ fermions at $`\nu =1/2`$. There has been a lot of interest, both theoretical and experimental, in the study of this state, which shows novel non-Fermi liquid behaviour.
* Detailed calculations regarding the widths of the plateaux, transitions between plateaux, effects of temperature, disorder, etc. are yet to be performed at a quantitative level.
* At a more theoretical level, it is still an open problem to understand how microscopic Coulomb repulsions lead to the formation of a composite fermion.
# Special Lagrangian Tori on a Borcea-Voisin Threefold

## 1. Special Lagrangian Tori on Borcea-Voisin Threefolds

Let $`E_i`$ be an elliptic curve with periods $`1`$ and $`\tau _i`$ ($`i=1,2,3`$). The Borcea-Voisin threefold in discussion is the resolution of the quotient of $`E_1\times E_2\times E_3`$ by $`\mathbb{Z}_2\times \mathbb{Z}_2`$. We denote the quotient by $`M_0`$. Let $`z_1,z_2,z_3`$ be the coordinates of the three elliptic curves. The generators of the two $`\mathbb{Z}_2`$ actions are

$$\alpha :z_1\mapsto -z_1+\frac{1}{2},\quad z_2\mapsto -z_2+\frac{1}{2},\quad z_3\mapsto z_3,$$
$$\beta :z_1\mapsto -z_1,\quad z_2\mapsto z_2,\quad z_3\mapsto -z_3.$$

The fixed locus of $`\alpha `$ is the union of sixteen elliptic curves:

$$\left(\frac{2+\tau _1\pm 1\pm \tau _1}{4},\frac{2+\tau _2\pm 1\pm \tau _2}{4}\right)\times E_3.$$

The fixed locus of $`\beta `$ is the union of sixteen elliptic curves:

$$\frac{1+\tau _1\pm 1\pm \tau _1}{4}\times E_2\times \frac{1+\tau _3\pm 1\pm \tau _3}{4}.$$

The fixed locus of $`\alpha \beta `$ is empty. Note that the 32 elliptic curves in the fixed locus do not intersect each other, and their images in $`M_0`$ are 16 elliptic curves. Denote the image set by $`F`$. Let $`\pi :M\to M_0`$ be the resolution map given by a single blowup of $`F`$. Then $`M`$ is a Borcea-Voisin threefold. To see this fact, first we resolve the quotient of $`E_1\times E_2\times E_3`$ by $`\alpha `$. According to the Kummer construction, we get $`K\times E_3`$, where $`K`$ is a Kummer surface. Then $`\beta `$ induces an action on $`K\times E_3`$. This is exactly the action used in the Borcea-Voisin threefold construction (see [B] or [V]). $`M`$ is the Borcea-Voisin threefold constructed from $`K\times E_3`$ and $`\beta `$. Note that the fixed locus of the involution on $`K`$ is two tori, and $`M`$ is self-mirror.

1a. Holomorphic (3,0)-form on $`M`$. Note that $`dz_1\wedge dz_2\wedge dz_3`$ is a holomorphic (3,0)-form on $`E_1\times E_2\times E_3`$. Let $`\mathrm{\Omega }_0`$ be the induced holomorphic (3,0)-form on $`M_0`$. Denote $`\pi ^{*}\mathrm{\Omega }_0`$ by $`\mathrm{\Omega }`$. Then

###### Lemma 1

$`\mathrm{\Omega }`$ is a holomorphic (3,0)-form on $`M`$.

Proof. We only need to check that $`\pi ^{*}\mathrm{\Omega }_0`$ extends across the exceptional divisors and is nonzero everywhere on the exceptional divisors. Since $`M`$ is resolved by a single blow-up along the singular elliptic curves in $`M_0`$, in the normal direction the singularities are of the form $`\mathbb{C}^2/\mathbb{Z}_2`$. We know the required extension is possible for $`\mathbb{C}^2/\mathbb{Z}_2`$ (see, for example, [L], the proof of Lemma 3.1). ∎

1b. Ricci-flat metrics on $`M`$. First we describe the Ricci-flat metrics on the total spaces of the normal bundles of the exceptional divisors in $`M`$. There are two types of exceptional divisors in $`M`$: 8 copies of $`P^1\times E_3`$ and 8 copies of $`P^1\times E_2`$. Let $`E`$ be either $`E_2`$ or $`E_3`$. The total space of the normal bundle of $`P^1\times E`$ in $`M`$ is $`K_{P^1}\times E`$. We can identify $`K_{P^1}\times E`$ with the resolution of $`(\mathbb{C}^2/\mathbb{Z}_2)\times E`$. Let $`w_1,w_2`$ be coordinates on $`\mathbb{C}^2`$ and $`w_3`$ be a coordinate on $`E`$. Define $`U=|w_1|^2+|w_2|^2`$ and

$$f_a(U)=U\sqrt{1+\frac{a^2}{U^2}}+a\mathrm{ln}\frac{U}{\sqrt{U^2+a^2}+a},\qquad a>0.\qquad (1)$$

Then $`\tilde{g}_a=\sqrt{-1}\,\partial \bar{\partial }(f_a(U)+|w_3|^2)`$ are Ricci-flat Kähler metrics on $`K_{P^1}\times E`$. (Note that we need to extend these metrics under the blowup.) These metrics are asymptotically flat at $`\mathrm{\infty }`$.

Now we construct the approximate Ricci-flat metrics on $`M`$.
We glue $`M_0`$ with 8 copies of $`K_{P^1}\times E_2`$ and 8 copies of $`K_{P^1}\times E_3`$, by patching the boundaries of some fixed tubular neighborhoods of $`F`$ in $`M_0`$, the boundaries of some fixed tubular neighborhoods of $`P^1\times E_2`$ in $`K_{P^1}\times E_2`$, and the boundaries of some fixed tubular neighborhoods of $`P^1\times E_3`$ in $`K_{P^1}\times E_3`$. Topologically the resulting manifold is $`M`$. We glue the Kähler potential of the flat orbifold metric on $`M_0`$ and the 16 copies of the Kähler potential described above to get a function $`h_{\vec{a}}`$ (we use different $`a`$’s for different divisors, $`\vec{a}=(a_1,\ldots ,a_{16})`$). Since all the metrics are nearly flat near the boundaries as long as $`\vec{a}`$ is very small, $`g_{\vec{a}}=\sqrt{-1}\,\partial \bar{\partial }h_{\vec{a}}`$ are Kähler metrics on $`M`$. They are very close to Ricci-flat (as close as we want, by choosing $`\vec{a}`$ small). From Yau’s existence theorem of Ricci-flat Kähler metrics (see [Y]), there is a unique function $`u_{\vec{a}}`$ on $`M`$ with $`\int _Mu_{\vec{a}}\,dg_{\vec{a}}=0`$ such that $`g_{\vec{a}}^{RF}`$ is Ricci-flat, where $`g_{\vec{a}}^{RF}=g_{\vec{a}}+\sqrt{-1}\,\partial \bar{\partial }u_{\vec{a}}`$. We need the following

###### Theorem 2

Let $`F`$ be the set of singular points in $`M_0`$. For any relatively compact set $`W`$ in the complement of the proper transform of $`F`$ in $`M`$, there exists a positive constant $`C`$, independent of $`\vec{a}`$ but depending on $`W`$, such that

$$\|u_{\vec{a}}\|_{\tilde{C}^{4,\alpha }(W)}\le C|\vec{a}|^2,\qquad (2)$$

where $`\tilde{C}^{4,\alpha }(W)`$ is the Hölder norm with respect to some fixed coordinate system on $`M`$.

Proof. The approximate Ricci-flat metrics $`g_{\vec{a}}`$ on $`M`$ are similar to the approximate Ricci-flat metrics $`\omega _a`$ in [L]. The same proof as that of Theorem 3.4 in [L] gives the proof of the theorem here. ∎

1c. Special Lagrangian tori on $`M`$. First, $`E_1\times E_2\times E_3`$ has a special Lagrangian torus fibration with respect to the holomorphic (3,0)-form $`dz_1\wedge dz_2\wedge dz_3`$ and the flat metric, namely $`T_{\alpha ,\beta ,\gamma }=T_\alpha \times T_\beta \times T_\gamma `$ for any real numbers $`\alpha ,\beta `$ and $`\gamma `$, where $`T_\alpha \subset E_1`$ is the image of $`\alpha +i\mathbb{R}`$ under the projection $`\mathbb{C}\to E_1`$ (here we need the periods $`\tau _i`$ to be pure imaginary, $`i=1,2,3`$). For generic values of $`\alpha ,\beta `$ and $`\gamma `$, the image of $`T_{\alpha ,\beta ,\gamma }`$ in $`M_0`$ does not intersect the singular set $`F`$. These are embedded tori in $`M_0`$. Now we conclude that these tori can be perturbed to (embedded) special Lagrangian tori in $`M`$.

###### Theorem 3

Any special Lagrangian torus $`f_0`$ in $`M_0`$ as described above can be perturbed to a special Lagrangian torus in $`M`$.

Proof. Assume that the open set $`U`$ contains the image of $`f_0`$ in $`M_0`$ and that the closure $`\overline{U}`$ is compact. On $`\overline{U}`$, the metric $`g_{\vec{a}}^{RF}`$ differs from the flat metric on $`E_1\times E_2\times E_3`$ by an exact form. The difference is small on $`\overline{U}`$ by Theorem 2. $`\mathrm{\Omega }`$ on $`\pi ^{-1}(\overline{U})`$ is the same as $`\mathrm{\Omega }_0`$ on $`\overline{U}`$. The image of $`f_0`$ in $`M`$ is an approximate special Lagrangian torus.
Now we can apply the proof of Theorem 2.1 in [L] to conclude that $`f_0`$ can be perturbed to a special Lagrangian torus, provided we choose $`\vec{a}`$ small enough. ∎

Next we remove the assumption that the periods $`\tau _i`$ are pure imaginary. We can view the threefolds with general $`\tau _i`$ as deformations of those with pure imaginary periods. Then, by applying Theorem 2.1 i) in [L], we conclude that the threefolds have a family of embedded special Lagrangian tori when the real parts of the $`\tau _i`$ are sufficiently small.

We check that the special Lagrangian torus $`f:T^3\to M`$ satisfies $`f^{*}H^2(M)=0`$. This is the condition required by mirror symmetry (see [L]). Note that $`h^2(M)=19`$ by [B]. The following are a basis of $`H^2(M)`$. There are 16 classes which are the Poincaré duals of the exceptional divisors in $`M`$. Since the image $`f(T^3)`$ has no intersection with the exceptional divisors, the pull-backs of these classes are zero. Again let $`z_1=x_1+iy_1,z_2=x_2+iy_2,z_3=x_3+iy_3`$ be the complex coordinates of $`E_1\times E_2\times E_3`$. Then $`dx_1\wedge dy_1,dx_2\wedge dy_2,dx_3\wedge dy_3`$ are the classes of degree two invariant under the actions $`\alpha `$ and $`\beta `$. They can be lifted to three classes in $`H^2(M)`$. Obviously their pull-backs to $`T^3`$ are zero classes.

## 2. Special Lagrangian Submanifolds on $`K_{P^n}`$

In the following, the periods $`\tau _1,\tau _2`$ and $`\tau _3`$ are pure imaginary. From section 1c we know that one can perturb any special Lagrangian torus $`T_{\alpha ,\beta ,\gamma }`$ to one in $`M`$ as long as $`\vec{a}`$ is small, except when $`(\alpha ,\beta )=(\frac{1}{4},\frac{1}{4}),(\frac{1}{4},\frac{3}{4}),(\frac{3}{4},\frac{1}{4}),(\frac{3}{4},\frac{3}{4})`$ or $`(\alpha ,\gamma )=(0,0),(0,\frac{1}{2}),(\frac{1}{2},0),(\frac{1}{2},\frac{1}{2})`$ (these are also the tori which are preserved by either the action $`\alpha `$ or $`\beta `$). Note that each such torus intersects 4 fixed elliptic curves in $`F`$. For example, $`T_{0,\beta ,0}`$ intersects $`\{0\}\times E_2\times \{0\}`$, $`\{0\}\times E_2\times \{\frac{\tau _3}{2}\}`$, $`\{\frac{\tau _1}{2}\}\times E_2\times \{0\}`$, and $`\{\frac{\tau _1}{2}\}\times E_2\times \{\frac{\tau _3}{2}\}`$. Here we try to describe what happens to these special Lagrangian tori.

Let $`z_1=u_1+iv_1,z_2=u_2+iv_2`$ be coordinates on $`\mathbb{C}^2`$. An obvious family of special Lagrangian submanifolds $`L_{bc}^0`$ in $`\mathbb{C}^2`$ is given by $`u_1+iu_2=(b+ic)(v_2+iv_1)`$. It is clear that they are invariant under the $`\mathbb{Z}_2`$ action on $`\mathbb{C}^2`$. We show

###### Theorem 4

Under the blowup of $`\mathbb{C}^2/\mathbb{Z}_2`$ (the blowup is $`K_{P^1}`$), the $`L_{bc}^0`$ give a family of special Lagrangian submanifolds which covers $`K_{P^1}`$.
As in 1a, the holomorphic (2,0)-form $`\mathrm{\Omega}`$ on $`K_{P^1}`$ is induced from $`dz_1\wedge dz_2`$. So Im$`\,\mathrm{\Omega}|_{L_{bc}}=0`$. The Ricci-flat metric on $`K_{P^1}`$ is given by $$\omega =\frac{\sqrt{-1}}{U^2\sqrt{1+U^2}}\left[(1+U^2)U\,\partial\bar{\partial}U-\partial U\wedge \bar{\partial}U\right],$$ where $`U=|z_1|^2+|z_2|^2`$. An easy calculation shows that $`\partial\bar{\partial}U|_{L_{bc}}=0`$ and that $`\partial U|_{L_{bc}}`$ is a real one-form. So $`\omega|_{L_{bc}}=0`$, and the $`L_{bc}`$ are special Lagrangian submanifolds. ∎ Note that the special Lagrangian $`L_{00}^0\times T_\beta`$ matches the special Lagrangian $`T_{0\beta 0}`$ near each singular locus $`0\times E_2\times 0`$, $`0\times E_2\times \frac{\tau_3}{2}`$, $`\frac{\tau_1}{2}\times E_2\times 0`$, and $`\frac{\tau_1}{2}\times E_2\times \frac{\tau_3}{2}`$ in $`M_0`$. We glue four copies of $`L_{00}\times T_\beta`$ to $`T_{0\beta 0}`$ and get $`\tilde{T}_{0\beta 0}`$ in $`M`$; it is tempting to think that some perturbation of $`\tilde{T}_{0\beta 0}`$ will be a special Lagrangian torus in $`M`$. We cannot prove this because we do not know how to estimate the first eigenvalue of the Laplace operator acting on $`\Omega^1(\tilde{T}_{0\beta 0})`$. Finally we construct special Lagrangian submanifolds in $`K_{P^{n-1}}`$, which is isomorphic to the blowup of $`\mathbb{C}^n/\mathbb{Z}_n`$. Let $`z_1=x_1+iy_1,\ldots,z_n=x_n+iy_n`$ be complex coordinates of $`\mathbb{C}^n`$. The holomorphic $`(n,0)`$-form $`\mathrm{\Omega}`$ on $`K_{P^{n-1}}`$ is the pull-back of $`dz_1\wedge\cdots\wedge dz_n`$. Let $`U=|z_1|^2+\cdots+|z_n|^2`$. The Ricci-flat Kähler form on $`K_{P^{n-1}}`$ is $$\omega =\sqrt{-1}\left(f^{\prime}(U)\,\partial\bar{\partial}U+f^{\prime\prime}(U)\,\partial U\wedge\bar{\partial}U\right), \qquad f^{\prime}(U)=\left(1+\frac{1}{U^n}\right)^{1/n}.$$ Let $`\vec{x}=(x_1,\ldots,x_n)`$, $`\vec{y}=(y_1,\ldots,y_n)`$, and let $`A=(a_{ij})_{n\times n}`$ be a real matrix. Consider the real $`n`$-dimensional plane $`L_A^0`$ in $`\mathbb{C}^n`$ defined by $`\vec{x}=A\vec{y}`$. It is easy to check that $`\partial\bar{\partial}U|_{L_A^0}=0`$ and that $`\partial U|_{L_A^0}`$ is a real one-form if and only if $`A`$ equals its transpose $`A^t`$. So $`A=A^t`$ implies that $`\omega|_{L_A^0}=0`$. Note that the only singular point on the image of $`L_A^0`$ in $`\mathbb{C}^n/\mathbb{Z}_n`$ is the origin. Let $`z_1=w_1^{1/n},z_2=w_1^{1/n}w_2,\ldots,z_n=w_1^{1/n}w_n`$ be the blowup coordinates. Let $`L_A`$ be the image of $`L_A^0`$ under the blowup. We show that $`L_A`$ is smooth. Rewriting $`\vec{x}=A\vec{y}`$ in terms of $`z_1=x_1+iy_1,w_2=u_2+iv_2,\ldots,w_n=u_n+iv_n`$, we have $$\left(1-\sum_{j=2}^{n}a_{1j}v_j\right)x_1=\left(a_{11}+\sum_{j=2}^{n}a_{1j}u_j\right)y_1, \qquad (5)$$ $$\left(u_i-\sum_{j=2}^{n}a_{ij}v_j\right)x_1=\left(a_{i1}+v_i+\sum_{j=2}^{n}a_{ij}u_j\right)y_1, \qquad 2\le i\le n. \qquad (6i)$$ Dividing (6i) by (5) we get $`n-1`$ equations defining the intersection of $`L_A`$ with $`P^{n-1}`$, which is smooth. In particular, when $`n=3`$, $`a_{11}=a_{22}=1`$ and all other $`a_{ij}=0`$, we get the equations $`u_2+v_2=0,u_3+v_3=0`$, which define an $`S^1\times S^1`$ in $`P^2`$. So the topological type of $`L_A`$ is $`S^1\times S^1\times \mathbb{R}`$ for this $`A`$. Let $`A_{i_1\ldots i_k}`$ be the $`k\times k`$ matrix formed from the elements of $`A`$ with both row and column numbers in the set $`\{i_1,\ldots,i_k\}`$.
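The "if and only if" role of $`A=A^t`$ can already be seen for the flat Kähler form; this verification is our addition. On $`L_A^0`$, parametrized by $`\vec{y}`$, one has $`dx_j=\sum_k a_{jk}\,dy_k`$, hence

$$\sum_{j}dx_j\wedge dy_j\Big|_{L_A^0}=\sum_{j,k}a_{jk}\,dy_k\wedge dy_j=\sum_{k<j}\bigl(a_{jk}-a_{kj}\bigr)\,dy_k\wedge dy_j,$$

which vanishes precisely when $`A=A^t`$. The same cancellation pattern is what makes $`\partial U|_{L_A^0}`$ real and $`\partial\bar{\partial}U|_{L_A^0}=0`$ for symmetric $`A`$ in the formula for $`\omega`$ above. With $`A_{i_1\ldots i_k}`$ as just defined, the restriction of the holomorphic form is computed next.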
Then $$dz_1\wedge\cdots\wedge dz_n\big|_{L_A^0}=\left(\sum_{k\,\mathrm{even}}\;\sum_{i_1<\cdots<i_k}i^k\det(A_{i_1\ldots i_k})+\sum_{k\,\mathrm{odd}}\;\sum_{i_1<\cdots<i_k}i^k\det(A_{i_1\ldots i_k})\right)dy_1\wedge\cdots\wedge dy_n.$$ Taking the phase factor into consideration, we conclude that $`L_A`$ is a special Lagrangian submanifold in $`K_{P^{n-1}}`$ when $`A=A^t`$ and $$\mathrm{sin}\,\theta \sum_{k\,\mathrm{even}}\;\sum_{i_1<\cdots<i_k}\det(A_{i_1\ldots i_k})+\mathrm{cos}\,\theta \sum_{k\,\mathrm{odd}}\;\sum_{i_1<\cdots<i_k}\det(A_{i_1\ldots i_k})=0,$$ where $`0\le\theta <2\pi`$. Furthermore, any real $`n`$-dimensional hyperplane in $`\mathbb{C}^n`$ which is a limit of the special Lagrangians $`L_A^0`$ gives rise to a special Lagrangian submanifold in $`K_{P^{n-1}}`$.
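In our reading (an observation we add, not made explicit in the original), the double sum in the displayed formula above is just the expansion of a determinant in principal minors:

$$\sum_{k=0}^{n}i^k\sum_{i_1<\cdots<i_k}\det\bigl(A_{i_1\ldots i_k}\bigr)=\det\bigl(I_n+iA\bigr),\qquad\text{so}\qquad dz_1\wedge\cdots\wedge dz_n\big|_{L_A^0}=\det(I_n+iA)\,dy_1\wedge\cdots\wedge dy_n .$$

For a real symmetric $`A`$ with eigenvalues $`\lambda_1,\ldots,\lambda_n`$ this equals $`\prod_j(1+i\lambda_j)\ne 0`$, whose argument $`\sum_j\arctan\lambda_j`$ fixes the phase factor referred to above.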
## ABSTRACT We present here the results of the mapping observation of the Cygnus Loop with the Gas Imaging Spectrometer (GIS) onboard the ASCA observatory. The data cover the entire region of the Cygnus Loop. The spatial resolution of the GIS is moderate, whereas the energy resolving power is much better than that of the instruments used in the previous observations. The ASCA soft-band image shows the well-known shell-like feature, whereas the ASCA hard-band image shows a rather center-filled morphology with a hard X-ray compact source at the blow-up southern region. ## 1 INTRODUCTION The Cygnus Loop is a prototype shell-like supernova remnant. Its large apparent size and the low interstellar absorption allow a detailed investigation of the spatially-resolved plasma with the ASCA observatory. A previous ASCA observation revealed that hot plasma rich in Si, S, and Fe exists at the center region, suggesting that it is ejecta in origin (Miyata et al. 1998a). They estimated that only 1 % of the ejecta is still present at the center region, implying that the major part of the ejecta is dispersed inside the shell region. Therefore, a mapping of the entire region is important to investigate the distributions of heavy elements. A part of the ASCA results was summarized in Miyata (1996). In this paper, we report the preliminary results of the complete mapping observation of the Cygnus Loop with the ASCA GIS. ## 2 OBSERVATION AND DATA ANALYSIS We observed the Cygnus Loop from the PV phase (April 1993) to AO-5 (June 1997). The total number of observations is 30. We re-analyzed all data sets with the ASCA\_ANL. We excluded all the data taken at elevation angles below 5° from the night earth rim and 25° from the day earth rim, at a geomagnetic cutoff rigidity lower than 6 GeV c<sup>-1</sup>, and in the region of the South Atlantic Anomaly. We also applied the 'flare-cut' described in the ASCA News letter No. 5. The total exposure time is 180 ksec after the data screening. We made four kinds of images for each region: a total count image, a non-X-ray background image, a cosmic X-ray background image, and an exposure map. The non-X-ray background image was produced with the H02-sorting method in DISPLAY45 (the detailed description of this method is in Ishisaki (1996)). The cosmic X-ray background image was extracted from the LSS survey data. We also subtracted the non-X-ray background image from the LSS data to produce a pure cosmic X-ray background image. We used the day earth image to correct for the vignetting effect of the ASCA telescopes and the grid structure of the GIS, since the X-ray spectrum of the day earth is very soft and is quite similar to that of the Cygnus Loop. The data were combined into a single image using DISPLAY45 and DIS45userlib. ## 3 OVERALL STRUCTURE We constructed the X-ray images in the energy bands of 0.7-1.5 and 1.5-5 keV as shown in figure 1. These images were corrected both for the exposure and the effective area after subtracting the background properly. The 0.7-1.5 keV image shows a limb-brightening structure and is similar to the previously well-known image of the Cygnus Loop (Ku et al. 1984; Aschenbach 1994). On the contrary, the 1.5-5 keV image shows a center-filled structure rather than the well-known shell-like structure. We find two bright regions: a compact source in the southern region (AX J2049.6+2939) and a north-east (NE) region.
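As an illustration of the image construction described above, the four kinds of images combine as in the following schematic (ours, not the actual ASCA\_ANL/DISPLAY45 code; all array names and thresholds are hypothetical):

```python
import numpy as np

def corrected_image(total, nxb, cxb, exposure, min_exposure=1.0):
    """Background-subtracted, exposure-corrected sky image.

    total    -- raw counts image
    nxb      -- non-X-ray background image (H02-sorting method)
    cxb      -- cosmic X-ray background image (LSS data, NXB removed)
    exposure -- exposure map (day-earth vignetting/grid correction applied)
    """
    net = total - nxb - cxb                 # subtract both background components
    rate = np.zeros(net.shape, dtype=float)
    good = exposure > min_exposure          # avoid dividing by (near-)zero exposure
    rate[good] = net[good] / exposure[good]
    return rate

def hardness_ratio(hard_rate, soft_rate, soft_floor=1e-3):
    """Hardness ratio map: (1.5-5 keV) divided by (0.7-1.5 keV), faint pixels masked."""
    hr = np.full(soft_rate.shape, np.nan)
    ok = soft_rate > soft_floor
    hr[ok] = hard_rate[ok] / soft_rate[ok]
    return hr
```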
The X-ray spectrum of AX J2049.6+2939 is much harder than those of the shell regions and can be fitted with a power-law function with a photon index of 2.1 (Miyata et al. 1998b). Except for AX J2049.6+2939, the hardest emission is found at the NE region ($`\alpha \simeq 313^{\circ}, \delta \simeq 31^{\circ}`$; hereafter we call this region the northern hot spot). Hatsukade & Tsunemi (1990) performed a scanning observation with Ginga in the energy band above 1.5 keV and found a center-filled morphology rather than a shell-like morphology for the Cygnus Loop. The Ginga intensity profile has a maximum at ($`l\simeq 74.9^{\circ}, b\simeq -8.6^{\circ}`$), which coincides well with the northern hot spot we found. The Ginga intensity profile also showed a tail structure toward the southern region, which was probably due to AX J2049.6+2939. ## 4 HARDNESS RATIO MAP Figure 2 shows the hardness ratio map obtained by dividing the 1.5-5 keV band image by the 0.7-1.5 keV band image. The overlaid contour map was constructed from the 0.7-1.5 keV band image. The northern hot spot clearly extends toward the north. Comparing with the contour map of the 0.7-1.5 keV image, the northern hot spot is just inside both the brightest NE limb and the northern bright shell region. Miyata et al. (1998c) investigated the radial profile from the NE limb toward the center region and found that the $`kT_\mathrm{e}`$ distribution shows a maximum of $`\sim`$ 1 keV at $`\sim`$ 0.4 $`R_\mathrm{s}`$, where $`R_\mathrm{s}`$ is the shock radius. This hard spot coincides with the hottest region. There is a hard X-ray emitting region at the center portion of the Loop. Miyata et al. (1998a) investigated the center portion in detail and found hot ($`\sim`$ 0.8 keV) and metal-rich plasma. Such plasma accounts for the hard X-ray emitting region we found. ## 5 SUMMARY We summarize the results of our preliminary analysis of the entire Cygnus Loop. * The ASCA soft-band image in 0.7-1.5 keV shows the well-known shell-like structure. * The ASCA hard-band image in 1.5-5 keV shows a rather center-filled morphology and coincides well with the Ginga scanning observation. * AX J2049.6+2939 is the hardest compact source inside the Cygnus Loop in the ASCA energy band. * There is a hot spot in the ASCA hard-band image. The hot spot is located in the inner region of the bright NE limb. ## 6 REFERENCES * Aschenbach B. 1994, in New Horizon of X-ray Astronomy, ed F. Makino, T. Ohashi (Universal Academy Press, Tokyo) p103 * Hatsukade I., Tsunemi H. 1990, ApJ, 362, 566 * Ishisaki, Y. 1996, Ph.D. thesis, Univ. of Tokyo, ISAS RN 613 * Ku W.H.-M., Kahn S.M., Pisarski R., Long K.S. 1984, ApJ, 278, 615 * Miyata, E. 1996, Ph.D. thesis, Osaka Univ., ISAS RN 591 * Miyata, E., Tsunemi, H., Kohmura, T., Suzuki, S., and Kumagai, S. 1998a, PASJ, 50, 257 * Miyata, E., et al. 1998b, PASJ, 50, 475 * Miyata, E., et al. 1998c, in preparation
# Charge sensitivity of radio frequency single-electron transistor (RF-SET) ## Abstract A theoretical analysis of the charge sensitivity of the RF-SET is presented. We use the "orthodox" approach and consider the case when the carrier frequency is much less than $`I/e`$, where $`I`$ is the typical current through the RF-SET. The optimized noise-limited sensitivity is determined by the temperature $`T`$, and at low $`T`$ it is only 1.4 times lower than the sensitivity of the conventional single-electron transistor. Single-electron devices are gradually becoming useful in real applications. Despite the wide variety of studied circuits, the single-electron transistor (SET) remains the most important device in applied single-electronics (in this letter we will discuss the new version of the SET setup). At present the best reported charge sensitivity of the SET at 10 Hz is $`2.5\times 10^{-5}e/\sqrt{\text{Hz}}`$ (the previous record figure was $`7\times 10^{-5}e/\sqrt{\text{Hz}}`$). The low-frequency sensitivity of the SET is limited by 1/f noise, so it improves as the frequency increases. The best figure achieved so far, $`9\times 10^{-6}e/\sqrt{\text{Hz}}`$, was measured at 4.4 kHz. This is still an order of magnitude worse than the limit determined by the thermal/shot noise of the SET. The difficulty of a further frequency increase is due to the relatively large output resistance $`R_d`$ of the SET. For the typical figures $`R_d\sim 10^5\,\mathrm{\Omega }`$ and wiring capacitance $`C_L\sim 10^{-9}`$ F, the corresponding $`R_dC_L`$ time limits the bandwidth to a few kHz (the use of filters can make it even lower). The importance of potential high-frequency applications makes a significant increase of the bandwidth urgent. This can be done in several ways. The output resistance can be reduced in a superconducting (Bloch) SET based on supercurrent modulation (the use of the quasiparticle tunneling threshold does not help much because $`R_d`$ is limited by the quantum resistance even at the threshold). The load capacitance $`C_L`$ can be decreased by placing the next amplifier close to the SET. However, while a bandwidth up to 700 kHz was demonstrated using this idea, the charge sensitivity was relatively poor because of extra heating and extra noise produced by the preamplifier. Finally, a bandwidth over 100 MHz has recently been demonstrated in the so-called radio frequency (RF) SET, in which the SET controlled the dissipation of the tank circuit, which in turn affected the reflection of the carrier wave with frequency $`\omega /2\pi =1.7`$ GHz. A sensitivity of $`1.2\times 10^{-5}e/\sqrt{\text{Hz}}`$ has been achieved at 1.1 MHz. The theoretical analysis of the ultimate sensitivity of the RF-SET is the subject of the present letter. In principle, a wide bandwidth could be achieved simply by illuminating the SET with microwaves and measuring the wave reflection. The gate voltage would change the SET differential resistance $`R_d`$ and thus affect the reflection coefficient $`\alpha =(Z-R_0)/(Z+R_0)`$, where $`Z^{-1}=i\omega C_s+R_d^{-1}`$, $`R_0\simeq 50\,\mathrm{\Omega }`$ is the cable wave resistance, and $`C_s`$ is the stray capacitance. However, because of the large ratio $`R_d/R_0\sim 10^3`$, the signal would be extremely small. To estimate the signal power $`P\simeq A^2R_0/2R_d^2[1+(\omega C_sR_0)^2]`$, let us use $`R_d=10^5\,\mathrm{\Omega }`$ and the amplitude of the SET bias voltage oscillation $`A=1`$ mV ($`A`$ is limited by the Coulomb blockade threshold); then $`P\sim 10^{-15}`$ W.
This figure corresponds to the noise power of an amplifier with a noise temperature of 10 K within a 10<sup>7</sup> Hz bandwidth and clearly makes such an experiment quite difficult. To increase the signal, the authors of Ref. inserted the SET into the tank circuit (see Fig. 1). Then at the resonant frequency $`\omega =(LC_s)^{-1/2}`$ the circuit impedance is small, $`Z\simeq L/C_sR_d\ll R_0`$ (we assume $`𝒬_{SET}\gg 𝒬\gg 1`$ where $`𝒬_{SET}=R_d/\sqrt{L/C_s}`$ and $`𝒬=\sqrt{L/C_s}/R_0`$), so $`\alpha \simeq -1+2L/C_sR_dR_0`$. The signal power $`P=[V_{in}(\alpha +1)]^2/2R_0`$ ($`V_{in}`$ is the amplitude of the incoming wave) can be expressed via the SET bias amplitude $`A\simeq 2𝒬V_{in}`$ as $`P=𝒬^2A^2R_0/2R_d^2`$, indicating a $`𝒬^2`$ gain in comparison with the nonresonant case. The linear analysis above can be used only as an estimate because of the considerable nonlinearity of the SET I-V curve. For a more exact analysis let us write the differential equation (see Fig. 1) for the voltage $`v(t)`$ at the end of the cable (the static component $`V_0`$ is subtracted): $`\ddot{v}LC_s+\dot{v}R_0C_s+v=2(1-\omega ^2LC_s)V_{in}\mathrm{cos}\,\omega t-R_0I(t)`$, where $`V_{in}\mathrm{cos}\,\omega t`$ is the incoming wave at the end of the cable and $`I(t)`$ is the current through the SET, while the SET bias voltage is $`V_b(t)=V_0+v+(2V_{in}\omega \mathrm{sin}\,\omega t+\dot{v})L/R_0`$. The reflected wave can be written (at the end of the cable) as $`v(t)-V_{in}\mathrm{cos}\,\omega t=-V_{in}\mathrm{cos}\,\omega t+X_1\mathrm{cos}\,\omega t+Y_1\mathrm{sin}\,\omega t+X_2\mathrm{cos}\,2\omega t+Y_2\mathrm{sin}\,2\omega t+\mathrm{\cdots }`$, where the coefficients $`X_k`$ and $`Y_k`$ should be calculated self-consistently (an obvious way is the iterative updating of $`V_b(t)`$ and $`X_k,Y_k`$). While the analysis of the higher harmonics is important for the possible versions of the RF-SET in which the signal is measured at the double (or triple) frequency, we will limit ourselves to the reflected wave at the basic harmonic. For simplicity we assume exact resonance, $`\omega =(LC_s)^{-1/2}`$; then $$X_1=2\sqrt{L/C_s}\,\langle I(t)\mathrm{sin}\,\omega t\rangle , \qquad (1)$$ $$Y_1=-2\sqrt{L/C_s}\,\langle I(t)\mathrm{cos}\,\omega t\rangle , \qquad (2)$$ where $`\langle \cdots \rangle`$ denotes averaging over time. In the first approximation (if $`𝒬_{SET}\gg 𝒬\gg 1`$) the SET bias voltage is $`V_b(t)=V_0+A\mathrm{sin}\,\omega t`$ where $`A=2𝒬V_{in}`$. The coefficients $`X_1`$ and $`Y_1`$ (we omit the index 1 below) can be measured separately using homodyne detection, and both can carry information about the low-frequency signal applied to the SET gate (as usual, we will describe it in terms of the background charge $`Q_0`$ induced on the SET island). If the amplifier noise and other fluctuations are negligible, then the sensitivity of the RF-SET is determined by the intrinsic noise of the SET. The minimal detectable charge $`\delta Q`$ can be expressed as $$\delta Q_X=\sqrt{S_X(f_s)\mathrm{\Delta }f}/(dX/dQ_0), \qquad (3)$$ $$\delta Q_Y=\sqrt{S_Y(f_s)\mathrm{\Delta }f}/(dY/dQ_0), \qquad (4)$$ while the simultaneous measurement of $`X`$ and $`Y`$ can give $`\delta Q=[(1-K^2)/(\delta Q_X^{-2}+\delta Q_Y^{-2}-2K/\delta Q_X\delta Q_Y)]^{1/2}`$, where $`K=(\text{Re}\,S_{XY}/\sqrt{S_XS_Y})\,\text{sign}[(dX/dQ_0)(dY/dQ_0)]`$ is the correlation between the two noises. Here $`S_X(f_s)`$ is the spectral density of the $`X(t)`$ fluctuations at the signal frequency $`f_s`$ (which should be within the tank circuit bandwidth, $`2\pi f_s\lesssim \omega /𝒬`$), $`S_{XY}`$ is the mutual spectral density, and $`\mathrm{\Delta }f`$ is the measurement bandwidth (inverse "accumulation" time).
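In the quasistationary limit considered below ($`\omega \ll I/e`$), Eq. (1) reduces to a time average of the dc response over one rf period; the following minimal sketch (ours) evaluates it for an arbitrary I-V curve. The blockade-plus-ohmic model used here is only a placeholder, not the "orthodox" calculation of the paper.

```python
import numpy as np

def X1(I_of_V, V0, A, L, Cs, npts=2000):
    """In-phase reflected amplitude, Eq. (1): X1 = 2*sqrt(L/Cs) * <I(t) sin(wt)>,
    with quasistationary bias V_b(t) = V0 + A*sin(wt)."""
    wt = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    I = I_of_V(V0 + A * np.sin(wt))        # quasistationary current response
    return 2.0 * np.sqrt(L / Cs) * np.mean(I * np.sin(wt))

def I_model(V, Vt=0.5e-3, R=100e3):
    """Placeholder I(V): full Coulomb blockade below threshold Vt, ohmic above."""
    return np.where(np.abs(V) > Vt, (V - np.sign(V) * Vt) / R, 0.0)

# Example with hypothetical circuit values (L in H, Cs in F, voltages in V):
print(X1(I_model, V0=0.0, A=0.7e-3, L=100e-9, Cs=1e-12))
```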
In this letter we consider only the case of a sufficiently low carrier frequency $`\omega \ll I/e`$ (where $`I`$ is the typical current through the SET), so that the quasistationary state is reached at any moment during the period of oscillations. In this case the spectral density does not depend on $`f_s`$ (which is even lower than $`\omega`$) and $$S_X=4(L/C_s)\,\langle S_I(t)\mathrm{sin}^2\omega t\rangle , \qquad (5)$$ where $`S_I(t)`$ is the low-frequency spectral density of the thermal/shot noise of the current through the SET, which has a time dependence because of the oscillating bias voltage $`V_b`$. There is no need to consider the $`Y`$ output in this case because $`Y=0`$ (so $`\delta Q_Y=\mathrm{}`$) and the noise correlation is absent, $`K=0`$ (nonzero $`Y`$ and $`K`$ would appear at higher $`\omega`$ due to the delay of tunneling events). We use the "orthodox" theory for a normal SET consisting of two tunnel junctions with capacitances $`C_1`$ and $`C_2`$ and resistances $`R_1`$ and $`R_2`$ (see Fig. 1), assuming $`R_j\gg R_Q=\pi \hbar /2e^2`$ (as usual, the gate capacitance is distributed between $`C_1`$ and $`C_2`$ in a proper way). The effects of the finite photon energy $`\hbar \omega`$ are neglected. We also neglect the possible rf modulation of the SET gate voltage. The low-frequency thermal/shot noise of the SET current is calculated in the standard way. Figure 2 shows the dependence of $`X`$, $`S_X`$, and $`\delta Q=\delta Q_X`$ on the background charge $`Q_0`$ for a symmetric SET ($`C_1=C_2`$, $`R_1=R_2`$) at $`T=0.01e^2/C_\mathrm{\Sigma }`$ ($`C_\mathrm{\Sigma }=C_1+C_2`$), $`V_0=0`$, and $`A=0.7e/C_\mathrm{\Sigma }`$. One can see that the minimum of $`\delta Q`$ is achieved near the edge of the $`Q_0`$ range corresponding to nonzero $`X`$, so that the amplitude $`A`$ is only a little larger than the Coulomb blockade threshold $`V_t`$. For $`V_b`$ close to $`V_t`$ the noise of the current through the SET obeys the Schottky formula, $`S_I=2eI`$, with good accuracy at low temperatures, while the current $`I`$ can be approximated as $`I=W/eR_j[1-\mathrm{exp}(-W/T)]`$ where $`W=e(V_b-V_t)(C_1C_2/C_jC_\mathrm{\Sigma })=(-1)^je(Q_0-Q_{0,t})/C_\mathrm{\Sigma }`$ (the $`j`$th junction determines the threshold) and $`|dI/dQ_0|=(dI/dV_b)C_j/C_1C_2`$. (As a consequence of the Schottky formula, the dashed curve in Fig. 2 is approximately twice as high as the $`X`$-curve at small $`X`$.) Using these equations and optimizing $`Q_0`$, one can find the minimum $`\delta Q\simeq 1.2e(R_\mathrm{\Sigma }C_\mathrm{\Sigma }\mathrm{\Delta }f)^{1/2}(TC_\mathrm{\Sigma }/e^2)^{1/2}\times (eA/T)^{1/4}`$ for the symmetric SET at $`T\lesssim eA<e^2/C_\mathrm{\Sigma }`$ ($`R_\mathrm{\Sigma }=R_1+R_2`$). This dependence as a function of the rf amplitude $`A`$ is shown in Fig. 3a by the dashed line, while the numerical result is shown by the solid line. The sensitivity gets worse ($`\delta Q`$ increases) at $`A>e/C_\mathrm{\Sigma }`$ because of the increase of $`X`$ and $`S_X`$. The sensitivity also worsens rapidly when $`A`$ is too small and becomes comparable to $`T/e`$, because of the contribution from the Nyquist noise of the SET at $`V_b`$ close to zero. Before optimizing the amplitude $`A`$, let us notice that the results shown in Fig. 3a correspond to a relatively small $`X`$ that can be difficult to measure experimentally \[in the approximation above $`X\simeq 2(L/C_s)^{1/2}\times 15(T/eR_\mathrm{\Sigma })(T/eA)^{1/2}`$\]. However, as seen from Fig. 2, $`X`$ can be significantly increased for the price of a few ten per cent increase of $`\delta Q`$.
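Combining Eqs. (1), (3), and (5), one sees (a step we spell out explicitly; it is implicit in the optimization just described) that the tank-circuit parameters drop out of the noise-limited sensitivity:

$$\delta Q_X=\frac{\sqrt{S_X\,\mathrm{\Delta }f}}{|dX/dQ_0|}
=\frac{\sqrt{4(L/C_s)\,\langle S_I(t)\,\mathrm{sin}^2\omega t\rangle \,\mathrm{\Delta }f}}
{2\sqrt{L/C_s}\;\bigl|\langle (dI/dQ_0)\,\mathrm{sin}\,\omega t\rangle \bigr|}
=\frac{\sqrt{\langle S_I(t)\,\mathrm{sin}^2\omega t\rangle \,\mathrm{\Delta }f}}
{\bigl|\langle (dI/dQ_0)\,\mathrm{sin}\,\omega t\rangle \bigr|},$$

so only the SET transport characteristics and the temperature remain, which is why the optimized expressions below contain no $`L`$ or $`C_s`$.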
Figure 3b shows $`\delta Q`$ minimized over both $`A`$ and $`Q_0`$ and the corresponding optimum values of $`A`$ and $`Q_0`$ as functions of the dc bias voltage $`V_0`$. One can see that for a symmetric SET the best sensitivity is achieved at $`V_0=0`$ and there is a long plateau of $`\delta Q`$ which ends when $`V_0`$ approaches $`e/C_\mathrm{\Sigma }`$, leading to a significant worsening of the sensitivity. For the asymmetric SET (dashed line) the best sensitivity can be achieved in the plateau range. On the plateau, $`\delta Q`$ can be calculated analytically using the approximations above: $`\delta Q\simeq 3.34e(2R_{min}C_\mathrm{\Sigma }\mathrm{\Delta }f)^{1/2}(TC_\mathrm{\Sigma }/e^2)^{1/2}`$ where $`R_{min}=\text{min}(R_1,R_2)`$. This expression can be compared with the optimized low-temperature sensitivity of the conventional SET, which is given by the same formula with the numerical factor 1.90 instead of 3.34. For the symmetric RF-SET the optimized low-temperature sensitivity (at $`V_0=0`$) is $$\delta Q\simeq 2.65e(R_\mathrm{\Sigma }C_\mathrm{\Sigma }\mathrm{\Delta }f)^{1/2}(TC_\mathrm{\Sigma }/e^2)^{1/2}, \qquad (6)$$ only 1.4 times worse than for the conventional SET. Figure 4 shows the numerically minimized $`\delta Q`$ for the symmetric SET and the corresponding optimal $`A`$ and $`Q_0`$ (while $`V_0=0`$) as functions of temperature. The result of Eq. (6) is shown by the dashed line. The sensitivity scales as $`T^{1/2}`$ at low temperatures, while it significantly worsens at $`T>0.1e^2/C_\mathrm{\Sigma }`$, similar to the result for the conventional SET (dotted line). The "orthodox" sensitivity improves with the decrease of the tunnel resistances, while the optimum value (which should be comparable to $`R_Q`$) could be calculated if cotunneling were taken into account. To make a comparison with experiment, let us take $`C_\mathrm{\Sigma }=0.45`$ fF, $`R_\mathrm{\Sigma }=97`$ k$`\mathrm{\Omega }`$, and $`T=100`$ mK; then after optimization $`\delta Q\simeq 2.7\times 10^{-6}e/\sqrt{\text{Hz}}`$ in the normal case (the necessity of a relatively large $`X`$ would lead to a factor of about 1.5). So, there is still an order of magnitude of possible experimental improvement. Comparison for the superconducting case is not straightforward because the sensitivity depends on the junction quality. In conclusion, we have shown that the price for the wide bandwidth of the RF-SET is only a slight decrease of the noise-limited sensitivity in comparison with the conventional SET. The authors thank K. K. Likharev and H. Seppä for valuable discussions. The work was supported in part by US AFOSR, Russian Fund for Basic Research, and Finnish Academy of Sciences and Letters.
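For the comparison with experiment quoted above, Eq. (6) can be evaluated directly; a small script (ours) reproduces the $`2.7\times 10^{-6}e/\sqrt{\text{Hz}}`$ figure from the stated parameters:

```python
import numpy as np

e  = 1.602e-19        # elementary charge [C]
kB = 1.381e-23        # Boltzmann constant [J/K]

C_sigma = 0.45e-15    # total island capacitance [F]
R_sigma = 97e3        # total tunnel resistance [Ohm]
T       = 0.100       # temperature [K]
df      = 1.0         # measurement bandwidth [Hz]

# Eq. (6): dQ = 2.65 e (R_S C_S df)^(1/2) (kB T C_S / e^2)^(1/2), in units of e
dQ = 2.65 * np.sqrt(R_sigma * C_sigma * df) * np.sqrt(kB * T * C_sigma / e**2)
print(f"{dQ:.2e} e/sqrt(Hz)")   # ~2.7e-06
```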
# Hole concentration induced transformation of the magnetic and orbital structure in Nd1-xSrxMnO3 ## I Introduction The systematic investigation of the phase diagram of the perovskite manganites was initiated with studies of La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> in the 1950s, while later, in the 1980s, a very similar rich phase diagram was rediscovered for the Pr<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system. A distinct metallic ferromagnetic (FM) state in these phase diagrams was explained in terms of the double-exchange (DE) mechanism between the $`e_g`$ electrons of Mn ions, and the richness of the phase diagrams was naturally considered as a manifestation of the strong couplings among the spin, charge, and Jahn-Teller (JT) lattice distortions. The recent discovery of the colossal magnetoresistance (CMR) effect in doped perovskite manganites has renewed interest in these compounds, and intensive experimental and theoretical efforts have been devoted to clarifying the origin of the CMR effect. In addition to the DE interactions as well as the JT distortions, recent studies revealed that the ordering of the twofold $`e_g`$ orbitals of Mn ions plays an essential role in determining the physical properties of the hole-doped manganites. For example, it was recently reported that the underlying $`d(x^2-y^2)`$-type orbital ordering leads to a metallic antiferromagnetic (AFM) state instead of either the metallic FM state or the so-called CE-type charge/spin ordered insulating state. In view of newly developed ideas and of greatly improved experimental techniques, it would be extremely useful to perform systematic experimental studies of the phase diagram to gain a more profound understanding of the interplay between the $`e_g`$ orbitals and the magnetic as well as transport properties in doped manganites. For a study of the hole-concentration dependent phase diagram of doped manganites, we chose the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system, because detailed transport studies have already been performed on this system. We have carried out comprehensive neutron diffraction studies on Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> melt-grown polycrystalline samples with Sr ion concentrations of $`x=0.49`$, 0.50, 0.51, 0.55, 0.60, 0.63, 0.67, 0.70, and 0.75. In what follows, we shall demonstrate that the moderately narrow one-electron bandwidth of the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system yields a variety of physical properties such as charge ordering, metal-insulator transitions, and unique magnetic structures as a result of the interplay between the charge and/or orbital orderings and the spin/lattice structures. As a function of the hole concentration $`x`$, the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system shows a systematic change of the magnetic structures. With increasing $`x`$, the ground state spin ordering varies from metallic ferromagnetism to charge ordered CE-type antiferromagnetism, then to metallic A-type antiferromagnetism, and finally to insulating C-type antiferromagnetism (F $`\to`$ CE $`\to`$ A $`\to`$ C) (See Fig. 1). It can be shown that each spin order corresponds to its specific orbital order, and the determined crystal structures are consistent with the corresponding orbital order for the individual AFM spin structures.
For example, the structures for the CE-type and A-type AFM states are characterized by apically compressed MnO<sub>6</sub> octahedra, while that of the C-type AFM state consists of apically elongated octahedra, reflecting their respective layered-type or rod-type orbital ordering patterns (See Fig. 4). These systematic changes of the ordering of the spins and orbitals are well reproduced by the recent theoretical calculation which takes into account the double degeneracy of the $`e_g`$ orbitals. The charge ordering also plays an important role in the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system. The CE-type charge/spin order is formed in the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system, but it is limited to a very narrow region of $`x`$ around $`x=1/2`$, and it coexists with the A-type AFM state in the $`x=1/2`$ and $`x=0.51`$ samples. The coexistence of these two states may be interpreted as an orbital-order induced phase segregation between the insulating charge-ordered state and the metallic orbital-ordered state. In the C-type AFM phase with $`x`$ beyond 0.6, the Bragg peaks show a selective anisotropic broadening, which indicates disorder in the spacing of the lattice planes along the tetragonal $`c`$ axis. We argue that this result indicates a possible new charge order for a commensurate hole concentration of either $`x=\frac{3}{4}`$ or $`\frac{4}{5}`$. The broadening results from both the $`d(3z^2-r^2)`$-type orbital ordering and the charge ordering. The rest of the paper is organized as follows. The next section briefly describes the experimental procedures. The experimental results are described in Sec. III, where the properties of the magnetic and crystal structures in the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system are given together with the results of the Rietveld refinement analysis. In Secs. IV-VI, the relations between the magnetic and crystal structures are discussed in detail for each type of AFM ordering, followed by a brief summary. ## II Experimental procedures For the present study, the powder samples were prepared by powdering the melt-grown single crystals, and were pressed into rod shape. The single crystal samples were grown by the floating-zone method. The detailed procedures of sample preparation were described elsewhere. The quality of the samples was checked by X-ray diffraction measurements and inductively coupled plasma mass spectroscopy (ICP). The results showed that the samples are in a single phase and the hole concentration agrees with the nominal concentration within 1 % accuracy. Neutron diffraction measurements were performed on the powder diffractometer HERMES and the triple axis spectrometer GPTAS installed at the JRR-3M research reactor at the Japan Atomic Energy Research Institute. The incident neutron wavelengths of HERMES and GPTAS were $`\lambda =1.8196`$ Å and 2.35 Å, respectively. The collimation of HERMES is 6'-open-18', while several combinations of collimators were utilized at GPTAS, depending on the necessity of intensity and momentum resolution. Most of the measurements, especially those for the structural analysis, were performed on HERMES, but part of the measurements was performed with GPTAS because a stronger intensity of magnetic reflections is available on this spectrometer due to the high incident neutron flux. The samples were mounted in aluminum capsules with helium gas, and were attached to the cold head of a closed-cycle helium gas refrigerator. The temperature was controlled within an accuracy of 0.2 degrees.
To obtain the structural parameters, Rietveld analysis was performed on the powder diffraction data using the analysis program RIETAN. ## III Magnetic and crystal structures We begin with a description of the overall features of the lattice and magnetic structure of the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system by examining the $`x`$-$`T`$ phase diagram for $`0.3\le x\le 0.8`$ shown in Fig. 1. In the distorted perovskite crystal structure, Mn ions are surrounded by six O ions, and the MnO<sub>6</sub> octahedra form a pseudo-cubic lattice, whereas Nd or Sr ions occupy the body-center positions of the pseudo-cubic lattice of the MnO<sub>6</sub> octahedra. Due to the buckling of the octahedra, however, the orthorhombic unit cell becomes $`\sqrt{2}\times \sqrt{2}\times 2`$ of the cubic cell. In the concentration region $`0.3\le x\le 0.8`$, the crystal structure is classified into two phases from the lattice parameters. One is the well-known O′ phase with $`c/\sqrt{2}<b<a`$, which appears in the lower Sr concentration region $`x\lesssim 0.55`$ at room temperature, while the other is a pseudo-tetragonal O″ phase with $`a\simeq b<c/\sqrt{2}`$ for $`x\gtrsim 0.55`$, as indicated in Fig. 1. At low temperatures, on the other hand, the region of the O′ phase expands, and the phase boundary shifts to around $`x=0.60`$. In addition, a monoclinic structure was detected near the low temperature structural phase boundary near $`x\simeq 0.60`$. For $`0.55\le x\le 0.60`$, a structural transition from the O″ phase to the O′ phase coincides with the AFM transition temperature $`T_\mathrm{N}`$. For $`x<0.48`$, the ground state is an FM metal. In the region $`0.50\le x\le 0.60`$, there appears a metallic AFM state with the layered type AFM ordering, which is called A-type after Ref. . With further increasing $`x`$, the C-type AFM order was observed in the O″ phase. In this phase, the resistivity uniformly increases with lowering temperature, and the sample remains insulating at all temperatures, although the temperature derivative of the resistivity shows an anomaly at $`T_\mathrm{N}`$. Only within a small range of Sr concentration around $`x\simeq 0.50`$, the system exhibits a charge-ordered insulating state which is accompanied by the CE-type AFM spin ordering after it shows the metallic FM state below $`T_\mathrm{C}`$ in the intermediate temperature region. ### A Magnetic structures The AFM spin ordering yields superlattice reflections in the neutron diffraction profiles. Since the positions of the superlattice peaks differ from each other according to the spin patterns, one can determine the spin structure from the neutron diffraction profiles. In Fig. 2, we show the typical powder diffraction patterns measured at the lowest temperature ($`\sim 10`$ K) for $`x=0.49`$, 0.55, and 0.75. One can clearly recognize the different superlattice reflection patterns for the CE-type, A-type, and C-type AFM spin arrangements, respectively. Cross symbols represent the measured intensity, and solid lines are the calculated diffraction patterns for the nuclear reflections obtained by the Rietveld analysis. The AFM superlattice reflections are indicated by hatches, and the corresponding spin ordering patterns are depicted in the insets. The CE-type spin ordering (Fig. 2(a)) is characterized by the alternate ordering of the Mn<sup>3+</sup> and Mn<sup>4+</sup> ions. The spin ordering pattern in the $`ab`$ plane is rather complicated, and it stacks antiferromagnetically along the $`c`$ axis.
The magnetic reflections for the Mn<sup>3+</sup> and Mn<sup>4+</sup> sublattices are decoupled: the former are indexed as $`(h/2,k,l)`$ with $`k=\text{integer}`$ and $`h,l=\text{odd integer}`$, while the latter are indexed as $`(h/2,k/2,l)`$ with $`h,k,l=\text{odd integer}`$. In the A-type spin ordering (Fig. 2(b)), the spins order ferromagnetically in the $`ab`$ plane with the moments pointing along the $`a`$ axis, and the FM planes stack antiferromagnetically along the $`c`$ axis. The magnetic reflections appear at $`(hkl)`$ with $`h+k=\text{even integer}`$ and $`l=\text{odd integer}`$. As described later, on the other hand, we observed a monoclinic structure in the A-type AFM phase of the $`x=0.60`$ sample, where the FM planes stack along the \[1 1 0\] direction, identical to the case of monoclinic Pr<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>. In the C-type spin ordering (Fig. 2(c)), the spins order ferromagnetically along the $`c`$ axis, and neighboring spins in the $`ab`$ plane point in opposite directions. The magnetic reflections are observed at $`(hkl)`$ with $`h+k=\text{odd integer}`$ and $`l=\text{even integer}`$. We note that, for the $`x=0.63`$ and 0.67 samples, an FM component was observed in the magnetization curve below $`T_{\mathrm{CA}}\simeq 45`$ K for the $`x=0.63`$ sample or $`T_{\mathrm{CA}}\simeq 15`$ K for the $`x=0.67`$ sample, as indicated by the line CAF in Fig. 1. To confirm the existence of the FM component in these samples, we measured the temperature dependence of the (110) and (002) reflections for the $`x=0.63`$ polycrystalline sample. If FM Bragg scattering appears, the intensity of these reflections should increase. Although we observed a slight increase of the intensity below $`T_{\mathrm{CA}}`$, its magnitude is no more than the statistical error. Even if the FM component exists, it is too small to derive an accurate moment from the powder sample data. In Table I, we summarize the magnetic moments per Mn site and their directions for all the samples studied. Interestingly, we found a clear trend that the direction of the moment is always parallel to the axis with the largest lattice constant (See Table II). ### B Crystal structures In order to characterize the crystal structures of each phase, we performed Rietveld analysis of the neutron powder diffraction patterns for all the samples at selected temperatures. The obtained structural parameters are summarized in Table II. The types of the crystal structure (CS) and of the magnetic structure (MS) are also listed in the table. The shapes of the MnO<sub>6</sub> octahedra for the O′ and O″ phases are schematically illustrated in Figs. 3(a) and (b), respectively. We examine the characteristic crystal symmetry of each phase, and discuss the influence of the distortion of the MnO<sub>6</sub> octahedra in the following. In spite of the fact that many orthorhombic perovskite manganites have the $`Pbnm`$ ($`Pnma`$ in another setting) symmetry due to the GdFeO<sub>3</sub>-type distortion, the measured powder diffraction profiles in the O′ phase were well fitted with the orthorhombic space group $`Ibmm`$ (or $`Imma`$). It should be noted that the $`Ibmm`$ structure was also observed in Pr<sub>0.65</sub>Ba<sub>0.35</sub>MnO<sub>3</sub> at room temperature, Nd<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> (Ref. ), and Pr<sub>1/2-x</sub>Y<sub>x</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> (Ref. ).
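The A- and C-type indexing rules quoted above amount to simple parity checks on integer $`(hkl)`$; a small helper (our illustration; the half-integer CE-type indices are left out) can be used to label observed peaks:

```python
def magnetic_reflection_type(h, k, l):
    """Classify an integer (h,k,l) against the quoted AFM rules:
    A-type: h+k even and l odd;  C-type: h+k odd and l even."""
    if (h + k) % 2 == 0 and l % 2 == 1:
        return "A-type"
    if (h + k) % 2 == 1 and l % 2 == 0:
        return "C-type"
    return "not an A- or C-type magnetic reflection"

# (001) is an A-type peak, (100) a C-type peak, (110) purely nuclear:
for hkl in [(0, 0, 1), (1, 0, 0), (1, 1, 0)]:
    print(hkl, magnetic_reflection_type(*hkl))
```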
To see the difference between the $`Pbnm`$ and $`Ibmm`$ structures, the tiltings of the MnO<sub>6</sub> octahedra for the two space groups are illustrated in Fig. 3(c). In the $`Pbnm`$ symmetry, the octahedra rotate around both the $`b`$ and $`c`$ axes. In contrast, the tilting of the octahedra is restricted to the $`b`$ axis only in the $`Ibmm`$ symmetry, and thereby the $`x`$ and $`y`$ coordinates of the inplane oxygen O(2) are fixed to 1/4. As a result, the two Mn–O bonds in the $`ab`$ plane have an equal length. One can distinguish the $`Ibmm`$ symmetry from $`Pbnm`$, in principle, because $`Pbnm`$ has a lower symmetry, and there should exist additional Bragg reflections which are allowed only in the $`Pbnm`$ symmetry. In the paramagnetic phase, however, we observed no additional reflection specific to the $`Pbnm`$ symmetry, and we tentatively assigned the space group in the paramagnetic phase as $`Ibmm`$. Unfortunately, the scattering angles of the AFM superlattice reflections overlap with the $`Pbnm`$-specific reflections in the low temperature AFM phase, which prevented us from determining the precise space group in this phase. Accordingly, we performed the Rietveld analysis for both space groups, and obtained almost identical $`R`$ factors. For comparison, the two parameter sets determined for both symmetries for the $`x=0.50`$ sample are shown in Table II, but only the parameter sets for the $`Ibmm`$ symmetry are tabulated for the rest of the samples. We found that the measured powder diffraction profiles in the O″ phase were well fitted with the tetragonal space group $`I4/mcm`$. Pr<sub>0.65</sub>Ba<sub>0.35</sub>MnO<sub>3</sub> at 210 K (Ref. ) and La<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub> (Ref. ) were reported to have the same space group. The feature of the space group $`I4/mcm`$ is also shown in Fig. 3(d). In the $`I4/mcm`$ symmetry, the MnO<sub>6</sub> octahedra rotate only around the $`c`$ axis, and all the Mn–O bonds in the $`ab`$ plane are equal in length. The octahedra in the $`z=1/2`$ plane rotate in the opposite direction to those in the $`z=0`$ plane. As illustrated in Figs. 3(c) and (d), the observed orthorhombic and tetragonal structures are closely related. If each space group is denoted by the tiltings of the MnO<sub>6</sub> octahedra in Glazer's terminology, the $`Pbnm`$, $`Ibmm`$, and $`I4/mcm`$ symmetries are expressed by $`a^+b^-b^-`$, $`a^0b^-b^-`$, and $`a^0a^0c^-`$, respectively. Here the positive and negative signs denote that the octahedra along the tilt axis are tilted in-phase or anti-phase, and 0 means no tilt. Therefore, they can be derived from the cubic lattice by introducing successive tiltings of the MnO<sub>6</sub> octahedra. Comparing the projection of the octahedra in the orthorhombic phase (Fig. 3(c)) and that in the tetragonal phase (right part of Fig. 3(d)), one can see that, as far as the tilting of the octahedra is concerned, the tetragonal axis coincides with the corresponding axis in the orthorhombic structure. In addition to the tilting of the octahedra, distortions of the MnO<sub>6</sub> octahedra provide useful information on the orbital as well as charge orderings. As one can see from Table II, the two Mn–O bonds in the $`ab`$ plane are always longer than those along the $`c`$ axis in the O′ phase. This feature indicates that the A-type AFM structure is accompanied by the $`d(x^2-y^2)`$ type orbital order as depicted in Fig. 4(b), and this result is supported by the recent theoretical calculations.
The $`d(x^2-y^2)`$ type orbital order causes unique magnetic and transport properties due to the strongly anisotropic couplings within and perpendicular to the orbital ordered planes. As we have predicted and demonstrated in the preceding studies, and as will be discussed in Sec. V, this type of orbital order yields a metallic A-type AFM state. It should be noted that one might expect the orbital-order-induced anisotropy to be less significant in the paramagnetic phase, since the difference in bond length between the apical and inplane Mn–O bonds becomes rather small at elevated temperatures. Surprisingly, however, the anisotropic behavior persists in the high temperature phases. Very recently, we have demonstrated the existence of anomalous anisotropic spin fluctuations in the paramagnetic and FM phases of Nd<sub>0.50</sub>Sr<sub>0.50</sub>MnO<sub>3</sub>, indicating that the $`d(x^2-y^2)`$ type orbital order has a strong influence at high temperatures. In the O″ phase, on the other hand, the Mn–O bond length along the $`c`$ axis is longer than those in the $`ab`$ plane, and this difference is further enhanced in the AFM phase. The apically stretched MnO<sub>6</sub> octahedron is consistent with the ordering of the $`d(3z^2-r^2)`$ orbitals depicted in Fig. 4(c). It is worth mentioning that the recent theoretical calculation confirmed that the C-type AFM state with the ordering of the $`d(3z^2-r^2)`$ orbitals appears in the higher doping region. Finally, we would like to mention the CE-type charge/orbital ordering. The CE-type ordering is characterized by the alternate ordering of the Mn<sup>3+</sup> and Mn<sup>4+</sup> ions and by the ordering of the $`d(3x^2-r^2)`$/$`d(3y^2-r^2)`$ orbitals on the Mn<sup>3+</sup> sites in the $`ab`$ plane, as depicted in Fig. 4(a). This type of orbital ordering doubles the size of the unit cell along the $`b`$ axis, and produces superlattice reflections at $`(h,k/2,l)`$ with $`h=\text{even}`$, $`k=\text{odd}`$, and $`l=\text{integer}`$. Even for the polycrystalline samples, we could observe the superlattice reflections in the $`x=0.49`$ and 0.50 samples at $`2\theta =49^{\circ}`$ (indicated by an arrow in Fig. 2(a)), which can be indexed as $`(2\frac{1}{2}2)+(2\frac{3}{2}0)`$, as was the case for the Pr<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system. On the other hand, this type of charge ordering necessitates considering two independent Mn sites for the Mn<sup>3+</sup> and Mn<sup>4+</sup> ions; namely, we need to treat two types of distortions in the MnO<sub>6</sub> octahedra. Clearly, such an analysis multiplies the number of parameters in the fitting process, and would yield less reliable structural parameters. For this reason, we did not treat the doubling of the unit cell due to the orbital ordering, and performed the Rietveld analysis on the CE-type samples assuming only the original $`Ibmm/Pbnm`$ structure. Consequently, the obtained Mn–O bond lengths and Mn–O–Mn angles give the values averaged over the two Mn sites. ## IV Influence of the CE-type ordering Near $`x\simeq \frac{1}{2}`$, the doped perovskite manganites are usually expected to show the so-called CE-type charge/orbital/spin superstructure depicted in Fig. 4(a). For the Pr<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system, for instance, the CE-type ordering is observed over the wide range $`0.3\le x\le 0.5`$. As shown in Fig. 1, however, the CE-type ordering is observed only in a very limited range near $`x\simeq 0.50`$ in the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system.
Note that the CE-type ordering was not observed in the $`x=0.55`$ sample. A distinct feature of the CE-type ordering in the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system is its coexistence with another spin ordering. Because the FM order is taken over by the A-type AFM order near $`x=0.5`$ in the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system (See the phase diagram in Fig. 1.), the FM order coexists with the CE-type order in the $`x=0.49`$ sample, whereas the A-type AFM order coexists with the CE-type order in the $`x=0.51`$ sample. To illustrate the situation more specifically, we shall describe the behavior of the $`x=0.49`$ and 0.51 samples in detail below. Concerning the AFM side for $`x>0.50`$, similar results were reported very recently on the same Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system with $`x=0.52`$ and 0.54. Figures 5(a)-(f) show the temperature dependences of the magnetic Bragg peaks, the lattice constants, and the resistivity for the $`x=0.49`$ and 0.51 samples. In the $`x=0.49`$ sample, the FM spin ordering was observed below $`T_\mathrm{C}\simeq 280`$ K. As shown in Fig. 5(a), the intensity of the (110) and (002) reflections increases below $`T_\mathrm{C}\simeq 280`$ K due to the FM order. Below $`T_\mathrm{N}\simeq 160`$ K, it suddenly drops, and the $`(\frac{1}{2}\frac{1}{2}1)`$ reflection appears, indicating the formation of the CE-type AFM order. It should be noted, however, that the magnetization study strongly suggests the persistence of the ferromagnetic order below $`T_\mathrm{N}`$. As shown in Table I, the $`x=0.49`$ sample has an FM moment of $`\sim 0.8\mu _B`$ at 15 K. Therefore, the FM order coexists with the CE-type AFM order in the $`x=0.49`$ sample. In the $`x=0.51`$ sample, the behavior of the magnetic ordering was very similar to that of the $`x=0.50`$ sample reported in Refs. and . As shown in Fig. 5(d), the intensity of the $`(110)+(002)`$ reflections increases below $`T_\mathrm{C}\simeq 240`$ K owing to the onset of the FM order. With decreasing temperature, it increases quickly, but shows a sudden drop at $`T_\mathrm{N}^\mathrm{A}\simeq 200`$ K, at which the (001) A-type AFM reflection appears. In contrast to the $`x=0.49`$ FM sample, the intensity of the $`(110)+(002)`$ reflection in the $`x=0.51`$ AFM sample has no magnetic contribution below $`T_\mathrm{N}^\mathrm{A}`$. The difference between the intensity above $`T_\mathrm{C}`$ and that below $`T_\mathrm{N}^\mathrm{A}`$ is due to the structural transition at $`T_\mathrm{N}^\mathrm{A}`$. With further lowering of the temperature, the $`(\frac{1}{2}\frac{1}{2}1)`$ CE-type AFM superlattice reflection appears below $`T_\mathrm{N}^{\mathrm{CE}}\simeq 150`$ K. Note that the CE-type ordering suppresses the increase of the intensity of the (001) A-type AFM reflection below $`T_\mathrm{N}^{\mathrm{CE}}`$, indicating that the two spin orderings are strongly correlated. The temperature dependence of the lattice constants of the $`x=0.49`$ sample is shown in Fig. 5(b). It shows a weak inflection at $`T_\mathrm{C}`$, but a sharp jump at $`T_\mathrm{N}`$, where the $`c`$ axis shrinks while the $`a`$ and $`b`$ axes expand, consistent with the CE-type orbital ordering in the $`ab`$ plane. Similarly, the lattice constants of the $`x=0.51`$ AFM sample exhibit a large splitting at $`T_\mathrm{N}^\mathrm{A}`$ (Fig. 5(e)), indicating that the A-type magnetic transition is accompanied by a structural transition which stabilizes the $`d(x^2-y^2)`$-type planar orbital ordering, as discussed in the previous section.
They show, however, no distinct anomaly at $`T_\mathrm{N}^{\mathrm{CE}}`$. In fact, we carried out a detailed structural analysis of the two powder diffraction data sets, one observed at $`T=160`$ K in the A-type AFM phase for $`T_\mathrm{N}^{\mathrm{CE}}<T<T_\mathrm{N}^\mathrm{A}`$ and the other at $`T=10`$ K in the low temperature phase where the two orderings coexist. But we found no difference in crystal structure between the two AFM phases (see Table II). In particular, all the nuclear reflections in the powder diffraction data at 10 K can be well fitted to a single structure in spite of the coexistence of the A-type and CE-type AFM orderings. These results on the $`x=0.51`$ sample demonstrate that the crystal structure of the A-type AFM phase in Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> is practically indistinguishable from that of the CE-type phase despite the difference in spin structure. The change of the magnetic structure also has a strong influence on the behavior of the resistivity. As shown in Fig. 5(c), the resistivity of the $`x=0.49`$ FM sample shows metallic behavior below $`T_\mathrm{C}`$, then a sharp rise at $`T_\mathrm{N}`$ due to the CE-type charge order. The increase of the resistivity is, however, suppressed below $`\sim`$100 K by the FM order with the moment of $`\sim 0.8\mu _B`$ which coexists with the CE-type AFM spin order. In the $`x=0.51`$ AFM sample, on the other hand, the resistivity shows metallic behavior below $`T_\mathrm{C}`$, a moderate increase at the onset of the A-type AFM spin order, and a second increase at $`T_\mathrm{N}^{\mathrm{CE}}`$ due to the CE-type charge order, as shown in Fig. 5(f). By comparing this behavior with that of the $`x=0.49`$ FM sample, the influence of the spin ordering is clear. The FM spin order keeps the resistivity of the $`x=0.49`$ FM sample at the order of $`\rho \sim 5\times 10^{-2}\,\mathrm{\Omega }\,\mathrm{cm}`$ for $`T<T_\mathrm{N}`$. In contrast, the resistivity of the $`x=0.51`$ sample exhibits a jump at $`T_\mathrm{N}^\mathrm{A}`$, remains of the order of $`\rho \sim 5\times 10^{-3}\,\mathrm{\Omega }\,\mathrm{cm}`$ in the metallic A-type AFM phase for $`T_\mathrm{N}^{\mathrm{CE}}<T<T_\mathrm{N}^\mathrm{A}`$, and then increases monotonically below $`T_\mathrm{N}^{\mathrm{CE}}`$. It should be noted that the metallic resistivity of the $`x=0.51`$ AFM sample in the A-type AFM state for $`T_\mathrm{N}^{\mathrm{CE}}<T<T_\mathrm{N}^\mathrm{A}`$ is of the same order as those of the other metallic A-type AFM samples. There are several possibilities for the origin of the simultaneous presence of the CE-type ordering with the FM or A-type AFM spin orderings in the $`x=0.49`$ and 0.51 samples. Scenarios of an inhomogeneous distribution of the holes in the sample or of a canted magnetic ordering consisting of the CE-type and A-type moments seem to be consistent with the observed results. The former scenario can be attributed either to a trivial concentration distribution, or to an intrinsic phase segregation. Although it is extremely difficult to experimentally distinguish these two possibilities, there are some interesting observations which seem to favor an intrinsic spontaneous phase segregation in doped manganites near $`x\simeq \frac{1}{2}`$. As mentioned above, the increase of the CE-type Bragg intensity suppresses the A-type AFM intensity in the $`x=0.51`$ sample; in other words, the CE-type order grows at the expense of the A-type ordered region. This fact excludes the possibility of a trivial distribution of the hole concentration.
In addition, we found that the magnetic moments of the A-type and CE-type AFM structures lie in the same direction, i.e., along the $`a`$ axis (Table I). This result seems to suggest that a canted magnetic ordering is unlikely in the present case. We also found that the averaged lattice structure of the CE-type phase is almost identical to that of the A-type phase in the present samples. As shown in the case of the $`x=0.51`$ sample, the lattice parameters exhibit little anomaly between the two phases. Since both phases exhibit orbital ordering within the basal plane, when the charges are progressively localized with decreasing $`T`$, the orbitals are reorganized, with surprisingly small lattice distortions, from the $`d(x^2-y^2)`$-type orbital order of the A-type AFM ordering to the $`d(3x^2-r^2)`$/$`d(3y^2-r^2)`$-type orbital order of the CE-type ordering. Combining these observations, we believe that the simultaneous existence of the two states strongly indicates that these two states are very close in energy, and their relative fraction can be easily varied either by temperature or by tuning other physical parameters such as the one-electron bandwidth; at the same time, this could explain why the CE-type ordering appears only in a very narrow concentration range of 1-2 % around $`x=1/2`$ in the Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> system. As discussed in Ref. , we argue that this behavior can be viewed as an effective phase separation between two different orbital ordered regions which takes place in doped manganites with hole concentration $`x\simeq \frac{1}{2}`$. The strong correlation between the coexistence of the CE-type and A-type orderings and the resistivity has recently been pointed out for doped manganite systems with $`x=\frac{1}{2}`$, including the two-dimensional single-layer and bilayer systems La<sub>0.5</sub>Sr<sub>1.5</sub>MnO<sub>4</sub> and La<sub>1</sub>Sr<sub>2</sub>Mn<sub>3</sub>O<sub>7</sub> (Ref. ). ## V Metallic A-type antiferromagnet The most important result in the metallic A-type antiferromagnetic phase is the fact that all crystal structures in this phase share the common feature that the lattice spacing in the direction of the AFM stacking is the smallest (See Fig. 3). This salient feature causes an anisotropy in both the magnetic and transport properties, as discussed in Sec. III-B. As reported recently, the spin wave dispersion relation in the metallic A-type AFM Nd<sub>0.45</sub>Sr<sub>0.55</sub>MnO<sub>3</sub> exhibits a large anisotropy of the effective spin stiffness constants between the intraplanar direction within the FM layers and the interplanar direction perpendicular to the layers. A similar directional anisotropy of the resistivity was also observed in the A-type AFM samples. These anisotropies in the physical properties are strong evidence of the $`d(x^2-y^2)`$-type orbital ordering within the FM layers, and are fully consistent with the characteristics of the crystal and magnetic structures observed in the present studies. In this section, we focus on the detailed crystal structures observed in the region $`0.55\le x<0.63`$, where the system shows a transition from the paramagnetic O″ phase to the metallic A-type AFM O′ phase. Figures 6(a) and (b) show the temperature dependences of the A-type AFM Bragg peak and of the $`d`$ spacings of the planes of the (110)/(002) doublet for the $`x=0.55`$ sample. The (001) A-type AFM Bragg peak appears below $`T_\mathrm{N}=230`$ K.
The lattice spacings of the (002) and (110) nuclear reflections cross at $`T_\mathrm{N}`$ due to the change of the space group from O″ to O′. The shrinkage of the $`c`$ axis in the A-type AFM phase reflects the $`d(x^2-y^2)`$-type orbital ordering. Figures 6(c) and (d) show the similar temperature dependences for the $`x=0.60`$ sample. This sample also belongs to the tetragonal O″ phase in the paramagnetic phase, and shows a first order structural phase transition at $`T_\mathrm{N}`$. In contrast to the $`x=0.55`$ sample, however, it has a monoclinic structure whose unique axis is the $`c`$ axis in the AFM phase, as one can clearly see from the splitting of the tetragonal (220) reflection into the monoclinic $`(220)+(\overline{2}20)`$ reflections below $`T_\mathrm{N}`$. We have previously reported that Pr<sub>0.50</sub>Sr<sub>0.50</sub>MnO<sub>3</sub> also has a monoclinic structure in the A-type AFM phase, and its crystal structure belongs to the $`P112_1/n`$ ($`P2_1/c`$, cell choice 2) space group. In order to analyze the powder patterns of the present Nd<sub>0.40</sub>Sr<sub>0.60</sub>MnO<sub>3</sub> sample collected at 10 K, we first performed the Rietveld analysis assuming the same $`P112_1/n`$ space group. However, we noticed that the $`P112_1/n`$ space group predicts too many allowed reflections compared to the observed Bragg peaks. Therefore, in the next step, we fitted the profile with the space group $`I112/m`$ ($`C2/m`$, cell choice 3), which has a higher symmetry, and we found that the fit is of almost equal goodness to that for the $`P112_1/n`$ space group. We have listed the parameters obtained with this symmetry in Table II. The difference between the crystal structures of the two space groups is the following: in both space groups, two unequal Mn sites are placed at adjacent sites alternately in all directions, but the freedom of the O sites is much more restricted in the case of the $`I112/m`$ structure. The apical oxygen O(1) is placed on the line which connects the nearest Mn ions along the $`c`$ axis, and only its $`z`$ coordinate is allowed to vary, while the positions of the inplane oxygens O(2) and O(3) are confined to the $`ab`$ plane. Unfortunately, the error of the refinement for the $`x=0.60`$ sample is the worst among the samples analyzed in the present study ($`S=R_{\mathrm{wp}}/R_\mathrm{e}\simeq 2.2`$ for the $`x=0.60`$ sample, whereas $`S`$ is less than 2 for the other samples). The main reason for the large $`R`$ factor may be that this sample is not in a single phase at 10 K, presumably because the AFM phase of $`x=0.60`$ lies just at the boundary between the orthorhombic O′ region and the tetragonal O″ region. Indeed, for the $`x=0.63`$ sample, we found that $`\sim`$10 % of the sample has the same lattice constants as the $`x=0.60`$ sample at 10 K and shows the A-type antiferromagnetism. This large $`R`$ factor causes a slight ambiguity in the identification of the indices of the closely located peaks such as (004), (220), and $`(\overline{2}20)`$ in the monoclinic phase, but when the assignment of the three axes was assumed as labeled in Fig. 6(d), the Rietveld analysis gave the best fit. The $`x=0.60`$ sample exhibits the same A-type AFM structure as the $`x=0.51`$ and 0.55 samples. However, the monoclinic structure of the $`x=0.60`$ sample affects its magnetic structure.
The AFM superlattice reflections in the $`x=0.60`$ sample are indexed as $`𝐐=(2n\pm \frac{1}{2},2n^{}\pm \frac{1}{2},\text{even})`$ with $`n,n^{}=\text{integer}`$, while those in the $`x=0.51`$ and 0.55 samples are indexed as $`(hkl)`$ with $`h+k=\text{even integer}`$ and $`l=\text{odd}`$. This difference in the reflection conditions indicates that the propagation vector of the AFM structure for $`x=0.60`$ differs from that of the other A-type samples: it is rotated by 90° from the \[001\] axis and points along the \[110\] direction. This is consistent with the fact that the $`d`$ spacing of (001) remains larger than that of (110) below $`T_\mathrm{N}`$ in this sample. Such a rotation of the propagation vector of the A-type AFM ordering was also observed in another monoclinic sample, Pr<sub>0.50</sub>Sr<sub>0.50</sub>MnO<sub>3</sub> (Ref. ). ## VI possible charge order in the C-type AFM insulating phase Finally, we shall discuss the features of the insulating C-type AFM state which appears in the O phase for $`x\gtrsim 0.63`$. ### A anomaly in the lattice constant $`c`$ in the C-type AFM phase Figure 7 shows the temperature dependences of the intensity of the AFM Bragg peak and of the lattice constants for the $`x=0.75`$ sample. The (100) AFM Bragg peak of the C-type spin ordering was observed below $`T_\mathrm{N}\approx 300`$ K. The change of the lattice constants with temperature is very smooth through $`T_\mathrm{N}`$, although the difference between the values at 330 K and those at 10 K is quite large. With decreasing temperature, the length of the $`c`$ axis increases whereas that of the $`a`$ ($`b`$) axis decreases. We would like to stress that we have observed a selective broadening of the nuclear Bragg reflections. Figure 8 shows the temperature dependence of the peak widths (FWHM) of the (004) and (220) reflections for $`x=0.75`$, 0.70, and 0.67. The width of the (220) reflection remains constant at all temperatures. However, it is clear that the width of the (004) peak gradually increases below $`T\simeq T_\mathrm{N}`$. Note that the (004) reflection is solely nuclear; no magnetic scattering contributes to this reflection for the C-type AFM order (see Table I). Comparing the data for the three samples depicted in Fig. 8, one can see that the $`x=0.75`$ sample shows the clearest broadening, and that the broadening becomes less distinct as $`x`$ decreases. In order to clarify the origin of the broadening of the nuclear Bragg reflections, we have examined the powder diffraction patterns and have found that the broadening is limited to reflections with Miller indices $`(hkl)`$ of large $`l`$, for example, (004), (114), (206), (226), and (008). In Fig. 9, we show typical examples of the broadening of the Bragg profiles for the $`x=0.75`$ sample at 10 K. Filled circles are the observed intensity profiles, and solid lines are the calculated intensities obtained by the Rietveld refinement. One can clearly see that the widths of the (004) and (206) reflections are larger than those of (220) and (422). We first fitted the profile assuming that the sample is in a single phase with the $`I4/mcm`$ symmetry; the calculated profiles are depicted in Figs. 9(a) and (b). Although the fit is quite good for (220) and (422), the fit to (004) and (206) is relatively poor. Because the calculated peak positions are in excellent agreement with the observed peaks, the symmetry $`I4/mcm`$ assumed in the analysis cannot be too far from the true crystal symmetry.
Therefore, one possible reason for this broadening could be a lowering of the symmetry to orthorhombic or monoclinic, with the resultant splitting of the original reflections remaining unresolved due to the moderate angular resolution of the neutron powder diffractometer. This possibility, however, can easily be discarded, because the peak broadening is also observed at (00$`l`$) reflections, which would split in neither the orthorhombic nor the monoclinic structure. Considering the fact that the peak broadening occurs selectively at the reflections with large $`l`$, that is, the reflections from the lattice planes which are nearly perpendicular to the $`c`$ axis, it is very likely that it originates from an anisotropic strain in the system; a possible microscopic picture of such a strain is a distribution of the lattice constant of the $`c`$ axis. To test this idea, we assumed a simple structural model in which the sample consists of two phases with two different lattice constants for the $`c`$ axis. Keeping all other parameters identical for both phases, we fitted the observed diffraction pattern to this model, and obtained substantially improved results, as depicted in Figs. 9(c) and (d). From these facts, we conclude that the observed selective broadening of the Bragg peaks results from an anisotropic strain caused by a distribution of the $`d(3z^2-r^2)`$ orbitals. As stated above, the $`e_g`$ electrons occupy the $`d(3z^2-r^2)`$ orbitals in the O phase. When the charges are localized in the insulating phase, only Mn<sup>3+</sup> sites have the $`d(3z^2-r^2)`$ orbital, and Mn<sup>4+</sup> sites have no $`e_g`$ electrons. In Fig. 10, we illustrate an arrangement of Mn<sup>3+</sup> with the $`d(3z^2-r^2)`$ orbital and Mn<sup>4+</sup> with no $`e_g`$ orbital. Because the $`d(3z^2-r^2)`$ orbital extends along the $`c`$ direction, the distance between Mn<sup>3+</sup> and Mn<sup>4+</sup> is elongated along the $`c`$ direction, whereas in the $`ab`$ plane the Mn<sup>3+</sup>–Mn<sup>4+</sup> distance is almost equal to the Mn<sup>4+</sup>–Mn<sup>4+</sup> distance. At high temperatures, the charges are mobile through thermal activation, which averages out the local distortion of the lattice spacing along the $`c`$ axis. At low temperatures, on the other hand, the thermal energy is insufficient for the charges to hop, and a local ordering of the $`e_g`$ electrons with the $`d(3z^2-r^2)`$ orbital may be formed, leading to the anomaly in the lattice constant. It should be noted that the broadening of the peak starts below $`T_\mathrm{N}`$, where the C-type AFM spin ordering is formed, as shown in Fig. 8, and that at the same time the temperature derivative of the resistivity shows an anomaly. ### B possibility of the $`x=0.8`$ charge ordering in the C-type AFM phase As described in the previous subsection, we found that the broadening of the nuclear Bragg peaks becomes clearer as $`x`$ increases, indicating that the charge localization/ordering is also progressively stabilized with increasing $`x`$. Furthermore, the anomaly of the temperature derivative of the resistivity at $`T_\mathrm{N}`$ develops as $`x`$ increases, and is most clearly observed at $`x=0.80`$. In addition, the resistivity of the $`x=0.80`$ sample itself shows a steep increase at $`T_\mathrm{N}`$ with decreasing temperature.
Judging from these facts, it is very likely that a charge ordering associated with a $`\mathrm{Mn}^{3+}:\mathrm{Mn}^{4+}`$ ratio of $`1:4`$ is formed below $`T_\mathrm{N}`$. On the other hand, Jirák et al. reported that superlattice reflections which might originate from a charge ordering with $`\mathrm{Mn}^{3+}:\mathrm{Mn}^{4+}=1:3`$ were observed in their Pr<sub>0.2</sub>Ca<sub>0.8</sub>MnO<sub>3</sub> sample, whose valence distribution of the Mn ions was determined to be Mn$`{}_{0.25}{}^{}{}_{}{}^{3+}`$Mn$`{}_{0.75}{}^{}{}_{}{}^{4+}`$ by chemical analysis. To check the possibility of such a charge ordering, we examined our powder pattern profiles in detail, but we could not detect any indication of superlattice reflections from charge ordering in the powder diffraction data. A study of a single-crystal sample is strongly desired to elucidate the nature of a possible 4/5 or 3/4 charge ordering in the C-type AFM phase. As for the charge ordering for $`x\ge 1/2`$, an interesting charge ordering with a large periodicity was recently observed in the La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system. In this system, incommensurate superlattice peaks were observed by electron diffraction at the wave vector $`Q=(\delta ,0,0)`$ with $`\delta \approx 1-x`$ below the charge ordering temperature $`T_{\mathrm{CO}}`$. To understand the incommensurability, a stripe-type charge/orbital ordering was proposed. In this model, pairs of Mn<sup>3+</sup>O<sub>6</sub> stripes are formed, separated by stripe-shaped regions of Mn<sup>4+</sup>O<sub>6</sub> octahedra. A pair of Mn<sup>3+</sup>O<sub>6</sub> stripes is accompanied by the $`d(3x^2-r^2)/d(3y^2-r^2)`$ orbital ordering and by a large lattice contraction due to the JT effect, while the Mn<sup>4+</sup>O<sub>6</sub> regions are free from lattice distortions. At $`x=1/2`$, these incommensurate paired JT stripes converge to the well-known CE-type orbital/spin ordering. Similar incommensurate superlattice peaks were also observed in Bi<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> single crystals with $`0.74\le x\le 0.82`$, while long-period structures with four-fold (21 Å) and 32-fold (170 Å) periodicity with respect to the orthorhombic lattice unit were clearly observed in the $`x=0.80`$ sample. These results also share many features with the paired JT stripes proposed for La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> with $`x\gtrsim 0.5`$. We would like to point out that this type of paired JT stripe ordering can easily be excluded as a possible charge ordering for the present C-type AFM Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> samples from a consideration of the lattice parameters. For the CE-type charge ordering and the associated paired JT stripe ordering, the orthorhombic $`c`$ axis (in the Pbnm notation) must be the shortest. This is due to the fact that the $`d(3x^2-r^2)/d(3y^2-r^2)`$ orbitals lie in the $`ab`$ plane as shown in Fig. 4, and it is easily checked from the lattice constants in the existing reports that this relation is satisfied by other manganites with the CE-type ordering, for example, Pr<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> (Refs. ) as well as La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> with $`x\ge 0.50`$ (Refs. ). On the other hand, the $`c`$ axis is the longest for the C-type AFM spin ordering in the present Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>, in the C-type AFM region of Pr<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> (Ref.
) and in La<sub>0.2</sub>Ca<sub>0.8</sub>MnO<sub>3</sub> (Ref. ). As explained in the previous subsection, this relation between the lattice parameters results from the $`d(3z^2-r^2)`$-type orbital ordering in the C-type structure. Concerning the magnetic ordering of the Bi<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system, we are puzzled that the magnetic ordering was reported to be of C type: because the $`c`$ axis ought to be the longest for the C-type spin order, this cannot be compatible with the proposed JT stripe-type ordering, which requires the shortest $`c`$ axis. Finally, we note that the FWHM of the (004) peak of the $`x=0.75`$ sample at 330 K is larger than that at about 300 K (see Fig. 8). This is because a finite amount of scattering exists between the (220) and (004) peaks. Similar extra scattering is observed at some other scattering angles. Presumably, another phase with very similar $`a`$- and $`c`$-axis lengths exists at higher temperatures, and a remnant of this higher-$`T`$ phase persists at 330 K. ## VII Conclusions A neutron diffraction study was performed on Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> powder samples with $`0.49\le x\le 0.75`$, and their crystal and magnetic structures were analyzed by the Rietveld method. A systematic change of the crystal and magnetic structures was observed as a function of $`x`$. With increasing $`x`$, the magnetic structure of the ground state varies from metallic ferromagnetism to charge-ordered CE-type antiferromagnetism, then to metallic A-type antiferromagnetism, and finally to insulating C-type antiferromagnetism. The magnetic structure is driven by the underlying Mn $`e_g`$ orbital ordering and the resultant crystal structure. In the CE-type and A-type AFM states, the crystal structure is characterized by apically compressed MnO<sub>6</sub> octahedra, reflecting the planar $`d(x^2-y^2)`$-type orbital ordering. On the other hand, in the C-type AFM state it consists of apically elongated octahedra, which is influenced by the ordering of the rod-type $`d(3z^2-r^2)`$ orbitals. The CE-type AFM state was observed only in the neighborhood of $`x=1/2`$. In the $`x=0.51`$ sample, the CE-type AFM state and the A-type AFM state coexist due to the small energy difference between the two AFM states. In addition, the C-type AFM phase exhibits an anisotropic broadening of the Bragg peaks, which becomes clearer as $`x`$ increases. This can be interpreted as a precursor of the $`d(3z^2-r^2)`$-type orbital ordering at $`\text{Mn}^{3+}:\text{Mn}^{4+}=1:3`$ or $`1:4`$. ###### Acknowledgements. This work was supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture, Japan, and by the New Energy and Industrial Technology Development Organization (NEDO) of Japan.
# The discovery of a 7–14 Hz Quasi-Periodic Oscillation in the X-ray Transient XTE J1806–246 ## 1. Introduction The new X-ray transient XTE J1806–246 was first detected (Marshall & Strohmayer 1998) with the Rossi X-ray Timing Explorer (RXTE) and later most likely identified in the radio (Hjellming, Mioduszewski, & Rupen 1998) and optical (Hynes, Roche, & Haswell 1998) bands. The source is positionally coincident with the known X-ray transient 2S 1803–245 (Jernigan et al. 1978) and with the X-ray burster SAX J1806.8–2435 (Muller et al. 1998). If the latter identification is correct, then XTE J1806–246 is a low-mass X-ray binary (LMXB) harboring a neutron star with a low magnetic field strength, and not a black hole. The neutron star LMXBs have been classified into the so-called Z sources and the atoll sources on the basis of their correlated X-ray spectral and X-ray timing behavior (Hasinger & van der Klis 1989). The Z sources trace out a Z-shaped track in the X-ray color-color diagram (CD), with the branches called, from top to bottom, the horizontal branch (HB), the normal branch (NB), and the flaring branch (FB). On the HB, the power spectra of the Z sources show strong band-limited noise (called low-frequency noise or LFN) with a cutoff frequency of several Hertz and, simultaneously, QPOs between 15 and 60 Hz, which are called horizontal branch QPOs or HBOs. On the NB, QPOs appear between 5 and 7 Hz, which are called normal branch QPOs or NBOs. HBOs and NBOs often occur simultaneously. In several Z sources, the NBOs merge smoothly with the 7–20 Hz QPOs seen on the FB, the flaring branch QPOs or FBOs. On all branches two additional noise components are found: one at very low frequencies (the very low frequency noise or VLFN), following a power law, and one at frequencies above 10 Hz (the high frequency noise or HFN), which cuts off between 50 and 100 Hz. The atoll sources trace out a curved branch in the CD, which can be divided into two parts: the island state and the banana branch. When the source is in the island state, motion in the CD is not very fast and can take several weeks. The power spectrum is dominated by very strong (sometimes more than 20% rms amplitude) band-limited noise. This band-limited noise is also called HFN, but it occurs at much lower frequencies than the HFN observed in the Z sources, and the two are most likely not related. Instead, it has been proposed that Z source LFN and atoll source HFN are similar phenomena (van der Klis 1994). On the banana branch the source moves much faster through the CD (on timescales of hours to days), and in the power spectrum only a weak (several percent rms amplitude) power-law noise component at low frequencies is observed (the VLFN). The part of the banana branch closest to the island state is called the lower banana branch, the part farther away the upper banana branch. From the island state via the lower to the upper banana branch, the VLFN tends to become stronger and steeper while the HFN becomes much weaker. The physical parameter governing the state changes in a given source is almost certainly the mass accretion rate (see van der Klis 1995 for a summary of the evidence). Some of the differences between the Z and the atoll sources are also due to differences in the average accretion rate, but it has been proposed that differences in the neutron star magnetic field strength also play a role (e.g., Hasinger & van der Klis 1989; Psaltis, Lamb, & Miller 1995).
With the launch of RXTE at the end of 1995, two additional types of QPOs were discovered in both the Z sources and the atoll sources at much higher frequencies, between 200 and 1200 Hz (the kHz QPOs; see van der Klis 1999 for a recent review). In the Z sources, these kHz QPOs are detected on the HB and the upper part of the NB. In the Z source Sco X-1, the kHz QPOs are detected all the way up to the lower part of the FB. In atoll sources, the kHz QPOs are strong in the island state, weaker on the lower part of the banana branch, and absent on the upper part of the banana branch. The similar properties of the kHz QPOs in the Z sources and the atoll sources suggest that these QPOs are most likely due to the same physical mechanism in both types of sources. Using data obtained with RXTE, not only kHz QPOs were discovered in the atoll sources, but also QPOs with frequencies around 60 to 80 Hz (Strohmayer et al. 1996; Wijnands & van der Klis 1997; Ford et al. 1997; Homan et al. 1998; Wijnands et al. 1998), and in one atoll source a QPO was observed near 7 Hz (Wijnands, van der Klis, & Rijkhorst 1999). The properties of the 60–80 Hz QPOs suggest that they could be due to the same physical mechanism as the HBOs in Z sources (e.g., Homan et al. 1998; Psaltis, Belloni, & van der Klis 1999). Wijnands et al. (1999) tentatively suggested that the 7 Hz QPO can be identified with the NBOs in the Z sources. These recent RXTE results show that whatever causes the phenomenological differences between the Z and the atoll sources, it does not prevent similar QPO phenomena from occurring (albeit with differences in incidence and perhaps strength) in both source types. Here we report on the correlated X-ray spectral and X-ray timing behavior of XTE J1806–246 during the rise, the peak, and the decay of its 1998 outburst. We report the discovery of a 7–14 Hz QPO during the peak of the outburst. A preliminary announcement of this discovery was already made by Wijnands & van der Klis (1998). The correlated X-ray spectral and X-ray timing behavior during the rise and the decay is consistent with that of an atoll source. The outburst peak properties are more reminiscent of Z source normal/flaring branch behavior, which suggests that the lack of phenomena characteristic of the normal/flaring branch in most atoll sources is due to a difference in accretion rate only. When this paper was essentially completed we became aware of a paper by Revnivtsev, Borozdin, & Emelyanov (1999) which uses the same data. They concentrated on the analysis of the 9 Hz QPO and, with respect to that feature, obtained results consistent with ours. ## 2. Observations and Analysis XTE J1806–246 was observed by the proportional counter array (PCA) onboard RXTE as part of public target-of-opportunity observations for a total of $``$32 ksec (see Table 1 for a log of the observations). During all observations, data were obtained in 129 photon energy bands covering the energy range 2–60 keV. Simultaneous data were obtained at different epochs with time resolutions of 122 $`\mu `$s or 16 $`\mu `$s in 67 (April 27) or 18 (the other data) photon energy bands covering the range 2–60 keV. During all observations except the April 27 one, data were also obtained with 8 ms time resolution in 16 bands covering the range 2–13 keV. We calculated FFTs using 16-s data segments of the 122 $`\mu `$s and 16 $`\mu `$s data in order to study the $`>`$ 100 Hz variability.
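As an illustration of this kind of timing analysis (a sketch, not code from the paper), a Leahy-normalized power spectrum can be built by averaging FFTs of consecutive 16-s segments of evenly binned counts; the bin size, count rate, and number of segments below are assumed placeholder values:

```python
# Sketch: averaged Leahy-normalized power spectrum from 16-s segments.
# Synthetic light curve; real input would be the binned PCA counts.
import numpy as np

dt = 1.0 / 4096.0                        # assumed time bin in seconds
nbin = int(16.0 / dt)                    # bins per 16-s segment
rng = np.random.default_rng(1)
counts = rng.poisson(7000.0 * dt, size=64 * nbin)   # fake ~7000 c/s data

powers = []
for seg in counts.reshape(-1, nbin):
    ft = np.fft.rfft(seg)
    powers.append(2.0 * np.abs(ft[1:]) ** 2 / seg.sum())  # Leahy norm
freq = np.fft.rfftfreq(nbin, dt)[1:]
pds = np.mean(powers, axis=0)
print("mean power = %.2f (Poisson expectation: 2)" % pds.mean())
```

Averaging many segments drives the pure counting-noise power toward its expectation value of 2, on top of which noise components and QPOs stand out.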
We also made 128 s FFTs using the 8 ms and 16 $`\mu `$s data in different photon energy bands in order to study the energy dependence of the QPO and the noise components. Finally, we made 128 s cross-spectra between different energy bands from the same data in order to study the time lags of the QPO. To determine the properties of the 7–14 Hz QPO, we fitted the power density spectra of May 3 with a function containing a constant (representing the dead-time modified Poisson level), a power law (representing the underlying continuum), and a Lorentzian (representing the QPO). To determine the properties of the peaked noise component, we fitted the power density spectra of the other data, after subtracting the dead-time modified Poisson level, with a power law and an exponentially cutoff power law (representing the peaked noise). The uncertainties in the fit parameters were determined using $`\mathrm{\Delta }\chi ^2=1`$. Upper limits correspond to a 95% confidence level. The PCA light curve, the CDs, and the hardness-intensity diagrams (HIDs) were created using 64 s averages of the 16 s data. In the CDs, the soft color is defined as the logarithm of the 3.5–6.4 keV/2.0–3.5 keV count rate ratio and the hard color as the logarithm of the 9.7–16.0 keV/6.4–9.7 keV count rate ratio. The definitions of the colors in the HIDs are the same as in the CDs. The count rates in the HIDs are for the photon energy range 2.0–16.0 keV. All count rates quoted in this paper are for 5 detectors. The count rates used in the diagrams are background-subtracted but not dead-time corrected. The dead-time correction is 3%–5%. ## 3. Results The outburst light curve is presented in Figure 4.3. During 1998 April the source rose from $``$5000 to $``$7200 counts s<sup>-1</sup> (2.0–16.0 keV). It reached a maximum of 7000–8000 counts s<sup>-1</sup> in the 1998 May 3 observation. After May 3 the count rate decreased; the source became almost undetectable ($``$10–20 counts s<sup>-1</sup>) by July (see Table 1). The CD of all data before 1998 July 1 is presented in Figure 4.3a. The data taken on July 1 and 17 are not plotted because of the very low statistics of those data. In Figure 4.3a, different tracks are traced out at different moments in the outburst. A blow-up of the data obtained between April 27 and May 22 is shown in Figure 4.3b. During the rise (the April data) and the decay (the May 17–22 data) the source traced out a curved branch in the CD (Fig. 4.3c). In Figure 4.3a the CD is compared to the hard HID (Fig. 4.3b; the hard color versus the count rate) and the soft HID (Fig. 4.3c; the soft color versus the count rate; note that the axes are interchanged). Clearly visible from these figures is that although the count rate can differ by about 2000 counts s<sup>-1</sup>, the hard and the soft color can have the same values (this is best seen by comparing the April data with each other). By comparing the April 27 and the May 17 observations, it is also clear that the soft color can have different values at the same count rates. From Figure 4.3 it can also be seen on which dates the source was at a certain position in the CD. During the June observations, the source occupied two distinct places in the CD and HIDs which are separate from the rest of the data (see Fig. 4.3a). During the peak of the outburst the source traced out, in a different part of the CD compared to the rise and decay data, a pattern which we interpret as a two-branched structure (Fig. 4.3d).
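For concreteness, the colors and intensity used in these diagrams follow directly from the band definitions given in Sec. 2; a minimal sketch (the band rates below are placeholders, not measured values):

```python
# Sketch: CD/HID quantities from background-subtracted band count rates
# (counts/s), using the band definitions given in Sec. 2.  The input
# numbers are placeholders, not measured rates.
import numpy as np

def cd_hid(r_20_35, r_35_64, r_64_97, r_97_160):
    soft = np.log10(r_35_64 / r_20_35)    # 3.5-6.4 keV / 2.0-3.5 keV
    hard = np.log10(r_97_160 / r_64_97)   # 9.7-16.0 keV / 6.4-9.7 keV
    rate = r_20_35 + r_35_64 + r_64_97 + r_97_160   # 2.0-16.0 keV
    return soft, hard, rate

s, h, r = cd_hid(2500.0, 2800.0, 1400.0, 500.0)
print("soft = %.3f, hard = %.3f, rate = %.0f c/s" % (s, h, r))
```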
When examining the CD at higher (16 s) time resolution, it is clear that at the beginning of the May 3 observation the source was located on the lower part of the right branch in the CD. The source then gradually moved via the lower part of the left branch to the upper left part. At the same time the count rate increased from about 7100 to about 7400 counts s<sup>-1</sup> (2.0–16.0 keV). After a data gap of about 2000 seconds, due to an Earth occultation of the source, the count rate had increased to about 7500–7600 counts s<sup>-1</sup> (2.0–16.0 keV) and the source was on the upper part of the right branch in the CD (compare also Fig. 4.3a and b). It is impossible to say whether the source jumped to this place in the CD or moved there gradually. In the latter case it is also not possible to say whether this gradual motion followed the same track or a different route. The CD is compared to the HIDs in Figure 4.3. By comparing panels a and c one can see what the count rate was at a certain position in the CD. Not only did the source trace out different tracks in the CD during the rise and the decay with respect to the peak of the outburst, but the power spectra were also remarkably different. During the rise and the decay two noise components can be distinguished in the power spectrum (Fig. 4.3 left): a noise component following a power law at low frequencies (the VLFN) and a broad noise component which can be represented by a power law with an exponential cutoff between 5 and 20 Hz (the HFN). The strengths of these noise components and their other properties during the different observations are displayed in Table 1. During the last observation in the decay for which noise could be detected (1998 June 14; see Fig. 4.3 right) the strength of the HFN had increased to about 14%. The VLFN could not be detected down to 0.01 Hz (note that Revnivtsev et al. 1999 reported the presence of VLFN below 0.01 Hz). During the last two observations (1998 July 1 and 17) no noise could be detected; however, the upper limits of typically 30% rms amplitude on any noise component are not very stringent. During the peak of the outburst the power spectrum is quite different. The VLFN is still present, but the HFN component is replaced by a very significant (44$`\sigma `$) QPO (Fig. 4.3 middle) with a frequency of 9.03$`\pm `$0.05 Hz, a FWHM of 5.6$`\pm `$0.2 Hz, and an rms amplitude of 5.30$`\pm `$0.06%. ### 3.1. The QPO In order to study this QPO in more detail, we made 64 s FFTs of the 1998 May 3 data and fitted the QPO with a Lorentzian in each individual power spectrum. In order to correlate the obtained QPO parameters with the position of the source in the CD, we used the $`S_\mathrm{z}`$ parameterization, which was developed for the Z sources ($`S_\mathrm{z}`$ is a measure of the curve length along the track in the CD; see Wijnands et al. 1997 and Wijnands 1999 and references therein for detailed descriptions of this procedure). We arbitrarily selected the normal points where $`S_\mathrm{z}`$ = 1 and $`S_\mathrm{z}`$ = 2 (see Fig. 4.3d). The behavior of the QPO parameters, the 2.0–16.0 keV count rate, and the $`S_\mathrm{z}`$ values as a function of time is presented in Figure 4.3. The QPO parameters versus the QPO frequency, the 2.0–16.0 keV count rate, and $`S_\mathrm{z}`$ are displayed in Figure 4.3. From these two figures it is clear that the QPO parameters (Figs. 4.3b, e, and g) do not have a strict correlation with the count rate.
This is also apparent when comparing Figure 4.3a with Figures 4.3c, d, and e. The QPO rms amplitude versus its frequency (Fig. 4.3a) shows two distinct branches. From Figures 4.3c and d it can be seen that the transition between these two branches occurred around 1000 seconds after the beginning of the May 3 observation. After the data gap between 2000 and 4500 seconds (due to an Earth occultation of the source), the source had returned to the original branch. From Figure 4.3c it is clear that these two branches correspond to different places in the CD. The points with the low rms amplitude for the QPO occur below about $`S_\mathrm{z}`$ = 1.5; the points with the high rms amplitude occur when $`S_\mathrm{z}`$ $`>`$ 1.5. These differences in QPO behavior are also apparent in Figures 4.3f and h. Below $`S_\mathrm{z}`$ = 1.5 the FWHM and frequency behave erratically. Above $`S_\mathrm{z}`$ = 1.5 the FWHM and in particular the QPO frequency increase when $`S_\mathrm{z}`$ increases. This means that above $`S_\mathrm{z}`$ = 1.5 the QPO frequency is well correlated with the position of the source in the CD. For $`S_\mathrm{z}`$ $`<`$ 1.5 the correlation breaks down. The energy dependence of the QPO is shown in Figure 4.3. There is a clear increase of the rms amplitude of the QPO with photon energy below $``$12 keV. Above 12 keV, the rms amplitude seems to level off. We tried to measure any time lags in this QPO as a function of energy. The time lags obtained between the energy bands 2.8–5.3 keV and 5.3–13.0 keV were consistent with zero ($`0.06\pm 0.4`$ msec). ### 3.2. The noise components during the rise and the decay When the May 3 observations are excluded, a curved branch is present in the CD. We investigated the behavior of the noise components as a function of the position of the source on this branch. We divided the branch into five parts (see Fig. 4.3c). Note that only area 3 contains data from both the rise and the decay. Areas 1 and 2 contain only data from the rise, and areas 4 and 5 only data from the decay. During the rise, the source moved first from area 1 via area 2 into area 3 (April 27), and then back via area 2 (April 28) into area 1 (April 29) (see also Fig. 4.3). During the first observation in the decay (May 17 00:29–00:54), the source was in area 4. In the subsequent observations, the source moved first to area 5, and then back via area 4 (May 17 12:42–13:56) into area 3 (May 22). This motion can be followed by comparing the CD (Fig. 4.3a) with the soft HID (Fig. 4.3c). The results of the fits in the different areas are summarized in Table 2. The VLFN increases in strength and becomes steeper from the upper left part (area 1) to the upper right part of the curved branch (area 5), while the HFN decreases in strength. The index and the cutoff frequency of the HFN do not show a clear correlation with the position of the source in the CD. In area 5, the HFN is not peaked (as it is in the other areas) and no cutoff frequency could be determined. Both noise components show a dependence on photon energy similar to that of the 7–14 Hz QPO: they increase in strength up to an energy of $``$12 keV, above which they remain approximately constant. Although both the HFN and the QPO peak at roughly the same frequency and have a similar energy dependence of their strengths, it is unclear whether the peaked HFN evolved into the QPO or was replaced by it. ### 3.3. Kilohertz QPOs We intensively searched for QPOs between 200 and 1500 Hz.
None were detected, with conservative upper limits on the amplitude (2–60 keV; assuming a FWHM of 150 Hz for the kHz QPOs) of 1%–2% rms during the rise of the outburst, 2.0% rms during the peak of the outburst, 2.0%–3.0% rms during the decay of the outburst when XTE J1806–246 was on the curved branch, 4.7% rms on June 8, 12.5% on June 14, and around 80% during the July observations. ## 4. Discussion We have analyzed the correlated X-ray spectral and X-ray timing behavior of the new X-ray transient XTE J1806–246 during its 1998 outburst. During the peak of the outburst we discovered a very significant (44$`\sigma `$) QPO near 9 Hz. This QPO was not detected during the rise and the decay of the outburst; however, a broad peaked noise component near 10 Hz was. The behavior in the CD also differed between the rise and the decay of the outburst on the one hand and the peak on the other. During the peak of the outburst a pattern was traced out which we interpret as a two-branched structure. The QPO frequency seemed to be correlated with the position of the source in (at least a part of) the CD: the frequency increased when the source moved from the lower part of the left branch to the upper part of the right branch. Outside the peak of the outburst, a broad curved branch was traced out when the count rate was above 3500 counts s<sup>-1</sup> (2.0–16.0 keV). Below this count rate only distinct patches in the CD were formed during the different observations, which were not connected with each other. This pattern is most likely due to the sparsity of data in the decay of the outburst. The positional coincidence of XTE J1806–246 with the X-ray burster SAX J1806.8–2435 makes it likely that they are one and the same source. This would suggest that XTE J1806–246 contains a neutron star. Although QPOs with frequencies similar to that of the 9 Hz QPO in XTE J1806–246 have also been discovered in black-hole candidates, they are more often detected in neutron star systems (the Z sources, and on one occasion in an atoll source \[Wijnands et al. 1999\]). This, too, is in accordance with the idea that XTE J1806–246 contains a neutron star and not a black hole. ### 4.1. Atoll source versus Z source The neutron star LMXBs have been classified into Z sources and atoll sources (see § 1). The differences between the Z sources and the atoll sources have been interpreted (e.g., Hasinger & van der Klis 1989) as due to differences in the accretion rate and in the strength of the magnetic field of the neutron star. From the higher intrinsic luminosity of the Z sources compared to that of the atoll sources, it was suggested that the Z sources accrete near the critical Eddington accretion rate (e.g., Hasinger & van der Klis 1989; Smale 1998; Bradshaw, Fomalont, & Geldzahler 1998) but that most atoll sources accrete at a significantly lower accretion rate. The magnetospheric beat-frequency (MBF) model (Alpar & Shaham 1985; Lamb et al. 1985) for the HBOs in the Z sources, in combination with the absence of similar QPOs in the atoll sources, suggested that the magnetic field strengths of the neutron stars in the Z sources are higher than those in the atoll sources. X-ray spectral modeling (Psaltis et al.
1995) also suggested that the magnetic field strength in the atoll sources is significantly less than that in the Z sources. Because the magnetic field strengths of the atoll sources are thought to be significantly above zero (e.g., Psaltis et al. 1995), the MBF model predicted that similar QPOs might be observable in the atoll sources, but at a lower strength than in the Z sources. With RXTE, QPOs at similar frequencies have indeed been found in the atoll sources; however, it still has to be determined whether the strengths of these QPOs are significantly less than those of the HBOs in the Z sources. The presence of the 5–20 Hz QPO (the N/FBO) in Z sources at their highest inferred mass accretion rates, and the absence of similar QPOs in the atoll sources, have led to models for this QPO in which the accretion rate has to be near the Eddington accretion rate in order for the production mechanism of this QPO to be activated (e.g., Fortner, Lamb, & Miller 1989; Alpar et al. 1992). If these models are correct, then similar QPOs should also be visible in the atoll sources when they reach the Eddington mass accretion rate (if they ever do). Recently, in the atoll source 4U 1820–30 a 7 Hz QPO was discovered (Wijnands et al. 1999) at times when this source was accreting at its highest observed inferred mass accretion rate. However, this highest observed accretion rate was well below the Eddington mass accretion rate (Wijnands et al. 1999). If this 7 Hz QPO is due to the same physical mechanism as the N/FBOs in the Z sources, then either the production mechanism for this QPO is already activated at accretion rates much lower than the Eddington accretion rate, or the accretion rate estimated from the X-ray flux in 4U 1820–30 is significantly less than the true accretion rate. In the latter case, a significant part of the energy released near the neutron star must be beamed away from us, or must eventually be emitted not in X-rays but at other wavelengths or in, e.g., ejecta. It remains to be seen to what extent the Fortner et al. (1989) model could accommodate this. It is interesting to investigate whether XTE J1806–246 fits into the picture described above and, if so, how exactly. During the rise and the decay of the outburst a curved branch was traced out, which resembles the banana branch of atoll sources. Also, the behavior of the noise components with the position of the source on this curved branch (the HFN becoming stronger and the VLFN weaker and less steep when the source moves from the right to the left in this diagram) is characteristic of the behavior of the noise components observed in atoll sources when they move from the upper banana branch to the lower banana branch. Interestingly, when the count rate drops further in the decay, distinct patches separated from the curved branch are formed. The HFN increases to about 14% rms amplitude with decreasing count rate, and the VLFN becomes undetectable down to 0.01 Hz in the June 14 observation (the last observation with sufficient statistics to detect noise in the power spectrum). The presence of such strong noise and of distinct patches in the CD is characteristic of atoll source behavior when these sources are in the island state (see, e.g., Hasinger & van der Klis 1989; Méndez 1999). Note that weak VLFN components similar to that seen in the June 14 observation (Revnivtsev et al. 1999) have been observed in other atoll sources when they were in their island states (Hasinger & van der Klis 1989).
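As a quantitative aside on the Eddington comparisons invoked in this discussion, the conversion from an observed flux to an isotropic luminosity, and the comparison with the Eddington luminosity of a 1.4 solar mass neutron star, can be sketched as follows (the flux and distance are those quoted for XTE J1806–246 below; the Eddington coefficient assumes hydrogen-rich accretion):

```python
# Sketch: isotropic luminosity from flux and distance, compared with the
# Eddington luminosity of a 1.4 Msun neutron star (hydrogen-rich case).
import math

flux = 1.9e-8                 # ergs/cm^2/s, 2-30 keV absorbed peak flux
dist = 8.0 * 3.086e21         # 8 kpc expressed in cm
lum = 4.0 * math.pi * dist ** 2 * flux
l_edd = 1.3e38 * 1.4          # ergs/s for a 1.4 Msun accretor
print("L = %.1e ergs/s, L/L_Edd = %.2f" % (lum, lum / l_edd))
```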
So, during the rise and the fall of its outburst XTE J1806–246 displayed all the characteristics typical of an atoll source and, if the source is indeed a neutron star, it should be classified as such. In the original report of the discovery of the QPO in XTE J1806–246 (Wijnands & van der Klis 1998) we suggested that, if the 7–14 Hz QPO in XTE J1806–246 is due to the same physical mechanism as the 7–20 Hz QPOs in the Z sources, XTE J1806–246 might be a Z source. However, the discovery of the 7 Hz QPO in the atoll source 4U 1820–30 (Wijnands et al. 1999) has since shown that, in neutron star systems, QPOs between 7 and 20 Hz are not exclusively found in the Z sources. The correlation of the frequency of the QPO with the position of the source in part of the CD is very similar to the behavior of the NBO when the Z sources move from the lower NB onto the lower FB. Such correlated behavior has not yet been seen in other atoll sources. However, the lack of correlation in another part of the CD is different. We conclude that if XTE J1806–246 is a neutron star, it is most likely the second example of an atoll source that at high luminosity begins to show an NBO-like QPO. It is then the first example of an atoll source simultaneously exhibiting an NB/FB-like pattern in the CD. (A related case may be Cir X-1, which sometimes shows atoll characteristics \[Oosterbroek et al. 1995\] and at other times what appears to be a Z track and HBO- and NBO-like QPOs \[Shirey et al. 1998, 1999\].) It is possible that XTE J1806–246 reaches near-Eddington mass accretion rates during the peak of the outburst, as is required in the Fortner et al. (1989) model for the NBO. The observed absorbed flux between 2 and 30 keV during the peak of the outburst is about 1.9$`\times 10^{-8}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>. The source is located in the direction of the galactic bulge ($`l\approx 6.1`$°, $`b\approx -1.9`$°). Assuming a distance of 8 kpc, the intrinsic luminosity of the source is 1.5$`\times 10^{38}`$ ergs s<sup>-1</sup>, which is close to the Eddington limit for a 1.4 solar mass neutron star. However, during the rise and the decay the observed X-ray count rate and flux ($`1.8\times 10^{-8}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>) are very close to those observed during the peak of the outburst, suggesting that at those epochs, too, XTE J1806–246 was accreting near the Eddington limit. It is unclear why XTE J1806–246 showed the 7–14 Hz QPO only during the peak of the outburst; perhaps this is related to a difference in the accretion flow between persistent and transient sources caused by the large changes in the mass accretion rate in the latter. ### 4.2. Other types of QPOs? Both the Z sources and the atoll sources exhibit, at their lowest observed inferred mass accretion rates, kHz QPOs and QPOs between 15 and 70 Hz. Therefore, we looked in particular at the data obtained during the lowest count rates of XTE J1806–246 to search for similar QPOs. No significant kHz QPOs were detected, but the upper limits are not inconsistent with the values derived for the kHz QPOs in the Z sources and in some atoll sources. Interestingly, when we combine the power spectra obtained for areas 1 and 2 in Figure 4.3c, we detect a 4.8$`\sigma `$ QPO with a frequency of 70.6$`\pm `$1.6 Hz, a FWHM of $`12.6_{-3.2}^{+4.6}`$ Hz, and an rms amplitude of 1.2%$`\pm `$0.1%. This QPO is also marginally visible in Fig. 4.3 left.
However, when we include the Poisson level as a free parameter in the fit, the significance of this QPO drops to about 3.5$`\sigma `$. Taking into account the number of trials involved in our search method, this QPO needs confirmation. If real, such a $``$70 Hz QPO would be very similar to the QPOs of similar frequency observed in atoll sources on their lower banana branch and to the HBOs observed in the Z sources on their HB. ### 4.3. Conclusion The behavior observed for XTE J1806–246 during the 1998 outburst is consistent with that of an atoll source which at the peak of the outburst reached an X-ray luminosity level at which the NBO mechanism switched on. This makes the source the second atoll source, after 4U 1820–30 (Wijnands et al. 1999), to exhibit QPOs with frequencies near 7–9 Hz. This source can be very useful for studying the similarities and the differences between Z sources (including Cir X-1, which is probably a Z source; Shirey et al. 1998, 1999) and atoll sources, allowing us to gain a better insight into the processes at work in the inner part of the accretion disk. This work was supported in part by the Netherlands Foundation for Research in Astronomy (ASTRON) grant 781-76-017, by the Netherlands Researchschool for Astronomy (NOVA), and by the NWO Spinoza grant 08-0 to E. P. J. van den Heuvel. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. We thank the anonymous referee for his helpful comments on the paper.
# A Relic Neutrino Detector ## Introduction The standard Big Bang model predicts a universal background of relic neutrinos, with an average density of $`100/\mathrm{cm}^3`$ per flavor. Relic neutrinos of mass $`\gtrsim 10^{-3}`$ eV would be nonrelativistic today, and contribute to the cosmological energy density an amount $`\mathrm{\Omega }_\nu \simeq \sum _im_{\nu _i}/(90h^2\mathrm{eV})`$, where $`h\simeq 0.65`$ is the Hubble expansion rate in units of $`100\mathrm{km}/(\mathrm{s}\mathrm{Mpc})`$. Massive neutrinos are a natural candidate for the hot component in currently favored “Mixed Hot+Cold Dark Matter (HCDM)” models of galaxy formation (primack98). In this scenario, neutrinos would contribute $``$ 20 %, and CDM (e.g. WIMPs and axions) the remainder of the dark matter. Nonrelativistic neutrinos would be clustered around galaxies and move with a typical velocity $`v\sim 300\mathrm{km}/\mathrm{s}`$. The Pauli principle (tremaine79) restricts the local neutrino number density to $`n_\nu \lesssim 2\times 10^6\mathrm{cm}^{-3}`$ $`(v_{\mathrm{max}}/10^{-3}c)^3`$ $`\sum _i(m_{\nu _i}/10\mathrm{eV})^3`$. The detection of relic neutrinos is hindered by the extremely small cross sections and energy deposits expected from interactions with electrons and nucleons. Past proposals have focused on detecting the mechanical force on macroscopic targets due to the “neutrino wind”. Here, spatial coherence increases the cross section of targets smaller than the neutrino wavelength $`\lambda _\nu \sim 100\mu \mathrm{m}(10\mathrm{eV}/m_\nu )`$. In the nonrelativistic limit, one must distinguish between Majorana and Dirac neutrinos. For Dirac $`\mu `$ or $`\tau `$ neutrinos, the cross section is dominated by the vector neutral current contribution $`\sigma _D=(G_F^2m_\nu ^2/8\pi )N_n^2=`$$`2\times 10^{-55}\mathrm{cm}^2(m_\nu /10\mathrm{eV})^2N_n^2`$, where $`N_n`$ is the number of neutrons in the target of size $`\lambda _\nu /2\pi `$. For Majorana neutrinos, the vector contribution is suppressed by a factor $`(v/c)^2`$. The Sun’s peculiar motion through the galactic halo will produce a wind force, whose direction is modulated by the Earth’s rotation. For Dirac neutrinos, the acceleration of a target of density $`\rho `$ and radius $`\lambda _\nu /2\pi `$ is (shvarts82) $`a=8\times 10^{-24}\mathrm{cm}/\mathrm{s}^2`$ $`((A-Z)/A)^2`$ $`(v_{\mathrm{sun}}/10^{-3}c)^2`$ $`(n_\nu /10^7\mathrm{cm}^{-3})`$ $`(\rho /20\mathrm{gcm}^{-3})`$ and is independent of $`m_\nu `$. A harmonic oscillator driven by the neutrino wind on resonance would experience a displacement amplitude of $`\mathrm{\Delta }x\sim 10^{-15}\mathrm{cm}(\tau /\mathrm{day})`$. A target of size $`\lambda _\nu `$ can be assembled, while avoiding destructive interference, by using foam-like or laminated materials (shvarts82). Alternatively, grains of size $`\lambda _\nu `$ could be randomly embedded in a low density host material. ## Proposed Detector At present, the most sensitive detector of small forces is the “Cavendish” torsion balance, which has been widely used for measurements of $`G`$, searches for new forces, and tests of the equivalence principle. A typical arrangement consists of a dumbbell-shaped test mass suspended by a tungsten fiber. The angular deflection is read out with an optical lever. The most serious noise backgrounds are 1. thermal noise, 2. seismic noise, 3. time-varying gravity gradients. The smallest measurable acceleration is $`10^{-12}\mathrm{cm}/\mathrm{s}^2`$ (adel91).
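To make the experimental challenge concrete, the scaling laws quoted above can be evaluated numerically; a minimal sketch with fiducial parameter choices (these are the normalization values of the formulas, not measured quantities):

```python
# Sketch: order-of-magnitude Dirac-neutrino wind signal from the scaling
# laws quoted above.  All parameter choices are fiducial, not measured.
def a_wind(f_n=0.5, v=1e-3, n_nu=1e7, rho=20.0):
    """Acceleration in cm/s^2; f_n = (A-Z)/A, v in units of c,
    n_nu in cm^-3, rho in g/cm^3 (normalizations as in the text)."""
    return 8e-24 * f_n ** 2 * (v / 1e-3) ** 2 * (n_nu / 1e7) * (rho / 20.0)

def dx_resonant(tau_days=1.0):
    """Resonant displacement amplitude in cm after tau_days of driving."""
    return 1e-15 * tau_days

print("a ~ %.1e cm/s^2, dx(10 days) ~ %.1e cm"
      % (a_wind(), dx_resonant(10.0)))
```

Even after ten days of resonant driving the displacement remains roughly six orders of magnitude below atomic dimensions, which motivates the improvements discussed next.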
Several improvements seem possible. Thermal noise can be decreased by lowering the temperature and by employing a low dissipation (high-$`Q`$) suspension, as seen from the expression for the thermal noise acceleration (brag92book) $`a_{\mathrm{th}}=2\times 10^{-23}\mathrm{cm}/\mathrm{s}^2`$ $`(T/\mathrm{K})^{1/2}`$ $`(1\mathrm{d}\mathrm{a}\mathrm{y}/\tau _0)^{1/2}`$ $`(10^6\mathrm{s}/\tau )^{1/2}`$ $`(10^{16}/Q)^{1/2}`$, where $`\tau `$ is the measurement time, $`\tau _0`$ is the oscillator period, and $`T`$ is the operating temperature. A promising low-dissipation suspension method uses the Meissner effect. Niobium or NbTi based suspensions have been employed in gravimeters, gyros, and gravitational wave antennas. Generally, the magnetic field applied to the superconductor is limited to $``$0.2 T to avoid flux penetration or loss of superconductivity. A remaining problem is flux creep noise and dissipation because of incomplete flux expulsion. An alternative method would employ a passive persistent-mode superconducting magnet floating above a fixed suspension magnet, as shown in Fig. 1. This allows a much higher lifting force, because the critical field of NbTi wire is several tesla. Moreover, the flux lines are strongly pinned by artificial wire defects, leading to a small field decay rate of $`\dot{B}/B\sim 10^{-8}/\mathrm{hour}`$. The cylindrical symmetry of the suspension magnets allows a very long rotational oscillation period, which can be matched to the diurnally varying neutrino wind by applying a suitable restoring force. The effects of seismic noise and gravity gradients can be reduced with a highly symmetric target (see Fig. 1). With the c.m. of the target centered below the suspension support, the leading order gravitational torques arise from the dipole and quadrupole moments of the target, which need to be minimized by balancing. Braginsky et al. (brag77) have given estimates of the seismic power spectra for vertical, horizontal, and rotational seismic modes. For example, the horizontal acceleration at $`\omega \sim 10^{-4}\mathrm{s}^{-1}`$ is $`a\sim 10^{-12}\mathrm{cm}/\mathrm{s}^2`$ $`(\mathrm{\Lambda }/100\mathrm{km})`$ $`(10^6\mathrm{s}/\tau )^{1/2}`$, where $`\mathrm{\Lambda }`$ is the seismic wavelength. Hence, the coupling of this mode to the wanted rotational mode of the torsion oscillator must be made very small. Especially worrisome is rotational seismic noise, which directly masks the signal and needs to be compensated. The proposed angular rotation readout consists of a parametric transducer which converts the angle to an optical frequency. As shown in Fig. 2, the transducer consists of a high-$`Q`$ optical cavity of length $`l`$, tuned by a Brewster-angled low loss dielectric plate of thickness $`d`$. A cavity finesse $`F\sim 10^5`$ should be obtainable with dielectric mirrors for the Gaussian $`\mathrm{TEM}_{00p}`$ modes. The frequency tuning sensitivity is $`\mathrm{\Delta }f/f\simeq (d/l)\mathrm{\Delta }\theta `$, yielding a resolution of $`10^{-14}\mathrm{rad}/\sqrt{\mathrm{Hz}}`$. The angular measurement precision depends on the number of photons $`N`$ and the laser wavelength $`\lambda `$ via $`\mathrm{\Delta }\theta \simeq \lambda /(dF\sqrt{N})`$. This is a factor $`F`$ better than the optical lever for the same laser power. Cryogenic optical resonators have excellent long-term stability and have been proposed as secondary frequency standards. The measured frequency drifts range from $``$1 Hz over minutes to $``$100 Hz over days (seel97).
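The shot-noise-limited performance of this readout follows from the two relations just given; a minimal sketch (laser wavelength, plate thickness, and detected power are assumed example values, not a proposed design):

```python
# Sketch: shot-noise-limited angle resolution of the tuned-cavity readout,
# using dtheta ~ lambda/(d*F*sqrt(N)) from the text.  Wavelength, plate
# thickness, and detected power are assumed example values.
import math

lam = 1.0e-6                   # m, laser wavelength (assumed)
d = 1.0e-2                     # m, tuning-plate thickness (assumed)
finesse = 1.0e5                # cavity finesse, as quoted above
power, e_ph = 1.0e-6, 2.0e-19  # W detected and J per photon (assumed)
n_photons = power / e_ph       # photons detected per second
dtheta = lam / (d * finesse * math.sqrt(n_photons))
print("dtheta ~ %.1e rad in 1 s of integration" % dtheta)
```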
For the measurement of the rotation angle, a stable reference frequency will be required. This can be implemented using a laser locked to a second (untuned) cavity. Because of its symmetry, the described angle readout has high immunity against lateral, tilt, and vertical seismic noise, but couples to rotational noise. A possible solution would be to suspend the target as well as the optical cavity in order to suppress the common rotation mode. Additional background forces will arise from gas collisions, cosmic ray hits, radioactivity, etc., resulting in a Brownian motion of the target. The equivalent acceleration is $`a\simeq (\overline{p}/m)(\mathrm{\Gamma }/\tau )^{1/2}`$, where $`\overline{p}`$ is the average momentum transfer, $`\mathrm{\Gamma }`$ is the collision or decay rate, and $`m`$ is the test mass. The residual gas pressure must hence be kept very low by cryopumping and the use of getters. The cosmic muon flux at sea level is $`100/(\mathrm{m}^2\mathrm{s})`$ and can be reduced by going underground. Further disturbing forces may be caused by time-varying electric and magnetic background fields, which can be shielded with superconductors. Thermal radiation and radiometric effects are much reduced by a temperature-controlled cryogenic environment. Finally, there is a fundamental limit imposed by the uncertainty principle, with a minimum measurable acceleration given by $`a_{\mathrm{SQL}}=5\times 10^{-24}\mathrm{cm}/\mathrm{s}^2`$ $`(10\mathrm{k}\mathrm{g}/m)^{1/2}`$ $`(1\mathrm{d}\mathrm{a}\mathrm{y}/\tau _0)^{1/2}`$ $`(10^6\mathrm{s}/\tau )`$. In our proposed position readout, the disturbing back-action force arises from spatial fluctuations in the photon flux passing through the central tuning plate. There is hope that relic neutrinos will be detected in the laboratory early in the next century, especially if they have masses in the eV range and are of Dirac type. The most viable approach seems to be a much improved torsion balance operating underground. Naturally, a slightly modified balance could be used to test the equivalence principle (roll64) and to search for new macroscopic forces (adel91). ## Acknowledgements This research was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract W-7405-ENG-48.
# Intraday Radio Variability in Active Galactic Nuclei ## 1 Introduction Flux density variations of extragalactic radio sources on timescales of the order of several weeks to years have been well known since the mid-sixties (e.g. Kellermann & Pauliny-Toth 1968 and references therein). They are used to study the physics of AGN, and have led – together with early VLBI results – to the development of the relativistic jet model. In 1985, observations with the 100 m telescope of the MPIfR in Effelsberg detected significantly faster intensity variations (on timescales of a few days down to several hours), the so-called IntraDay Variability (IDV) (Witzel et al. 1986; Heeschen et al. 1987). These rapid variations were studied in some detail in the following years, and it turned out that they are quite common in compact extragalactic radio sources. Recently, IDV was also discovered in sources in the southern hemisphere (Kedziora-Chudczer et al. 1998). ## 2 Observations and Results Our observations of IDV in AGN have been carried out at the 100 m telescope of the MPIfR in Effelsberg and at the VLA, studying variations of the total flux density and, more recently, also of the (linear) polarization. For the total intensity, elevation- and time-dependent effects have been corrected using steep-spectrum sources, which do not show any IDV. For the polarization observations, we correct for the instrumental polarization and the “cross-talk” between the Stokes channels, applying the matrix method proposed by Turlo et al. (1985). With these procedures, we are able to reach relative measurement errors of 0.3–1.2 % (depending on the wavelength and the weather conditions) for the total flux density, 3–5 % for the polarized flux density, and 2–5° for the polarization angle (for the rare highly polarized sources the measurement errors of the latter two quantities can be somewhat smaller). Since 1985, we have observed 73 AGN (some repeatedly) in search of IDV; this includes the complete subsample of flat-spectrum sources of the 1 Jy catalog north of $`\delta =50`$°. It turned out that rapid variability is a common phenomenon in compact flat-spectrum radio sources: one third of the observed sources show variations with timescales of $`\lesssim 2`$ days, one third show variability on longer timescales, and only one third of the sources never showed short-timescale variations. Furthermore, the observations revealed that IDV is not only present in the total intensity $`I`$, but is usually accompanied by variability of the linear polarization (intensity $`P`$ and position angle $`\chi `$). While total intensity variations range from a few percent up to 35 % (e.g. in the case of the QSO 0804+499, Quirrenbach et al. 1992), variability in the polarized intensity is usually larger and can reach a factor of two, e.g. in the QSO 0917+624 (Kraus et al. 1999). So far, we have found no significant correlation of the strength of the variations or of the timescales with the redshift of the source, the galactic latitude, or the spectral index. In Fig. 1, we show, as an example of the rapid variations, the variability observed in the quasar 0917+624 in total flux density, polarized flux density, and polarization angle (from top to bottom). An anti-correlation between the total and the polarized flux density is clearly present in the light curves. This is supported further by the cross-correlation function (CCF; e.g. Edelson & Krolik 1988) between $`I`$ and $`P`$, which is plotted in Fig. 2.
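For unevenly sampled light curves such as these, the CCF is usually evaluated as a discrete correlation function; a minimal sketch in the spirit of Edelson & Krolik (1988), with synthetic input data, arbitrary bin choices, and measurement-error corrections omitted:

```python
# Sketch: discrete correlation function (DCF) in the spirit of
# Edelson & Krolik (1988).  Synthetic light curves; bin width and lag
# range are arbitrary, and measurement-error terms are omitted.
import numpy as np

def dcf(t1, x1, t2, x2, lags, dlag):
    a = (x1 - x1.mean()) / x1.std()
    b = (x2 - x2.mean()) / x2.std()
    dt = t2[None, :] - t1[:, None]        # all pairwise lags
    udcf = a[:, None] * b[None, :]        # unbinned correlations
    return np.array([udcf[np.abs(dt - lag) < 0.5 * dlag].mean()
                     for lag in lags])

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 30.0, 200))          # days, uneven sampling
flux_i = np.sin(2.0 * np.pi * t / 1.6) + 0.2 * rng.normal(size=t.size)
flux_p = -np.sin(2.0 * np.pi * (t - 0.7) / 1.6) + 0.2 * rng.normal(size=t.size)
lags = np.arange(-3.0, 3.05, 0.1)
ccf = dcf(t, flux_i, t, flux_p, lags, dlag=0.1)
print("CCF minimum at lag = %.1f days" % lags[np.argmin(ccf)])
```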
The minimum close to a time lag of $`\mathrm{\Delta }\tau =0`$ confirms this anti-correlation. Due to the quasi-periodicity of both light curves, the polarized flux density variability seems to be phase-shifted with respect to the total intensity variations. This corresponds to the maximum of the CCF at $`\mathrm{\Delta }\tau \approx 0.7`$ days. The anti-correlation between $`I`$ and $`P`$ is seen frequently in 0917+624, while other sources (e.g. the BL Lac object 0716+714) rather show a direct correlation between the two quantities (e.g. Wagner & Witzel 1995; Kraus et al., in preparation). Similar polarization angle variability was also observed in other sources. In 0917+624 the angular variations are usually larger than shown in Fig. 1 (e.g. Kraus et al. 1999). Once, even a 180° swing was observed (Quirrenbach et al. 1989). In the BL Lac object 0716+714, we observed a direct correlation between the radio and the optical flux density variations (Quirrenbach et al. 1991), and in April 1993 we discovered even faster variations (on timescales of two hours) than in any other source before (Fig. 3). It is unclear whether such rapid variability occurred only in this source, or has not been found before because of undersampling in time. ## 3 Discussion and Conclusions Thirteen years after its discovery, IDV is still not fully understood. In the case of an intrinsic origin, the timescale of the variations corresponds directly to the size of the emitting region, via the light travel time relation ($`R\lesssim ct_{obs}`$). From this and the Rayleigh-Jeans law the brightness temperature of the variable component can be derived (Wagner & Witzel 1995), resulting in values for $`T_B`$ in the range of $`10^{17}`$–$`10^{21}`$ K, far in excess of the inverse Compton limit of $`10^{12}`$ K (Kellermann & Pauliny-Toth 1969). This fact seriously challenges existing models proposed to explain AGN variability. On the other hand, assuming an intrinsic origin of IDV, the investigation of IDV offers a method to study the physical properties of AGN on very small scales (of the order of light days or even smaller). A correlation between the variations in the radio and the optical bands (seen e.g. in 0716+714 by Quirrenbach et al. 1991) argues in favour of an intrinsic origin of IDV. In addition, the lack of a clear dependence of the strength or the timescale of IDV on the frequency or on the galactic latitude speaks against interstellar scattering (ISS; Rickett et al. 1995) as the exclusive cause of IDV. (Nevertheless, owing to the small source sizes $`R\lesssim ct_{obs}`$ involved, ISS should be present as an additional effect in the radio band.) Gravitational microlensing as an alternative extrinsic explanation is implausible because of the high duty cycle and short timescales of the variations and because source sizes of the order of tens of $`\mu `$as are needed (Wagner & Witzel 1995). It is clear that the variations of the linear polarization, especially the polarization angle variations, require special models. The simplest assumption might be a model with two or more independent components, taking into account the vector addition of the polarization. In fact, for extrinsic explanations like ISS or microlensing, this scheme is inevitable (e.g. Wagner & Witzel 1995). In the framework of the shock-in-jet models, Königl & Choudhuri (1985) explain changes of the polarization angle by the successive illumination of cross-sections of the jet with different magnetic field orientations.
In the case of a small viewing angle of the jet, even 180°-swings (as observed in 0917+624 by Quirrenbach et al. (1989)) are possible. In an alternative model (Qian et al., in preparation), a thin sheet of relativistic electrons moves along magnetic field lines with a very high Lorentz factor ($\Gamma \approx 20$–25). The observed variability is then explained by minor changes of the viewing angle (by only a few degrees), which give rise to large variations of the aberration angle and, therefore, of the observed synchrotron emission.

Adopting an intrinsic mechanism for IDV, Qian et al. (1991, 1996) considered the propagation of a thin shock through the jet plasma in a cylindrical geometry with periodic boundaries. They found this model capable of explaining the variations and the apparent high brightness temperatures via $T_B^{\rm app} = \gamma_s^2\,\delta^3\,T_B^{\rm true}$. The high brightness temperatures can therefore be reached more easily, although even in this case Doppler factors higher than usually observed are needed. Recently, Spada et al. (in press) discussed a model in which the radiating electrons are accelerated by shocks in a conical geometry. If the injection times are shorter than the variability timescale, brightness temperatures of up to $10^{17}$ K can be explained with moderate Lorentz factors ($\Gamma \approx 10$). Alternatively, collective emission processes, proposed e.g. by Benford (1992), can avoid the violation of the inverse Compton limit. At present, however, it is unclear whether this process can produce correlated broad-band (i.e., radio–optical) variations. Thus, coordinated multifrequency observing campaigns covering a large range of the electromagnetic spectrum are needed to distinguish between the various models proposed.

We thank A.P. Lobanov and E. Ros for critically reading the manuscript.
# ROSAT HRI X-ray Observations of the Open Globular Cluster NGC 288

## 1. Introduction

Globular clusters are thought to have two distinct populations of X-ray sources (e.g., Hertz & Grindlay 1983). The high luminosity X-ray sources have $L_X > 10^{34.5}$ ergs s$^{-1}$ and are thought to be binary systems with an accreting neutron star (the so-called Low Mass X-ray Binaries or LMXBs; e.g., Verbunt 1996). The nature of Low Luminosity Globular Cluster X-ray sources (LLGCXs), with $L_X < 10^{34.5}$ ergs s$^{-1}$, has been elusive. While a neutron star origin has not been ruled out for the more luminous of the LLGCXs, they are more commonly thought to be associated with white dwarf systems, perhaps related to cataclysmic variables (CVs), which might be produced by stellar interactions in dense cluster cores (Di Stefano & Rappaport 1994). Such a model is not without difficulties; e.g., clusters with high interaction rates are rich in neither CVs nor LLGCXs (Shara & Drissen 1995; Johnston & Verbunt 1996). It would therefore be useful to search for X-ray sources in relatively open globular clusters, where stellar interactions are less likely.

Part of the difficulty in determining the origin of LLGCXs arises from the problem of finding their optical counterparts in crowded globular cluster fields. If accurate X-ray positions are known and the optical field is not too crowded, it may be possible to associate the X-ray sources with unusual optical or UV objects. For example, blue variable (flickering) objects (candidate CVs) have been found to fall within the error boxes of some LLGCXs (Paresce, De Marchi, & Ferraro 1992; Cool et al. 1993, 1995). Alternatively, unusual optical/UV colors may be used to identify the LLGCX. Recently, a number of stars with very unusual and strong UV excesses have been found which appear to be associated with LLGCXs in the globular clusters M13 (Ferraro et al. 1998a) and M92 (Ferraro et al. 1998b). These objects lie far from the traditional globular cluster sequences in UV color-magnitude diagrams. In M13, there are only 3 such objects in a sample of $>$12,000 stars. Two of the three UV outliers are within a few arcsec of the positions of X-ray sources observed about 40″ from the center of M13 (Fox et al. 1996). The third object has no associated X-ray peak, but LLGCXs are known to be highly variable (Hertz, Grindlay, & Bailyn 1993). If this connection between UV objects and LLGCXs could be solidified, it would provide a valuable new technique for studying these mysterious objects. To search for associations of LLGCXs and unusual optical/UV objects, one requires accurately determined X-ray positions in regions of moderate stellar density, together with high resolution optical and UV observations. This suggests comparing ROSAT HRI observations of relatively open globular clusters with HST photometry of the same regions.

In addition to adding to our understanding of LLGCXs, X-ray observations of globular clusters might lead to a better understanding of the X-ray emission from elliptical galaxies. In X-ray bright elliptical galaxies (galaxies with a high ratio of X-ray to optical luminosity, $L_X/L_B$), the soft X-ray emission is mainly due to thermal emission from diffuse hot gas, which is not generally present in globulars. However, in X-ray faint ellipticals (galaxies with a low $L_X/L_B$), much of the emission might be due to stellar sources.
Globular clusters are the best local analogs in which we can observe and possibly determine the nature of the X-ray sources in old stellar populations. This is particularly interesting for the very soft X-ray component observed from X-ray faint elliptical galaxies. In early-type galaxies, this soft component has an X-ray to optical luminosity ratio of $L_X/L_B \approx 10^{29.6}$ ergs s$^{-1}$ $L_\odot^{-1}$ (0.1–2.0 keV; Kim, Fabbiano, & Trinchieri 1992; Irwin & Sarazin 1998a,b), and a spectrum with a temperature of about 0.2–0.3 keV (Fabbiano, Kim, & Trinchieri 1994; Kim et al. 1996). The origin of this elliptical galaxy soft X-ray component is very uncertain. First, is the emission due to stars or to interstellar gas? If it is stellar, what is its origin? Stellar sources known to have soft X-ray spectra, such as M dwarfs, RS CVn stars, and super soft sources, seemed like promising explanations, but none of these sources have the required X-ray characteristics to fully account for the soft X-ray emission (Pellegrini & Fabbiano 1994; Irwin & Sarazin 1998b). However, recent work has suggested that LMXBs are a viable candidate for the soft X-ray emission (Irwin & Sarazin 1998a,b).

If the soft emission in ellipticals is interstellar, it should not be present in globular clusters. Alternatively, since globulars contain an old stellar population which is similar to that in ellipticals (albeit with lower metallicity), one might expect that globulars would contain any stellar X-ray emitting component found in ellipticals, as long as the X-ray emission was not very strongly dependent on metallicity (however, see Irwin & Bregman 1998). For a globular cluster with an optical luminosity of the order of $10^5 L_\odot$, the expected soft X-ray luminosity would then be $\approx 10^{34.6}$ ergs s$^{-1}$. High spatial resolution X-ray observations of globulars can be used to separate brighter X-ray sources from any diffuse background. We will use the present observations to detect or limit the total diffuse soft emission from NGC 288. Such observations are particularly important to test stellar models for the soft X-ray emission from ellipticals, in which a large number of individually faint stars (e.g., M dwarfs) would produce the observed X-rays. For the purpose of studying the soft X-ray component from ellipticals, the best globulars are those with low stellar densities (to avoid the effects of interactions on the stellar population) and low absorbing columns.

Here, we report on new X-ray observations of the globular cluster NGC 288. There have been no previous Einstein or pointed ROSAT observations of this cluster. The ROSAT All Sky Survey only had an exposure of 341 s at this location, and only gave a weak upper limit on the total X-ray flux of $<1.3\times10^{-13}$ ergs cm$^{-2}$ s$^{-1}$ (0.5–2.5 keV; Verbunt et al. 1995). We have a series of deep HST observations of NGC 288, including UV images, which will be presented in a future paper. In many ways, NGC 288 is an ideal cluster for the identification of soft X-ray sources and the comparison to optical/UV candidates. First, it is a fairly open cluster, with a core radius of $r_c = 85$″ and a half-light radius of $r_{1/2} = 135$″ (Trager, Djorgovski, & King 1993). This makes it easier to identify X-ray sources. It also makes it less likely that binary sources have been affected by core collapse, and more likely that any sources are due to the original binary population.
Although initially it was expected that LLGCXs would be associated with clusters with large stellar interaction rates, the trend in that direction is not particularly strong (Johnston & Verbunt 1996). Finding LLGCXs in NGC 288 could be especially pertinent to understanding the soft X-ray emission from elliptical galaxies. Second, NGC 288 is near the South Galactic Pole ($b^{II} = -89.3°$), which reduces the effects of interstellar absorption on the spectrum and diminishes the chances of superposed Galactic X-ray sources. The reddening of the cluster is fairly low ($E_{B-V} = 0.03$; Peterson 1993), which corresponds to an absorbing column to the cluster of about $N_H = 1.6\times10^{20}$ cm$^{-2}$, using the relation $N_H = 5.3\times10^{21}\,E(B-V)$ (Predehl & Schmitt 1995). At a distance of $d = 8.4$ kpc toward the South Galactic Pole (Peterson 1993), the cluster is likely to lie behind essentially all of the Galactic gas. The total Galactic H I column in this direction is $N_H = (1.5\pm0.1)\times10^{20}$ cm$^{-2}$ (Stark et al. 1992), which is consistent with the reddening of the cluster. The mass and optical luminosity of the cluster are $M = 8\times10^4\,M_\odot$ and $L_V = 4.0\times10^4\,L_\odot$, respectively.

In § 2, we discuss the X-ray observations. The resulting X-ray sources are listed in § 3. We derive a limit on the diffuse X-ray flux from NGC 288 in § 4. Finally, our conclusions are summarized in § 5.

## 2. X-ray Observation

The globular cluster NGC 288 was observed with the ROSAT High Resolution Imager (HRI) during the period 6–7 January 1998. The total exposure time was 19,891 s, which is reduced to 19,692 s after correction for deadtime. In addition to the normal processing of the data, we examined the light curve of a large source-free region to check for any periods of enhanced background, and none were found. Because we are interested in accurate positions for any X-ray sources located within the globular cluster, we also examined the aspect history for any anomalies during the accepted time in the image. None were found. These data were taken during a period when the standard data pipeline processing included an error in the boresight, but the data presented here were reprocessed after this error was corrected.

The X-ray image was corrected for particle background, exposure, and vignetting using the SXRB software package of Snowden (1995). For the purposes of display, the image was adaptively smoothed to a minimum signal-to-noise ratio of 5 per smoothing beam (Huang & Sarazin 1996). Each smoothing beam was also required to have a larger FWHM than the ROSAT HRI Point Spread Function (PSF). A contour plot of the inner approximately 25′ × 25′ region of the X-ray image is shown in Figure 2. Several point sources are obvious (§ 3), but there is no evidence for any extended diffuse emission (§ 4). The center of the globular cluster NGC 288 is located very near the position of Src. 1 in this image.

## 3. X-ray Sources

Maximum-likelihood and local detection algorithms were used to detect point sources in the HRI image. A final detection criterion of 3$\sigma$ was adopted. Table 1 lists the 10 sources which were detected. The sources are also labeled in Figure 2. For each source, the Table gives its position, its count rate and the 1$\sigma$ error, its final signal-to-noise ratio SNR, its projected distance $D$ from the center of the globular cluster, and a comment on the identification.
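The conversions used repeatedly below (count rate to unabsorbed flux to luminosity, and reddening to absorbing column) can be sketched in a few lines. The HRI flux-per-count factor here is simply the one implied by the numbers quoted in § 3 for a 1 keV thermal bremsstrahlung spectrum with $N_H = 1.6\times10^{20}$ cm$^{-2}$; it is spectrum-dependent, so treat it as illustrative rather than general:

```python
import math

KPC_CM = 3.086e21  # cm per kpc

def column_from_reddening(ebv):
    """N_H (cm^-2) from E(B-V), using the Predehl & Schmitt (1995)
    relation N_H = 5.3e21 * E(B-V) quoted in Section 1."""
    return 5.3e21 * ebv

def hri_luminosity(count_rate, d_kpc=8.4, flux_per_count=4.1e-11):
    """Unabsorbed 0.1-2.0 keV flux (erg/cm^2/s) and luminosity (erg/s)
    for a ROSAT HRI count rate (cts/s). flux_per_count is the value
    implied by Section 3 (1.59e-3 cts/s -> 6.5e-14 erg/cm^2/s) for the
    assumed spectrum and column; this is only a rough sketch."""
    flux = count_rate * flux_per_count
    lum = 4.0 * math.pi * (d_kpc * KPC_CM) ** 2 * flux
    return flux, lum

print(column_from_reddening(0.03))   # ~1.6e20 cm^-2
print(hri_luminosity(1.59e-3))       # ~(6.5e-14, 5.5e32), as quoted in the text
```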
The source count rates were corrected for the instrument PSF, for background, and for vignetting. The positions in Table 1 are for epoch J2000. The statistical errors in the positions are generally less than 3″; there may be a similar systematic error in the ROSAT HRI absolute positions. None of the sources were clearly extended; however, for sources far from the center of the field, where the instrumental point-spread function is very broad, the upper limits on their sizes are large. Src. 4 appeared possibly to be extended, but was also near the detection limit.

Srcs. 8, 9, and 10 are identified with the quasars QSO 0050.5-2641, QSO 0049-2653, and QSO 0051-267, respectively. The X-ray positions agree with the optical positions of these quasars to better than 4″. Src. 7 may be associated with the blue stellar object PHL 3043, which is also likely to be a quasar or other distant AGN. We tried to use the positions of the known quasars to improve the absolute positions of all of the sources, including the central X-ray source located near the cluster center. We determined more accurate optical positions for the quasars from the Digital Sky Survey (DSS) image. However, the differences between the optical and X-ray positions of these three quasars do not show any simple systematic pattern, and are consistent with the expected random measurement errors. Thus, they do not allow us to improve the positions of the other sources.

In Table 1, the sources are ordered by increasing distance $D$ from the center of NGC 288. We assume that the center is located at R.A. = $00^{\mathrm{h}}52^{\mathrm{m}}45\stackrel{\mathrm{s}}{.}3$ and Dec. = $-26°34′43″$ (J2000; Webbink 1985). Note that this position differs considerably from the more recent determination by Shawl & White (1986). Comparison to the Digital Sky Survey (DSS) image of the cluster shows that the Shawl & White position does not agree with the apparent center of the cluster, at least within the cluster core, while the Webbink position agrees reasonably well with the DSS image.

The count rate limit corresponding to our detection threshold is about $1.2\times10^{-3}$ cts s$^{-1}$ near the center of the image; vignetting and broadening of the PSF at large distances from the center of the field increase the detection threshold there. We have estimated the number of serendipitous X-ray sources expected in this HRI observation using the deep source counts of Hasinger et al. (1998). For this comparison, the count rate was converted into an unabsorbed physical flux using the same assumptions as in Hasinger et al. (1998), but assuming an absorbing column of $N_H = 1.6\times10^{20}$ cm$^{-2}$. Based on the source counts in Hasinger et al., we would expect about $8\pm4$ serendipitous sources in our observations. This is consistent with the observed number of 10 sources; there is no overall excess in the number of X-ray sources in this field.

Figure 1 shows the optical image of NGC 288 from the DSS. The small crosses near the center and at the lower right side of the image show the positions of Src. 1 (RXJ005245.0$-$263449) and Src. 2. When examined at higher resolution, the DSS image does not show any very bright star within 5″ of the position of either Src. 1 or Src. 2. With the exception of Src. 1, the sources appear randomly distributed throughout the HRI image, and are not concentrated toward NGC 288.
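As a quick sanity check on the $8\pm4$ serendipitous-source expectation quoted above, the Poisson probability of detecting 10 or more sources is large; a minimal sketch (the choice of $\mu = 8$ is just the central value of that expectation):

```python
from scipy.stats import poisson

mu = 8.0     # expected number of serendipitous sources (central value)
n_obs = 10   # sources actually detected
# probability of detecting n_obs or more sources by chance
p = poisson.sf(n_obs - 1, mu)
print(f"P(N >= {n_obs} | mu = {mu}) = {p:.2f}")  # ~0.28, fully consistent
```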
On the other hand, Src. 1 (RXJ005245.0$-$263449) is located very close to the center of the globular cluster. It is unlikely that an unrelated foreground or background source would be accidentally projected this close to the center of the cluster. Including the slight decrease in the detector sensitivity to sources at large radii from the center of the field, the probability that such a close association would occur at random is less than 0.2%. If we increase the projected distance of RXJ005245.0$-$263449 from the cluster center by the maximum amount permitted by the errors in the X-ray and cluster center positions, the probability of a source being this close to the center by accident is still less than 1%. Thus, we believe that RXJ005245.0$-$263449 is associated with the globular cluster NGC 288. On the other hand, the other sources (including Src. 2) are located at projected distances from the cluster center at which serendipitous sources would be likely. All of these sources are at projected radii of at least two half-light radii ($r_{1/2} = 135$″), where the density of cluster stars is low. Thus, although it is possible that Src. 2 (or some of the others) might be associated with NGC 288, it seems much more likely that they are only serendipitous foreground or background sources.

If we model the spectrum of RXJ005245.0$-$263449 as 1 keV thermal bremsstrahlung with an absorbing column of $N_H = 1.6\times10^{20}$ cm$^{-2}$, the count rate of $(1.59\pm0.40)\times10^{-3}$ s$^{-1}$ corresponds to an unabsorbed flux of $(6.5\pm1.6)\times10^{-14}$ ergs cm$^{-2}$ s$^{-1}$ (0.1–2.0 keV). The flux is only weakly dependent on the form of the spectrum assumed. This implies a luminosity of $L_X = (5.5\pm1.4)\times10^{32}$ ergs s$^{-1}$ (0.1–2.0 keV), which places RXJ005245.0$-$263449 among the LLGCXs.

RXJ005245.0$-$263449 appears to be a variable X-ray source. Figure 2 shows a histogram of the cumulative fraction of the counts from the 20″ radius circle centered on the source as a function of the cumulative exposure time. The dashed line shows the expected linear increase with exposure time, assuming the source and background are constant. The assumption that the source and background are constant can be rejected at a confidence level of greater than 95% by either the Kolmogorov-Smirnov or the Cramer-von Mises test. Figure 2 shows that most of the photons arrived near the end of the cumulative exposure time. Because the exposure time of 19,891 s was broken into 5 observing intervals spread over about $1.2\times10^5$ s, this indicates that the source is probably variable on a time scale of $\sim$1 day. Unfortunately, the number of source counts is too low to allow any more detailed analysis of the source variability.

## 4. Limit on Diffuse X-ray Emission

We also used the ROSAT HRI observations to place a limit on any diffuse X-ray emission from the globular cluster. We determined the emission from within the projected area of the half-light radius of the cluster, $r_{1/2} = 135″$. Any point sources were removed from within the regions used for determining the cluster emission and the background. The background was determined in two ways. First, the Snowden (1995) SXRB routines were used to determine the particle background and the exposure map of the HRI image. The particle background was removed from the image, and the image was flat-fielded by dividing by the exposure map. The X-ray background was determined from an annulus extending from 8′ to 15′.
Second, the background was determined from the background image provided with the standard data products, using the same region as was used to collect the counts from the cluster. These two methods gave consistent results, but the second technique gave a more conservative (larger) upper limit on the X-ray flux, which is the one we adopt.

No diffuse emission was detected from the cluster; the net count rate was actually very slightly negative (although consistent with zero). The 90% confidence upper limit on the count rate from within the half-light radius was $4.5\times10^{-3}$ counts s$^{-1}$. As discussed above, we adopt a hydrogen column to the cluster of $N_H = 1.6\times10^{20}$ cm$^{-2}$. In order to permit a more detailed comparison to the soft X-ray emission of X-ray faint elliptical galaxies and to that expected from M stars (which also have a very soft spectrum), the diffuse emission is modeled as a 0.2 keV thermal spectrum with solar abundance. The upper limit on the count rate corresponds to an unabsorbed flux limit of $F_X^h < 1.13\times10^{-13}$ ergs cm$^{-2}$ s$^{-1}$ in the 0.52–2.02 keV ROSAT hard band, and $F_X^s < 1.10\times10^{-13}$ ergs cm$^{-2}$ s$^{-1}$ in the 0.11–0.41 keV ROSAT soft band. The hard band flux limit is nearly independent of the spectrum assumed, but the soft band limit is affected by the spectral model. At an assumed distance of $d = 8.4$ kpc, these flux limits correspond to luminosity limits of $L_X^h < 9.5\times10^{32}$ ergs s$^{-1}$ (ROSAT hard band, 0.52–2.02 keV) and $L_X^s < 9.3\times10^{32}$ ergs s$^{-1}$ (ROSAT soft band, 0.11–0.41 keV). If the luminosity of RXJ005245.0$-$263449 is added to these numbers to give a limit on the total X-ray luminosity of the cluster (either diffuse or resolved), the luminosity limits are increased by 35%. The total $B$-band optical luminosity of the cluster is $L_B = 3.91\times10^4\,L_\odot$ (Peterson 1993), and the luminosity from within the projected half-light radius is one-half of this value. This leads to upper limits on the diffuse X-ray-to-optical luminosity ratio of the cluster of $L_X^h/L_B < 4.9\times10^{28}$ ergs s$^{-1}$ $L_\odot^{-1}$ and $L_X^s/L_B < 4.8\times10^{28}$ ergs s$^{-1}$ $L_\odot^{-1}$.

## 5. Conclusions

We have obtained the first deep X-ray image of the open globular cluster NGC 288, which is located near the South Galactic Pole. We detect a Low Luminosity Globular Cluster X-ray source (LLGCX), RXJ005245.0$-$263449, which is located within $\sim$10″ of the cluster center. Its X-ray luminosity is $L_X = (5.5\pm1.4)\times10^{32}$ ergs s$^{-1}$ (0.1–2.0 keV). There is evidence that RXJ005245.0$-$263449 is variable, and that the X-ray flux increased during the $\sim$1 day period of the observation. The fact that this LLGCX is present in such an open cluster adds evidence to the argument that dense stellar systems with high interaction rates are not needed to form LLGCXs (Johnston & Verbunt 1996). On the other hand, the fact that RXJ005245.0$-$263449 is located so close to the cluster center (its projected distance from the center is less than one-tenth of the core radius) is consistent with its being a binary system more massive than the typical cluster star.

A series of deep HST images of NGC 288 has been obtained (observation 6804) and is now being analyzed; these include UV images. It may be possible to identify RXJ005245.0$-$263449 in these images.
It should also be possible to test the hypothesis that LLGCXs are associated with stars with extremely blue UV colors (Ferraro et al. 1998a). Obviously, one would expect RXJ005245.0$-$263449 to be identified with such a UV bright star. Also, one would not expect to find a large number of other UV bright sources in the HST image, since RXJ005245.0$-$263449 is the only X-ray source detected in the central region of the cluster. On the other hand, LLGCXs are highly variable, so the detection of a single UV star without an X-ray counterpart would not prove that LLGCXs are not associated with UV stars. For example, RXJ005245.0$-$263449 seems to be variable, and might not have been detected if it had been faint throughout our observation.

We also searched for diffuse X-ray emission from NGC 288. Upper limits (90% confidence) on the X-ray luminosities of $L_X^h < 9.5\times10^{32}$ ergs s$^{-1}$ in the ROSAT hard band (0.52–2.02 keV) and $L_X^s < 9.3\times10^{32}$ ergs s$^{-1}$ in the ROSAT soft band (0.11–0.41 keV) are obtained within the optical half-light radius of the cluster, $r_{1/2} = 135″$. These imply upper limits to the diffuse X-ray to optical light ratios of $L_X^h/L_B < 4.9\times10^{28}$ ergs s$^{-1}$ $L_\odot^{-1}$ and $L_X^s/L_B < 4.8\times10^{28}$ ergs s$^{-1}$ $L_\odot^{-1}$. These upper limits are lower than the values observed for most X-ray faint early-type galaxies (Irwin & Sarazin 1998a,b). This indicates either that the soft X-ray emission in X-ray faint elliptical galaxies is due to a component which is not present in globular clusters (e.g., interstellar gas, or a stellar component which is not found in low metallicity Population II systems), or that the soft emission comes from a relatively small number of bright X-ray sources (e.g., LMXBs; Irwin & Sarazin 1998a). If the emission were due to a small number of bright X-ray sources, then the expected number in a typical globular would be less than unity.

We thank Flavio Fusi Pecci for helpful comments. We would also like to thank the referee for useful suggestions which improved the presentation. C. L. S. was supported in part by NASA grants NAG 5-4516, NAG 5-3057, and NAG 5-8390. R. T. R. is supported in part by NASA Long Term Space Astrophysics Grant NAG 5-6403 and STScI/NASA Grant GO-6607. F. R. F. acknowledges the ESO Visiting Program for its hospitality. The optical image of NGC 288 is from the Digital Sky Survey, which was produced at the Space Telescope Science Institute using photographic data obtained with the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope.
# EVOLUTION OF DISK ACCRETION

NURIA CALVET (Harvard-Smithsonian Center for Astrophysics; also Centro de Investigaciones de Astronomía), LEE HARTMANN (Harvard-Smithsonian Center for Astrophysics), and STEPHEN E. STROM (Five College Astronomy Department, University of Massachusetts; now at National Optical Astronomy Observatories)

We review the present knowledge of disk accretion in young low mass stars, and in particular the mass accretion rate $\dot M$ and its evolution with time. The methods used to obtain mass accretion rates from ultraviolet excesses and emission lines are described, and the current best estimates of $\dot M$ for Classical T Tauri stars and for objects still surrounded by infalling envelopes are given. We argue that the low mass accretion rates of the latter objects require episodes of high mass accretion rate to build the bulk of the star. Similarity solutions for viscous disk evolution suggest that the inner disk mass accretion rates can be self-consistently understood in terms of the disk mass and size if the viscosity parameter $\alpha \approx 10^{-2}$. Close companion stars may accelerate the disk accretion process, resulting in accretion onto the central star in $\sim$1 Myr; this may help explain the number of very young stars which are not currently surrounded by accretion disks (the weak emission T Tauri stars).

I. INTRODUCTION

The initial angular momenta of star-forming molecular cloud cores must be responsible for the ultimate production of binary (and multiple) stellar systems and circumstellar disks. Unless the protostellar cloud core is very slowly rotating, much or most of the stellar mass is likely to land initially on a disk, and must subsequently be accreted from the disk onto the protostellar core. In the early phases of this accretion, the circumstellar disks may be relatively massive in comparison with the central protostar, and so gravitational instabilities may be important in angular momentum transport and consequent disk evolution. At later phases, disk evolution is likely to be driven by viscous processes, perhaps limited by the condensation of bodies. Stellar magnetic fields may ultimately halt the accretion of material onto the central star. There are substantial uncertainties in our theoretical understanding of these processes, and we must rely on observations for guidance.

In this chapter we review the present knowledge of the disk accretion process around low mass stars, in particular the rate at which mass is transferred onto the star, $\dot M$, and its evolution with time. Knowledge of $\dot M$ helps put lower limits on the disk mass at a given age, independent of uncertainties in dust opacities. It puts constraints on disk physics, and in particular on temperatures and surface densities at a given age, and thus on the conditions which obtain during the time when solid bodies agglomerate. We begin with a description of the different methods of determining disk accretion rates and the values of $\dot M$ obtained applying these methods to stars in different groups and environments. We then discuss the implications of the observational results for disk accretion physics and evolution, and in particular investigate whether the data are consistent with simple models of viscous disk evolution.

II. DETERMINATION OF $\dot M$
A. Infrared excesses

When the infrared excess emission in Classical T Tauri stars (hereafter CTTS) was recognized as emission from dust at low temperatures distributed in a circumstellar disk, it was thought that the excess energy would be a direct measurement of the accretion luminosity ($L_{acc} = G\dot M M_*/R_*$, where $M_*$ and $R_*$ are the stellar mass and radius). However, it soon became apparent that their spectral energy distributions (SEDs) did not follow the law $\lambda F_\lambda \propto \lambda^{-4/3}$ expected for standard accretion disks, but were much flatter (Rydgren and Zak 1987; Kenyon and Hartmann 1987). It has now been realized that the major agent heating the disks of many CTTS is irradiation by the central star (Adams and Shu 1986; Kenyon and Hartmann 1987; Calvet et al. 1991, 1992; Chiang and Goldreich 1997; D’Alessio et al. 1998). The optically thick disks of CTTS are probably “flared”, i.e., they have “photospheres” that curve away from the disk midplane; this makes them more efficient at absorbing light from the central star, and this extra heating increases the disk scale height and thus increases the flaring. Self-consistent calculations (in various approximations) indicate that the flaring is especially important at large radii (Kenyon and Hartmann 1987; Chiang and Goldreich 1997; D’Alessio et al. 1998). This makes it extremely difficult, if not impossible, to say anything about accretion energy release, and thus accretion rates, from outer disk emission.

For many CTTS it is very difficult to extract quantitative estimates of mass accretion rates even from the emission of the inner, flat parts of the disk. The basic reason is that the accretion luminosities of many CTTS are smaller than the luminosity the optically thick inner disk produces as a result of absorbing light from the central star. The effective temperature $T$ of the disk is determined by internal viscous dissipation and external irradiation by the central star; at a given annulus,

$$T^4(r) \approx T_v^4 + T_i^4, \qquad (1)$$

where $T_v = [(3GM_*\dot M/8\pi\sigma r^3)\,f]^{1/4}$ is the effective temperature that would result from accretion without irradiation, with $f = 1 - \sqrt{R_*/r}$, and $T_i^4 = (2/3\pi)\,T_*^4\,(R_*/r)^3$ is the effective temperature for the case of irradiation only (in the flat disk approximation). In these expressions, we assume that the mass accretion rate through the inner disk is constant and equal to the rate at which mass is transferred onto the star. The mass accretion rate at which $T_i \approx T_v$ is

$$\dot M_c \approx 2\times10^{-8}\,M_\odot\,{\rm yr}^{-1}\,(T_*/4000\,{\rm K})^4\,(R_*/2R_\odot)^3\,(M_*/0.5M_\odot)^{-1},$$

calculated for typical Taurus CTTS parameters (K7-M0, age $\sim$1 Myr) and $r \approx 3R_*$. Since the median value of $\dot M$ in CTTS is $10^{-8}\,M_\odot\,{\rm yr}^{-1}$ (see below), this implies that irradiation dominates the inner disk heating for a large number of the stars and determines the amount of flux excess. Only for objects with significantly higher $\dot M$ will the (near-infrared) disk emission depend upon the mass accretion rate.
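A quick numerical rendering of this crossover estimate (the scaling is the one quoted above; the default parameter values are just the “typical Taurus CTTS” numbers from the text):

```python
def mdot_crossover(tstar_k=4000.0, rstar_rsun=2.0, mstar_msun=0.5):
    """Accretion rate (Msun/yr) at which viscous heating matches
    irradiation heating at r ~ 3 R*, following the scaling
    Mdot_c ~ 2e-8 (T*/4000K)^4 (R*/2Rsun)^3 (M*/0.5Msun)^-1."""
    return (2e-8 * (tstar_k / 4000.0) ** 4
                 * (rstar_rsun / 2.0) ** 3
                 / (mstar_msun / 0.5))

print(mdot_crossover())            # 2e-8 for the fiducial K7-M0 star
print(mdot_crossover(3500, 1.5))   # a cooler, smaller star: lower crossover
```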
Other considerations affect a disk’s near-infrared emission. It was originally thought that CTTS disks extended all the way in to the star, and that disk material joined the star through a “hot boundary layer”, where half of the accretion energy was dissipated, while the other half was emitted in the disk (cf. Bertout 1989). It has now become apparent that for typical values of the stellar magnetic field ($<1$ kG; Basri et al. 1992) and of the mass accretion rate, the inner disk of CTTS can be disrupted by the magnetic field (Königl 1991; Najita et al., this volume); material falls onto the star along the magnetic field lines, forming a magnetosphere, and merges with the star through an accretion shock at the stellar surface. In support of this model, fluxes and line profiles of the broad (permitted) emission lines in CTTS are well explained by the magnetospheric flow (Calvet and Hartmann 1992; Hartmann et al. 1994; Edwards et al. 1994; Muzerolle et al. 1998a,b,c; Najita et al., this volume), while the ultraviolet and optical excess fluxes are well accounted for by the accretion shock emission (Calvet and Gullbring 1998). For truncation radii $R_i \approx 3$–$5R_*$, the maximum temperature in a disk with typical parameters will be $\sim$1200–850 K, so that disk emission drops sharply below 2–3.5 $\mu$m. The calculation of the disk luminosity from the observed excess also depends upon the cosine of the inclination angle, which is generally unknown.

As an illustration of these points, Meyer et al. (1997) have shown that the so-called “CTTS locus” in JHK and HKL diagrams, i.e., the region populated by CTTS outside that corresponding to reddened main sequence stars (Meyer et al., this volume), can be explained in terms of emission by irradiated accretion disks with inner holes of different sizes (but inside the corotation radii) and a random distribution of inclinations. However, no one-to-one correlation could be found between the excess and $\dot M$. In a sample of stars in Taurus with known reddening and $\dot M$, the largest excesses in the locus were produced by stars with the highest $\dot M$, but the converse was not true, due to the effects of holes and inclination. Finally, in objects surrounded by infalling dusty envelopes, emission from the dust destruction radius, which peaks at $\sim 2\,\mu$m, may contribute significantly to the K band and longward (Calvet et al. 1997), hiding the intrinsic disk emission. The difficulty of deriving reliable measures of $\dot M$ from excess infrared emission requires alternative methods of estimating $\dot M$. We consider such methods in the next two sections.

B. Veiling luminosities

The best evidence for CTTS disk accretion generally comes from the interpretation of the ultraviolet and optical spectra. The photospheric absorption lines are “veiled”: they are less deep than in standard stars of the same spectral type. This veiling is produced by (mostly) continuum emission from a region hotter than the stellar photosphere. The luminosity of the veiling or excess continuum in CTTS is typically $\sim$10% of the total stellar luminosity. It is difficult to account for this extra energy with a stellar origin, and impossible in the case of the most extreme CTTS, where the excess continuum luminosity is several times the stellar luminosity. This conclusion is reinforced by the existence of the weak-emission T Tauri stars, or WTTS. The WTTS have masses and ages similar to those of the CTTS in many regions, but do not exhibit the UV-optical veiling continuum of CTTS, showing that the excess emission is not an intrinsic property of young stars. The veiling emission is observed only when there is excess near-infrared emission (Hartigan et al. 1995; Najita et al., this volume), strongly supporting the idea that accretion from a disk is occurring, producing both the infrared excess as well as the hot continuum, as originally envisaged by Lynden-Bell and Pringle (1974).
In the magnetospheric model, the disk is truncated in the inner regions, which is precisely where an accretion disk would emit most of its energy. The total disk accretion luminosity is expected to be about

$$L_{acc}({\rm disk}) \approx G M_* \dot M/(2R_i) + L_{diss},$$

where $L_{diss}$ is any energy that might be dissipated in the disk by the stellar magnetic field lines (Kenyon et al. 1996). If we assume that the effect of the stellar magnetic field is to substantially reduce the angular momentum of the disk material, so that it starts out nearly at rest at $R_i$ before falling onto the star along the magnetic field lines, then the infalling material should dissipate its energy at the stellar surface at a rate $(GM_*\dot M/R_*)(1 - R_*/R_i)$. For disk truncation radii of $3$–$5R_*$, most of the accretion energy is released at the stellar surface in the hot accretion shock, whose radiation is observed as the veiling continuum (Königl 1991; Calvet and Gullbring 1998).

Accretion luminosities can be obtained from measurements of the veiling of the absorption lines by the following procedure, in which the excess and intrinsic photospheric emission components are separated. The observed fluxes are fitted by the scaled, dereddened fluxes of a standard star, assumed to be of the same spectral type as the object star, plus a continuum flux which produces the observed veiling at each absorption line wavelength. This fit yields the spectrum of the excess continuum and the reddening towards the star. The total excess luminosity is calculated from the measured luminosity using a model to extrapolate the emission to unobserved wavelengths. Finally, the mass accretion rate can be calculated using the stellar mass and radius estimated from the location of the star in the HR diagram and comparison with evolutionary tracks.

Accretion luminosities for CTTS in Taurus have been determined from veiling measurements by a number of authors, including Hartigan et al. (1991), Valenti et al. (1993), and Hartigan et al. (1995). The mass accretion rates derived from these studies range from $10^{-7}\,M_\odot\,{\rm yr}^{-1}$ for Hartigan et al. to $10^{-8}\,M_\odot\,{\rm yr}^{-1}$ for Valenti et al. More recently, Gullbring et al. (1998, GHBC) re-measured accretion rates for a sample of Taurus CTTS using spectrophotometry covering the blue and Balmer jump region of the spectrum down to the atmospheric cutoff, and found their results to agree with the previous lower estimates. In a sample of 17 CTTS in Taurus, GHBC found a median value of $\dot M \approx 10^{-8}\,M_\odot\,{\rm yr}^{-1}$, with a factor of $\sim$3 estimated error.

These differences reflect the cumulative effects of a number of differences in working assumptions rather than a fundamental difference in approach. Among them we can list the following: (1) the adopted evolutionary tracks; (2) the adopted model of accretion — the magnetosphere model predicts more luminosity per unit mass accretion rate than the old boundary layer model; (3) the adopted physics — some treatments assumed that a substantial fraction of the emitted accretion energy is absorbed by the star, which seems unlikely (Hartmann et al. 1997); and (4) the differing methods used to determine extinction corrections. The photospheres of T Tauri stars show color anomalies relative to standards, which render the determination of reddening uncertain. In particular, GHBC found large color anomalies in some non-accreting WTTS often used as templates; this would cause the extinction to be overestimated in all objects.
These anomalies are probably due to spots on the stellar surface and/or unresolved, cooler companion stars.

To provide a tool for determining mass accretion rates for large samples of stars for which short wavelength spectrophotometry is difficult to obtain, GHBC determined the relationship between the accretion luminosity and the excess luminosity in the U photometric band, $L_U$, for the stars in their sample. Figure 1a shows the excellent correlation between $\log L_{acc}$ and $\log L_U$. A least-squares fit yields:

$$\log(L_{acc}/L_\odot) = 1.04^{+0.04}_{-0.18}\,\log(L_U/L_\odot) + 0.98^{+0.02}_{-0.07}. \qquad (2)$$

The spectrophotometric sample from which equation (2) was derived is formed by stars in the K5-M2 range, most of them K7-M0. The application of this calibration to a wider spectral-type/mass range remains to be confirmed, but theoretical models of accretion shock emission from stars of differing masses and radii indicate that, for the characteristic energy fluxes found in the accretion columns of CTTS, the spectrum of the excess emission does not depend on the underlying star, and the proportion of the total excess luminosity emitted in the U band ($\sim$10%) implied by equation (2) holds (Calvet and Gullbring 1998).

Mass accretion rates have been determined for a larger sample of TTS in Taurus using equation (2) by Hartmann et al. (1998, HCGD). The median mass accretion rate is $10^{-8}\,M_\odot\,{\rm yr}^{-1}$, similar to that of the spectrophotometric sample, but the errors are larger, given the lack of simultaneity in the photometric measurements and the high degree of variability of CTTS. HCGD also used spectral types and photometry from the compilation of Gauvin and Strom (1992) to estimate mass accretion rates for K5-M3 stars in the Chamaeleon I association. The median mass accretion rate in Chamaeleon I is $4\times10^{-9}\,M_\odot\,{\rm yr}^{-1}$. A histogram of the mass accretion rates for TTS in Taurus and Chamaeleon I determined from the ultraviolet excess is presented in Fig. 2a.

C. Magnetospheric emission lines

The measurement of mass accretion rates from the optical and ultraviolet excess fluxes, either spectrophotometrically or from broad-band photometry, is sensitive to reddening corrections. In heavily extincted stars, the UV fluxes are either very uncertain or unobservable; thus, in early stages of evolution such as the infall/protostellar phase, or in very young, dense regions of star formation, such methods cannot be used. For these objects, it is necessary to devise methods to measure $\dot M$ in spectral regions that are less affected by intervening dust.

Most emission lines present in the spectra of young objects are thought to be produced in the magnetospheric flow (Najita et al., this volume). Since the material flows through the magnetosphere at a rate similar to $\dot M$ in the inner disk, the emission fluxes of the lines formed in this flow are expected to depend upon the mass accretion rate. Theoretical models show this to be the case, but other parameters, such as the unknown temperature structure and the characteristic size, play a role too, as does the optical depth of the line (Muzerolle et al. 1998a). For these reasons, empirical correlations between line luminosities and accretion luminosities have been investigated, leaving the interpretation to future theoretical work. Muzerolle et al.
(1998b,c, MHCb,c) have undertaken spectroscopic studies in the red and infrared of the sample of stars with accretion rates determined from spectrophotometric measurements, and have found remarkably good correlations between the luminosities of the Ca II triplet $\lambda$8542, Pa$\beta$, and Br$\gamma$ lines and the accretion luminosity. Figure 1b shows the correlation between $L_{acc}$ and the luminosity in Br$\gamma$ as an example. Least-squares fits to the data give:

$$\log(L_{acc}/L_\odot) = (0.85\pm0.12)\,\log(L_{CaII\,8542}/L_\odot) + (2.46\pm0.46), \qquad (3)$$

$$\log(L_{acc}/L_\odot) = (1.03\pm0.16)\,\log(L_{Pa\beta}/L_\odot) + (2.80\pm0.58), \qquad (4)$$

and

$$\log(L_{acc}/L_\odot) = (1.20\pm0.21)\,\log(L_{Br\gamma}/L_\odot) + (4.16\pm0.86). \qquad (5)$$

These subsidiary calibrators of the accretion rate provide the means to determine accretion rates in heavily extincted objects for the first time. MHCc used luminosities of the Br$\gamma$ line for extincted Class II sources in $\rho$ Oph from Greene and Lada (1996) to make the first determination of accretion luminosities for these sources. They find that the distribution of accretion luminosities is similar to that in Taurus. Using approximate spectral types from Greene and Meyer (1995), the estimated median mass accretion rate is $1.5\times10^{-8}\,M_\odot\,{\rm yr}^{-1}$, although the span of spectral types covered is wider than in the Taurus sample.

Even more far-reaching, MHCc determined accretion luminosities for the deeply embedded Class I objects (sources still surrounded by infalling envelopes) for the first time, using Br$\gamma$ luminosities. Since typically $A_V \approx 20$–30 for Class I objects, significant extinction is expected at Br$\gamma$ ($A_K \approx 2$–3), which introduces large uncertainties in the determination of the line luminosities. The light from the central star+disk and from the envelope itself is absorbed and scattered by the infalling envelope, in proportions that depend on uncertain parameters such as the geometry of the inner envelope (Calvet et al. 1997). Following Kenyon et al. (1993b), MHCc calculate a correction factor $K - K_0$ by assuming that the central object has $J-K$ colors similar to CTTS, and estimating the intrinsic J magnitude from the bolometric luminosity.

Figure 3 shows accretion luminosities estimated by MHCc for a small number of Class I sources in Taurus and in $\rho$ Oph which have data on Br$\gamma$ (Greene and Lada 1996), plotted against the total luminosities. The accretion luminosities of Class I sources are significantly lower than the system bolometric luminosities; in fact, the mean accretion luminosity is $\sim$10–20% of the mean bolometric luminosity of the sample. This implies that the luminosity of Class I sources is dominated by the stellar component, and gives a natural explanation of the similarity of the luminosity distributions of Class I sources and optically visible T Tauri stars in Taurus (Kenyon et al. 1990).

Determining the mass accretion rate from $L_{acc}$ requires knowledge of $M_*/R_*$. We can obtain this ratio by assuming that the stars are still on the birthline (Stahler 1988), which seems justified since they are still accumulating mass from the envelope/disk. We used the observed bolometric luminosity to locate each object along the birthline (calculated for an infall rate of $2\times10^{-6}\,M_\odot\,{\rm yr}^{-1}$). Figure 2b shows the distribution of $\dot M$ thus obtained for Class I sources in Taurus and $\rho$ Oph.
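For reference, equations (2)–(5) are simple enough to package as a helper. The conversion from $L_{acc}$ to $\dot M$ below uses the magnetospheric relation $L_{acc} \simeq (GM_*\dot M/R_*)(1 - R_*/R_i)$ quoted in § II.B; the default truncation radius $R_i = 5R_*$ is a conventional choice on our part, not a fitted value:

```python
import math

G, MSUN, RSUN, LSUN, YR = 6.674e-8, 1.989e33, 6.957e10, 3.828e33, 3.156e7  # cgs

CALIB = {  # (slope, intercept) of log(Lacc/Lsun) vs log(Lline/Lsun)
    "U":    (1.04, 0.98),  # excess U-band luminosity, eq. (2)
    "CaII": (0.85, 2.46),  # Ca II 8542, eq. (3)
    "PaB":  (1.03, 2.80),  # Pa beta, eq. (4)
    "BrG":  (1.20, 4.16),  # Br gamma, eq. (5)
}

def l_acc(l_line_lsun, line="BrG"):
    """Accretion luminosity (Lsun) from a line or U-band excess luminosity."""
    a, b = CALIB[line]
    return 10.0 ** (a * math.log10(l_line_lsun) + b)

def mdot(l_acc_lsun, mstar=0.5, rstar=2.0, r_in=5.0):
    """Mdot (Msun/yr) from Lacc (Lsun), for M* (Msun), R* (Rsun), R_i = r_in R*."""
    lacc = l_acc_lsun * LSUN
    return lacc * rstar * RSUN / (G * mstar * MSUN * (1 - 1.0 / r_in)) * YR / MSUN

# example: a Br gamma luminosity of 1e-4 Lsun gives Lacc ~ 0.23 Lsun and,
# for a typical 0.5 Msun, 2 Rsun star, Mdot ~ 4e-8 Msun/yr
print(mdot(l_acc(1e-4, "BrG")))
```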
The mean mass infall rate in the envelope, $\dot M_i$, has been estimated in Taurus from fitting the spectral energy distributions and scattered light images of Class I sources to be $\approx 4\times10^{-6}\,M_\odot\,{\rm yr}^{-1}$ (Kenyon et al. 1993a), a value which is consistent with theoretical expectations (Shu et al. 1987). The present determination of disk accretion rates, shown in Fig. 2b, indicates that many Class I objects are slowly accreting from their disks, despite the fact that mass is being deposited in the outer disk at a higher rate. This discrepancy was first recognized by Kenyon et al. (1990) as the so-called luminosity problem of Class I sources. If infall were spherical, the luminosity of the system would be given by an accretion luminosity $G M_* \dot M_i/R_*$. In that case, the predicted accretion luminosity should be of order $10\,L_\odot$, while the mean luminosity of Taurus Class I sources is $\sim$1–2 $L_\odot$ (Kenyon et al. 1990, 1994). Kenyon et al. (1990) pointed out that the mass accretion rate in the disk onto the star does not necessarily equal the mass infall rate from the envelope onto the disk, since they are regulated by different physical processes. The imbalance between the infall and accretion rates in this picture leads to an accumulation of mass in the disk. These disks could eventually become gravitationally unstable (Larson 1984), with consequent rapid accretion until sufficient mass has been emptied out of the disk. Kenyon et al. (1990) suggested that these episodes could be related to the FU Orionis outbursts.

D. The FU Ori disks

The FU Orionis outbursts are now recognized as a transient phase of high mass accretion in the disk around a forming low mass star (see the review by Hartmann and Kenyon 1996 and references therein). Although so far FU Ori objects have mostly been studied as isolated phenomena, it is becoming increasingly clear that episodes of high mass accretion rate may be a crucial, if not dominant, process in the formation of stars. We briefly review here the determination of $\dot M$ for these objects.

The canonical FU Ori objects were discovered from their increase in brightness by several magnitudes over timescales of months to years (Herbig 1977), during which the luminosity increased from values typical of CTTS to a few hundred $L_\odot$. Since the emission is dominated by the accretion disk in FU Ori objects, the accretion luminosity can be readily determined from the observed SED. Additional information is required to independently obtain $M_*/R_*$ and $\dot M$; these can be estimated from the surface temperature in the inner disk ($R \approx 2$–3 $R_*$), $T_{max} \approx 7000\,{\rm K}\,(L_{acc}/100L_\odot)^{1/4}\,(R_*/2R_\odot)^{-1/2}$ (from $T_v$ in equation (1)), and measurements of the rotational velocity. Using this method, $\dot M \approx 10^{-4}\,M_\odot\,{\rm yr}^{-1}$ has been inferred for the canonical objects for which outbursts have been observed, consistent with their high luminosities, $>100\,L_\odot$.

A significant number of objects have been identified as FU Ori objects in recent years, mostly at IR wavelengths. Photometric variability indicative of outbursts has been detected in only a few objects; their classification as FU Ori systems has been based mostly on the presence of the near-infrared first-overtone bands of CO in deep absorption, comparable to late type giants and supergiants and to the canonical FU Ori objects.
Most of these objects are in very early stages of evolution, as is their prototype L1551 IRS 5, embedded in infalling envelopes and associated with Herbig-Haro objects, jets, and/or molecular outflows (Elias 1978; Graham and Frogel 1985; Reipurth 1985; Staude and Neckel 1991; Kenyon et al. 1993; Hanson and Conti 1995; Hodapp et al. 1996; Reipurth and Aspin 1997; Sandell and Aspin 1998). The luminosities of the objects identified so far as members of the FU Ori class range from $\sim 10\,L_\odot$ to $800\,L_\odot$. Since $\dot M \approx 2\times10^{-6}\,M_\odot\,{\rm yr}^{-1}\,(L_{acc}/10L_\odot)\,[(M_*/R_*)/(0.18\,M_\odot/R_\odot)]^{-1}$, this range implies $\dot M \gtrsim$ a few $\times10^{-6}\,M_\odot\,{\rm yr}^{-1}$ (assuming they are on the birthline, with typical $M_*/R_* \approx 0.18\,M_\odot/R_\odot$). This value is consistent with the presence of CO in absorption, since disk atmosphere calculations indicate that the temperature of the continuum-forming region is higher than the surface temperature (neglecting any wind contribution) for $\dot M > 10^{-6}\,M_\odot\,{\rm yr}^{-1}$ (Calvet et al. 1992).

The nature of the low luminosity FU Ori objects and their relationship to the canonical objects remain to be elucidated. One possibility is that disks undergo instabilities driving outbursts of different magnitudes. Alternatively, these objects could be in the phase of decay from a canonical FU Ori outburst. An argument in favor of the second possibility comes from comparing the mass loss rate required to drive the molecular outflow with the present mass accretion rate. For L1551 IRS 5 and PP13S ($L = 10\,L_\odot$ and $30\,L_\odot$, respectively), the momentum flux of the molecular flow is $\sim$ a few $\times10^{-4}\,M_\odot\,{\rm yr}^{-1}\,{\rm km\,s}^{-1}$ (Moriarty-Schieven and Snell 1988; Sandell and Aspin 1997). For typical jet velocities of a few $\times100\,{\rm km\,s}^{-1}$, and assuming momentum conservation, the momentum flux implies mass loss rates of the same order as the inferred mass accretion rates, whereas both theoretically and observationally the ratio between the mass loss rate and the mass accretion rate is close to $\sim$0.1 (Calvet 1997). This discrepancy may imply that the mass accretion rate of the disk was much higher when the material giving rise to the molecular outflow was ejected.
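The arithmetic behind that argument is compact enough to write out. The specific numbers below are the representative values quoted above (a momentum flux of a few $\times10^{-4}\,M_\odot\,{\rm yr}^{-1}\,{\rm km\,s}^{-1}$ and a jet speed of a few hundred km s$^{-1}$), so this is an order-of-magnitude sketch only:

```python
def wind_mdot(momentum_flux=3e-4, v_jet=300.0):
    """Mass loss rate (Msun/yr) implied by a molecular-outflow momentum
    flux (Msun/yr * km/s) carried by a jet of speed v_jet (km/s),
    assuming momentum conservation."""
    return momentum_flux / v_jet

mdot_wind = wind_mdot()      # ~1e-6 Msun/yr
mdot_acc = 2e-6              # present-day accretion rate at L ~ 10 Lsun
print(mdot_wind / mdot_acc)  # ~0.5, versus the expected ~0.1: the outflow
# seems to remember a phase of substantially higher disk accretion
```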
III. THE EVOLUTION OF ACCRETION

A. Observed $\dot M$ vs. age

Figure 4 shows mass accretion rate vs. age for CTTS in Taurus, $\rho$ Oph, and Chamaeleon I, for which significant infall from an envelope has stopped. We also show the range of $\dot M$ covered by Class I sources in Taurus, assuming a median age of 0.1 Myr (estimated from the ratio of the number of Class I sources to T Tauri stars in Taurus, adopting a mean age of 1 Myr for the latter; Kenyon et al. 1990). The data indicate a clear trend of accretion decaying with time (HCGD; see also Hartigan et al. 1995), even over the relatively short age spread covered by the observations. There is also a very large spread in mass accretion rate at a given age, which makes it very difficult to quantify the rate of decay. HCGD show that the slope of a least-squares fit to the data is highly dependent on the errors in $\dot M$ and age. If the errors are assumed larger in $\dot M$ than in age (the most likely situation; cf. HCGD), then the slope is $\approx -1.5$ (with large uncertainty).

B. Disk Masses

We can use the observed mean decay of $\dot M$ with time to obtain an estimate of the mass of the disk, for comparison with masses estimated from dust emission. The amount of mass accreted by a CTTS from the present time to infinity constitutes a lower limit to the mass remaining in the disk. If $\dot M(t) \approx \dot M(t_o)\,(t/t_o)^{-\eta}$, then this limit to the disk mass is $M_{acc} = \dot M(t_o)\,t_o/(\eta - 1) \approx 2\dot M(t_o)\,t_o$ for $\eta \approx 1.5$. This gas mass can be compared to the disk mass estimated from dust emission in the submillimeter and millimeter range, $M_{mm}$, which is very dependent on the assumed opacities (Beckwith et al. 1990; Osterloh and Beckwith 1995; masses corrected by a factor of 2.5, HCGD). The comparison (for single stars) yields $\log(2\dot M \times {\rm age}/M_{mm}) = 0.07\pm0.21$, indicating that masses inferred from the current mass accretion rates and age estimates, which probe the gas, are consistent with disk masses estimated independently from mm-wave dust opacities; this in turn suggests that the dust opacities used in the latter estimates are appropriate.

C. Early stages

Figure 4 indicates that the mean mass accreted onto the star during the CTTS phase is $\sim 10^{-2}\,M_\odot$. This suggests that by the time stars reach the optically visible CTTS stage, the remaining disk mass is relatively low and little mass is added to the forming star. This picture implies that the bulk of stellar accretion must occur during the (highly-extincted) infall phase, when the disk is being continually replenished by the collapsing envelope. However, during the infall phase, our results suggest that disk accretion rates during quiescent intervals are only slightly higher than those in the CTTS stage, precluding the addition of more than $\sim 0.01\,M_\odot$ during these intervals. Significant additions to the mass of the growing protostar must therefore come via material accreted during transient episodes of high accretion.

André et al. (1993) suggest that the Class 0 objects are in the earliest phases of envelope collapse, when central accretion rates are expected to be highest, and argue that the Class 0 sources are the true protostars. These very heavily extincted objects have somewhat higher luminosities than the mean Class I luminosity in Taurus, and so may have higher accretion rates onto the central star. However, Class 0 sources tend to be less frequent than Class I sources, especially in Taurus; thus, the lifetime of the Class 0 phase may be too short to account for most of the stellar accretion.

The high accretion rate episodes required to explain the accretion of the stellar mass can be attributed to instabilities in quiescent low-$\dot M$ disks, which result in outbursts and transient periods of high mass accretion rates. Several models have been proposed to trigger such outbursts. The most widely accepted model is that of thermal instabilities in the inner disk (Lin and Papaloizou 1985; Clarke et al. 1990; Kawazoe and Mineshige 1993; Bell and Lin 1994; Bell et al. 1995). This model can naturally explain the occurrence of outbursts during the infall phase because it requires a high background mass accretion rate from the outer disk, of the order of a few $\times10^{-6}\,M_\odot\,{\rm yr}^{-1}$, to match the observations. Moreover, according to this model, outbursts will be triggered in the disk as long as mass is deposited in the outer disk at this rate, ensuring that mass will be transferred from the initial cloud into the star.

The number of currently known FU Ori objects is insufficient to explain the formation of typical stars mostly through outbursts (Hartmann and Kenyon 1996).
However, the known sample of FU Ori disks is very incomplete, because the identifying characteristic of high mass accretion rate disks is the near-infrared bands of CO in absorption, and many objects may be too heavily extincted to be detected at 2.2 $\mu$m. More, and more sensitive, near-infrared observations of embedded objects are necessary to test this hypothesis.

D. Viscous evolution in TTS disks

Once the main infall phase is over and the envelope is no longer feeding mass and angular momentum into the disk, we expect disk evolution to be driven mostly by viscous processes, namely those in which the angular momentum transport is provided by a turbulent viscosity. In this and the next section, we attempt to interpret the observed properties of CTTS disks in terms of viscous evolution, with the aim of understanding the main physical processes at play and the role of initial and boundary conditions. Similarly, physical models for disk evolution help us relate the properties and evolution of the inner disk, as measured by $\dot M(t)$, to those of the outer disk, such as its radius and mass, $R_d$ and $M_d$.

The disk angular momentum is

$$J_d = \int_0^{M_d} dM\,\Omega R^2 \propto M_d\,R_d^{1/2}, \qquad (6)$$

where $\Omega$ is the Keplerian angular velocity. If we neglect the (small) amount of angular momentum being added to the star, and the possible angular momentum loss to an inner disk wind (Shu et al. 1994; Shu et al., this volume), then the disk angular momentum is approximately constant. In turn, this requires $R_d \propto M_d^{-2}$, so angular momentum conservation implies that the disk expands as the mass of the disk is accreted onto the star. Evolution occurs on the viscous time scale, $t_{visc} \approx R^2/\nu$, where $\nu$ is the viscosity (Pringle 1981). If $\nu \propto R^\gamma$, then $dR_d/dt \approx R_d/t_{visc} \propto R_d^{\gamma-1}$, so $R_d \propto t^{1/(2-\gamma)}$, $M_d \propto t^{-1/[2(2-\gamma)]}$, and $\dot M \propto t^{-(5/2-\gamma)/(2-\gamma)}$. Therefore, in principle, from the observed decay of $\dot M$ with time, $\dot M(t) \propto t^{-\eta}$, we can obtain $\gamma = (2\eta - 5/2)/(\eta - 1)$, and the evolution of the radius and mass of the disk can be predicted.

As Lynden-Bell and Pringle (1974) showed, analytic similarity solutions describing the evolution of disk properties exist for the case of a power-law viscosity (see also Lin and Bodenheimer 1982). These analytical similarity solutions have been applied by HCGD as a first approximation to the evolution of T Tauri disks. (More complex models have been used by Stepinski (1998) to consider similar issues; whether the observational constraints justify approaches with more assumed parameters is not clear.) HCGD argue that the use of a power-law viscosity can be justified on approximate grounds. Using the $\alpha$ prescription for the viscosity (Shakura and Sunyaev 1973), $\nu = \alpha c_s H \propto c_s^2/\Omega(R) \propto T(R)\,R^{3/2}$, where $c_s$ is the sound speed and $H$ the scale height. If $T(R) \propto R^{-q}$, then $\nu \propto R^{3/2-q}$. With $q \approx 1/2$, corresponding to irradiated disks at large distances from the star (Kenyon and Hartmann 1987; D’Alessio et al. 1998), and also found by empirical fitting to apply to most disks in CTTS (Beckwith et al. 1990), we obtain $\gamma \approx 1$, which is roughly consistent with the observed slope of the $\dot M$ vs. age data, $\eta \approx 1.5$ (section III.A).

HCGD calculated similarity solutions for viscous evolution for a range of initial conditions applicable to CTTS disks.
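The exponent bookkeeping in the last two paragraphs is easy to get wrong, so a two-line check is worth having; this is pure arithmetic, with nothing beyond the scalings quoted above:

```python
def eta_from_gamma(gamma):
    """Decay exponent of Mdot ~ t^-eta for viscosity nu ~ R^gamma."""
    return (2.5 - gamma) / (2.0 - gamma)

def gamma_from_eta(eta):
    """Inverse relation, gamma = (2*eta - 5/2)/(eta - 1)."""
    return (2.0 * eta - 2.5) / (eta - 1.0)

print(eta_from_gamma(1.0))   # 1.5: gamma = 1 reproduces the observed decay
print(gamma_from_eta(1.5))   # 1.0: the observed slope implies gamma ~ 1
```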
Figures 5a and 5b show the evolution of mass accretion rate and disk mass for a subset of models in this study. Initial disk masses have been taken as $`M_d(0)=0.1M_{\odot }`$, consistent with the small disk masses remaining after the disk-draining episodes of high $`\dot{M}`$ (section III.C). Results for other values of $`M_d(0)`$ can be obtained by simple scaling. Model results are shown for three values of the initial radius, 1, 10, and 100 AU, which cover an order of magnitude in angular momentum, consistent with the spread in angular momentum spanned by half of the binaries in the solar neighborhood (Duquennoy and Mayor 1991). The calculations assume a temperature of 10 K at 100 AU, as suggested by irradiated disk calculations (D'Alessio et al. 1998), and a central stellar mass of $`0.5M_{\odot }`$. The viscosity parameter $`\alpha `$ has been taken as 0.01, except where stated otherwise. The observed values of disk masses and mass accretion rates, shown in Figs. 5a and 5b, lie within the region bounded by the assumed range of initial conditions. Disks with larger initial radii take longer to start evolving, since the viscous time scale is $`t_{visc}\sim R^2/\nu \propto R`$. The surface density of the similarity solutions behaves with radius as $$\mathrm{\Sigma }\propto \frac{e^{-(R/R_1)/(t/t_s)}}{(R/R_1)},$$ $`(7)`$ where $`R_1`$ and $`t_s`$ are a characteristic radius and time (see HCGD for details); it goes like $`R^{-1}`$ at small radii and falls off sharply at large distances. This last property determines important differences in the disk “sizes” measured at different wavelengths, and naturally explains the observed disparity between the optical and mm sizes. Disk radii measured at millimeter wavelengths are of the order of a few hundred AU (Dutrey et al. 1996), while the radii of the disks seen in silhouettes in the Orion Nebula cluster at 0.6 $`\mu `$m are much larger, $`\sim `$ 500 - 1000 AU (McCaughrean and O'Dell 1996). Figure 6 shows the predicted radii of the models in Fig. 5 at 2.7 mm and 0.6 $`\mu `$m, compared to the observations. Circles indicate the Dutrey et al. (1996) observations, while the error bar indicates the range of sizes of the Orion silhouettes. To calculate the millimeter sizes, the two-dimensional brightness distribution of the disk model has been convolved with a Gaussian with the appropriate beam size. Radii at other wavelengths correspond to the radii where the optical depth is $`\sim 1`$ at the given wavelength (HCGD). The theoretical predictions compare well with the observations. Since dust opacity increases rapidly towards the optical, the outer tenuous regions of the disk can effectively absorb background light and produce a large apparent size. In contrast, in the millimeter range these outer, cooler, low-density regions contribute little to the surface brightness, and the observed sizes are consequently smaller (HCGD). The larger millimeter size of the disk in the binary (open circle) may be indicative of a circumbinary disk, for which the present calculations do not apply. Figure 6 also shows the predicted size of one of the models at 1.87 $`\mu `$m. At an age of 0.5 Myr, typical of the Orion Nebula cluster, the infrared sizes are $`\sim `$ 20% smaller than the optical sizes, in good agreement with observations (McCaughrean et al. 1998). The sharp decline of surface density with radius predicted by viscously evolving models naturally explains the observed sizes, without the need for heavily truncated edges.
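The wavelength dependence of the apparent sizes can be illustrated in a few lines of code. In this sketch the similarity profile of Eq. (7) is combined with a crude $`\tau \approx 1`$ size criterion; the surface-density normalization and the two opacities are invented placeholder values, chosen only to show that a steeply falling $`\mathrm{\Sigma }(R)`$ yields a much larger apparent radius at the more opaque (optical) wavelength:

```python
import numpy as np

# Why "disk size" depends on wavelength for the similarity solution
# Sigma ~ exp(-(R/R1)/T) / (R/R1): the apparent radius at a wavelength
# is roughly where the optical depth tau = kappa * Sigma drops to ~1.
# Sigma0 and the opacities below are made-up illustrative numbers.

def sigma(r, T=1.0, sigma0=1e3):       # g/cm^2, r in units of R1
    return sigma0 * np.exp(-r / T) / r

def apparent_radius(kappa, T=1.0):
    """Outermost radius (units of R1) where kappa * Sigma >= 1."""
    r = np.linspace(0.01, 30.0, 10000)
    inside = np.where(kappa * sigma(r, T) >= 1.0)[0]
    return r[inside[-1]] if inside.size else 0.0

kappa_mm = 0.01    # cm^2/g near 2.7 mm (assumed: dust opacity is small)
kappa_opt = 100.0  # cm^2/g near 0.6 um (assumed: much larger)

print(apparent_radius(kappa_mm))   # small "millimeter radius"
print(apparent_radius(kappa_opt))  # several times larger "optical radius"
```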
Figure 6 also shows, with long dashes, the predictions for the observed sizes in the millimeter range of a disk model with $`\alpha =0.001`$. Since $`t_{visc}\sim R^2/\nu \propto 1/\alpha `$, disks with small $`\alpha `$ take much longer to start evolving and growing, resulting in sizes much smaller than observed at the typical ages of the young population. Thus, measurements of disk sizes as a function of time will place important constraints on the characteristic value of $`\alpha `$ in CTTS disks. Disk evolution could be considerably different if angular momentum is lost from outer disk regions through a wind (e.g., Pudritz and Norman 1983; Königl 1989). We have argued elsewhere (Hartmann 1995) that this is not the case. Similarly, coagulation of disk material into bodies that sweep clear the gas would significantly modify this simple picture of disk evolution. Nevertheless, it is encouraging that the observations can be explained with a viscosity ($`\alpha \sim 10^{-2}`$) comparable to that estimated in simulations of the Balbus-Hawley magnetorotational instability (Stone et al. 1996; Brandenburg et al. 1996).

E. Effects of companion stars

There is a large spread in the mass accretion rates and disk masses as a function of age. Some of this range is probably due to uncertainties in age determinations, errors in accretion rates (for example, due to ignoring inclination effects), time-variability, and a range in initial conditions. However, the potential effects of companion stars cannot be ignored; since at least $`2/3`$ of all systems are binaries (Duquennoy and Mayor 1991), it is important to consider the effects of a binary companion on the evolution of disk accretion when comparing with observations. A companion star may prevent the formation of a disk in its immediate vicinity, and will try to open gaps on either side of its orbit (cf. Artymowicz and Lubow 1994, hereafter AL; Lubow and Artymowicz, this volume). While the “initial” effects of a binary companion can strongly limit disk structure and accretion, it is important to realize that there may be secondary effects as well. Specifically, even with a relatively distant companion, the inner structure of a viscously-evolving disk will eventually be affected by the companion, even though the tidal forces are negligible in this region. The reason is that an isolated viscously-evolving disk can only accrete if its outer regions expand to take up the necessary angular momentum. In the similarity solution described above, the mass accretion rate decreases as a power-law in time, because as the disk empties out it also expands and thus has an increasingly long viscous time. In contrast, if a binary companion limits the expansion of the disk, once the disk reaches its maximum size the viscous time remains constant, and so the (inner) regions empty out exponentially with time. (Note that these considerations are relevant only to circumstellar disks, not circumbinary disks, which can expand.) Figures 5c and 5d show a very simple calculation of this type of effect, using the same power-law viscosity used in the standard model, but now not allowing the disk to expand beyond a certain outer radius, using the boundary conditions discussed by Pringle (1991). One observes that when the disk expands to the limiting radius, the accretion rate first increases slightly, and then drops precipitously as the disk empties out rapidly. The significance of this estimate can be seen by noting that the median binary separation is roughly 30 AU (Duquennoy and Mayor 1991).
With the reference disk model used, and estimating a truncation radius of $`1/3`$ of the binary separation (AL), this would mean that even if all binaries originally had circumstellar disks, half of those binaries would have their disks empty out by an age of 1 Myr. These estimates are roughly consistent with the fraction of $`\sim `$ 1 Myr old WTTS in Taurus, $`\sim `$ 45% (Kenyon and Hartmann 1995); the predicted fraction of WTTS could be even higher if indeed the fraction of binaries in Taurus is higher than in the solar neighborhood (Simon et al. 1995). These estimates of binary effects on disk evolution are rough, and the model ignores the effects of the (significant) eccentricities of binary orbits (AL), but they serve to illustrate the importance of identifying stellar companions to understand disk evolution in individual systems. The data in Figures 5c and 5d do not show a very strong correlation of mass accretion rates with binarity, though there is a significant effect on disk masses (cf. Jensen et al. 1994; Mathieu et al. 1995; Osterloh and Beckwith 1995). In any case, many of the systems shown have not been studied carefully for potential companions, and in general much work remains to be done in this area.

IV. SUMMARY AND IMPLICATIONS

Figure 7 summarizes the ideas presented so far in a sketch of disk evolution with time for a single star. After perhaps a short initial period of high mass accretion rate, the disk remains most of the time in a quiescent state. Episodes of high mass accretion rate are triggered mostly during the phase when the disk is still immersed in the infalling envelope, during which we expect most of the star to be built. After the infall ceases and the star emerges as a T Tauri star, the disk evolves viscously: $`\dot{M}`$ slowly decreases with time, while the disk expands and its mass decreases. There are several implications of these results for star and planet formation. First, disk accretion during the protostar phase appears to be highly variable. This may call into question theories of the birthline, or the initial position of stars in the HR diagram (Stahler 1988; Hartmann et al. 1997), which assume steady accretion at the rates of infall of the protostellar envelope. It is conceivable that planetesimals or other bodies form in the disk during this phase but are swept into the star as the disk accretes, perhaps partly accounting for some of the accretion variability - our observations really only probe energy release in the inner disk, near the star. Second, there appears to be a wide variety of disk masses and accretion rates (say, a range of an order of magnitude) during the T Tauri phase, produced in part by differences in initial angular momenta. This may mean that any consequent planetary systems which form could have quite different properties. Third, the ability of viscous accretion disk models to explain the observations so far suggests that substantial migration of material occurs during the T Tauri phase; this migration as disks actively accrete may be important in explaining some of the extrasolar planets which lie close to their star. Fourth, the presence of binary companions obviously does not always prevent disk formation, but it may accelerate (circumstellar) disk accretion. Improved mm and submm interferometry, as well as infrared speckle searches for companion stars and improved radial velocity studies to search for close, low-mass stellar companions, will lead to a greatly improved understanding of disk evolution.
Acknowledgements

We thank a number of people for useful discussions and valuable insight, including James Muzerolle, Erik Gullbring, Paola D'Alessio, Cesar Briceño, Ray Jayawardhana, Suzan Edwards, Lynne Hillenbrand, Michael Meyer, Bo Reipurth, and David Wilner. This work was supported in part by NASA grant NAG5-4282.

REFERENCES

Adams, F. C. and Shu, F. H. 1986. Infrared Spectra of Rotating Protostars. Astrophys. J. 308:836-853.
André, P., Ward-Thompson, D., and Barsony, M. 1993. Submillimeter continuum observations of Rho Ophiuchi A - The candidate protostar VLA 1623 and prestellar clumps. Astrophys. J. 406:122-141.
Artymowicz, P. and Lubow, S. H. 1994. Dynamics of binary-disk interaction. 1: Resonances and disk gap size. Astrophys. J. 421:651-667.
Basri, G., Marcy, G. W. and Valenti, J. A. 1992. Limits on the magnetic flux of pre-main sequence stars. Astrophys. J. 390:622-633.
Beckwith, S. V. W., Sargent, A. I., Chini, R. S. and Guesten, R. 1990. A survey for circumstellar disks around young stellar objects. Astron. J. 99:924-945.
Bell, K. R. and Lin, D. N. C. 1994. Using FU Orionis outbursts to constrain self-regulated protostellar disk models. Astrophys. J. 427:987-1004.
Bell, K. R., Lin, D. N. C., Hartmann, L. and Kenyon, S. J. 1995. The FU Orionis outburst as a thermal accretion event: Observational constraints for protostellar disk models. Astrophys. J. 444:376-395.
Bertout, C. 1989. T Tauri stars - Wild as dust. Ann. Rev. Astron. Astrophys. 27:351-395.
Bonnell, I. and Bastien, P. 1992. A binary origin for FU Orionis stars. Astrophys. J. 401:L31-L34.
Brandenburg, A., Nordlund, A., Stein, R. F., and Torkelsson, U. 1996. Astrophys. J. 458:L45.
Calvet, N., Patiño, A., Magris C., G. and D'Alessio, P. 1991. Irradiation of accretion disks around young objects. I - Near-infrared CO bands. Astrophys. J. 380:617-630.
Calvet, N., Magris C., G., Patiño, A. and D'Alessio, P. 1992. Irradiation of Accretion Disks Around Young Objects. II. Continuum Energy Distribution. Rev. Mex. Astron. Astrofis. 24:27-42.
Calvet, N. and Hartmann, L. 1992. Balmer line profiles for infalling T Tauri envelopes. Astrophys. J. 386:239-247.
Calvet, N., Hartmann, L. and Strom, S. E. 1997. Near-Infrared Emission of Protostars. Astrophys. J. Lett. 481:912-917.
Calvet, N. 1995. Properties of the Winds of T Tauri Stars. In Herbig-Haro Flows and the Birth of Stars, IAU Symposium No. 182, eds. B. Reipurth and C. Bertout (Dordrecht: Kluwer Academic Publishers), pp. 417-432.
Calvet, N. and Gullbring, E. 1998. Astrophys. J. (in press).
Chiang, E. I. and Goldreich, P. 1997. Spectral Energy Distributions of T Tauri Stars with Passive Circumstellar Disks. Astrophys. J. 490:368-376.
Clarke, C. J., Lin, D. N. C. and Pringle, J. E. 1990. Pre-conditions for disc-generated FU Orionis outbursts. Mon. Not. Roy. Astron. Soc. 242:439-446.
D'Alessio, P., Cantó, J., Calvet, N. and Lizano, S. 1998. Accretion Disks around Young Objects. I. The Detailed Vertical Structure. Astrophys. J. 500:411-427.
D'Antona, F. and Mazitelli, I. 1994. New pre-main-sequence tracks for $`M\le 2.5M_{\odot }`$ as tests of opacities and convection model. Astrophys. J. Suppl. 90:467-500.
Duquennoy, A. and Mayor, M. 1991. Multiplicity among solar-type stars in the solar neighbourhood. II - Distribution of the orbital elements in an unbiased sample. Astron. Astrophys. 248:485-524.
Dutrey, A., Guilloteau, S., Duvert, G., Prato, L., Simon, M., Schuster, K. and Menard, F. 1996. Dust and gas distribution around T Tauri stars in Taurus-Auriga. I.
Interferometric 2.7 mm continuum and $`{}^{13}\mathrm{CO}`$ J=1-0 observations. Astron. Astrophys. 309:493-504.
Edwards, S., Hartigan, P., Ghandour, L. and Andrulis, C. 1994. Spectroscopic evidence for magnetospheric accretion in classical T Tauri stars. Astron. J. 108:1056-1070.
Elias, J. H. 1978. A study of the IC 5146 dark cloud complex. Astrophys. J. 223:859-875.
Gauvin, L. S. and Strom, K. M. 1992. A study of the stellar population in the Chamaeleon dark clouds. Astrophys. J. 385:217-231.
Graham, J. A. and Frogel, J. A. 1985. An FU Orionis star associated with Herbig-Haro object 57. Astrophys. J. 289:331-341.
Greene, T. P. and Lada, C. J. 1996. Near-Infrared Spectra and the Evolutionary Status of Young Stellar Objects: Results of a 1.1-2.4 micron Survey. Astron. J. 112:2184-2221.
Greene, T. P. and Meyer, M. R. 1995. An Infrared Spectroscopic Survey of the rho Ophiuchi Young Stellar Cluster: Masses and Ages from the H-R Diagram. Astrophys. J. 450:233-244.
Gullbring, E., Hartmann, L., Briceño, C. and Calvet, N. 1998. Disk Accretion Rates for T Tauri Stars. Astrophys. J. 492:323-341.
Hanson, M. M. and Conti, P. S. 1995. Identification of Ionizing Sources and Young Stellar Objects in M17. Astrophys. J. Lett. 448:L45-L48.
Hartigan, P., Strom, S. E., Edwards, S., Kenyon, S. J., Hartmann, L., Stauffer, J. and Welty, A. D. 1991. Optical excess emission in T Tauri stars. Astrophys. J. 382:617-635.
Hartigan, P., Edwards, S. and Ghandour, L. 1995. Disk Accretion and Mass Loss from Young Stars. Astrophys. J. 452:736-768.
Hartmann, L., Hewett, R. and Calvet, N. 1994. Magnetospheric accretion models for T Tauri stars. 1: Balmer line profiles without rotation. Astrophys. J. 426:669-687.
Hartmann, L. 1995. Observational Constraints on Disk Winds. In Circumstellar Disks, Outflows and Star Formation. Rev. Mexicana Astron. Astrof. Serie de Conferencias. 1:285-291.
Hartmann, L. and Kenyon, S. J. 1996. The FU Orionis phenomenon. Ann. Rev. Astron. Astrophys. 34:207-240.
Hartmann, L., Cassen, P. and Kenyon, S. J. 1997. Disk Accretion and the Stellar Birthline. Astrophys. J. 475:770-785.
Hartmann, L., Calvet, N., Gullbring, E. and D'Alessio, P. 1998. Accretion and the Evolution of T Tauri Disks. Astrophys. J. 495:385-400.
Herbig, G. H. 1977. Eruptive phenomena in early stellar evolution. Astrophys. J. 217:693-715.
Hodapp, K.-W., Hora, J. L., Rayner, J. T., Pickles, A. J., and Ladd, E. F. 1996. An outburst of a deeply embedded star in Serpens. Astrophys. J. 468:861-870.
Jensen, E. L. N., Mathieu, R. D. and Fuller, G. A. 1994. A connection between submillimeter continuum flux and separation in young binaries. Astrophys. J. 429:L29-L32.
Kawazoe, E. and Mineshige, S. 1993. Unstable accretion disks in FU Orionis stars. Publ. Astron. Soc. Japan 45:715-725.
Kenyon, S. J. and Hartmann, L. 1987. Spectral energy distributions of T Tauri stars - Disk flaring and limits on accretion. Astrophys. J. 323:714-733.
Kenyon, S. J. and Hartmann, L. 1995. Pre-Main-Sequence Evolution in the Taurus-Auriga Molecular Cloud. Astrophys. J. Suppl. 101:117-171.
Kenyon, S. J., Hartmann, L., Strom, K. M. and Strom, S. E. 1990. An IRAS survey of the Taurus-Auriga molecular cloud. Astron. J. 99:869-887.
Kenyon, S. J., Calvet, N., and Hartmann, L. 1993a. The embedded young stars in the Taurus-Auriga molecular cloud. I. Models for the spectral energy distribution. Astrophys. J. 414:676-694.
Kenyon, S. J., Whitney, B. A., Gomez, M., and Hartmann, L. 1993b.
The embedded young stars in the Taurus-Auriga molecular cloud. II. Models for scattered light images. Astrophys. J. 414:773-792.
Kenyon, S. J., Gomez, M., Marzke, R. O. and Hartmann, L. 1994. New pre-main-sequence stars in the Taurus-Auriga molecular cloud. Astron. J. 108:251-261.
Kenyon, S. J., Yi, I., and Hartmann, L. 1996. A Magnetic Accretion Disk Model for the Infrared Excesses of T Tauri Stars. Astrophys. J. 462:439-455.
Königl, A. 1989. Self-similar models of magnetized accretion disks. Astrophys. J. 342:208-223.
Königl, A. 1991. Disk accretion onto magnetic T Tauri stars. Astrophys. J. Lett. 370:L39-L43.
Larson, R. B. 1984. Gravitational torques and star formation. Mon. Not. Roy. Astron. Soc. 206:197-207.
Lin, D. N. C. and Bodenheimer, P. 1982. On the evolution of convective accretion disk models of the primordial solar nebula. Astrophys. J. 262:768-779.
Lin, D. N. C. and Papaloizou, J. C. B. 1985. On the dynamical origin of the solar system. In Protostars & Planets II, eds. D. C. Black and M. S. Matthews (Tucson: Univ. of Arizona Press), pp. 981-1072.
Lynden-Bell, D. and Pringle, J. E. 1974. The evolution of viscous discs and the origin of the nebular variables. Mon. Not. Roy. Astron. Soc. 168:603-638.
Mathieu, R., Adams, F. C., Fuller, G. A., Jensen, E. L. N., Koerner, D. W. and Sargent, A. 1995. Submillimeter Continuum Observations of the T Tauri Spectroscopic Binary GW Orionis. Astron. J. 109:2655-2669.
McCaughrean, M. J. and O'Dell, C. R. 1996. Direct Imaging of Circumstellar Disks in the Orion Nebula. Astron. J. 111:1977-1986.
McCaughrean, M. J., Chen, H., Bally, J., Erickson, E., Thompson, R., Rieke, M., Schneider, G., Stolovy, S. and Young, E. 1998. High-resolution near-infrared imaging of the Orion 114-426 silhouette disk. Astrophys. J. Lett. 492:L157-L161.
Meyer, M. R., Calvet, N. and Hillenbrand, L. A. 1997. Intrinsic Near-Infrared Excesses of T Tauri Stars: Understanding the Classical T Tauri Star Locus. Astron. J. 114:288-300.
Muzerolle, J., Calvet, N. and Hartmann, L. 1998a. Magnetospheric Accretion Models for the Hydrogen Emission Lines of T Tauri Stars. Astrophys. J. 492:743-753.
Muzerolle, J., Hartmann, L. and Calvet, N. 1998b. Emission-Line Diagnostics of T Tauri Magnetospheric Accretion. I. Line Profile Observations. Astron. J. 116:455-468.
Muzerolle, J., Hartmann, L. and Calvet, N. 1998c. A Br$`\gamma `$ Probe of Disk Accretion in T Tauri Stars and Embedded Young Stellar Objects. Astrophys. J. (in press).
Moriarty-Schieven, G. H. and Snell, R. 1988. High-resolution images of the L1551 molecular outflow. II. Structure and kinematics. Astrophys. J. 332:364-378.
Osterloh, M. and Beckwith, S. V. W. 1995. Millimeter-wave continuum measurements of young stars. Astrophys. J. 439:288-302.
Pringle, J. E. 1981. Accretion discs in astrophysics. Ann. Rev. Astron. Astrophys. 19:137-162.
Pringle, J. E. 1991. The properties of external accretion discs. Mon. Not. Roy. Astron. Soc. 248:754-759.
Pudritz, R. E. and Norman, C. A. 1983. Centrifugally driven winds from contracting molecular disks. Astrophys. J. 274:677-697.
Reipurth, B. 1985. Herbig-Haro objects and FU Orionis eruptions - The case of HH 57. Astron. Astrophys. 143:435-442.
Reipurth, B. and Aspin, C. 1997. Infrared spectroscopy of Herbig-Haro energy sources. Astron. J. 114:2700-2707.
Rucinski, S. M. 1985. IRAS observations of T Tauri and post-T Tauri stars. Astron. J. 90:2321-2330.
Rydgren, A. E. and Zak, D. S. 1987. On the spectral form of the infrared excess component in T Tauri systems. Publ. Astron. Soc.
Pacific 99:141-145.
Sandell, G. and Aspin, C. 1998. PP13S, a young, low-mass FU Orionis-type pre-main sequence star. Astron. Astrophys. 333:1016-1025.
Shakura, N. I. and Sunyaev, R. A. 1973. Black holes in binary systems. Observational appearance. Astron. Astrophys. 24:337-355.
Shu, F. H., Adams, F. C. and Lizano, S. 1987. Star formation in molecular clouds - Observation and theory. Ann. Rev. Astron. Astrophys. 25:23-81.
Shu, F. H., Najita, J., Ostriker, E., Wilkin, F., Ruden, S. P. and Lizano, S. 1994. Magnetocentrifugally driven flows from young stars and disks. 1: A generalized model. Astrophys. J. 429:781-796.
Simon, M., Ghez, A. M., Leinert, C. H., Cassar, L., Chen, W. P., Howell, R. R., Jameson, R. F., Matthews, K., Neugebauer, G. and Richichi, A. 1995. A lunar occultation and direct imaging survey of multiplicity in the Ophiuchus and Taurus star-forming regions. Astrophys. J. 443:625-637.
Stahler, S. W. 1988. Deuterium and the stellar birthline. Astrophys. J. 332:804-825.
Staude, H. J. and Neckel, T. 1991. RNO 1B - A new FUor in Cassiopeia. Astron. Astrophys. 244:L13-L16.
Stepinski, T. F. 1998. Diagnosing Properties of Protoplanetary Disks from their Evolution. Astrophys. J. (in press).
Stone, J. M., Hawley, J. F., Gammie, C. F., and Balbus, S. A. 1996. Astrophys. J. 463:656.
Valenti, J. A., Basri, G. and Johns, C. M. 1993. T Tauri stars in blue. Astron. J. 106:2024-2050.

Figure 1. Relationship between accretion luminosity and (a) the excess luminosity in the U band, and (b) the luminosity in Br$`\gamma `$ for a sample of CTTS in Taurus. Data from Gullbring et al. (1998) and Muzerolle et al. (1998c).

Figure 2. Histogram showing the distribution of mass accretion rates: (a) for T Tauri stars in Taurus and Cha I, with $`\dot{M}`$ determined from blue spectra or U magnitudes, and (b) for Class I sources in Taurus and $`\rho `$ Oph, determined from Br$`\gamma `$ measurements.

Figure 3. Relationship between accretion luminosity and bolometric luminosity for Class I sources in Taurus (filled circles) and in $`\rho `$ Oph (open circles) (from Muzerolle et al. 1998c).

Figure 4. Observed mass accretion rate vs. age for CTTS in Taurus, Cha I, and $`\rho `$ Oph. Mass accretion rates have been obtained by the methods described in section II. Ages for the CTTS have been estimated from the position in the HR diagram and comparison with the theoretical tracks from D'Antona and Mazitelli (1994, CMA case). Luminosities and spectral types were taken from Kenyon and Hartmann (1995) for Taurus, Gauvin and Strom (1992) for Cha I, and Greene and Meyer (1995) for $`\rho `$ Oph. The mean and dispersion of the estimated mass accretion rates for Class I sources are also shown for comparison (see II.C). The mean age for the Class I sources is assumed to be 0.1 Myr.

Figure 5. Similarity solution for disk evolution, with $`\nu \propto R`$, compared to observations. Upper panels: evolution of isolated disks. (a) Disk mass vs. time. (b) $`\dot{M}`$ vs. time. Models shown have an initial disk mass of 0.1 $`M_{\odot }`$ and initial disk radii (marked) of 1, 10, and 100 AU. Lower panels: evolution with binary companions. (c) Disk mass vs. time. (d) $`\dot{M}`$ vs. time. Models are shown for an initial disk mass and radius of 0.1 $`M_{\odot }`$ and 10 AU, and for three binary separations: 30 AU, 100 AU, and 300 AU (corresponding to truncation radii of 12, 40, and 120 AU, marked). The corresponding evolution for the isolated disk is shown for comparison (heavy line). Binaries are indicated by open circles.

Figure 6.
Characteristic disk sizes for viscous evolution as observed at 0.6 $`\mu `$m, 1.87 $`\mu `$m, and 2.7 mm. Models are shown for an initial mass of 0.1 $`M_{\odot }`$ and initial radii of 10 AU (solid) and 100 AU (short dashes). A model with an initial radius of 10 AU but $`\alpha =0.001`$ is shown for comparison (long dashes). Data from Dutrey et al. (1996) are shown as circles, and the error bar indicates the range of sizes measured in the disks seen in silhouettes in the Orion Nebula cluster (McCaughrean and O'Dell 1996). Binaries are indicated by open circles. See text.

Figure 7. Sketch of disk evolution with time, summarizing the ideas presented in this chapter. The disk remains most of the time in a quiescent state, punctuated by episodes of high $`\dot{M}`$ as long as the envelope feeds mass to the disk. When infall ceases, the disk evolves viscously and $`\dot{M}`$ slowly decreases with time.
## 1 Introduction

At present our physical worldview is deeply schizophrenic. We have, not one, but two fundamental theories of the physical universe: general relativity, and the Standard Model of particle physics based on quantum field theory. The former takes gravity into account but ignores quantum mechanics, while the latter takes quantum mechanics into account but ignores gravity. In other words, the former recognizes that spacetime is curved but neglects the uncertainty principle, while the latter takes the uncertainty principle into account but pretends that spacetime is flat. Both theories have been spectacularly successful in their own domain, but neither can be anything more than an approximation to the truth. Clearly some synthesis is needed: at the very least, a theory of quantum gravity, which might or might not be part of an overarching ‘theory of everything’. Unfortunately, attempts to achieve this synthesis have not yet succeeded. Modern theoretical physics is difficult to understand for anyone outside the subject. Can philosophers really contribute to the project of reconciling general relativity and quantum field theory? Or is this a technical business best left to the experts? I would argue for the former. General relativity and quantum field theory are based on some profound insights about the nature of reality. These insights are crystallized in the form of mathematics, but there is a limit to how much progress we can make by just playing around with this mathematics. We need to go back to the insights behind general relativity and quantum field theory, learn to hold them together in our minds, and dare to imagine a world more strange, more beautiful, but ultimately more reasonable than our current theories of it. For this daunting task, philosophical reflection is bound to be of help. However, a word of warning is in order. The paucity of experimental evidence concerning quantum gravity has allowed research to proceed in a rather unconstrained manner, leading to divergent schools of opinion. If one asks a string theorist about quantum gravity, one will get utterly different answers than if one asks someone working on loop quantum gravity or some other approach. To make matters worse, experts often fail to emphasize the difference between experimental results, theories supported by experiment, speculative theories that have gained a certain plausibility after years of study, and the latest fads. Philosophers must take what physicists say about quantum gravity with a grain of salt. To lay my own cards on the table, I should say that as a mathematical physicist with an interest in philosophy, I am drawn to a strand of work that emphasizes ‘higher-dimensional algebra’. This branch of mathematics goes back and reconsiders some of the presuppositions that mathematicians usually take for granted, such as the notion of equality and the emphasis on doing mathematics using 1-dimensional strings of symbols. Starting in the late 1980s, it became apparent that higher-dimensional algebra is the correct language to formulate so-called ‘topological quantum field theories’. More recently, various people have begun to formulate theories of quantum gravity using ideas from higher-dimensional algebra. While they have tantalizing connections to string theory, these theories are best seen as an outgrowth of loop quantum gravity. The plan of the paper is as follows.
In Section 2, I begin by recalling why some physicists expect general relativity and quantum field theory to collide at the Planck length. This is a unit of distance concocted from three fundamental constants: the speed of light $`c`$, Newton's gravitational constant $`G`$, and Planck's constant $`\hbar `$. General relativity idealizes reality by treating Planck's constant as negligible, while quantum field theory idealizes it by treating Newton's gravitational constant as negligible. By analyzing the physics of $`c`$, $`G`$, and $`\hbar `$, we get a glimpse of the sort of theory that would be needed to deal with situations where these idealizations break down. In particular, I shall argue that we need a background-free quantum theory with local degrees of freedom propagating causally. In Section 3, I discuss ‘topological quantum field theories’. These are the first examples of background-free quantum theories. However, they lack local degrees of freedom. In other words, they describe imaginary worlds in which everywhere looks like everywhere else! This might at first seem to condemn them to the status of mathematical curiosities. However, they suggest an important analogy between the mathematics of spacetime and the mathematics of quantum theory. I argue that this is the beginning of a new bridge between general relativity and quantum field theory. In Section 4, I describe one of the most important examples of a topological quantum field theory: the Turaev-Viro model of quantum gravity in 3-dimensional spacetime. This theory is just a warmup for the 4-dimensional case that is of real interest in physics. Nonetheless, it has some startling features which perhaps hint at the radical changes in our worldview that a successful synthesis of general relativity and quantum field theory would require. In Section 5, I discuss the role of higher-dimensional algebra in topological quantum field theory. I begin with a brief introduction to categories. Category theory can be thought of as an attempt to treat processes (or ‘morphisms’) on an equal footing with things (or ‘objects’), and it is ultimately for this reason that it serves as a good framework for topological quantum field theory. In particular, category theory allows one to make the analogy between the mathematics of spacetime and the mathematics of quantum theory quite precise. But to fully explore this analogy one must introduce ‘$`n`$-categories’, a generalization of categories that allows one to speak of processes between processes between processes… and so on to the $`n`$th degree. Since $`n`$-categories are purely algebraic structures but have a natural relationship to the study of $`n`$-dimensional spacetime, their study is sometimes called ‘higher-dimensional algebra’. Finally, in Section 6 I briefly touch upon recent attempts to construct theories of 4-dimensional quantum gravity using higher-dimensional algebra. This subject is still in its infancy. Throughout the paper, but especially in this last section, the reader must turn to the references for details. To make the bibliography as useful as possible, I have chosen references of an expository nature whenever they exist, rather than always citing the first paper in which something was done.

## 2 The Planck Length

Two constants appear throughout general relativity: the speed of light $`c`$ and Newton's gravitational constant $`G`$.
This should be no surprise, since Einstein created general relativity to reconcile the success of Newton's theory of gravity, based on instantaneous action at a distance, with his new theory of special relativity, in which no influence travels faster than light. The constant $`c`$ also appears in quantum field theory, but paired with a different partner: Planck's constant $`\hbar `$. The reason is that quantum field theory takes into account special relativity and quantum theory, in which $`\hbar `$ sets the scale at which the uncertainty principle becomes important. It is reasonable to suspect that any theory reconciling general relativity and quantum theory will involve all three constants $`c`$, $`G`$, and $`\hbar `$. Planck noted that apart from numerical factors there is a unique way to use these constants to define units of length, time, and mass. For example, we can define the unit of length now called the ‘Planck length’ as follows: $$\ell _P=\sqrt{\frac{\hbar G}{c^3}}.$$ This is extremely small: about $`1.6\times 10^{-35}`$ meters. Physicists have long suspected that quantum gravity will become important for understanding physics at about this scale. The reason is very simple: any calculation that predicts a length using only the constants $`c`$, $`G`$ and $`\hbar `$ must give the Planck length, possibly multiplied by an unimportant numerical factor like $`2\pi `$. For example, quantum field theory says that associated to any mass $`m`$ there is a length called its Compton wavelength, $`\ell _C`$, such that determining the position of a particle of mass $`m`$ to within one Compton wavelength requires enough energy to create another particle of that mass. Particle creation is a quintessentially quantum-field-theoretic phenomenon. Thus we may say that the Compton wavelength sets the distance scale at which quantum field theory becomes crucial for understanding the behavior of a particle of a given mass. On the other hand, general relativity says that associated to any mass $`m`$ there is a length called the Schwarzschild radius, $`\ell _S`$, such that compressing an object of mass $`m`$ to a size smaller than this results in the formation of a black hole. The Schwarzschild radius is roughly the distance scale at which general relativity becomes crucial for understanding the behavior of an object of a given mass. Now, ignoring some numerical factors, we have $$\ell _C=\frac{\hbar }{mc}$$ and $$\ell _S=\frac{Gm}{c^2}.$$ These two lengths become equal when $`m`$ is the Planck mass. And when this happens, they both equal the Planck length! At least naively, we thus expect that both general relativity and quantum field theory would be needed to understand the behavior of an object whose mass is about the Planck mass and whose radius is about the Planck length. This not only explains some of the importance of the Planck scale, but also some of the difficulties in obtaining experimental evidence about physics at this scale. Most of our information about general relativity comes from observing heavy objects like planets and stars, for which $`\ell _S\gg \ell _C`$. Most of our information about quantum field theory comes from observing light objects like electrons and protons, for which $`\ell _C\gg \ell _S`$. The Planck mass is intermediate between these: about the mass of a largish cell. But the Planck length is about $`10^{-20}`$ times the radius of a proton!
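For concreteness, here is a small back-of-envelope computation of these scales (my own check, using SI values of the constants; numerical factors of order unity are ignored, as in the text):

```python
# Planck-scale arithmetic from the three constants discussed above (SI).
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

l_planck = (hbar * G / c**3) ** 0.5
m_planck = (hbar * c / G) ** 0.5
t_planck = l_planck / c

print(f"Planck length: {l_planck:.2e} m")   # ~1.6e-35 m
print(f"Planck mass:   {m_planck:.2e} kg")  # ~2.2e-8 kg
print(f"Planck time:   {t_planck:.2e} s")

# The Compton wavelength and Schwarzschild radius coincide at the
# Planck mass, and both then equal the Planck length:
m = m_planck
l_compton = hbar / (m * c)
l_schwarz = G * m / c**2
print(l_compton / l_planck, l_schwarz / l_planck)   # both ~1
```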
To study a situation where both general relativity and quantum field theory are important, we could try to compress a cell to a size $`10^{-20}`$ times that of a proton. We know no reason why this is impossible in principle. But we have no idea how to actually accomplish such a feat. There are some well-known loopholes in the above argument. The ‘unimportant numerical factor’ I mentioned above might actually be very large, or very small. A theory of quantum gravity might make testable predictions of dimensionless quantities like the ratio of the muon and electron masses. For that matter, a theory of quantum gravity might involve physical constants other than $`c`$, $`G`$, and $`\hbar `$. The latter two alternatives are especially plausible if we study quantum gravity as part of a larger theory describing other forces and particles. However, even though we cannot prove that the Planck length is significant for quantum gravity, I think we can glean some wisdom from pondering the constants $`c`$, $`G`$, and $`\hbar `$ — and more importantly, the physical insights that lead us to regard these constants as important. What is the importance of the constant $`c`$? In special relativity, what matters is the appearance of this constant in the Minkowski metric $$ds^2=c^2dt^2-dx^2-dy^2-dz^2$$ which defines the geometry of spacetime, and in particular the lightcone through each point. Stepping back from the specific formalism here, we can see several ideas at work. First, space and time form a unified whole which can be thought of geometrically. Second, the quantities whose values we seek to predict are localized. That is, we can measure them in small regions of spacetime (sometimes idealized as points). Physicists call such quantities ‘local degrees of freedom’. And third, to predict the value of a quantity that can be measured in some region $`R`$, we only need to use values of quantities measured in regions that stand in a certain geometrical relation to $`R`$. This relation is called the ‘causal structure’ of spacetime. For example, in a relativistic field theory, to predict the value of the fields in some region $`R`$, it suffices to use their values in any other region that intersects every timelike path passing through $`R`$. The common way of summarizing this idea is to say that nothing travels faster than light. I prefer to say that a good theory of physics should have local degrees of freedom propagating causally.
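As a toy illustration of this last point, the sign of the Minkowski interval between two events determines whether one can causally influence the other. The following sketch (an invented example, with events given as $`(t,x,y,z)`$ in seconds and meters) simply classifies separations:

```python
# Classifying event separations using the Minkowski metric above.
c = 2.99792458e8  # m/s

def interval(e1, e2):
    dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
    return (c * dt) ** 2 - dx ** 2 - dy ** 2 - dz ** 2

def classify(e1, e2):
    s2 = interval(e1, e2)
    if s2 > 0:
        return "timelike: one event can causally influence the other"
    if s2 < 0:
        return "spacelike: no causal influence is possible"
    return "lightlike: connected only by a light ray"

here_now = (0.0, 0.0, 0.0, 0.0)
print(classify(here_now, (1.0, 1.0e8, 0.0, 0.0)))  # timelike
print(classify(here_now, (1.0, 1.0e9, 0.0, 0.0)))  # spacelike
```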
In Newtonian gravity, $`G`$ is simply the strength of the gravitational field. It takes on a deeper significance in general relativity, where the gravitational field is described in terms of the curvature of the spacetime metric. Unlike in special relativity, where the Minkowski metric is a ‘background structure’ given a priori, in general relativity the metric is treated as a field which not only affects, but also is affected by, the other fields present. In other words, the geometry of spacetime becomes a local degree of freedom of the theory. Quantitatively, the interaction of the metric and other fields is described by Einstein's equation $$G_{\mu \nu }=8\pi GT_{\mu \nu },$$ where the Einstein tensor $`G_{\mu \nu }`$ depends on the curvature of the metric, while the stress-energy tensor $`T_{\mu \nu }`$ describes the flow of energy and momentum due to all the other fields. The role of the constant $`G`$ is thus simply to quantify how much the geometry of spacetime is affected by other fields. Over the years, people have realized that the great lesson of general relativity is that a good theory of physics should contain no geometrical structures that affect local degrees of freedom while remaining unaffected by them. Instead, all geometrical structures — and in particular the causal structure — should themselves be local degrees of freedom. For short, one says that the theory should be background-free. The struggle to free ourselves from background structures began long before Einstein developed general relativity, and is still not complete. The conflict between Ptolemaic and Copernican cosmologies, the dispute between Newton and Leibniz concerning absolute and relative motion, and the modern arguments concerning the ‘problem of time’ in quantum gravity — all are but chapters in the story of this struggle. I do not have room to sketch this story here, nor even to make more precise the all-important notion of ‘geometrical structure’. I can only point the reader towards the literature, starting perhaps with the books by Barbour and Earman, various papers by Rovelli, and the many references therein. Finally, what of $`\hbar `$? In quantum theory, this appears most prominently in the commutation relation between the momentum $`p`$ and position $`q`$ of a particle: $$pq-qp=-i\hbar ,$$ together with similar commutation relations involving other pairs of measurable quantities. Because our ability to measure two quantities simultaneously with complete precision is limited by their failure to commute, $`\hbar `$ quantifies our inability to simultaneously know everything one might choose to know about the world. But there is far more to quantum theory than the uncertainty principle. In practice, $`\hbar `$ comes along with the whole formalism of complex Hilbert spaces and linear operators. There is a widespread sense that the principles behind quantum theory are poorly understood compared to those of general relativity. This has led to many discussions about interpretational issues. However, I do not think that quantum theory will lose its mystery through such discussions. I believe the real challenge is to better understand why the mathematical formalism of quantum theory is precisely what it is. Research in quantum logic has done a wonderful job of understanding the field of candidates from which the particular formalism we use has been chosen. But what is so special about this particular choice? Why, for example, do we use complex Hilbert spaces rather than real or quaternionic ones? Is this decision made solely to fit the experimental data, or is there a deeper reason? Since questions like this do not yet have clear answers, I shall summarize the physical insight behind $`\hbar `$ by saying simply that a good theory of the physical universe should be a quantum theory — leaving open the possibility of eventually saying something more illuminating.
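Incidentally, one can see directly why the commutation relation above resists finite-dimensional models. In the sketch below (my own illustration, built from truncated harmonic-oscillator ladder operators with $`\hbar =1`$), the commutator equals $`-i\hbar `$ on every basis state except the last, where the truncation necessarily fails; taking the trace of both sides shows that no finite-dimensional matrices can satisfy the relation exactly:

```python
import numpy as np

# Truncated ladder operators: a lowers, ad raises (hbar = 1 units).
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

# q = (a + ad)/sqrt(2), p = i(ad - a)/sqrt(2), so pq - qp = -i[a, ad].
q = (a + ad) / np.sqrt(2)
p = 1j * (ad - a) / np.sqrt(2)

comm = p @ q - q @ p
print(np.round(np.diag(comm), 10))
# [-1j -1j -1j -1j -1j +5j]: equal to -i*hbar on every basis state
# except the last one, where the cutoff spoils the relation.
```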
Having attempted to extract the ideas lying behind the constants $`c`$, $`G`$, and $`\hbar `$, we are in a better position to understand the task of constructing a theory of quantum gravity. General relativity acknowledges the importance of $`c`$ and $`G`$ but idealizes reality by treating $`\hbar `$ as negligibly small. From our discussion above, we see that this is because general relativity is a background-free classical theory with local degrees of freedom propagating causally. On the other hand, quantum field theory as normally practiced acknowledges $`c`$ and $`\hbar `$ but treats $`G`$ as negligible, because it is a background-dependent quantum theory with local degrees of freedom propagating causally. The most conservative approach to quantum gravity is to seek a theory that combines the best features of general relativity and quantum field theory. To do this, we must try to find a background-free quantum theory with local degrees of freedom propagating causally. While this approach may not succeed, it is definitely worth pursuing. Given the lack of experimental evidence that would point us towards fundamentally new principles, we should do our best to understand the full implications of the principles we already have! From my description of the goal one can perhaps see some of the difficulties. Since quantum gravity should be background-free, the geometrical structures defining the causal structure of spacetime should themselves be local degrees of freedom propagating causally. This much is already true in general relativity. But because quantum gravity should be a quantum theory, these degrees of freedom should be treated quantum-mechanically. So at the very least, we should develop a quantum theory of some sort of geometrical structure that can define a causal structure on spacetime. String theory has not gone far in this direction. This theory is usually formulated with the help of a metric on spacetime, which is treated as a background structure rather than a local degree of freedom like the rest. Most string theorists recognize that this is an unsatisfactory situation, and by now many are struggling towards a background-free formulation of the theory. However, in the words of two experts, “it seems that a still more radical departure from conventional ideas about space and time may be required in order to arrive at a truly background independent formulation.” Loop quantum gravity has gone a long way towards developing a background-free quantum theory of the geometry of space, but not so far when it comes to spacetime. This has made it difficult to understand dynamics, and in particular the causal propagation of degrees of freedom. Work in earnest on these issues has begun only recently. One reason for optimism is the recent success in understanding quantum gravity in 3 spacetime dimensions. But to explain this, I must first say a bit about topological quantum field theory.

## 3 Topological Quantum Field Theory

Besides general relativity and quantum field theory as usually practiced, a third sort of idealization of the physical world has attracted a great deal of attention in the last decade. These are called topological quantum field theories, or ‘TQFTs’. In the terminology of the previous section, a TQFT is a background-free quantum theory with no local degrees of freedom. (It would be nicely symmetrical if TQFTs involved the constants $`G`$ and $`\hbar `$ but not $`c`$; unfortunately I cannot quite see how to make this idea precise.) A good example is quantum gravity in 3-dimensional spacetime. First let us recall some features of classical gravity in 3-dimensional spacetime. Classically, Einstein's equations predict qualitatively very different phenomena depending on the dimension of spacetime. If spacetime has 4 or more dimensions, Einstein's equations imply that the metric has local degrees of freedom.
In other words, the curvature of spacetime at a given point is not completely determined by the flow of energy and momentum through that point: it is an independent variable in its own right. For example, even in the vacuum, where the energy-momentum tensor vanishes, localized ripples of curvature can propagate in the form of gravitational radiation. In 3-dimensional spacetime, however, Einstein's equations suffice to completely determine the curvature at a given point of spacetime in terms of the flow of energy and momentum through that point. We thus say that the metric has no local degrees of freedom. In particular, in the vacuum the metric is flat, so every small patch of empty spacetime looks exactly like every other. The absence of local degrees of freedom makes general relativity far simpler in 3-dimensional spacetime than in higher dimensions. Perhaps surprisingly, it is still somewhat interesting. The reason is the presence of ‘global’ degrees of freedom. For example, if we chop a cube out of flat 3-dimensional Minkowski space and form a 3-dimensional torus by identifying the opposite faces of this cube, we get a spacetime with a flat metric on it, and thus a solution of the vacuum Einstein equations. If we do the same starting with a larger cube, or a parallelepiped, we get a different spacetime that also satisfies the vacuum Einstein equations. The two spacetimes are locally indistinguishable, since locally both look just like flat Minkowski spacetime. However, they can be distinguished globally — for example, by measuring the volume of the whole spacetime, or studying the behavior of geodesics that wrap all the way around the torus. Since the metric has no local degrees of freedom in 3-dimensional general relativity, this theory is much easier to quantize than the physically relevant 4-dimensional case. In the simplest situation, where we consider ‘pure’ gravity without matter, we obtain a background-free quantum field theory with no local degrees of freedom whatsoever: a TQFT. I shall say more about 3-dimensional quantum gravity in Section 4. To set the stage, let me sketch the axiomatic approach to topological quantum field theory proposed by Atiyah. My earlier definition of a TQFT as a ‘background-free quantum field theory with no local degrees of freedom’ corresponds fairly well to how physicists think about TQFTs. But mathematicians who wish to prove theorems about TQFTs need to start with something more precise, so they often use Atiyah's axioms. An important feature of TQFTs is that they do not presume a fixed topology for space or spacetime. In other words, when dealing with an $`n`$-dimensional TQFT, we are free to choose any $`(n-1)`$-dimensional manifold to represent space at a given time. (Here and in what follows, by ‘manifold’ I really mean ‘compact oriented smooth manifold’, and cobordisms between these will also be compact, oriented, and smooth.) Moreover, given two such manifolds, say $`S`$ and $`S^{\prime }`$, we are free to choose any $`n`$-dimensional manifold $`M`$ to represent the portion of spacetime between $`S`$ and $`S^{\prime }`$. Mathematicians call $`M`$ a ‘cobordism’ from $`S`$ to $`S^{\prime }`$. We write $`M:S\to S^{\prime }`$, because we may think of $`M`$ as the process of time passing from the moment $`S`$ to the moment $`S^{\prime }`$. For example, in Figure 1 we depict a 2-dimensional manifold $`M`$ going from a 1-dimensional manifold $`S`$ (a pair of circles) to a 1-dimensional manifold $`S^{\prime }`$ (a single circle).
Crudely speaking, $`M`$ represents a process in which two separate spaces collide to form a single one! This may seem outré, but these days physicists are quite willing to speculate about processes in which the topology of space changes with the passage of time. Other forms of topology change include the formation of a wormhole, the appearance of the universe in a ‘big bang’, or its disappearance in a ‘big crunch’. There are various important operations one can perform on cobordisms, but I will only describe two. First, we may ‘compose’ two cobordisms $`M:S\to S^{\prime }`$ and $`M^{\prime }:S^{\prime }\to S^{\prime \prime }`$, obtaining a cobordism $`M^{\prime }M:S\to S^{\prime \prime }`$, as illustrated in Figure 2. The idea here is that the passage of time corresponding to $`M`$ followed by the passage of time corresponding to $`M^{\prime }`$ equals the passage of time corresponding to $`M^{\prime }M`$. This is analogous to the familiar idea that waiting $`t`$ seconds followed by waiting $`t^{\prime }`$ seconds is the same as waiting $`t+t^{\prime }`$ seconds. The big difference is that in topological quantum field theory we cannot measure time in seconds, because there is no background metric available to let us count the passage of time! We can only keep track of topology change. Just as ordinary addition is associative, composition of cobordisms satisfies the associative law: $$(M^{\prime \prime }M^{\prime })M=M^{\prime \prime }(M^{\prime }M).$$ However, composition of cobordisms is not commutative. As we shall see, this is related to the famous noncommutativity of observables in quantum theory. Second, for any $`(n-1)`$-dimensional manifold $`S`$ representing space, there is a cobordism $`1_S:S\to S`$ called the ‘identity’ cobordism, which represents a passage of time without any topology change. For example, when $`S`$ is a circle, the identity cobordism $`1_S`$ is a cylinder, as shown in Figure 3. In general, the identity cobordism $`1_S`$ has the property that for any cobordism $`M:S^{\prime }\to S`$ we have $$1_SM=M,$$ while for any cobordism $`M:S\to S^{\prime }`$ we have $$M1_S=M.$$ These properties say that an identity cobordism is analogous to waiting 0 seconds: if you wait 0 seconds and then wait $`t`$ more seconds, or wait $`t`$ seconds and then wait 0 more seconds, this is the same as waiting $`t`$ seconds. These operations just formalize the notion of ‘the passage of time’ in a context where the topology of spacetime is arbitrary and there is no background metric. Atiyah's axioms relate this notion to quantum theory as follows. First, a TQFT must assign a Hilbert space $`Z(S)`$ to each $`(n-1)`$-dimensional manifold $`S`$. Vectors in this Hilbert space represent possible states of the universe given that space is the manifold $`S`$. Second, the TQFT must assign a linear operator $`Z(M):Z(S)\to Z(S^{\prime })`$ to each $`n`$-dimensional cobordism $`M:S\to S^{\prime }`$. This operator describes how states change given that the portion of spacetime between $`S`$ and $`S^{\prime }`$ is the manifold $`M`$. In other words, if space is initially the manifold $`S`$ and the state of the universe is $`\psi `$, after the passage of time corresponding to $`M`$ the state of the universe will be $`Z(M)\psi `$. In addition, the TQFT must satisfy a list of properties. Let me just mention two. First, the TQFT must preserve composition. That is, given cobordisms $`M:S\to S^{\prime }`$ and $`M^{\prime }:S^{\prime }\to S^{\prime \prime }`$, we must have $$Z(M^{\prime }M)=Z(M^{\prime })Z(M),$$ where the right-hand side denotes the composite of the operators $`Z(M)`$ and $`Z(M^{\prime })`$. Second, it must preserve identities.
That is, given any manifold $`S`$ representing space, we must have $$Z(1_S)=1_{Z(S)},$$ where the right-hand side denotes the identity operator on the Hilbert space $`Z(S)`$. Both these axioms are eminently reasonable if one ponders them a bit. The first says that the passage of time corresponding to the cobordism $`M`$ followed by the passage of time corresponding to $`M^{\prime }`$ has the same effect on a state as the combined passage of time corresponding to $`M^{\prime }M`$. The second says that a passage of time in which no topology change occurs has no effect at all on the state of the universe. This seems paradoxical at first, since it seems we regularly observe things happening even in the absence of topology change. However, this paradox is easily resolved: a TQFT describes a world quite unlike ours, one without local degrees of freedom. In such a world, nothing local happens, so the state of the universe can only change when the topology of space itself changes. (Actually, while perfectly correct as far as it goes, this resolution dodges an important issue: some physicists have suggested that the second axiom may hold even in quantum field theories with local degrees of freedom, so long as they are background-free. Unfortunately a discussion of this would take us too far afield here.) The most interesting thing about the TQFT axioms is their common formal character. Loosely speaking, they all say that a TQFT maps structures in differential topology — by which I mean the study of manifolds — to corresponding structures in quantum theory. In coming up with these axioms, Atiyah took advantage of a powerful analogy between differential topology and quantum theory, summarized in Table 1.

| DIFFERENTIAL TOPOLOGY | QUANTUM THEORY |
| --- | --- |
| $`(n-1)`$-dimensional manifold (space) | Hilbert space (states) |
| cobordism between $`(n-1)`$-dimensional manifolds (spacetime) | operator (process) |
| composition of cobordisms | composition of operators |
| identity cobordism | identity operator |

Table 1. Analogy between differential topology and quantum theory

I shall explain this analogy between differential topology and quantum theory further in Section 5. For now, let me just emphasize that this analogy is exactly the sort of clue we should pursue for a deeper understanding of quantum gravity. At first glance, general relativity and quantum theory look very different mathematically: one deals with space and spacetime, the other with Hilbert spaces and operators. Combining them has always seemed a bit like mixing oil and water. But topological quantum field theory suggests that perhaps they are not so different after all! Even better, it suggests a concrete program of synthesizing the two, which many mathematical physicists are currently pursuing. Sometimes this goes by the name of ‘quantum topology’. Quantum topology is very technical, as anything involving mathematical physicists inevitably becomes. But if we stand back a moment, it should be perfectly obvious that differential topology and quantum theory must merge if we are to understand background-free quantum field theories. In physics that ignores general relativity, we treat space as a background on which states of the world are displayed. Similarly, we treat spacetime as a background on which the process of change occurs. But these are idealizations which we must overcome in a background-free theory. In fact, the concepts of ‘space’ and ‘state’ are two aspects of a unified whole, and likewise for the concepts of ‘spacetime’ and ‘process’. It is a challenge, not just for mathematical physicists, but also for philosophers, to understand this more deeply.
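Before moving on, the axioms themselves can be made tangible in a toy computation. It is a standard fact that a 2-dimensional TQFT amounts to a commutative Frobenius algebra; in the sketch below (an illustrative choice of mine, with the algebra of pairs of complex numbers under componentwise multiplication standing in for $`Z(\mathrm{circle})`$, and not the Turaev-Viro theory discussed next) two ways of decomposing the same cobordism give the same operator, and a cobordism equal to the cylinder acts as the identity:

```python
import numpy as np

# Toy check of Atiyah's axioms in dimension 2: Z(circle) = C^2 with
# componentwise multiplication, the "pair of pants" cobordism (two
# circles merging into one) goes to the multiplication map m, and the
# "cup" (birth of a circle) goes to the unit map u.

I = np.eye(2)
m = np.zeros((2, 4))              # m: A (x) A -> A, e_i * e_j = delta_ij e_i
for i in range(2):
    m[i, i * 2 + i] = 1.0
u = np.array([[1.0], [1.0]])      # unit: C -> A, 1 |-> (1, 1)

# Gluing three circles down to one in two different orders yields the
# same cobordism, so Z must yield the same operator (associativity):
assert np.allclose(m @ np.kron(m, I), m @ np.kron(I, m))

# Capping one leg of the pair of pants with a cup gives a cylinder,
# i.e. the identity cobordism, so Z of it must be the identity operator:
assert np.allclose(m @ np.kron(u, I), I)
print("composition and identity axioms hold in this toy model")
```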
## 4 3-Dimensional Quantum Gravity

Before the late 1980s, quantum gravity was widely thought to be just as intractable in 3 spacetime dimensions as in the physically important 4-dimensional case. The situation changed drastically when physicists and mathematicians developed the tools for handling background-free quantum theories without local degrees of freedom. By now, it is easier to give a complete description of 3-dimensional quantum gravity than of most quantum field theories of the traditional sort! Let me sketch how one sets up a theory of 3-dimensional quantum gravity satisfying Atiyah's axioms for a TQFT. Before doing so I should warn the reader that there are a number of inequivalent theories of 3-dimensional quantum gravity. The one I shall describe is called the Turaev-Viro model. While in some ways this is not the most physically realistic one, since it is a quantum theory of Riemannian rather than Lorentzian metrics, it illustrates the points I want to make here. To get a TQFT satisfying Atiyah's axioms we need to describe a Hilbert space of states for each 2-dimensional manifold and an operator for each cobordism between 2-dimensional manifolds. We begin by constructing a preliminary Hilbert space $`\tilde{Z}(S)`$ for any 2-dimensional manifold $`S`$. This construction requires choosing a background structure: a way of chopping $`S`$ into triangles. Later we will eliminate this background-dependence and construct the Hilbert space of real physical interest. To define the Hilbert space $`\tilde{Z}(S)`$, it is enough to specify an orthonormal basis for it. We decree that states in this basis are ways of labelling the edges of the triangles in $`S`$ by numbers of the form $`0,\frac{1}{2},1,\frac{3}{2},\mathrm{\ldots },\frac{k}{2}`$. An example is shown in Figure 4, where we take $`S`$ to be a sphere. Physicists call the numbers labelling the edges ‘spins’, alluding to the fact that we are using mathematics developed in the study of angular momentum. But here these numbers represent the lengths of the edges as measured in units of the Planck length. In this theory, length is a discrete rather than continuous quantity! Then we construct an operator $`\tilde{Z}(M):\tilde{Z}(S)\to \tilde{Z}(S^{\prime })`$ for each cobordism $`M:S\to S^{\prime }`$. Again we do this with the help of a background structure on $`M`$: we choose a way to chop it into tetrahedra, whose triangular faces must include among them the triangles of $`S`$ and $`S^{\prime }`$. To define $`\tilde{Z}(M)`$ it is enough to specify the transition amplitudes $`\langle \psi ^{\prime },\tilde{Z}(M)\psi \rangle `$ when $`\psi `$ and $`\psi ^{\prime }`$ are states in the bases given above. We do this as follows. The states $`\psi `$ and $`\psi ^{\prime }`$ tell us how to label the edges of triangles in $`S`$ and $`S^{\prime }`$ by spins. Consider any way to label the edges of $`M`$ by spins that is compatible with these labellings of edges in $`S`$ and $`S^{\prime }`$. We can think of this as a ‘quantum geometry’ for spacetime, since it tells us the shape of every tetrahedron in $`M`$. Using a certain recipe we can compute a complex number for this geometry, which we think of as its ‘amplitude’ in the quantum-mechanical sense.
We then sum these amplitudes over all geometries to get the total transition amplitude from $`\psi `$ to $`\psi ^{}`$. The reader familiar with quantum field theory may note that this construction is a discrete version of a ‘path integral’.

Now let me describe how we erase the background-dependence from this construction. Given an identity cobordism $`1_S:S\rightarrow S`$, the operator $`\stackrel{~}{Z}(1_S)`$ is usually not the identity, thus violating one of Atiyah’s axioms for a topological quantum field theory. However, the next best thing happens: this operator maps $`\stackrel{~}{Z}(S)`$ onto a subspace, and it acts as the identity on this subspace. This subspace, which we call $`Z(S)`$, is the Hilbert space of real physical interest in 3-dimensional quantum gravity. Amazingly, this subspace doesn’t depend on how we chopped $`S`$ into triangles. Even better, for any cobordism $`M:S\rightarrow S^{}`$, the operator $`\stackrel{~}{Z}(M)`$ maps $`Z(S)`$ to $`Z(S^{})`$. Thus it restricts to an operator $`Z(M):Z(S)\rightarrow Z(S^{})`$. Moreover, this operator $`Z(M)`$ turns out not to depend on how we chopped $`M`$ into tetrahedra. To top it all off, it turns out that the Hilbert spaces $`Z(S)`$ and operators $`Z(M)`$ satisfy Atiyah’s axioms. In short, we started by chopping space into triangles and spacetime into tetrahedra, but at the end of the day nothing depends on this choice of background structure.

It also turns out that the final theory has no local degrees of freedom: all the measurable quantities are global in character. For example, there is no operator on $`Z(S)`$ corresponding to the ‘length of a triangle’s edge’, but there is an operator corresponding to the length of the shortest geodesic wrapping around the space $`S`$ in a particular way.

These miracles are among the main reasons for interest in quantum topology. They only happen because of the carefully chosen recipe for computing amplitudes for spacetime geometries. This recipe is the real core of the whole construction. Sadly, it is a bit too technical to describe here, so the reader will have to turn elsewhere for details. I can say this, though: the reason this recipe works so well is that it neatly combines ideas from general relativity, quantum field theory, and a third subject that might at first seem unrelated — higher-dimensional algebra.

## 5 Higher-Dimensional Algebra

One of the most remarkable accomplishments of the early 20th century was to formalize all of mathematics in terms of a language with a deliberately impoverished vocabulary: the language of set theory. In Zermelo-Fraenkel set theory, everything is a set, the only fundamental relationships between sets are membership and equality, and two sets are equal if and only if they have the same elements. If in Zermelo-Fraenkel set theory you ask what sort of thing is the number $`\pi `$, the relationship ‘less than’, or the exponential function, the answer is always the same: a set! Of course one must bend over backwards to think of such varied entities as sets, so this formalization may seem almost deliberately perverse. However, it represents the culmination of a worldview in which things are regarded as more fundamental than processes or relationships.

More recently, mathematicians have developed a somewhat more flexible language, the language of category theory. Category theory is an attempt to put processes and relationships on an equal status with things.
A category consists of a collection of ‘objects’, and for each pair of objects $`x`$ and $`y`$, a collection of ‘morphisms’ from $`x`$ to $`y`$. We write a morphism from $`x`$ to $`y`$ as $`f:x\rightarrow y`$. We demand that for any morphisms $`f:x\rightarrow y`$ and $`g:y\rightarrow z`$, we can ‘compose’ them to obtain a morphism $`gf:x\rightarrow z`$. We also demand that composition be associative. Finally, we demand that for any object $`x`$ there be a morphism $`1_x`$, called the ‘identity’ of $`x`$, such that $`f1_x=f`$ for any morphism $`f:x\rightarrow y`$ and $`1_xg=g`$ for any morphism $`g:y\rightarrow x`$.

Perhaps the most familiar example of a category is $`\mathrm{Set}`$. Here the objects are sets and the morphisms are functions between sets. However, there are many other examples. Fundamental to quantum theory is the category $`\mathrm{Hilb}`$. Here the objects are complex Hilbert spaces and the morphisms are linear operators between Hilbert spaces. In Section 3 we also met a category important in differential topology, the category $`n\mathrm{Cob}`$. Here the objects are $`(n-1)`$-dimensional manifolds and the morphisms are cobordisms between such manifolds. Note that in this example, the morphisms are not functions! Nonetheless we can still think of them as ‘processes’ going from one object to another.

An important part of learning category theory is breaking certain habits one may have acquired from set theory. For example, in category theory one must resist the temptation to ‘peek into the objects’. Traditionally, the first thing one asks about a set is: what are its elements? A set is like a container, and the contents of this container are the most interesting thing about it. But in category theory, an object need not have ‘elements’ or any sort of internal structure. Even if it does, this is not what really matters! What really matters about an object is its morphisms to and from other objects. Thus category theory encourages a relational worldview in which things are described, not in terms of their constituents, but by their relationships to other things.

Category theory also downplays the importance of equality between objects. Given two elements of a set, the first thing one asks about them is: are they equal? But for objects in a category, we should ask instead whether they are isomorphic. Technically, the objects $`x`$ and $`y`$ are said to be ‘isomorphic’ if there is a morphism $`f:x\rightarrow y`$ that has an ‘inverse’: a morphism $`f^{-1}:y\rightarrow x`$ for which $`f^{-1}f=1_x`$ and $`ff^{-1}=1_y`$. A morphism with an inverse is called an ‘isomorphism’. An isomorphism between two objects lets us turn any morphism to or from one of them into a morphism to or from the other in a reversible sort of way. Since what matters about objects are their morphisms to and from other objects, specifying an isomorphism between two objects lets us treat them as ‘the same’ for all practical purposes.

Categories can be regarded as higher-dimensional analogs of sets. As shown in Fig. 5, we may visualize a set as a bunch of points, namely its elements. Similarly, we may visualize a category as a bunch of points corresponding to its objects, together with a bunch of 1-dimensional arrows corresponding to its morphisms. (For simplicity, I have not drawn the identity morphisms in Fig. 5.)

We may use the analogy between sets and categories to ‘categorify’ almost any set-theoretic concept, obtaining a category-theoretic counterpart. For example, just as there are functions between sets, there are ‘functors’ between categories.
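Since the definition is so spare, it can be transcribed almost literally into code. Here is a minimal sketch; the object and morphism names are invented, and composition is "free" (it just records the pair), which makes associativity automatic:

```python
# A finite category sketch: a morphism is a (name, source, target) triple.

objects = {"x", "y", "z"}
identity = {obj: ("1_" + obj, obj, obj) for obj in objects}

def compose(g, f):
    """Return g o f, defined only when target(f) = source(g)."""
    g_name, g_src, g_dst = g
    f_name, f_src, f_dst = f
    assert f_dst == g_src, "composition undefined"
    if f_name.startswith("1_"):   # identity law: g o 1_x = g
        return g
    if g_name.startswith("1_"):   # identity law: 1_y o f = f
        return f
    return (g_name + "o" + f_name, f_src, g_dst)

f = ("f", "x", "y")
g = ("g", "y", "z")
print(compose(g, f))              # ('gof', 'x', 'z')
print(compose(f, identity["x"]))  # f o 1_x = f
print(compose(identity["y"], f))  # 1_y o f = f
```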
A function from one set to another sends each element of the first to an element of the second. Similarly, a functor $`F`$ from one category to another sends each object $`x`$ of the first to an object $`F(x)`$ of the second, and also sends each morphism $`f:x\rightarrow y`$ of the first to a morphism $`F(f):F(x)\rightarrow F(y)`$ of the second. In addition, functors are required to preserve composition and identities: $$F(f^{}f)=F(f^{})F(f)$$ and $$F(1_x)=1_{F(x)}.$$ Functors are important because they allow us to apply the relational worldview discussed above, not just to objects in a given category, but to categories themselves. Ultimately what matters about a category is not its ‘contents’ — its objects and morphisms — but its functors to and from other categories!

| SET THEORY | CATEGORY THEORY |
| --- | --- |
| elements | objects |
| equations between elements | isomorphisms between objects |
| sets | categories |
| functions between sets | functors between categories |
| equations between functions | natural isomorphisms between functors |

Table 2. Analogy between set theory and category theory

We summarize the analogy between set theory and category theory in Table 2. In addition to the terms already discussed there is a concept of ‘natural isomorphism’ between functors. This is the correct analog of an equation between functions, but we will not need it here — I include it just for the sake of completeness.

The full impact of category-theoretic thinking has taken a while to be felt. Categories were invented in the 1940s by Eilenberg and Mac Lane for the purpose of clarifying relationships between algebra and topology. As time passed they became increasingly recognized as a powerful tool for exploiting analogies throughout mathematics. In the early 1960s they led to revolutionary — and still controversial — developments in mathematical logic. It gradually became clear that category theory was a part of a deeper subject, ‘higher-dimensional algebra’, in which the concept of a category is generalized to that of an ‘$`n`$-category’. But only by the 1990s did the real importance of categories for physics become evident, with the discovery that higher-dimensional algebra is the perfect language for topological quantum field theory.

Why are categories important in topological quantum field theory? The most obvious answer is that a TQFT is a functor. Recall from Section 3 that a TQFT maps each manifold $`S`$ representing space to a Hilbert space $`Z(S)`$ and each cobordism $`M:S\rightarrow S^{}`$ representing spacetime to an operator $`Z(M):Z(S)\rightarrow Z(S^{})`$, in such a way that composition and identities are preserved. We may summarize all this by saying that a TQFT is a functor $$Z:n\mathrm{Cob}\rightarrow \mathrm{Hilb}.$$ In short, category theory makes the analogy in Table 1 completely precise.

In terms of this analogy, many somewhat mysterious aspects of quantum theory correspond to easily understood facts about spacetime! For example, the noncommutativity of operators in quantum theory corresponds to the noncommutativity of composing cobordisms. Similarly, the all-important ‘adjoint’ operation in quantum theory, which turns an operator $`A:H\rightarrow H^{}`$ into an operator $`A^{\dagger }:H^{}\rightarrow H`$, corresponds to the operation of reversing the roles of past and future in a cobordism $`M:S\rightarrow S^{}`$, obtaining a cobordism $`M^{\dagger }:S^{}\rightarrow S`$.

But the role of category theory goes far beyond this. The real surprise comes when one examines the details of specific TQFTs.
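Continuing the earlier sketch, a functor into $`\mathrm{Hilb}`$ can be written the same way: pick a vector space for each object and a matrix for each generating morphism, and let functoriality dictate the rest. Everything concrete here (dimensions, matrix entries) is again invented for illustration:

```python
import numpy as np

# A toy functor F from the little category sketched above into Hilb:
# objects go to dimensions, generating morphisms to matrices,
# composites to matrix products.
F_obj = {"x": 2, "y": 2, "z": 3}
F_mor = {
    "f": np.array([[0., 1.],
                   [1., 0.]]),           # F(f): C^2 -> C^2
    "g": np.array([[1., 0.],
                   [0., 1.],
                   [1., 1.]]),           # F(g): C^2 -> C^3
}

# Functoriality fixes F on composites and identities:
F_mor["gof"] = F_mor["g"] @ F_mor["f"]   # F(g o f) = F(g) F(f)
F_id_x = np.eye(F_obj["x"])              # F(1_x) = identity matrix

assert np.allclose(F_mor["f"] @ F_id_x, F_mor["f"])
print(F_mor["gof"])
```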
In Section 4 I sketched the construction of 3-dimensional quantum gravity, but I left out the recipe for computing amplitudes for spacetime geometries. Thus the most interesting features of the whole business were left as unexplained ‘miracles’: the background-independence of the Hilbert spaces $`Z(S)`$ and operators $`Z(M)`$, and the fact that they satisfy Atiyah’s axioms for a TQFT. In fact, the recipe for amplitudes and the verification of these facts make heavy use of category theory. The same is true for all other theories for which Atiyah’s axioms have been verified. For some strange reason, it seems that category theory is precisely suited to explaining what makes a TQFT tick.

For the last 10 years or so, various researchers have been trying to understand this more deeply. Much remains mysterious, but it now seems that TQFTs are intimately related to category theory because of special properties of the category $`n\mathrm{Cob}`$. While $`n\mathrm{Cob}`$ is defined using concepts from differential topology, a great deal of evidence suggests that it admits a simple description in terms of ‘$`n`$-categories’. I have already alluded to the concept of ‘categorification’ — the process of replacing sets by categories, functions by functors and so on, as indicated in Table 2. The concept of ‘$`n`$-category’ is obtained from the concept of ‘set’ by categorifying it $`n`$ times! An $`n`$-category has objects, morphisms between objects, 2-morphisms between morphisms, and so on up to $`n`$-morphisms, together with various composition operations satisfying various reasonable laws. Increasing the value of $`n`$ allows an ever more nuanced treatment of the notion of ‘sameness’. A 0-category is just a set, and in a set the elements are simply equal or unequal. A 1-category is a category, and in this context we may speak not only of equal but also of isomorphic objects. Unfortunately, this careful distinction between equality and isomorphism breaks down when we study the morphisms. Morphisms in a category are either the same or different; there is no concept of isomorphic morphisms. In a 2-category this is remedied by introducing 2-morphisms between morphisms. Unfortunately, in a 2-category we cannot speak of isomorphic 2-morphisms. To remedy this we must introduce the notion of 3-category, and so on.

We may visualize the objects of an $`n`$-category as points, the morphisms as arrows going between these points, the 2-morphisms as 2-dimensional surfaces going between these arrows, and so on. There is thus a natural link between $`n`$-categories and $`n`$-dimensional topology. Indeed, one reason why $`n`$-categories are a bit formidable is that calculations with them are most naturally done using $`n`$-dimensional diagrams. But this link between $`n`$-categories and $`n`$-dimensional topology is precisely why there may be a nice description of $`n\mathrm{Cob}`$ in the language of $`n`$-categories. Dolan and I have proposed such a description, which we call the ‘cobordism hypothesis’. Much work remains to be done to make this hypothesis precise and prove or disprove it. Proving it would lay the groundwork for understanding topological quantum field theories in a systematic way. But beyond this, it would help us towards a purely algebraic understanding of ‘space’ and ‘spacetime’ — which is precisely what we need to marry them to the quantum-mechanical notions of ‘state’ and ‘process’.
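One can push the earlier sketches one level up to see what the bookkeeping of a 2-category looks like: 2-morphisms are arrows between parallel morphisms, and it then becomes meaningful to ask whether two morphisms are isomorphic rather than equal. The fragment below (all names invented) records just enough structure to make that one point:

```python
# Fragment of a 2-category: 1-morphisms f, f' are parallel (same source
# and target), and 2-morphisms go between them. "Isomorphic 1-morphisms"
# then means: related by 2-morphisms going both ways.

one_morphisms = {"f": ("x", "y"), "f_prime": ("x", "y")}
two_morphisms = {"alpha": ("f", "f_prime"), "alpha_inv": ("f_prime", "f")}

def parallel(m1, m2):
    return one_morphisms[m1] == one_morphisms[m2]

def isomorphic(m1, m2):
    # crude check: some 2-morphism goes each way between m1 and m2
    fwd = any(st == (m1, m2) for st in two_morphisms.values())
    bwd = any(st == (m2, m1) for st in two_morphisms.values())
    return parallel(m1, m2) and fwd and bwd

print(isomorphic("f", "f_prime"))   # True; "equal" would be too strong a question
```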
## 6 4-Dimensional Quantum Gravity

How important are the lessons of topological quantum field theory for 4-dimensional quantum gravity? This is still an open question. Since TQFTs lack local degrees of freedom, they are at best a warmup for the problem we really want to tackle: constructing a background-free quantum theory with local degrees of freedom propagating causally. Thus, even though work on TQFTs has suggested new ideas linking quantum theory and general relativity, these ideas may be too simplistic to be useful in real-world physics.

However, physics is not done by sitting on one’s hands and pessimistically pondering the immense magnitude of the problems. For decades our only insights into quantum gravity came from general relativity and quantum field theory on spacetime with a fixed background metric. Now we can view it from a third angle, that of topological quantum field theory. Surely it makes sense to invest some effort in trying to combine the best aspects of all three theories! And indeed, in the last few years various people have begun to do just this, largely motivated by tantalizing connections between topological quantum field theory and loop quantum gravity.

In loop quantum gravity, the preliminary Hilbert space has a basis given by ‘spin networks’ — roughly speaking, graphs with edges labelled by spins. We now understand quite well how a spin network describes a quantum state of the geometry of space. But spin networks are also used to describe states in TQFTs, where they arise naturally from considerations of higher-dimensional algebra. For example, in 3-dimensional quantum gravity the state shown in Fig. 4 can also be described using the spin network shown in Fig. 6.

Using the relationships between 4-dimensional quantum gravity and topological quantum field theory, researchers have begun to formulate theories in which the quantum geometry of spacetime is described using ‘spin foams’ — roughly speaking, 2-dimensional structures made of polygons joined at their edges, with all the polygons being labelled by spins. The most important part of a spin foam model is a recipe assigning an amplitude to each spin foam. Much as Feynman diagrams in ordinary quantum field theory describe processes by which one collection of particles evolves into another, spin foams describe processes by which one spin network evolves into another. Indeed, there is a category whose objects are spin networks and whose morphisms are spin foams! And like $`n\mathrm{Cob}`$, this category appears to arise very naturally from purely $`n`$-categorical considerations.

In the most radical approaches, the concepts of ‘space’ and ‘state’ are completely merged in the notion of ‘spin network’, and similarly the concepts of ‘spacetime’ and ‘process’ are merged in the notion of ‘spin foam’, eliminating the scaffolding of a spacetime manifold entirely. To me, at least, this is a very appealing vision. However, there are a great many obstacles to overcome before we have a full-fledged theory of quantum gravity along these lines. Let me mention just a few of the most pressing. First there is the problem of developing quantum theories of Lorentzian rather than Riemannian metrics. Second, and closely related, we need to better understand the concept of ‘causal structure’ in the context of spin foam models. Only the work of Markopoulou and Smolin has addressed this point so far. Third, there is the problem of formulating physical questions in these theories in such a way that divergent sums are eliminated.
And fourth, there is the problem of developing computational techniques to the point where we can check whether these theories approximate general relativity in the limit of large distance scales — i.e., distances much greater than the Planck length. Starting from familiar territory we have sailed into strange new waters, but only if we circle back to the physics we know will the journey be complete. ### Acknowledgements Conversations and correspondence with many people have helped form my views on these issues. I cannot list them all, but I especially want to thank Abhay Ashtekar, John Barrett, Louis Crane, James Dolan, Louis Kauffman, Kirill Krasnov, Carlo Rovelli, and Lee Smolin.
# Note about a second “evidence” for a WIMP annual modulation

## 1 Introduction

This note is intended to contribute to a clarification about a claimed “evidence” by the DAMA group of an annual modulation of the counting rate of a Dark Matter NaI(Tl) detector as due to a neutralino (SUSY-LSP) Dark Matter candidate. A first “evidence” had already given rise to a note of comments. As the information given in this paper refers only to the result of a theoretically constrained fit, it is difficult to estimate the relevance of the claimed evidence. Answers to the following 5 questions would be essential to enable the scientific community to correctly appreciate the relevance of the claimed effect to the search for WIMPs, which has been under way for several years in a number of experiments around the world.

## 2 Questions

1) What are the experimental values, and the corresponding experimental errors, of the modulation amplitude Sm? This amplitude can be calculated from the daily counting rates, without any fit or maximum-likelihood procedure, for each energy interval. The experimental errors should include both statistical and systematic contributions and be given separately. Statistical errors for a given exposure can be calculated with good approximation by anybody: in this case they are definitely larger than the errors from the maximum-likelihood procedure presented in the last DAMA paper (see figure 1 of this note). How can the strong difference between the distributions of the modulation amplitudes Sm in the two papers be explained (see figure 2 of this note)? The only difference between the two approaches is the explicit presence of the WIMP hypothesis in the maximum-likelihood fit giving the Sm values in the second paper, while in the previous paper the Sm values were obtained before introducing this theoretical hypothesis.

2) What is the experimental distribution of the modulation amplitude Sm for the individual NaI modules? We recall that for the first data set (1/3 of the new statistics), this distribution, given in one of the DAMA reports, was very unlike a “physical” distribution (there was an effect for 3 detectors while the 6 others did not show any deviation), as stressed in a note of comments about this paper.

3) Concerning possible systematic effects, can the “separation plot” between physical events and PMT noise be shown for the 2-6 keV energy interval relevant to the claimed effect (and not for the large 2-20 keV interval as in figure 1 of ref. or figure 16 of ref.)? How can the strange behaviour of the 2-6 keV energy distributions of the nine crystals (figure 2 of ref.) be explained? The total counting rate in the 2-3 keV bin is typically a factor 2-3 smaller than the counting rates in the following bins. How can the even larger drop of the residual background (obtained after subtraction of the “signal” coming from the maximum-likelihood fit) be accounted for?

* 2-3 keV : 0.5 evts/keV/kg/day
* 3-4 keV : 1.8 evts/keV/kg/day
* 4-5 keV : 1.9 evts/keV/kg/day
* 5-6 keV : 2.0 evts/keV/kg/day

4) The 2-6 keV region is affected by PMT noise which is partially rejected by software cuts. How is the stability of the efficiency correction and of the contamination level proven to be no worse than 1$`\%`$, as absolutely needed for the claimed effect, in a region where the efficiency is strongly varying with energy (figure 17 of ref.)?
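Before the last question, here is the kind of back-of-the-envelope estimate that question 1 appeals to. For daily rates sampled roughly uniformly over the year, the cosine amplitude follows from a simple projection, with no likelihood machinery, and the variance of a fitted cosine amplitude is about twice that of the mean. All numerical values below (exposure, bin width, mean rate) are illustrative, not the actual DAMA figures:

```python
from math import sqrt, cos, pi

# Sm from daily rates by direct projection, plus its statistical error.
# rate_i in evts/keV/kg/day on day t_i; omega = 2 pi / year. For N roughly
# uniform samples, Sm = (2/N) sum rate_i cos(omega (t_i - t0)), and
# var(Sm) ~ 2 var(rate_i)/N, i.e. twice the error on the mean.

def Sm_and_error(rates, times, t0=152.5, exposure_kg_day=1.5e4, dE_keV=1.0):
    omega = 2.0 * pi / 365.25          # t0 ~ June 2, the expected WIMP phase
    n = len(rates)
    Sm = (2.0 / n) * sum(r * cos(omega * (t - t0)) for r, t in zip(rates, times))
    mean_rate = sum(rates) / n
    # Poisson-limited error from the total counts in the energy bin:
    sigma = sqrt(2.0 * mean_rate / (exposure_kg_day * dE_keV))
    return Sm, sigma

# toy data: flat 2 evts/keV/kg/day over one year, assumed 15000 kg day
rates = [2.0] * 365
times = list(range(365))
print(Sm_and_error(rates, times))      # Sm ~ 0, sigma ~ 0.016 evts/keV/kg/day
```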
5) In the presented maximum-likelihood procedure, which gives as output simultaneously an “evidence” for a rate modulation and the values of the two basic parameters characterising the neutralino candidate (mass and cross section on the proton), what is the influence of the applied constraint on the mass (neutralino mass > 25 GeV)? In other words, if this constraint is removed, does at least one additional solution show up in this maximum-likelihood fit?

References

1. DAMA collaboration, INFN/AE/98/20, ROM2F/98/34
2. R. Bernabei et al., ROM2F/97/33, Phys. Lett. B 424 (1998) 195-201
3. G. Gerbier et al., astro-ph/9710181
4. K. Freese, J. Frieman and A. Gould, Phys. Rev. D 37 (1988) 3388
5. DAMA collaboration, INFN/AE/98/23 and ROM2F/98/27
# Propagation of warps in moderately thick disks

## 1 Introduction

Warps are bending waves of disk galaxies, observed most often in the outer parts of edge-on spiral galaxies in the 21 cm line of neutral hydrogen. Their physics is very similar to that of spiral density waves, from which they are distinguished by their opposite parity: they involve vertical rather than horizontal motions (though we shall see that horizontal motions in warps can be important as well) and antisymmetric (in the vertical coordinate) rather than symmetric perturbed potentials. Their propagation has been described in the limit of very thin disks, and found to explain the main observational properties. Warps have also been considered as a possible source of angular momentum transfer in accretion disks. On the other hand, contrary to spirals, warps are not unstable, i.e. they cannot form spontaneously in an isolated disk. Since warps propagate radially in the disk, a continuous excitation mechanism must be found to explain their frequent occurrence in galactic disks; various mechanisms have been considered (see e.g. Binney, 1992):

* tidal excitation of warps (Hunter and Toomre 1969) by a companion galaxy; this cannot explain the observation of isolated warped galaxies, and seems too weak to explain the observed amplitude of galactic warps. This mechanism has recently attracted strong interest in the context of accretion disks (Terquem 1993).
* The bending of the trivial tilt mode by an elongated halo whose axis is misaligned with the axis of the galactic disk (Sparke 1984, see also Sparke and Casertano 1988). Although this hypothesis can describe very accurately the observed shape of prototype warps, it raises the problem of the origin and maintenance of such a misalignment. Recently Dubinski and Kuijken (1995) found that the axes of the halo and the disk should rapidly realign.

This is the first in a series of papers where we will present a non-linear excitation mechanism for warps from spiral waves in disk galaxies. As a preliminary to this non-linear work we will concentrate here on the linear properties of warps; we find important differences with the classical theory when we take into account the finite thickness of the disk. The reason is that the thin-disk approximation used in the classical theory relies on an underlying hypothesis, that the whole gas column at a given location in the disk can move up or down solidly, without changing its vertical profile; this would require that the hydrostatic vertical equilibrium can evolve adiabatically, i.e. that the sound time through the disk, $`\tau _s=H/a`$ where $`H`$ is the disk scale height and $`a`$ the sound speed, be shorter than the typical time scale of the warp, which is of the order of the rotation time $`\tau _R=\mathrm{\Omega }^{-1}`$. On the other hand the vertical hydrostatic equilibrium implies $`\tau _s\sim \tau _R`$ if the vertical gravity is dominated by a central object, or (a more appropriate condition for the outer galactic disk) if it is dominated by the self-gravity of the gas or the stars, provided that the Toomre parameter $`Q`$ is of the order of one.

Finite thickness and compressibility effects have been intensively studied for spiral waves (Shu 1968, Vandervoort 1970, Romeo 1992, 1994). As an important result, they found in particular that the critical Toomre parameter $`Q_{crit}`$ for disk stability is lower than the value of unity predicted by the thin-disk analysis, by at most a factor of about 2.
Thus finite thickness and compressibility effects must also be taken into account for a realistic description of the warps. We find that they can substantially modify the dispersion relation and that they introduce strong horizontal motions, comparable with the vertical one. Our method is similar to that of Shu (1968) for spiral waves, apart from modifications for taking into account the opposite parity of warps.

On the other hand galactic disks are characterized by distinct populations, with different temperatures and scale heights, contributing separately to the vertical potential well: one has stars and gas, and also the halo whose contribution is believed to dominate the gravitational potential. In a first approach of the complexities introduced by these distinct populations, we present an analysis of warps in a disk composed of two fluids with distinct temperatures: assuming that, as for spiral waves, stars can be conveniently represented as a fluid if one stays away from the Lindblad resonances, this allows us to discuss the case of warps in a disk of gas and stars, or in a disk embedded in a flattened halo. We find that, in addition to the usual warp wave, a new one exists which might correspond to the “corrugations” observed in many galactic disks (Quiroga et al. 1977, Florido et al. 1991). Finally we will also present an extension of this analysis to a weak amplification mechanism for $`m>1`$ warps. This is of very limited interest for galactic disks, where the amplification time would at best be a sizable fraction of the Hubble time, but might apply to accretion disks.

## 2 Notations and formalism

The dispersion relation of warps in an infinitely thin disk is (Hunter and Toomre 1969, Binney and Tremaine 1987) $$\stackrel{~}{\omega }^2=\mu ^2+2\pi G\mathrm{\Sigma }q$$ (1) where $`\stackrel{~}{\omega }=\omega -m\mathrm{\Omega }(r)`$ is the warp frequency in the frame of the matter, $`m`$ is the azimuthal pattern number, $`\mu `$ is the vertical oscillation frequency of the disk, $`\mathrm{\Sigma }`$ is the mass per surface unit of the disk, and $`q`$ the modulus of the warp’s horizontal wavevector.

We want to derive the dispersion relation for a moderately thick disk, i.e. a disk which is geometrically thin ($`H/r\ll 1`$, where $`H`$ is the disk thickness), but where the ratio $`\tau _S/\tau _R`$ can be of the order of one. This will be done in a manner similar to the usual thin disk derivation, but we now have to consider the three projections (radial, azimuthal and vertical) of the Euler equation, and the solution of the Poisson equation becomes more complex.

### 2.1 Notations

Let us first introduce some notations. We use the shearing sheet model to describe differential rotation (Goldreich and Lynden-Bell 1965). In this model one considers an annulus around corotation as a cartesian slab with $`x=r-r_0`$ (where $`r_0`$ is the corotation radius where $`\stackrel{~}{\omega }=0`$) and $`y=r\vartheta `$. The radial variations of all equilibrium quantities are neglected, except the rotation speed $`V_0(x)e_y`$ which varies linearly, $`V_0=r_0\mathrm{\Omega }(r_0)+2Ax`$ where $`A=\frac{1}{2}r\frac{d\mathrm{\Omega }}{dr}`$ is Oort’s first constant. We also restrict ourselves to the case of isothermal equilibrium and perturbations, with sound speed $`a`$. In what follows we denote by $`U`$, $`V`$ and $`W`$ the components of the perturbed velocity, $`\rho `$ the perturbed density and $`\varphi `$ the perturbed potential.
We call $`k_x`$ and $`k_y`$ the projections of the wavevector of the warp we study, $`q=(k_x^2+k_y^2)^{1/2}`$ its modulus, $`\stackrel{~}{\omega }`$ its frequency. The connection between the cylindrical geometry and the shearing sheet is given by $`k_y=m/r_0`$. Unperturbed quantities are denoted by the same symbols with a subscript 0.

### 2.2 The formalism

The dispersion relation is derived in a WKB approximation, whose validity condition is the “tightly wound” approximation, $`k_x\gg k_y`$. In section 5 we will give a numerical solution, independent of this approximation. We limit ourselves to a linear analysis of the propagation of warp waves, i.e. perturbed quantities are infinitesimal. With these restrictions, it is possible to characterize a warp wave by parity considerations, since the parity of the equilibrium and of the perturbation equations allows one to separate symmetric and antisymmetric solutions. In the former, corresponding to spiral density waves, all perturbed quantities are even in $`z`$, except $`W`$ which is odd. The opposite holds for antisymmetric solutions, corresponding to warps.

### 2.3 The basic equations

We start from the continuity equation and the horizontal projections of the Euler equation, for perturbations varying as $`\mathrm{exp}[i(k_xx+k_yy-\stackrel{~}{\omega }t)]`$: $$-i\stackrel{~}{\omega }\rho +\rho _0(ik_xU+ik_yV)+\frac{\partial }{\partial z}(\rho _0W)=0$$ (2) $$-i\stackrel{~}{\omega }U-2\mathrm{\Omega }V=-ik_x\left(\varphi +a^2\frac{\rho }{\rho _0}\right)$$ (3) $$-i\stackrel{~}{\omega }V+\frac{\kappa ^2}{2\mathrm{\Omega }}U=-ik_y\left(\varphi +a^2\frac{\rho }{\rho _0}\right)$$ (4) where $`\kappa =(4\mathrm{\Omega }^2+4\mathrm{\Omega }A)^{1/2}`$ is the epicyclic frequency. Solving (3) and (4) for $`U`$ and $`V`$ (the determinant of this system is $`\kappa ^2-\stackrel{~}{\omega }^2`$) and using the WKB assumption ($`k_x\gg k_y`$) to drop the cross terms in $`k_xk_y`$, we get: $$k_xU+k_yV=\frac{q^2\stackrel{~}{\omega }}{\stackrel{~}{\omega }^2-\kappa ^2}\left(\varphi +a^2\frac{\rho }{\rho _0}\right)$$ Thus the continuity equation (2) can be written as: $$\frac{\partial \alpha }{\partial z}=\left[-\stackrel{~}{\omega }s+\frac{q^2\stackrel{~}{\omega }}{\stackrel{~}{\omega }^2-\kappa ^2}(\varphi +a^2s)\right]\rho _0(z)$$ (5) where $$\alpha =i\rho _0(z)W$$ $$s=\frac{\rho }{\rho _0}$$ Our basic set of equations also includes the vertical projection of the Euler equation: $$\frac{\partial s}{\partial z}=\frac{1}{a^2}\left[\frac{\stackrel{~}{\omega }\alpha }{\rho _0}-\frac{\partial \varphi }{\partial z}\right]$$ (6) and the Poisson equation: $$\frac{\partial ^2\varphi }{\partial z^2}=4\pi G\rho _0s+q^2\varphi $$ (7)

In equation (5) the second term in the bracket represents the horizontal part of the divergence of the velocity field, i.e. the compressional contribution which is the main new physical effect introduced in this paper. For $`\stackrel{~}{\omega }\sim \mathrm{\Omega }\sim \kappa `$, and unless $`\stackrel{~}{\omega }`$ is very close to $`\kappa `$ (i.e. at the Lindblad resonances), this term is of order $`q^2a^2/\kappa ^2`$ compared to the first term; this is of the order of $`q^2H^2`$, if the disk is at hydrostatic equilibrium, with a gravity either dominated by a central object or (if Toomre’s parameter $`Q=\kappa a/\pi G\mathrm{\Sigma }\sim 1`$) by the local gravity of the gas, so that $`H\sim a/\kappa `$. We will return to this point in the following sub-section. Restricting ourselves to wavelengths larger than the disk thickness, we will use $`qH`$ as the expansion parameter in the following analysis. On the other hand, we will ignore here the case, which might apply to galaxies, where gravity is dominated by a massive halo, so that $`H\ll a/\kappa `$. If the halo is passive, i.e. does not participate in the motion, the disk is indeed thin in the sense that $`\tau _S\ll \tau _R`$.
On the other hand, if the halo is flattened it presumably participates in the differential rotation and can also be involved dynamically in the warp. Its effect will be discussed below, in the section devoted to warps in a two-component disk. Even in the case of a passive halo, we will find new effects associated with corrugations of the disk.

### 2.4 Consistent model of geometrically thin disk

In the following we will make extensive use of consistent models of disk vertical density and potential profiles. We build them as coupled solutions of the hydrostatic equilibrium and the Poisson equations, in the hypothesis of a geometrically thin disk ($`H/r\ll 1`$). More precisely, let us construct ex nihilo such a consistent disk. In a first guess, we choose the density profile $`\rho _0(z)`$. The vertical hydrostatic equilibrium gives the gravitational potential by: $$\rho _0\frac{\partial \varphi _0}{\partial z}=-a^2\frac{\partial }{\partial z}\rho _0(z)$$ On the other hand the Poisson equation gives: $$\frac{1}{r}\frac{\partial }{\partial r}\left(r\frac{\partial }{\partial r}\right)\varphi _0=4\pi G\rho _0-\frac{\partial ^2\varphi _0}{\partial z^2}$$ where we have written what we know (i.e. what is imposed by our choice of $`\rho _0`$) in the R.H.S. Thus the Poisson equation gives the behavior of the L.H.S., which is the radial part of the laplacian, hereafter denoted by $`\mathrm{\Delta }_r`$: $$\mathrm{\Delta }_r=\frac{1}{r}\frac{\partial }{\partial r}\left(r\frac{\partial }{\partial r}\right)$$ It is an easy matter to check that this radial laplacian is equal to: $$\mathrm{\Delta }_r\varphi _0=\kappa ^2-2\mathrm{\Omega }^2=-\mu ^2$$ (8) Note that $`\mu ^2`$, that we have called the vertical oscillation frequency, does not involve any term in $`4\pi G\rho _m`$. Indeed $`4\pi G\rho _m`$ appears when considering the vertical oscillations of a test particle in the rest potential of the disk, whereas we are concerned here with global oscillations which involve vertical motion of the potential well itself (see Hunter and Toomre, 1969, for a derivation of equation (8)). Since the density profile is arbitrary, so is $`\mu ^2`$. But in a geometrically thin disk $`\mathrm{\Omega }`$ and $`\kappa `$ must be independent of $`z`$. Thus $`\mathrm{\Delta }_r\varphi _0`$ must also be constant with respect to $`z`$.

We can thus summarize the definition of a consistent disk equilibrium as a disk whose density profile obeys the equation $$a^2\frac{\partial }{\partial z}\left(\frac{1}{\rho _0}\frac{\partial \rho _0}{\partial z}\right)=\mathrm{\Delta }_r\varphi _0-4\pi G\rho _0$$ (9) where both $`a`$ and $`\mathrm{\Delta }_r\varphi _0`$ are independent of $`z`$. By choosing them we can construct a continuous palette of vertical equilibria, from a keplerian disk (where the vertical gravity is dominated by the mass of a central object, i.e. where the first term is dominant in the R.H.S. of equation (9)) to self-gravitating ones, where gravity is dominated by the local distribution of matter (i.e. where the second term is dominant in equation (9)). One easily obtains in the first case a gaussian profile $`\rho _0(z)=\rho _m\mathrm{exp}(-z^2/H^2)`$ where $`\mu ^2=-\mathrm{\Delta }_r\varphi _0=2a^2/H^2`$ (and in that case, $`\mu `$ and $`\kappa `$ are equal to $`\mathrm{\Omega }`$). The second case straightforwardly leads to: $`\rho _0(z)=\rho _m/\mathrm{cosh}^2(z/H)`$ where $`H=a/\sqrt{2\pi G\rho _m}`$.
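As an illustration, equation (9) is straightforward to integrate numerically between these two limits. The sketch below is ours (scaled units $`a=G=\rho _m=1`$; the function and variable names are invented): it recovers the sech-squared profile when the local-gravity term dominates and a nearly gaussian one when the $`\mathrm{\Delta }_r\varphi _0`$ term does.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate equation (9), a^2 (ln rho0)'' = Delta_r phi0 - 4 pi G rho0,
# from the midplane with rho0(0) = rho_m and rho0'(0) = 0 (scaled units
# a = G = rho_m = 1). Delta_r phi0 = -mu^2 selects the regime.

def consistent_profile(delta_r_phi0, zmax=3.0, npts=7):
    def rhs(z, y):                    # y = [u, du/dz], with u = ln rho0
        u, du = y
        return [du, delta_r_phi0 - 4.0 * np.pi * np.exp(u)]
    sol = solve_ivp(rhs, (0.0, zmax), [0.0, 0.0], dense_output=True,
                    rtol=1e-8)
    z = np.linspace(0.0, zmax, npts)
    return z, np.exp(sol.sol(z)[0])

# Self-gravitating limit (Delta_r phi0 = 0): should match
# 1/cosh^2(z/H) with H = 1/sqrt(2 pi) ~ 0.40.
z, rho = consistent_profile(0.0)
print(np.round(rho, 4))
print(np.round(1.0 / np.cosh(z / (1.0 / np.sqrt(2.0 * np.pi)))**2, 4))

# Strongly "keplerian" case (large |Delta_r phi0|): nearly gaussian.
print(np.round(consistent_profile(-20.0)[1], 4))
```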
Since the construction of a consistent disk implies the choice of two independent parameters (the sound speed $`a`$ and $`\mu ^2`$), the set of consistent disk equilibria is a two-dimensional manifold where any independent couple of parameters can be chosen to “label” a particular choice; we will use below $`\mu /\kappa `$ and the Toomre parameter $`Q=a\kappa /\pi G\mathrm{\Sigma }`$. Then obviously the notion of consistent disk equilibrium does not imply any constraint on the Toomre parameter $`Q`$, or on the ratio $`\tau _S/\tau _R`$ discussed in the introduction. However, in order to build realistic disk equilibria, one cannot choose $`\mu /\kappa `$ and $`Q`$ arbitrarily: indeed if $`\mu /\kappa `$ is close to one, meaning that the disk is nearly keplerian and dominated by the gravity of the central regions, the $`Q`$ parameter must be large enough to ensure that the local gravity is low enough. On the other hand, if $`\mu /\kappa `$ is not close to one, $`Q`$ must not be too large, so as to enable the local gravity to play a role in the equilibrium of the disk. Let us also recall, as already mentioned, that we will not discuss here the contribution of a halo to the vertical gravity and thus to the frequency $`\mu `$. This discussion is deferred to the consideration of two-component disks in section 4.

In the following we study the dispersion relation of warps for various types of disks, from self-gravitating to keplerian. We label them by the quantity $`\mu /\kappa `$. Of course we have checked that the Toomre parameter of these disks is realistic in the sense that it is low for self-gravitating disks and very large in the case of the keplerian disk.

## 3 The dispersion relation of warps

We write the dispersion relation by solving the linear system (5–7) for $`\alpha `$, $`s`$ and $`\varphi `$, subject to the appropriate boundary conditions. We have four such conditions, two at the mid plane ensuring the parity of the solution: $$s=0$$ $$\varphi =0$$ and two at infinite $`z`$: $$\alpha =0$$ $$\frac{\partial \varphi }{\partial z}=-q\varphi $$ which ensure that the mathematical solution which satisfies the warp parity is also a physical one, with an exponential decrease of the perturbed potential, and a vanishing momentum flux due to the rarefaction of matter. A fifth condition corresponds to the amplitude of the solution (arbitrary since we consider linearized equations). Since we solve a fourth-order system, these conditions can be fulfilled only if the parameters $`q`$ and $`\stackrel{~}{\omega }`$ obey a condition, which is the dispersion relation. This dispersion relation can also be seen as a compatibility condition of an overdetermined differential system which has both Dirichlet’s and Neumann’s boundary conditions, as explained in Bertin and Casertano (1982), who investigated the dispersion relation of bending waves in incompressible thick disks with constant thickness.

### 3.1 Numerical determination of the dispersion relation

We determine numerically the dispersion relation by solving the system (5–7) and looking for the values of $`q`$ and $`\stackrel{~}{\omega }`$ which allow the solution to obey the boundary conditions. Our procedure is the following: for a given value of $`q`$ and choosing an approximate value of $`\stackrel{~}{\omega }`$ we integrate the system (5–7) from large $`z`$ to zero.
Obeying the boundary conditions at infinity means that we start with $`\alpha =0`$ and $`\partial \varphi /\partial z=-q\varphi `$ (note that an error in the latter condition will give a projection on the solution that diverges at large $`z`$, so that as we integrate toward decreasing $`z`$ its effect will be exponentially small); we choose $`s=1`$ and we still have one free parameter, the value of $`\varphi `$. We do this twice, starting with different values of $`\varphi `$, and resulting in two different sets of values of $`s`$ and $`\varphi `$ at $`z=0`$. We can then find a linear combination of these two solutions to form a third one, which still obeys the conditions at infinity and now also has $`s(0)=0`$. Then a Newton method allows us to vary the value of $`\stackrel{~}{\omega }`$, for a given $`q`$, until the last condition, $`\varphi (0)=0`$, is also satisfied. This results in $`\stackrel{~}{\omega }(q)`$, the dispersion relation.

### 3.2 Analytical derivation of the dispersion relation

We recall that equation (1) has been obtained for an infinitely thin sheet of matter involving only vertical motion. Thus this dispersion relation does not take into account:

* compressional effects due to the finite sound speed (the finite ratio $`\tau _S/\tau _R`$). We will emphasize this point further below, when we derive the analytical dispersion relation for a moderately thick disk;
* horizontal motions;
* geometric effects, often approximated by “softened gravity” models for spiral density waves.

Analytical calculations for an incompressible disk in which horizontal motion is suppressed show, for any density profile, that $`\stackrel{~}{\omega }`$ tends to an asymptotic value $`\stackrel{~}{\omega }_{\mathrm{\infty }}`$ when $`qH`$ becomes large, which is obviously not the case with equation (1). We present a method which allows us to derive analytically an approximate dispersion relation, taking into account the compressional effects and the finite thickness for small but finite values of $`qH`$.

It must be emphasized that the thin-disk dispersion relation gives $$q\sim \mathrm{\Omega }^2/2\pi G\mathrm{\Sigma }\sim \mathrm{\Omega }/a$$ for $`\stackrel{~}{\omega }\sim \mathrm{\Omega }`$, and a disk which has either a low self-gravity, or a strong one with the Toomre parameter $`Q=\kappa a/\pi G\mathrm{\Sigma }\sim 1`$ (the latter case being more relevant for the outer galactic disk). Thus $`qH\sim \mathrm{\Omega }H/a=\tau _s/\tau _R`$, the ratio of the sound time through the disk to the dynamical time: the thin disk approximation ($`H\rightarrow 0`$) involves an assumption that the sound time through the disk is small, so that the vertical hydrostatic equilibrium can be maintained throughout the evolution of the waves of interest and compressional effects can be neglected. On the contrary the new effects we find here, which are associated with the finite value of $`qH`$, result from the fact that this equilibrium cannot be maintained perfectly.

Our method, which aims at eliminating the $`z`$ dependence of the variables, consists in making an integral over $`z`$ which represents the amplitude of the warp (the mean displacement from the galactic plane), and transforming it by using equations (5–7). The main difficulty comes from the potential. Here we express it by using Green’s functions, which allows us to separate the vertical structure of the solution without having to average over $`z`$, as is classically done. We show that it is possible to modify the dispersion relation by a second order term in $`qH`$.
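Returning for a moment to the numerical procedure of section 3.1, the whole shooting scheme fits in a short script. The sketch below is our schematic re-implementation, not the authors' code: it uses the fully self-gravitating profile of section 2.4 in scaled units ($`a=G=\rho _m=1`$, so $`\mathrm{\Sigma }\approx 0.80`$), a secant iteration in place of the Newton step, and the sign conventions of equations (5)–(7) as restored above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Shooting method for the one-fluid system (5)-(7): integrate down from
# large z with alpha = 0 and dphi/dz = -q phi, combine two runs to enforce
# s(0) = 0, then iterate on omega-tilde until phi(0) = 0.

H = 1.0 / np.sqrt(2.0 * np.pi)            # sech^2 disk, a = G = rho_m = 1
rho0 = lambda z: 1.0 / np.cosh(z / H)**2
kappa, q, zmax = 1.0, 1.0, 8.0 * H

def top_values(omega, phi_inf):
    def rhs(z, y):
        alpha, s, phi, dphi = y
        big = phi + s                      # (phi + a^2 s) with a = 1
        dalpha = (-omega * s
                  + q**2 * omega / (omega**2 - kappa**2) * big) * rho0(z)
        ds = omega * alpha / rho0(z) - dphi
        ddphi = 4.0 * np.pi * rho0(z) * s + q**2 * phi
        return [dalpha, ds, dphi, ddphi]
    y0 = [0.0, 1.0, phi_inf, -q * phi_inf]
    return solve_ivp(rhs, (zmax, 0.0), y0, rtol=1e-9).y[:, -1]

def phi0(omega):                           # phi(0) once s(0) = 0 is enforced
    ya, yb = top_values(omega, 1.0), top_values(omega, 2.0)
    c = -ya[1] / (yb[1] - ya[1])
    return (ya + c * (yb - ya))[2]

w0, w1 = 2.2, 2.6                          # seeds suggested by relation (10)
f0 = phi0(w0)
for _ in range(25):                        # secant stand-in for Newton
    f1 = phi0(w1)
    if abs(f1 - f0) < 1e-14:
        break
    w0, w1, f0 = w1, w1 - f1 * (w1 - w0) / (f1 - f0), f1
print("omega-tilde(q=1) ~", round(w1, 3))
```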
This second-order compressional term, similar to the pressure ($`a^2q^2`$) term of the dispersion relation of spiral modes, contains the contribution of the compressional effects, but does not contain the modification of the gravity force, which is more difficult to evaluate, and would require the analytical knowledge of the eigenvectors. The derivation can be found in the appendix, and it leads to: $$\stackrel{~}{\omega }^2=-\mathrm{\Delta }_r\varphi _0+2\pi G\mathrm{\Sigma }q+\frac{\stackrel{~}{\omega }^2}{\stackrel{~}{\omega }^2-\kappa ^2}a^2q^2+O_{grav}[(qH)^2]$$ where $`O_{grav}[(qH)^2]`$ represents the (unknown) contribution of self-gravity at orders 2 and higher in $`qH`$. Using equation (8), and neglecting the second-order gravitational contribution, we finally obtain: $$\stackrel{~}{\omega }^2=\mu ^2+2\pi G\mathrm{\Sigma }q+\frac{\stackrel{~}{\omega }^2}{\stackrel{~}{\omega }^2-\kappa ^2}a^2q^2.$$ (10)

This dispersion relation is identical to the thin-disk one (1), except for the second-order compressional term. This term comes from the horizontal motions driven by the horizontal gradients of the perturbed pressure and potential, and diverges at the Lindblad resonance where epicyclic motion is resonantly excited by the pressure force. However this divergence must be taken with caution: we have thus far discussed only the case of a gaseous disk, through the use of hydrodynamical equations. It is well known that the results for a collisionless stellar disk are usually very similar, except in the vicinity of the Lindblad resonances where a major difference occurs between the two cases: the stars resonate with the wave and can exchange energy with it through Landau damping, whereas the gas does not. Here the divergence of this term does not mean a resonance, but rather the fact that our expansion fails in this region. Still, we note that this allows us to get close enough to the Lindblad resonance that our term starts playing an important role, for stars as well as for the gas. We also wish to note that a similar contribution to the dispersion relation had been found by Nelson (1976 and 1981) in the limit of weak shear, and more recently by Papaloizou and Lin (1995) in a more general study in cylindrical geometry.

### 3.3 Physical interpretation of the additional term

The additional term in the dispersion relation is linked to the divergence of the horizontal perturbed velocity (as can be seen in the derivation given in the appendix). Hence this additional term is due to the presence of horizontal, compressional motions associated with the warp. In order to understand how these horizontal motions arise, let us first consider a warp in which motion is purely vertical and solid. This situation is presented in figure 1. It shows how horizontal pressure gradients appear in the system, which in turn tend to move matter horizontally. The amplitude of horizontal motions is greater and greater as the warp frequency in the frame of matter ($`\stackrel{~}{\omega }`$) approaches the frequency at which matter spontaneously moves horizontally, i.e. the epicyclic frequency. This gives a first justification of the resonant denominator in the additional term. In order to illustrate the behavior of matter far from and close to the Lindblad resonances, we have plotted in figure 2 the velocity fields of matter in these two cases.
We see that far from the Lindblad resonance, the matter moves essentially vertically with the motion described by the dispersion relation without our additional term; close to the Lindblad resonance, on the other hand, the motion tends to be practically horizontal as soon as we leave the median plane. In this case, the motion might be described by two slices moving in opposite directions, giving a strong vertical shear. In a forthcoming paper discussing the possible excitation of warps by spiral waves, we will fully analyze the energetics of warps, and in particular we will discuss the fraction of the total energy of the warp that can be stored in horizontal motions. It is noteworthy that our additional term is linked to the finite thickness of the disk, since a fundamental hypothesis is that the disk equilibrium is consistent, and since the sound speed which appears in that additional term is proportional to the thickness of the disk.

### 3.4 Comparison between analytical derivation and numerical results

We present the comparison between the numerical dispersion relation and the dispersion relation given by equation (10) in figure 3. We obtain a very good agreement between the numerical and analytical solutions, which appears to be far better than the infinitely thin disk dispersion relation (1), except in the top-left diagram, where the disk is strongly self-gravitating, so that the gravitational part of the second-order term, which we did not evaluate, cannot be neglected. We have also emphasized in this figure the case of a massless keplerian disk, which is for us, according to our definition of consistent disks, a massless disk ($`\mathrm{\Sigma }=0`$) for which $`\mathrm{\Omega }\propto r^{-3/2}`$. For such a disk, the general dispersion relation becomes: $$\stackrel{~}{\omega }^2=\kappa ^2\pm a\stackrel{~}{\omega }q$$ This dispersion relation is exact, since the unknown correction term in our derivation was due to self-gravity, which we do not need to take into account for a keplerian disk. Thus we can see that the warp of a keplerian disk precesses, a result which was not given by the infinitely thin disk relation, which in that case becomes: $$\stackrel{~}{\omega }^2=\kappa ^2=\mathrm{\Omega }_K^2$$ The precession frequency, expanded to lowest order in $`qH`$ (solving the quadratic gives $`\stackrel{~}{\omega }\simeq \kappa \pm aq/2`$, i.e. a frequency splitting of order $`aq/2`$), is thus of the order of $$\mathrm{\Omega }_K\frac{H}{R}$$ (for a wavelength of the warp of the order of the size of the disk). This can have important consequences in the context of protoplanetary disks. The role of warps has recently attracted strong interest, based in part on the fact that in keplerian disks the warp is a neutral tilt mode which has a vanishing energy and can thus very easily be excited. We expect that our effect should involve a finite warp energy, and thus make excitation mechanisms less efficient.

A similar resolution for the spiral waves dispersion relation (where we omit again the second and higher order terms in $`qH`$ for the self-gravity) leads to the infinitely thin disk dispersion relation: $$\stackrel{~}{\omega }^2=\kappa ^2-2\pi G\mathrm{\Sigma }q+a^2q^2$$ (11) From this result we can say that equation (11) and equation (10) describe respectively the propagation of spiral waves and of bending waves with the same accuracy, i.e. with the same underlying hypotheses.

## 4 Two-fluid dispersion relation

In this section we derive the dispersion relation of the bending waves in a two-fluid system. Both fluids have different sound speeds and hence scale heights, and they have independent surface densities.
Our primary goal is to discuss warps in a disk composed of stars and gas (assuming that the stars are correctly represented as a warm fluid), or in a disk embedded in a flattened halo which can participate dynamically in the warp. In the following, for simplicity, we refer to the two fluids as gas and stars. We will find in particular that, in a star-gas disk, we find solutions closely resembling the corrugations, i.e. short wavelength bending waves of the gas layer observed in certain edge-on spiral galaxies and in the Galaxy.

We keep the same notations as for the mono-fluid case, with an index $`*`$ or $`g`$ applying respectively to the warmer (stars) and the cooler (gas) species. We still need the assumption of consistent vertical equilibrium already used in our mono-fluid derivation, i.e. that both fluids obey the Poisson equation and the hydrostatic equilibrium, while the gravitational potential is such that $`\mathrm{\Delta }_r\varphi _0`$ is constant through the disk thickness. With a method strictly similar to that exposed in the appendix for the mono-fluid case, we can write down equations for $`N_{*}`$ and $`N_g`$, which constitute a linear homogeneous system. This system will have a non-trivial solution only if it has a vanishing determinant. Hence the dispersion relation of warps in a two-fluid system is: $$𝒟_g𝒟_{*}-\left(\frac{2\pi G\mathrm{\Sigma }_gq}{\stackrel{~}{\omega }^2}-\frac{\nu _{*}^2}{\stackrel{~}{\omega }^2}\right)\left(\frac{2\pi G\mathrm{\Sigma }_{*}q}{\stackrel{~}{\omega }^2}-\frac{\nu _g^2}{\stackrel{~}{\omega }^2}\right)=0$$ (12) where we have defined: $$𝒟_i=\frac{1}{S_i}-\frac{\mu ^2+\nu _i^2+2\pi G\mathrm{\Sigma }_iq}{\stackrel{~}{\omega }^2}\text{ where }i=*\text{ or }g$$ and: $$\nu _i^2=4\pi G\frac{\int _0^{+\mathrm{\infty }}\rho _j\alpha _i𝑑z}{N_i}\text{ where }j\ne i$$ Physically, $`𝒟_i=0`$ would be the dispersion relation of bending waves if only the species $`i`$ were present. The role of $`\nu _g`$ becomes clear if one goes to the limit where the gas disk is much thinner than the stellar one: in that case one has $`\nu _g^2=4\pi G\rho _m`$, where $`\rho _m`$ is the stellar density at the disk mid-plane; $`\nu _g`$ then appears as the vertical oscillation frequency of the gas in the stellar potential. It must be noted that realistic stellar disks can have a vertical velocity dispersion very different from the radial one. This might affect the present results by numerical factors which could be important in detailed comparisons with observations. This will be considered in future work.

For a given $`\stackrel{~}{\omega }`$, equation (12) is of order $`4`$ in $`qH`$ (since the $`1/S_i`$ hidden in the $`𝒟_i`$ are of order $`2`$ in $`qH`$). In general its roots cannot be easily expressed, but we find it interesting to first analyze them in the trivial case $`a_g=a_{*}`$, before turning to numerical solution in the general case.

### 4.1 Root identification of the two-fluid dispersion relation

#### 4.1.1 Special case $`a_g=a_{*}`$

In this case we have $`S_g=S_{*}S`$, and we are dealing with two physically indistinguishable fluids. Now we can factorize the dispersion relation (12). We obtain after some straightforward transformations: $$\left(\frac{1}{S}-\frac{\mu ^2+2\pi G\mathrm{\Sigma }_tq}{\stackrel{~}{\omega }^2}\right)\left(\frac{1}{S}-\frac{\mu ^2+\nu _g^2+\nu _{*}^2}{\stackrel{~}{\omega }^2}\right)=0$$ where $`\mathrm{\Sigma }_t=\mathrm{\Sigma }_{*}+\mathrm{\Sigma }_g`$.
The eigenvector associated with the second factor can be easily found by summing the equations relative to $`N_{*}`$ and $`N_g`$, giving: $$\left(\frac{1}{S}-\frac{\mu ^2+2\pi G\mathrm{\Sigma }_tq}{\stackrel{~}{\omega }^2}\right)(N_g+N_{*})=0$$ Thus when the second factor vanishes this gives: $`N_g+N_{*}=0`$. This condition consists in splitting the gas in two parts that move in opposite directions, so that the “total” warp (the average displacement) vanishes. In fact we note that the second factor is of the form $`aq^2+b`$, so that it gives only one positive root for $`q`$. We will call this the “hidden” mode, even when we relax the condition $`a_g=a_{*}`$.

In the same manner, it is an easy matter to check that the eigenvector of the first factor is $`\mathrm{\Sigma }_{*}N_g-\mathrm{\Sigma }_gN_{*}=0`$. This means that both fluids have the same behavior (they have in particular the same elongation $`Z`$), and we recover the one-fluid dispersion relation. Finally we have three modes: the two “classical” modes of the one-fluid disk, given by the second order dispersion relation of a one-fluid warp, and the “hidden mode” associated with the second factor.

#### 4.1.2 General case

When $`a_g`$ and $`a_{*}`$ are different, the simple factorization seen above is no longer possible and we turn to numerical solution. We have adopted the following values:

* $`\mu ^2/\kappa ^2=0.1`$ (corresponding to a nearly flat rotation curve),
* $`\nu _g^2/\kappa ^2=10`$,
* $`\stackrel{~}{\omega }/\kappa =0.75`$ (so that we are between the warp’s Lindblad resonance (inner or outer) and the forbidden band around corotation where the warps do not propagate),
* $`\mathrm{\Sigma }_g/\mathrm{\Sigma }_{*}=0.1`$.

We also need to choose the value of $`\nu _{*}`$. It is an easy matter to check that, since we limit ourselves to the (realistic) case $`a_g\le a_{*}`$: $`\nu _g^2\sim 4\pi G\mathrm{\Sigma }_{*}/H_{*}`$ and $`\nu _{*}^2\sim 4\pi G\mathrm{\Sigma }_g/H_{*}`$, hence $`\nu _{*}^2\sim \nu _g^2\mathrm{\Sigma }_g/\mathrm{\Sigma }_{*}`$.

We will not discuss the standard warp mode, which behaves in a manner very similar to the one-fluid case. The stars and gas motions are nearly identical, though since $`a_g\ne a_{*}`$ small differences occur. We will rather discuss the behavior of the “hidden” mode, and tentatively identify it with the corrugations observed in many galaxies. The results are plotted on figure 4. The top plot shows $`qH_{*}`$, while the bottom one shows the ratio $`N_{*}/N_g`$, as a function of $`a_g/a_{*}`$ for the hidden mode. When the sound speeds are identical we check the result of the previous subsection, that the two fluids move in opposite directions so as to achieve a null global vertical displacement. When the gas sound speed is decreased, the ratio $`|N_{*}/N_g|`$ decreases, i.e. the stars are more and more motionless. Thus the mode under study is essentially gaseous. The stars become passive, though they still act through their unperturbed potential to change the frequency of vertical oscillations of the gas disk. One may note that for $`a_g/a_{*}<0.65`$ the ratio $`N_{*}/N_g`$ changes sign, i.e. the stars now move in the same direction as the gas.

We have monitored the relative error obtained with the computation of $`q`$ by the approximate dispersion relation: $$\stackrel{~}{\omega }^2=\mu ^2+\nu _g^2+2\pi G\mathrm{\Sigma }_gq+\frac{\stackrel{~}{\omega }^2}{\stackrel{~}{\omega }^2-\kappa ^2}a_g^2q^2$$ i.e. the one-fluid dispersion relation where we have added the vertical frequency $`\nu _g`$ due to the rest potential of the stars. This error is plotted on figure 5.
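The computation of $`q`$ from this approximate relation is elementary: multiplying through by $`(\stackrel{~}{\omega }^2-\kappa ^2)`$ makes it a quadratic in $`q`$. The sketch below does this with the dimensionless choices quoted above, plus scalings of our own to fix the remaining dimensional factors (we assume $`\pi G\mathrm{\Sigma }_{*}\approx a_{*}\kappa `$, i.e. $`Q_{*}\sim 1`$, and $`a_g=0.2a_{*}`$); these numbers are illustrative, not those used for the figures.

```python
import numpy as np

# Solve the approximate hidden-mode relation for q:
#   w2 = mu2 + nug2 + 2 pi G Sigma_g q + w2 ag^2 q^2 / (w2 - kappa^2),
# written as A q^2 + B q + C = 0. Scaled units: kappa = 1, lengths in
# a_star/kappa. Assumed: pi G Sigma_star = a_star kappa (Q_star ~ 1),
# Sigma_g/Sigma_star = 0.1, a_g = 0.2 a_star.

mu2, nug2, w2 = 0.1, 10.0, 0.75**2
ag = 0.2
two_pi_G_sigma_g = 2.0 * 0.1                 # = 0.2 in these units

A = w2 * ag**2 / (w2 - 1.0)                  # negative below the resonance
B = two_pi_G_sigma_g
C = mu2 + nug2 - w2
q = (-B - np.sqrt(B**2 - 4.0 * A * C)) / (2.0 * A)   # the positive root
lam = 2.0 * np.pi / q                        # wavelength, units of a_star/kappa
print(round(q, 2), round(lam, 3))
# With, say, a_star ~ 30 km/s and kappa ~ 35 km/s/kpc, lam falls in the
# sub-kpc to ~kpc range quoted in the next paragraph, depending on the
# parameter choices.
```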
This error is always very weak, so that the approximate dispersion relation gives an excellent description of the “hidden” mode. This dispersion relation is a generalization of the one obtained by Nelson (1976) without shear and self-gravity, which was: $$\stackrel{~}{\omega }^2=\nu _g^2+\frac{\stackrel{~}{\omega }^2}{\stackrel{~}{\omega }^2-4\mathrm{\Omega }^2}a_g^2q^2$$ For a ratio $`a_g/a_*`$ of about one fifth, we find a radial wavelength of about one kiloparsec, which might be varied by a factor $`2`$ by varying the parameters. This is the order of magnitude of the wavelength of the corrugations observed in our Galaxy and in some other edge-on spirals (Quiroga et al. 1977, Florido et al. 1991). Since these corrugations are observed in the very young stellar population, i.e. are assumed to trace the motion of the gas disk, we consider that this “hidden” mode is a very good candidate to explain them. One must note that the halo contribution to the vertical oscillation frequency should also be included in the approximate dispersion relation (see e.g. Toomre, 1983). This contribution depends only on the midplane density, so that for a given surface density of the halo it also depends on its scale height - of which nothing is known. Let us assume that the halo is a flattened disk in hydrostatic equilibrium. Let us also assume that its density dominates the local gravity, so that the stellar disk scale height is: $$H_*\sim \frac{\rho _*}{\rho _H}\frac{a_*}{\mathrm{\Omega }}$$ where the subscript $`H`$ denotes halo quantities, while $$H_H\sim \frac{a_H}{\mathrm{\Omega }}.$$ Then one easily finds that the Toomre parameters of the stars and the halo are in the ratio: $$\frac{Q_*}{Q_H}\sim \left(\frac{\rho _*}{\rho _H}\right)^2$$ Thus, if both the halo and the stars have Toomre parameters not too different from one, their midplane densities and their contributions to $`\nu _g`$ must be comparable, so that our estimate of the corrugation wavelength remains valid – though it suffers from the same uncertainties as any estimate concerning the halo.

## 5 Amplification of $`m>1`$ warps

In this section we briefly show that $`m>1`$ warps can be amplified, through a weak form of the Swing mechanism which amplifies spiral waves or modes. The $`m=1`$ warp, which is of course the most interesting one since it is the one observed in most galactic disks, is not concerned, since it has been shown by Hunter and Toomre (1969) that it always has a positive energy, while spiral waves are amplified by exchanging energy between negative-energy waves inside corotation and positive-energy ones outside corotation. This is of course connected with the well-known fact that the shearing sheet model is not adapted to describe the $`m=1`$ mode, for which the effects of the cylindrical geometry cannot be neglected. (This is different from the mechanism studied by Bertin and Mark (1980), who investigated the possibility of self-excited bending modes in galactic disks through the exchange of angular momentum between the disk and a slow bulge-halo component, with a “quantum condition” between the center of the disk and the corotation.) We have summarized on figure 6 the propagation properties of bending waves. As for spiral waves, we have a forbidden band around corotation, imposed by the vertical resonances at $`+\mu `$ and $`-\mu `$. In the case of spiral waves, the thickness of this forbidden band tends to zero as the Toomre parameter $`Q`$ gets closer to unity.
In the case of bending waves, this thickness tends to zero as the rotation curve tends to be flat (since $`\mu ^2=2\mathrm{\Omega }^2-\kappa ^2`$). The sign of the group velocity, which is the same as the sign of $`\partial \nu /\partial k_x`$, allows one to give the direction of propagation of these bending waves. In our approximation (WKB and the perturbative derivation of the compressibility term), the latter term becomes infinite at the Lindblad resonances, so that we cannot conclude here on the exact properties of the warp close to these resonances. We reserve a more detailed study for future work, but we have checked that this does not affect the results presented here. Our motivation for the investigation of non-WKB effects is that the dispersion relation of warps is similar in structure to that of spiral waves, with a positive rather than negative self-gravity term. It is thus analogous to the dispersion relation for spiral waves in a disk embedded in a strong vertical magnetic field, as derived by Tagger et al. (1990). They showed that these waves were still subject to a weak form of the Swing amplification mechanism, associated mathematically with the presence of the square root ($`q=(k_x^2+k_y^2)^{1/2}`$) in the self-gravity term. Physically, this corresponds to the long-range action of gravity or of magnetic stresses. Thus we can expect the same mechanism to apply to warps. However this amplification takes place when $`k_x`$ is of the order of $`k_y`$, so that the WKB approximation cannot be used straightforwardly. Although we might use the method of Pellat et al. (1990) for an analytical derivation, we present here a full numerical solution which allows us to derive the result directly. We stay in the shearing sheet and use the formalism developed by various authors (Goldreich and Lynden-Bell, 1965; Toomre, 1964; Drury, 1980; Lin and Thurstans, 1984), reviewed in Tagger et al. (1994), to compute the amplification. We refer to these works for a complete description of the formalism, in which one uses the fact that because of the shearing motions $`k_x`$ changes with time as $`k_x=k_x^0-2Ak_yt`$. Thus the $`\stackrel{~}{\omega }`$ term obtained in the WKB approximation is replaced in the Euler equation by derivatives: $$\stackrel{~}{\omega }\rightarrow \frac{\partial }{\partial t}-2Ak_y\frac{\partial }{\partial k_x}$$ This in turn can be, through a change of variables, transformed into a single derivative over $`k_x`$. There results a set of differential equations which, in the tightly-wound limit ($`k_x\gg k_y`$), reduce to the WKB result, while at low $`k_x`$ they give rise to transient phenomena and amplification. We write and solve these equations numerically. At large (positive or negative) $`k_x`$ one recovers the WKB results, while amplification can occur at $`k_x\sim k_y`$. The amplification can be described in the following manner: one can combine the equations and find at large $`k_x`$ WKB solutions, varying as $$M(k_x)=P^{-1/4}\mathrm{exp}\left(\pm i\int _0^{k_x}P^{1/2}(k_x^{\prime })dk_x^{\prime }\right)$$ (13) where $`M`$ is any perturbed quantity and $`P(k_x)`$ a quantity which appears in the equations as the square of an “instantaneous” frequency. These solutions correspond to leading and trailing waves for $`k_x`$ negative or positive respectively, propagating inside or beyond corotation respectively for the $`+`$ and $`-`$ signs in the exponent. The leading waves propagate toward corotation, and trailing waves away from it. Thus let us start at $`k_x\rightarrow -\mathrm{\infty }`$ (i.e. leading waves) with a “pure” solution (i.e. with the $`+`$ sign in the exponent), corresponding to a wave traveling inside and toward corotation.
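Schematically, the branch bookkeeping of such an integration can be set up as follows (a toy sketch of our own: the $`P(k_x)`$ below is an artificial, featureless choice, so the admixture of the second branch comes out tiny here; in the real problem $`P`$ encodes self-gravity and pressure and produces the few-percent effect quoted below):

```python
import numpy as np
from scipy.integrate import solve_ivp

ky = 1.0
def P(kx):                         # toy "instantaneous frequency squared" (our choice)
    return 1.0 + (kx / ky)**2

def rhs(kx, y):                    # y = [M, dM/dkx];  M'' + P(kx) M = 0
    return [y[1], -P(kx) * y[0]]

k0, k1 = -40.0, 40.0
# start on a single WKB branch:  M = P^(-1/4) exp(+i S),  dS/dkx = P^(1/2)
M0  = P(k0)**(-0.25)
dM0 = M0 * (1j*np.sqrt(P(k0)) - 0.25 * (2*k0/ky**2) / P(k0))
sol = solve_ivp(rhs, [k0, k1], np.array([M0, dM0], dtype=complex),
                rtol=1e-10, atol=1e-12)

# at large positive kx, project the solution back onto the e^{+iS} and e^{-iS}
# branches (amplitudes carry a common P^{-1/4} factor, irrelevant for ratios)
M, dM = sol.y[:, -1]
s = np.sqrt(P(k1))
c_plus  = 0.5 * (M - 1j*dM/s)      # transmitted branch
c_minus = 0.5 * (M + 1j*dM/s)      # reflected branch
print(abs(c_plus), abs(c_minus))
```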
The numerical integration over $`k_x`$ corresponds to following the wave as it approaches corotation, and is then reflected when $`k_x`$ becomes positive. But in the meantime the WKB approximation has lost its validity while $`k_x\sim k_y`$, so that the “pure” solution has lost its identity. Thus in general, when the WKB approximation becomes valid again (at large and positive $`k_x`$), we should obtain a mixture of the two solutions, with the $`+`$ and $`-`$ signs in the exponent: this means a mixture of trailing waves traveling inside and outside corotation, and away from it: as the initial leading wave is reflected it also “tunnels” through the corotation region, and generates an outgoing trailing wave. This is the usual process of the Swing amplification mechanism for spiral waves; in that case the complex physics of this mechanism implies that the reflected wave has a larger amplitude than the initial one, because waves have negative energy inside corotation and positive energy outside. Thus, by conservation of energy, the emission of the positive-energy wave outside corotation means an amplification of the negative-energy one. In the case of warps the mechanism is exactly similar; however the repulsive, rather than attractive, force between density perturbations on either side of corotation makes it much less efficient than in the case of spirals: one can obtain an amplification by a factor of a hundred in the most extreme cases for spirals, while Tagger et al. (1990) find only a factor 1.05 (and a relative amplitude 0.3 for the transmitted wave) in the best case, when they study magnetized disks with a dispersion relation mathematically similar to that of warps. Here we obtain similar results, summarized in figures (7) and (8). This amplification is so weak that it is negligible for a single wave packet traveling in the disk. One might consider building a normal mode, formed as in the case of spirals when the reflected trailing wave is reflected back at the galactic center to become a leading one traveling back toward corotation; then, with an amplification by a factor $`1.05`$, and assuming that the wave suffers no dissipation, the $`e`$-folding time would be about 20 times the duration of this cycle, which is at best a few rotation times. This means that such modes could not reach a sizable amplitude in a galactic disk over a Hubble time. On the other hand, in accretion disks, where the possible number of rotation periods is much larger, and still neglecting all possible sources of dissipation, this might provide a way of maintaining long-lived warp modes. We wish to note that this result has also been obtained independently by J. Goodman (private communication).

## 6 Discussion

We have investigated the dispersion relation of warps in a moderately thick disk. We have found that the propagation of warps is described by the dispersion relation (10), under the following hypotheses:

* We work in the shearing sheet model;
* The disk is consistent (with our definition, see the corresponding section), which physically means that the disk must be moderately thick: it is geometrically thin but the ratio of the sound time through the disk to the rotation time is arbitrary;
* The warp wavelength is larger than the disk thickness;
* The disk is isothermal;
* The waves are WKB, i.e. tightly wound.
The corrective term is due to the effect of compressibility, acting when the sound time through the disk is not much smaller than the rotation time. The associated horizontal motions are important because they allow one to consider the possibility of a non-linear mechanism continuously exciting the warps: indeed the beat wave of two $`m=1`$ warps (or of a warp with itself) is an $`m=2`$ perturbation, whose potential and horizontal motion are even in $`z`$; thus they have the wavenumber and parity of a spiral. This means that a spiral can interact non-linearly with warps and excite them, in the same manner as it does with other spirals. Non-linear coupling of bars and spirals has been studied by Tagger et al. (1987) and Sygnet et al. (1988). This mechanism was found to be very efficient and explained the behavior of numerical simulations of galactic spirals; in a forthcoming paper we will consider the possibility and the efficiency with which non-linear coupling of warps and spirals might feed warps from the energy and angular momentum carried by the spiral wave to the outer parts of the galactic disk. Another potentially important effect of the compressional term relates to the possible existence of warp modes, i.e. standing wave structures, in the disk. Classically, the Hunter-Toomre criterion shows that modes can exist only if the integral $`\int q𝑑r`$ is finite, and that with the thin-disk dispersion relation this can be written as $`\int 𝑑r/\mathrm{\Sigma }`$ finite - a condition that is difficult to fulfill in realistic disk models. With the new compressional term, this connection between the two integrals disappears, so that one might have $`\int q𝑑r`$ finite (i.e. a discrete mode spectrum) even when $`\int 𝑑r/\mathrm{\Sigma }`$ is not bounded. (As this paper was being rewritten following the referee’s comments, we learned that Sellwood (in preparation), using a modification of the dispersion relation from Toomre (1966), has reached similar conclusions and indeed found standing warp modes in numerical simulations.) We have also investigated the self-amplification of $`m>1`$ warps by a weak form of the Swing mechanism. We have found an amplification which is too weak to significantly amplify warps in galactic disks, but might be considered in accretion disks.

## 7 Acknowledgments

We wish to thank C. Pichon for rich and helpful discussions in the course of this work. We also thank the referee, A. Romeo, for very detailed discussions and advice which have considerably enriched this paper.

## 8 Appendix: derivation of the dispersion relation

We derive hereafter the dispersion relation of warps in the monofluid case, under the hypotheses mentioned in the main text. Let $`N`$ be: $$N\equiv \int _0^{+\mathrm{\infty }}\alpha (z)𝑑z=\int _0^{+\mathrm{\infty }}\frac{\rho _0}{\stackrel{~}{\omega }}\left(a^2\frac{\partial s}{\partial z}+\frac{\partial \varphi }{\partial z}\right)𝑑z$$ (14) where use of equation (6) has been made. Using the unperturbed hydrostatic equilibrium and the expression of $`s`$ given by equation (5), we rewrite $`N`$ as: $$N=\frac{S}{\stackrel{~}{\omega }^2}\int _0^{+\mathrm{\infty }}\frac{\partial \varphi _0}{\partial z}\frac{\partial \alpha }{\partial z}𝑑z+\frac{1}{\stackrel{~}{\omega }}\left(\frac{a^2q^2S}{\stackrel{~}{\omega }^2-\kappa ^2}+1\right)\int _0^{+\mathrm{\infty }}\rho _0\frac{\partial \varphi }{\partial z}𝑑z$$ where we have defined $$S=\frac{\stackrel{~}{\omega }^2-\kappa ^2}{\stackrel{~}{\omega }^2-\kappa ^2-a^2q^2}$$ (15) The $`a^2q^2`$ term in the denominator is the source of the new effect we introduce. It clearly represents the effect on $`s`$ of the compressibility associated with the horizontal motions.
Integrating by parts the first term, we use the Poisson equation and the consistency hypothesis (i.e. that $`\mathrm{\Delta }_r\varphi _0`$ does not depend on $`z`$), and get: $$N=\frac{4\pi GS}{\stackrel{~}{\omega }^2}\int _0^{+\mathrm{\infty }}\rho _0\alpha 𝑑z+\frac{S\mathrm{\Delta }_r\varphi _0}{\stackrel{~}{\omega }^2}N+\frac{S}{\stackrel{~}{\omega }}\int _0^{+\mathrm{\infty }}\rho _0\frac{\partial \varphi }{\partial z}𝑑z$$ (16) where we have made use of the parity properties of the warp ($`s=0`$ and $`\varphi =0`$ at $`z=0`$). We see here that the consistency hypothesis is the key to our derivation, since it has allowed us to extract $`\mathrm{\Delta }_r\varphi _0`$ from the integral, in the second term of the right-hand side. We write the solution of the Poisson equation (Equation 7) as: $$\varphi =-e^{qz}\int _z^{+\mathrm{\infty }}\frac{4\pi G\rho _0(z^{\prime })s(z^{\prime })}{2q}e^{-qz^{\prime }}𝑑z^{\prime }-e^{-qz}\int _{-\mathrm{\infty }}^z\frac{4\pi G\rho _0(z^{\prime })s(z^{\prime })}{2q}e^{qz^{\prime }}𝑑z^{\prime }$$ (one easily checks that this is the solution which verifies the boundary conditions at $`z=0`$ and at $`z\rightarrow \mathrm{\infty }`$). Since $`\rho _0`$ vanishes beyond a vertical scale $`H`$ we can expand the exponentials to first order in $`qH`$; this gives, after straightforward computations using the continuity equation (5): $$\int _0^{+\mathrm{\infty }}\rho _0\frac{\partial \varphi }{\partial z}𝑑z=-\frac{4\pi G}{\stackrel{~}{\omega }}\int _0^{+\mathrm{\infty }}\rho _0\alpha 𝑑z+\frac{4\pi Gq}{\stackrel{~}{\omega }}\int _0^{+\mathrm{\infty }}\rho _0(z)𝑑z\int _0^{+\mathrm{\infty }}\alpha 𝑑z+O_{grav}[(qH)^2]$$ where $`O_{grav}[(qH)^2]`$ means that this gravitational contribution also contains a term of second order in $`qH`$ which we have not evaluated, having truncated the expansion of the exponentials to first order. An analytical computation of this term can be performed if one assumes that the motion in the warp is vertical with a vanishing divergence, and if one has an analytical expression (e.g. a Gaussian) for the vertical equilibrium density profile. In these conditions one finds that the dispersion relation is modified by replacing the surface density $`\mathrm{\Sigma }`$ by an apparent surface density $`\mathrm{\Sigma }^{\prime }=\mathrm{\Sigma }(1-\lambda qH)`$, where $`\lambda `$ is a constant equal to $`(2\pi )^{1/2}`$ in the Gaussian case. Thus this term affects the dispersion relation only in the limit $`qH\sim 1`$, whereas the compressional term we have derived can play a role for an arbitrarily small value of $`qH`$. The computation of this gravity term could be well approximated by the use of “softened gravity” models (Erikson 1974, Athanassoula 1984, Romeo 1994), where one artificially truncates the $`(r-r^{\prime })^{-1}`$ divergence of the potential to take into account the geometric effect of the finite disk thickness. Thus hereafter we neglect this second-order gravitational term. Substituting our result and the value of $`S`$ in equation (16), and dividing by $`N`$, we find the dispersion relation: $$\stackrel{~}{\omega }^2=\mathrm{\Delta }_r\varphi _0+2\pi G\mathrm{\Sigma }q+\frac{\stackrel{~}{\omega }^2}{\stackrel{~}{\omega }^2-\kappa ^2}a^2q^2+O_{grav}[(qH)^2].$$
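As a quick consistency check (our remark, not in the original), the new compressional term disappears in the expected limit: when $`a^2q^2\ll \stackrel{~}{\omega }^2-\kappa ^2`$ one has $`S\rightarrow 1`$ and the relation collapses to the classical thin-disk bending-wave relation underlying the Hunter-Toomre mode criterion quoted in the Discussion:

$$a^2q^2\ll \stackrel{~}{\omega }^2-\kappa ^2\;\Rightarrow \;S\rightarrow 1,\qquad \stackrel{~}{\omega }^2\simeq \mathrm{\Delta }_r\varphi _0+2\pi G\mathrm{\Sigma }q\;\Rightarrow \;q\simeq \frac{\stackrel{~}{\omega }^2-\mathrm{\Delta }_r\varphi _0}{2\pi G\mathrm{\Sigma }},$$

so that in this limit $`\int q𝑑r`$ finite is indeed equivalent to $`\int 𝑑r/\mathrm{\Sigma }`$ finite, as stated in Sect. 6.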
## 1 ABSTRACT

New $`BeppoSAX`$ observations of the nearby prototypical starburst galaxies NGC 253 and M82 are presented. A companion paper (Cappi et al. 1998) shows that the hard (2-10 keV) spectrum of both galaxies, extracted from the source central regions, is best described by a thermal emission model with kT $`\sim `$ 6–9 keV and abundances $`\sim `$ 0.1–0.3 solar. The spatial analysis yields clear evidence that this emission is extended in NGC 253, and possibly also in M82. This quite clearly rules out a LLAGN as the main source of their hard X-ray emission. A significant contribution from point sources (i.e. X-ray binaries (XRBs) and supernova remnants (SNRs)) cannot be excluded; neither can we at present reliably estimate the level of Compton emission. However, we argue that such contributions should not affect our main conclusion, i.e., that the $`BeppoSAX`$ results show, altogether, compelling evidence for the existence of a very hot, metal-poor interstellar plasma in both galaxies.

## 2 PREVIOUS SPATIAL RESULTS

Evidence for complex galactic-scale outflows driven by starburst activity has been gathered in recent years based, primarily, on optical and soft ($`\sim `$ 0.1–3 keV) X-ray observations (Fabbiano 1989 and references therein). These have sometimes been called “superbubbles” or “superwinds”, the latter referring to those manifestations where the extended hot gas emitting optical emission lines and soft X-rays was apparently ejected into the intergalactic medium (IGM). Because of the extinction of the optical emission (often almost completely reprocessed into IR emission), X-ray data provided the most direct view of the hot wind material. In general, spectroscopic studies confirmed or, at least, were consistent with thermal emission from a hot plasma, most likely shock-heated by supernovae (e.g. Dahlem, Weaver & Heckman 1998). The detailed physical characteristics of the gas (in particular its metal abundance), however, have remained unclear because the analysis in the soft X-ray band is complicated by the unknown line-of-sight extinction, the large uncertainties in the theoretical models used (in particular around the Fe L-shell energy band), and the presence of multiple temperatures (typically with kT between 0.2–3 keV). At higher energies, the images available to date have been essentially limited to the $`ASCA`$ observations of the two brightest starburst galaxies (SBGs), NGC 253 and M82 (but see recent studies on star-forming dwarf galaxies by Della Ceca et al. 1996, 1997). $`ASCA`$ resolved the 2–10 keV emission of NGC 253 (Ptak et al. 1997) but not that of M82 (Tsuru et al. 1997). However, the ASCA data did not allow an adequate spatial analysis. From the ASCA spectral analysis, the hard components of both sources are well described by either a thermal model (kT $`\sim `$ 6–9 keV) or a power-law model ($`\mathrm{\Gamma }`$ $`\sim `$ 1.8–2.0). The absence in the data of significant Fe-K line emission has, however, always been puzzling and has led several authors to propose alternative explanations to the thermal emission, i.e. the presence of a low-luminosity active galactic nucleus (LLAGN) (Ptak et al. 1997, Tsuru et al. 1997), or non-thermal emission from Compton scattering of relativistic electrons by the intense FIR radiation field (Rephaeli et al. 1991, Moran & Lehnert 1997). In a companion paper (Cappi et al.
1998), we have presented the $`BeppoSAX`$ spectral results, which clearly show the first evidence of the Fe-K line emission (at $`\sim `$ 6.7 keV) and of the high-energy rollover expected in the case of thermal emission for both NGC 253 and M82 (see Persic et al. 1998 for more details on the results for NGC 253). Here we present preliminary results obtained from the spatial analysis which support the thermal origin of the hard component in NGC 253 and, to a lesser extent, in M82.

## 3 $`BEPPOSAX`$ IMAGES OF NGC 253 AND M82

$`BeppoSAX`$ observed NGC 253 on Nov. 29–Dec. 2, 1996 and M82 on Dec. 06–07, 1997 with the LECS, MECS and PDS detectors operating between 0.1–4 keV, 1.3–10 keV and 13–60 keV, respectively (see Table 1). The spectral results have demonstrated that two thermal components (kT $`\sim `$ 0.1–0.3 keV and kT $`\sim `$ 6–9 keV) are required to fit the spectra of both sources (Cappi et al. 1998, Persic et al. 1998), and that the hard thermal component needs to be absorbed in order not to over-produce the continuum at E $`<`$ 1 keV. Therefore, the present analysis will focus on the spatial properties in only the 3–10 keV energy range, where the contribution of the soft component is marginal. Here we present preliminary results obtained from only the MECS instruments. Figure 1 shows the MECS 3–10 keV images of both galaxies superimposed on Digital Sky Survey images. The left panel clearly shows that the hard X-ray emission of NGC 253 is extended and elongated along its major axis. No point sources embedded in the extended emission are detected but, given the limited resolution, they cannot be ruled out; note that there is an indication of a “cone” of X-ray emission that extends toward the southwest direction. Another interesting feature is the apparent extended emission perpendicular to the major axis in the northwest and southeast, in a way similar to the ROSAT PSPC and HRI results (Dahlem, Weaver & Heckman 1998). As a matter of fact, such emission could be the signature of a very hot gas ejected out of the galaxy, into the IGM. Further analysis work is in progress to determine the detailed characteristics (e.g. flux, temperature vs. distance) of this extended component. The right panel suggests the presence of a more symmetric X-ray halo in M82, though with some excess emission oriented along the optical minor axis in the northwest direction. The radial profiles of the 3–5 keV, 7–10 keV and 6–7 keV emission from NGC 253 and M82 are shown in Figure 2, together with the instrumental PSF energy-weighted over the source spectra. The new and most convincing result of the present analysis is that the hard X-ray emission in NGC 253 extends to $`\sim `$ 8 arcmin. There is also evidence that M82 extends to $`\sim `$ 5 arcmin; however, the effect is in this case only marginal. In NGC 253, the extension is also evident if one considers the Fe-K line flux only (i.e. in the 6–7 keV band).

## 4 ON THE ORIGIN OF THE HARD SPECTRAL COMPONENT

The origin of the hard extended component is puzzling. It could be due either to a collection of point sources which contribute to the 3–10 keV emission (e.g., XRBs, SNRs), or to truly diffuse emission due to Compton scattering of IR-optical photons by relativistic electrons. Alternatively, the cause of the hard extended component could be a very hot ($`\sim `$ 6–10 $`\times `$ 10<sup>7</sup> K) ISM phase. Individually, each of these components, except for the hot ISM phase hypothesis, seems unlikely to dominate the 3–10 keV emission of these galaxies.
The average spectrum of an ensemble of SNRs would probably be too soft (kT $`<`$ 4 keV), and Compton emission would predict a spectrum with a power-law shape, at odds with what is shown in the companion paper (Cappi et al. 1998). An accurate study by Dahlem, Weaver & Heckman (1998), based on spatially resolved ROSAT PSPC spectra of NGC 253 and M82, has shown that the 0.1–2 keV flux of NGC 253/M82 can be divided into emission from the source disk+core (53%/82%), the halo (25%/11%) and point sources (22%/7%). Extrapolating the average spectrum of the ROSAT point sources of NGC 253 and M82 to higher energies, we obtain a 3–10 keV flux of $`\sim `$ 1.4 $`\times `$ 10<sup>-12</sup> erg cm<sup>-2</sup>s<sup>-1</sup> and 5 $`\times `$ 10<sup>-14</sup> erg cm<sup>-2</sup>s<sup>-1</sup>, respectively. This is negligible in the case of M82 (F<sub>3-10keV</sub> $`\sim `$ 2.3 $`\times `$ 10<sup>-11</sup> erg cm<sup>-2</sup>s<sup>-1</sup>), and less than 40% of the flux of NGC 253 (F<sub>3-10keV</sub> $`\sim `$ 3.6 $`\times `$ 10<sup>-12</sup> erg cm<sup>-2</sup>s<sup>-1</sup>). In conclusion, alternatives to the hot ISM phase could hardly produce, by themselves, all the hard X-ray emission detected in both galaxies. Therefore the interpretation according to which most of the hard X-ray emission is produced in a hot ISM plasma seems to be favored. However, as mentioned above for NGC 253, XRBs (and possibly Compton emission, Rephaeli et al. 1991) certainly contribute to the hard X-ray emission. Thus we estimated how their contribution would modify our conclusions on the measurements of the temperature and abundance of the hard component. To do so, we added an extra hard component (an absorbed power law) to the best-fit spectra shown in Cappi et al. (1998) to mimic the extra contribution from XRBs and/or Compton emission and/or emission from a LLAGN. For several values of $`N_\mathrm{H}`$ (from 10<sup>22</sup> to 10<sup>24</sup> cm<sup>-2</sup>) and $`\mathrm{\Gamma }`$ (from 1 to 2), we found no improvement of the fit, and upper limits of (at most) 20% and 10% of the observed 3–10 keV flux of NGC 253 and M82, respectively. In NGC 253, forcing the power-law contribution to be about 40% of the total 3–10 keV flux, the thermal component softens from kT $`\sim `$ 6 keV to $`\sim `$ 4.8 keV and the abundances increase from $`\sim `$ 0.25 to $`\sim `$ 0.34 solar. In M82, forcing a power-law contribution of 50% of the total 3–10 keV flux, the thermal component softens from kT $`\sim `$ 8 keV to $`\sim `$ 6 keV and the abundances increase from $`\sim `$ 0.08 to $`\sim `$ 0.25 solar, but the fit becomes worse by $`\mathrm{\Delta }\chi ^2`$ = 11 (mainly because of the clear cutoff obtained from the PDS data). However, it should be pointed out that the average spectrum of XRBs may not be well described by a single absorbed power law but might require the addition of Fe-K line emission (most XRBs are known to emit strong Fe-K lines at 6.4 and/or 6.7 keV). In such a case, abundance differences would become even lower. It should also be noted that in the case of M82 some short-term ($`\sim `$ hours) variability (with $`\sim `$ 30% amplitude) was detected in the 3–10 keV light curve, possibly indicating a contribution from XRBs to the hard X-ray flux. Given the lack of strong point sources in the ROSAT PSPC observations of M82 reported by Dahlem, Weaver & Heckman (1998), these could either be highly variable (as suggested by Ptak et al. 1997), or be strongly absorbed in order to show up at E $`>`$ 3 keV.
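As a simple arithmetic cross-check of the point-source fractions quoted above (the flux values are those given in the text; the snippet itself is ours):

```python
# 3-10 keV fluxes in erg cm^-2 s^-1, as quoted in the text
f_ps_ngc253, f_tot_ngc253 = 1.4e-12, 3.6e-12   # extrapolated point sources vs. total
f_ps_m82,    f_tot_m82    = 5.0e-14, 2.3e-11

print(f"NGC 253: point sources ~ {f_ps_ngc253/f_tot_ngc253:.0%} of the total")  # < 40%
print(f"M82:     point sources ~ {f_ps_m82/f_tot_m82:.1%} of the total")        # negligible
```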
As for this short-term variability, our timing analysis indicates a dispersion of the light curve around its mean value of only (15 $`\pm `$ 4)%. Therefore, as shown above, such a contribution should not have strong effects on our spectral results. The overall results presented here are thus consistent with a major contribution in the 3–10 keV band from a hot and diffuse thermal plasma. A significant contribution from other emission mechanisms (a point-source population and/or Compton emission) cannot be excluded on the basis of the present data, but even with the extreme hypothesis of a $`\sim `$ 30% contribution between 3–10 keV, the best-fit temperatures and abundances derived for the hard component would become only slightly lower and higher, respectively, than reported. In any case, the present results clearly rule out the possibility that a LLAGN produces the bulk of the hard X-ray emission in these two SBGs. The above is consistent with our preliminary results obtained from a spatially resolved spectral analysis of the MECS data of NGC 253 (Cappi et al., in prep.), which clearly shows that the temperature of the hard component decreases with increasing distance from the core (from kT $`\sim `$ 6 keV in the core to kT $`\sim `$ 4 keV in the disk). This suggests either a thermal contribution from point-source populations in the core and in the disk (but both with thermal average spectra) or thermal emission from a hot ISM gas. If due to hot gas, the observed temperatures (T<sub>obs.</sub> $`\sim `$ 6.5/9.7 $`\times `$ 10<sup>7</sup> K for NGC 253/M82) are much higher than the “escape temperature” (T<sub>esc.</sub> $`\sim `$ 2/1 $`\times `$ 10<sup>6</sup> K for NGC 253/M82; Wang et al. 1995) of the gas in these galaxies, so the gas should easily escape from them. As pointed out by Heckman (1997), this could consequently be very important for understanding galactic evolution and the chemical enrichment of the intracluster and intergalactic gas. Finally, it is interesting to note the analogy of our results with those on clusters of galaxies obtained about 25 years ago. Indeed, the Perseus, Virgo and Coma clusters were known to be X-ray sources, but the origin of their 2–10 keV emission was at first unclear. Several hypotheses were proposed: e.g., a collection of AGNs or AGN-related (point) sources (Kellog et al. 1972), Compton scattering of microwave background photons by the electrons emitting the diffuse radio halos (Forman et al. 1972), or a hypothetical intracluster medium (Gott and Gunn 1971). It was only after the first observational evidence with $`UHURU`$ that the cluster 2–10 keV emission was shown to be extended (Forman et al. 1972, Kellog et al. 1975), and after the detection by $`Ariel5`$ (Mitchell et al. 1976) and $`OSO`$-8 (Serlemitsos et al. 1977) of an iron emission line at 6.7 keV, that the origin of their hard X-ray emission was attributed to a hot, evolved, and diffuse intracluster gas. Does history repeat itself?

## 5 CONCLUSIONS

The main conclusion from the above results is that the $`BeppoSAX`$ observations of the two prototypical SBGs NGC 253 and M82 have revealed for the first time evidence of a very hot and diffuse thermal plasma which is mainly responsible for their hard (2–10 keV) emission. This discovery is likely to have important implications for the AGN/starburst connection, for our understanding of the chemical enrichment of the intergalactic medium, and for the formation and evolution of galaxies.
## 6 REFERENCES

* Cappi, M., et al., to appear in proceedings of “Dal nano- al tera-eV: tutti i colori degli AGN”, third Italian conference on AGNs, Roma, Memorie S.A.It., astro-ph/9809325 (1998)
* Dahlem, W., Weaver, K.A., & Heckman, T.M., Astrophys. J. Suppl., in press (1998)
* Della Ceca, R., Griffiths, R.E., & Heckman, T.M., Astrophys. J., 485, 581 (1997)
* Della Ceca, R., Griffiths, R.E., Heckman, T.M., & MacKenty, J.W., Astrophys. J., 469, 662 (1996)
* Fabbiano, G., Ann. Rev. Astron. Astrophys., 27, 87 (1989)
* Forman, W., Kellogg, E., Gursky, H., Tananbaum, H., & Giacconi, R., Astrophys. J., 178, 309 (1972)
* Gott, J., & Gunn, J., Astrophys. J., 169, L13 (1971)
* Heckman, T.M., RevMexAA (Serie de Conferencias), 6, 156 (1997)
* Kellog, E., Baldwin, J.R., & Koch, D., Astrophys. J., 199, 299 (1975)
* Kellog, E., Gursky, H., Tananbaum, H., & Giacconi, R., Astrophys. J., 174, L65 (1972)
* Mitchell, R.J., Culhane, J.L., Davison, P.J.N., & Ives, J.C., M.N.R.A.S., 176, 29 (1976)
* Moran, E.C., & Lehnert, D., Astrophys. J., 478, 172 (1997)
* Persic, M., et al., Astron. Astrophys., 339, L33 (1998)
* Ptak, A., Serlemitsos, P., Yaqoob, T., Mushotzky, R., & Tsuru, T., Astron. J., 113, 1286 (1997)
* Rephaeli, Y., Gruber, D., Persic, M., & McDonald, D., Astrophys. J., 380, L59 (1991)
* Serlemitsos, P.J., Smith, B.W., Boldt, E.A., Holt, S.S., & Swank, J.H., Astrophys. J., 211, L63 (1977)
* Tsuru, T.G., Awaki, H., Koyama, K., & Ptak, A., P.A.S.J., in press (1998)
* Wang, D., Walterbos, R., Steakley, M., Norman, C., & Braun, R., Astrophys. J., 439, 176 (1995)
## 1 Introduction

One can explore theoretically the properties of deconfined matter by employing certain numerical methods in evaluating the theory of strong interaction – Quantum Chromodynamics (QCD) – to get, for instance, the equation of state . Phenomenological models can supplement such analyses to arrive at a more qualitative understanding of a system composed of quarks and gluons . Ultimately, however, one is interested in verifying whether such a state is realized in nature. It is usually thought that central collisions of heavy ions at high energies offer a unique way to create transiently deconfined matter under laboratory conditions. There is a fairly long list of proposals for how to measure the properties of such a novel state of matter. Dileptons represent penetrating probes which have long been considered a good messenger from the early stages of the deconfined matter resulting in ultrarelativistic heavy-ion collisions. Therefore, it should be possible to get direct information about the thermodynamical parameters of the hot and dense, strongly interacting system. The problematic aspect of dileptons in heavy-ion collisions is that there are quite a lot of different sources. Dalitz decays dominate the low invariant mass region, while the high mass region is governed by Drell-Yan (DY) dileptons. The preferred region for a thermal signal from thermalized QCD deconfined matter is the so-called intermediate mass region between the $`\varphi `$ and the $`J/\mathrm{\Psi }`$. In the resonance region below the $`\varphi `$ the vector meson decays provide a strong signal, and there the thermal radiation from hadron matter can probably also be best observed. In addition one has to be aware that, although most of the heavier thermal dileptons are produced in the very early stages of the hot matter, the production process continues during the whole evolution of the system, and only the space-time integrated yield can be measured. Therefore, some effort is needed to unfold a dilepton spectrum and to extract the wanted information. With increasing beam energies one generally expects higher matter temperatures and therefore a stronger signal of the thermal production. At the same time, however, other production channels for dileptons also gain importance. In particular the correlated semileptonic decays of open charm and bottom mesons become strong sources. To get an idea of the various competing sources, in fig. 1 we compare the beam energy dependence of the expected thermal signal with the DY yield and the dileptons from correlated charm and bottom decays. (For the details of our modeling we refer the interested reader to . Here we mention that at large beam energies we estimate the initial conditions of deconfined matter within the mini-jet model . The DY process is calculated with standard procedures. $`c\overline{c}`$ and $`b\overline{b}`$ pairs are produced in gluon fusion processes; in lowest order the heavy quarks propagate back-to-back in the transverse plane, and the hadronization into open charm and bottom mesons can be approximated by a $`\delta `$ fragmentation scheme.) At SPS energies the charm and DY yields are of the same order of magnitude, and the thermal signal is below both. With increasing beam energy or $`\sqrt{s}`$ the thermal signal increases more strongly than the DY yield.
On the other hand, the dilepton yields from charm and bottom decays increase even more strongly and result in a background which is up to two orders of magnitude higher than the thermal signal. Therefore it seems to be difficult to get thermal information from the simple invariant mass spectrum. In what follows we are going to discuss whether one can find kinematical cuts which enable one to discriminate the thermal signal from the background at very high beam energies such as envisaged at RHIC and LHC (sect. 2). We also discuss the change of the heavy quark spectra by the deconfined medium and the resulting impact on the single-lepton spectra stemming from charm and bottom decays (sect. 3). And finally we consider the influence of a dense hadron medium on the final open charm spectrum at present SPS energies and show that the dilepton spectra are correspondingly changed (sect. 4).

## 2 Perspectives for RHIC/LHC: dilepton spectra

Since the kinematics of heavy meson production and decay differs from that of thermal dileptons, one can expect that special kinematical restrictions superimposed on the detector acceptance will be useful for finding a window for observing thermal dileptons in the intermediate mass continuum region. As demonstrated recently , the measurement of the double differential dilepton spectra as a function of the transverse pair momentum $`Q_\perp `$ and transverse mass $`M_\perp =\sqrt{M^2+Q_\perp ^2}`$ within a narrow interval of $`M_\perp `$ offers a chance to observe thermal dileptons at LHC. The key observation here is to apply in addition a single-electron low-$`p_\perp `$ cut. Fig. 2 shows the double differential spectrum for a narrow interval of $`M_\perp `$ around 5.5 GeV and for $`p_\perp >2`$ GeV. One observes that these kinematical restrictions suppress the background at large values of $`Q_\perp `$, while the thermal signal obeys the so-called $`M_\perp `$-scaling and extends nearly up to the kinematical boundary. Another possibility to suppress the mentioned background processes is to implement only a large enough low-$`p_\perp `$ cut on single electrons . This opens a window for the thermal signal in the invariant mass distribution. Since the energy of individual decay electrons or positrons has a maximum of about 0.88 (2.2) GeV in the rest frame of the decaying $`D`$ ($`B`$) meson, one can expect to get a strong suppression of correlated decay lepton pairs by choosing a high enough low-momentum cut $`p_\perp ^{\mathrm{min}}`$ on the individual leptons in the mid-rapidity region. For thermal leptons stemming from deconfined matter there is no such upper energy limit, and at high temperature the thermal yield does not suffer such a drastic suppression by the $`p_\perp ^{\mathrm{min}}`$ cut as the decay background. The results of our lowest-order calculation of the invariant mass spectrum with such a $`p_\perp `$ cut are displayed in fig. 3, again for LHC energies. One observes that the thermal dilepton signal with a single-electron low-momentum cut-off $`p_\perp ^{\mathrm{min}}=`$ 3 GeV exhibits an approximate plateau in the invariant mass region 2 GeV $`\le M\le 2p_\perp ^{\mathrm{min}}`$. With both methods it is possible to extract information about the thermodynamical parameters of the very first stages of the deconfined matter . But due to the quite low rates, it is questionable whether these cuts are experimentally feasible. Recently, the ALICE-GSI group found that, via exact tracking and vertex reconstruction, one can suppress a substantial part of the open charm and bottom decay electrons in the midrapidity region.
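The kinematic origin of the plateau window can be illustrated with a toy Monte Carlo sketch (our construction; flat toy spectra and massless leptons, so it shows only the cut logic, not the rates of the paper): back-to-back pairs, like lowest-order DY or correlated heavy-quark decays, cannot populate $`M<2p_\perp ^{\mathrm{min}}`$ once both leptons pass the cut, whereas pairs with uncorrelated azimuths, like thermal ones, can:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
pt1, pt2 = rng.uniform(0.5, 6.0, (2, n))   # GeV; flat toy spectra (illustration only)
dy = rng.uniform(-1.0, 1.0, n)             # rapidity difference near midrapidity

def pair_mass(dphi):
    # standard invariant mass of a pair of (approximately) massless leptons
    return np.sqrt(2.0*pt1*pt2*(np.cosh(dy) - np.cos(dphi)))

pt_min = 3.0
passed = (pt1 > pt_min) & (pt2 > pt_min)   # single-electron low-momentum cut

M_rand = pair_mass(rng.uniform(0.0, 2*np.pi, n))  # uncorrelated azimuths ("thermal-like")
M_b2b  = pair_mass(np.full(n, np.pi))             # strictly back-to-back pairs

for label, M in (("uncorrelated", M_rand), ("back-to-back", M_b2b)):
    frac = np.mean(M[passed] < 2.0*pt_min)
    print(f"{label:13s}: accepted pairs with M < 2 pt_min: {frac:.2f}")
```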
With this tracking-based suppression, the need for stringent cuts is relaxed somewhat, and realistic count rates are to be expected. The announced heavy-ion programme of the CMS collaboration at LHC also looks quite interesting, as it can provide complementary muon spectra in the midrapidity region.

## 3 Energy losses of heavy quarks in deconfined matter

Charm and bottom quarks traversing deconfined matter radiate gluons and lose energy. As a consequence the transverse momenta of the parent quarks, of the resulting heavy mesons, and of the emerging decay leptons are diminished. Since the invariant mass of a dilepton pair is $`M^2=2p_\perp ^+p_\perp ^{}[\mathrm{cosh}(y^+-y^{})-\mathrm{cos}(\varphi ^+-\varphi ^{})]`$, smaller values of the $`p_\perp `$’s cause smaller values of $`M`$ ($`p_\perp ^\pm `$, $`y^\pm `$ and $`\varphi ^\pm `$ are the transverse momenta, rapidities, and azimuthal angles of the leptons). Therefore, via energy losses, the number of dileptons in the intermediate mass region is reduced. The effect is extensively studied in , and it turns out that, with realistic estimates of the energy loss, the background is not reduced below the thermal signal. However, we would also like to point out that an explicit measurement of the inclusive single-electron $`p_\perp `$ spectra from open charm and bottom decays contains valuable information . Namely, the energy losses change the resulting momentum distribution of the open charm and bottom mesons sufficiently that, as a consequence, the decay electrons exhibit a significantly modified $`p_\perp `$ spectrum (for details consult ). Since such an effect does not appear in pp collisions, the verification of a modified electron spectrum from identified charm and bottom decays would offer a hint of the creation of deconfined matter. The studies in demonstrate that tracking cuts offer the chance to get a “signal”-to-background ratio of 98%, where “signal” means here the decay electrons from charm and bottom. Therefore, such a measurement seems to be feasible with ALICE at LHC. To illustrate the order of magnitude of the expected effect we show in fig. 4 the transverse momentum spectrum of decay electrons from open charm and bottom mesons in a lowest-order calculation as described above . Only electrons which seem to come from a point outside a sphere of 150 $`\mu `$m around the primary vertex are counted. Although the charm and bottom contributions are of the same order of magnitude, one can fit the summed distribution by $`dN_e/dp_\perp \propto \mathrm{exp}(-p_\perp /T_e)`$ in the interval $`p_\perp =`$ 3…5 GeV, and finds a change of the slope parameter $`T_e`$ from 930 MeV (without energy loss) to 790 MeV (with energy loss). This difference is approximately the same as could be expected in the PHENIX acceptance at RHIC. One should stress that with such a measurement one can reveal the presence of a medium. Detailed information on thermodynamical state parameters is, however, difficult to extract: a variation of the initial temperature from 0.8 to 1.2 GeV causes only minor changes of the slopes of the single-electron $`p_\perp `$ spectra.

## 4 Dileptons in the intermediate mass region at CERN-SPS

In present SPS experiments ($`\sqrt{s}=`$ 15…20 GeV) the CERES collaboration reports a dilepton excess over the known hadronic cocktail in S + Au and Pb + Au collisions at invariant masses $`M<1`$ GeV , while the detailed shape of the spectrum is still a matter of debate; e.g.,
it might reflect an in-medium modified $`\rho `$ spectral function. Similarly, in the intermediate mass region as accessible within the acceptances of the HELIOS-3 and NA38/50 experimental set-ups, the conventional sources Drell-Yan and open charm decays, known from pp collisions, also seem not to account for the observed data, i.e. there is an excess too . The NA50 data for central Pb-Pb collisions can be explained by an enhanced charm production . A possible source could be a pronounced nuclear anti-shadowing of gluons . On the other hand, the charmed mesons can experience a modification of their primordial spectrum via interactions with the dense hadron medium before freezing out. As a working hypothesis one can assume that the transverse open charm meson spectra in central heavy-ion collisions at CERN-SPS look like the other hadron spectra. Indeed, as shown in , the available transverse momentum spectra of $`\pi ^\pm `$, $`K^\pm `$, $`K_s^0`$, $`p^\pm `$, $`\mathrm{\Lambda }`$, $`\overline{\mathrm{\Lambda }}`$, and $`d`$ at midrapidity can be described by a unique freeze-out temperature of 120 MeV and a unique transverse flow with an average velocity of 0.41 c. Following a suggestion of , one can give the charm mesons a randomly oriented thermal kick, which mimics the thermalization process. As a consequence the resulting decay electron spectrum is modified, as displayed in fig. 5. One observes that, due to the very acceptance of the NA50 experiment, a change in the kinematical distributions of the particles can lead to an apparent excess in the measured yield. This effect deserves further studies, e.g., by analyzing the transverse dilepton spectra in the same mass region, which are presently being prepared by the NA38/50 collaboration . As a measure of the theoretical uncertainties we also display in fig. 5 the effect of changing the fragmentation scheme to Lund fragmentation with a Peterson function ($`ϵ=0.02`$) and $`m_c=1.5`$ GeV.

## 5 Summary

It is obvious that dileptons are very interesting and promising signals if one wants to learn about the physics of highly excited, strongly interacting matter. It is also clear that a great deal of effort has to be put into unfolding the spectra to extract the desired information. We show that suitable kinematical cuts can suppress the background dilepton yield. For an unambiguous identification of the thermal dilepton signal, however, an explicit measurement of open charm and bottom would be preferable.
no-problem/9902/hep-ph9902407.html
ar5iv
text
# 1 Introduction ## 1 Introduction Recently, Dienes, Dudas and Gherghetta (DDG) have suggested the intriguing possibility that grand unification may occur at intermediate or even low energy scales in models with extra spacetime dimensions compactified on orbifolds of radii $`R`$. Above the compactification scale $`\mu _0=1/R`$, the vacuum polarization tensors for the gauge fields of the minimal supersymmetric standard model (MSSM) receive finite corrections from a tower of Kaluza-Klein (KK) excitations which contribute at one loop. As a consequence, the gauge couplings develop a power-law rather than logarithmic dependence on the ultraviolet cut off of the theory, $`\mathrm{\Lambda }`$ . While this is not running in the conventional sense, the gauge couplings nonetheless evolve rapidly as a function of $`\mathrm{\Lambda }`$, so that it is possible to achieve an accelerated unification. In the minimal scenario proposed by DDG, all the non-chiral MSSM fields, the two Higgs doublets and the gauge multiplets, live in a $`4+\delta `$ dimensional spacetime, and have an associated tower of KK excitations. The chiral MSSM fields are assumed to lie at fixed points of the orbifolds, and thus have no KK towers. This is the simplest way of avoiding the difficulties associated with giving mass to chiral KK states. DDG demonstrated that an approximate gauge unification could be achieved in this scenario, at a grand unification scale $`M_{\mathrm{GUT}}`$ that is much smaller than its usual value in supersymmetric theories, $`2\times 10^{16}`$ GeV. However, as pointed out in Ref. , strict comparison to the low energy data reveals that for TeV-scale compactifications, the DDG model predicts a value for $`\alpha _3(m_z)`$ that is higher than the prediction in conventional unified theories, which is already $``$5 standard deviations higher than the experimental valueUsing the same two loop code and input values described later in this paper, we find that $`\alpha _3(m_z)0.1276`$ in conventional supersymmetric unified theories, compared to the world average, $`0.1191\pm 0.0018`$ .. Assuming that the unification point coincides with the string scale, then a specific model of string scale threshold corrections is required before one can claim that unification in the DDG model is actually achieved. It is the purpose of this paper to point out that there are a number of simple variations on the DDG proposal that achieve gauge unification much more precisely than the minimal scenario described above. To begin, however, let us consider the variations that are as successful as the minimal case. As pointed out by DDG, one possibility is to allow $`\eta `$ generations of matter fields to experience extra dimensions, and to add to the theory their chiral conjugate mirror fields, so that suitable KK mass terms may be formed. Assuming that the orbifold is $`S^1/Z_2`$, then one may take the mirror fields to be $`Z_2`$ odd, so that unwanted zero modes are not present in the low-energy theory. Given that the KK excitations of the matter fields form complete SU(5) multiplets, it is perhaps not surprising that if unification is achieved for $`\eta =0`$, it will also be preserved for $`\eta 0`$, at least for some range of $`\mu _0`$. What is less obvious is that the KK excitations of the matter fields may be chosen to form incomplete SU(5) multiplets, and an approximate unification may still be preserved if only some of the MSSM gauge and Higgs fields experience extra dimensions. 
In Section 2, we will present three models that demonstrate (i) that it is not necessary for all gauge groups to live in the higher dimensional bulk in order to achieve unification, and (ii) a precise unification may sometimes be obtained without introducing any exotic matter, beyond the mirror fields described aboveFor other possibilities, see Ref. .. In Section 3 we study these cases quantitatively, taking into account weak-scale threshold corrections, and two-loop running up to the compactification scale. For TeV scale compactifications , we will see that all of the nonminimal scenarios we present in this paper unify more precisely than the minimal DDG scenario, and do not require large threshold corrections at the unification scale. In Section 4 we summarize our conclusions, and make some brief comments on the phenomenological implications of these models when the compactification scale is low. ## 2 Three scenarios We assume $`4+\delta `$ spacetime dimensions, with $`\delta `$ dimensions each compactified on a $`Z_2`$ orbifold of radius $`1/\mu _0`$. The fields that experience extra dimensions are periodic in the $`\delta `$ new spacetime coordinates $`y_1\mathrm{}y_\delta `$, and are either even or odd under $`\stackrel{}{y}\stackrel{}{y}`$. For example, in the case where $`\delta =1`$, these ‘bulk’ fields have expansions of the form $$\mathrm{\Phi }_+=\underset{n=0}{\overset{\mathrm{}}{}}\mathrm{cos}(\frac{ny}{R})\mathrm{\Phi }^{(n)}(x^\mu ),$$ $$\mathrm{\Phi }_{}=\underset{n=1}{\overset{\mathrm{}}{}}\mathrm{sin}(\frac{ny}{R})\mathrm{\Phi }^{(n)}(x^\mu )$$ (2.1) where $`n`$ indicates the KK mode. The only other fields in the theory are those which live at the orbifold fixed points $`y=0`$ or $`y=\pi R`$, and have no KK excitations. The effect of a tower of KK states on the running of the MSSM gauge couplings was computed by DDG, and is given in a useful approximate form by $`\alpha _i^1(\mathrm{\Lambda })`$ $`=`$ $`\alpha _i^1(m_z){\displaystyle \frac{b_i}{2\pi }}\mathrm{ln}({\displaystyle \frac{\mathrm{\Lambda }}{m_z}})+{\displaystyle \frac{\stackrel{~}{b}_i}{2\pi }}\mathrm{ln}({\displaystyle \frac{\mathrm{\Lambda }}{\mu _0}})`$ (2.2) $``$ $`{\displaystyle \frac{\stackrel{~}{b}_iX_\delta }{2\pi \delta }}\left[({\displaystyle \frac{\mathrm{\Lambda }}{\mu _0}})^\delta 1\right].`$ (2.3) Here the $`\stackrel{~}{b}_i`$ are the beta function contributions of a single KK level, and $`X_\delta `$ is given by $$X_\delta =\frac{2\pi ^{\delta /2}}{\delta \mathrm{\Gamma }(\delta /2)}.$$ (2.4) For the scenarios considered by DDG, these beta functions are $$b_i=(\frac{33}{5},1,3)\stackrel{~}{b}_i=(\frac{3}{5},3,6)+\eta (4,4,4),$$ (2.5) where $`\eta `$ is the number of generations of matter fields that experience extra dimensions. DDG observe that a sufficient condition for gauge unification to be preserved is that the ratios $$B_{ij}=\frac{\stackrel{~}{b}_i\stackrel{~}{b}_j}{b_ib_j}$$ (2.6) be independent of $`i`$ and $`j`$. Thus, they point out that in the scenario above $$\frac{B_{12}}{B_{13}}=\frac{72}{77}0.94\text{ and }\frac{B_{13}}{B_{23}}=\frac{11}{12}0.92.$$ (2.7) We will now show that there are a variety of other models, each with a different set of MSSM fields living at the orbifold fixed points, that lead to $`B_{12}/B_{13}B_{13}/B_{23}1`$. First, notice that the $`\stackrel{~}{b}_i`$ of the minimal scenario can be decomposed into the contributions from the KK excitations of each MSSM field, as shown in Table 1. 
In Table 1, an overline denotes a mirror field required so that the given KK tower is vector-like. This table is useful in that it allows us to mix and match. We will do so taking into account the string constraint that bulk matter may only transform under bulk gauge groups. For example, consider a model with all leptons and gauge fields living in the bulk, but with the Higgs fields and quarks at the fixed points. The $`\stackrel{~}{b}_i`$ are given by $$\stackrel{~}{b}_i=(0,-4,-6)+3(9/5,1,0)=(27/5,-1,-6).$$ (2.8) In this case we find $$\frac{B_{12}}{B_{13}}=\frac{128}{133}\approx 0.96\text{ and }\frac{B_{13}}{B_{23}}=\frac{19}{20}\approx 0.95.$$ (2.9) As we will confirm explicitly in the next section, this scenario achieves unification more precisely than the minimal one. In Table 2, we present three scenarios with $`B_{ij}`$ ratios that are significantly better than in the minimal scenario. We indicate the gauge group when the corresponding gauge multiplet is a bulk field, H for both MSSM Higgs fields, and $`n\mathrm{\Phi }`$ for $`n`$ generations of an MSSM matter field $`\mathrm{\Phi }`$ ($`Q`$, $`U`$, $`D`$, $`L`$, or $`E`$). Note that it is possible in scenario 1 to exchange an L for an H; the vector-like tower of KK excitations associated with a zero-mode left-handed lepton field has the same effect on the $`\stackrel{~}{b}`$ as the tower associated with the MSSM Higgs fields. Scenarios 2 and 3 demonstrate that it is not necessary to assume that both the SU(2) and SU(3) gauge multiplets live in the higher-dimensional bulk in order to obtain a successful unification. As far as we are aware, this point has not been made in the literature. Only the third scenario involves extra matter, two SU(5) $`\mathrm{𝟓}+\overline{\mathrm{𝟓}}`$ pairs in which only the leptons live in the bulk. We assume that the exotic matter zero modes have a mass of $`m_{\mathrm{top}}`$ for the purpose of our subsequent analysis. More strikingly, scenarios 1 and 2 demonstrate that it is possible to achieve an improved unification in nonminimal models without the addition of any exotic matter multiplets, beyond the mirror fields required to render the KK towers of the matter fields vector-like. We will now consider all three scenarios quantitatively, and show that none requires large threshold corrections at the unification scale.

## 3 Numerical Results

Our numerical analysis of gauge unification in the scenarios listed in Table 2 is quite conventional. We adopt the $`\overline{MS}`$ values for the (GUT-normalized) inverse gauge couplings $`\alpha _1^{-1}(m_z)=58.99\pm 0.04`$ and $`\alpha _2^{-1}(m_z)=29.57\pm 0.03`$ that follow from the data in the 1998 Review of Particle Physics . We run these up to the top quark mass, where we then assume the beta functions of the supersymmetric standard model, and where we convert the gauge couplings to the $`\overline{DR}`$ scheme. We take into account threshold effects, due to varying superparticle masses, at the one-loop level, and running between $`m_{\mathrm{top}}`$ and the compactification scale $`\mu _0`$ at the two-loop level. We then use Eq. (2.3) above the scale $`\mu _0`$ to determine the unification point. Thus, our procedure is similar to that of Ref. , except that we allow for greater freedom in our choice of weak-scale threshold corrections. This procedure is iterated with trial values of $`\alpha _3(m_z)`$ until a suitable three-coupling unification is achieved. For each of the given scenarios we obtain a prediction for $`\alpha _3(m_z)`$ assuming no threshold corrections at the unification scale.
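A one-loop toy version of this iteration (our sketch: it replaces the two-loop running and weak-scale thresholds just described by pure one-loop evolution with Eq. (2.3), so the numbers are only indicative) already reproduces the accelerated unification and the large $`\alpha _3(m_z)`$ excess of the minimal scenario discussed below:

```python
from math import pi, log, gamma
from scipy.optimize import brentq

mz, mu0, delta = 91.19, 2000.0, 1          # GeV; mu0 = 2 TeV, delta = 1
b  = [33/5, 1, -3]                         # minimal scenario (eta = 0)
bt = [ 3/5, -3, -6]
Xd = 2*pi**(delta/2) / (delta*gamma(delta/2))

def ainv(i, ainv_mz, Lam):
    """alpha_i^{-1}(Lambda) from alpha_i^{-1}(m_z): one loop, Eq. (2.3)."""
    v = ainv_mz - b[i]/(2*pi)*log(Lam/mz)
    if Lam > mu0:
        v += bt[i]/(2*pi)*log(Lam/mu0) - bt[i]*Xd/(2*pi*delta)*((Lam/mu0)**delta - 1)
    return v

a1, a2 = 58.99, 29.57                      # alpha_{1,2}^{-1}(m_z)
Mgut = brentq(lambda L: ainv(0, a1, L) - ainv(1, a2, L), mu0, 1e9)
unified = ainv(0, a1, Mgut)
# invert Eq. (2.3) to get the alpha_3^{-1}(m_z) that meets the others at Mgut
a3inv_mz = (unified + b[2]/(2*pi)*log(Mgut/mz)
            - bt[2]/(2*pi)*log(Mgut/mu0)
            + bt[2]*Xd/(2*pi*delta)*((Mgut/mu0)**delta - 1))
print(f"M_GUT ~ {Mgut:.2e} GeV,  alpha_3(m_z) ~ {1/a3inv_mz:.3f}")  # far above 0.119
```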
While high-scale threshold corrections should generically be present, our approach allows us to test the assumption that they need not be large. In Fig. 1, we show the qualitative behavior of unification in scenarios 1, 2 and 3 by plotting the running couplings above a compactification scale of $`2`$ TeV, assuming the experimental value of $`\alpha _3(m_z)=0.1191\pm 0.0018`$ . Table 3 presents predictions for $`\alpha _3(m_z)`$ assuming either an intermediate or a low unification scale. We display results for $`\delta =1`$ and $`2`$, and for $`\mu _0=2`$ TeV and $`10^8`$ GeV. These choices are sufficient to understand the qualitative behavior of the results: as $`\delta `$ increases, the predictions for $`\alpha _3(m_z)`$ increase monotonically, while for increasing values of $`\mu _0`$, the predictions approach that of the MSSM without extra dimensions. Table 4 provides the predictions for $`\alpha _3(m_z)`$ including representative weak-scale threshold effects, in which we’ve either placed all the non-colored MSSM superpartners at $`1`$ TeV, with the rest at $`m_{\mathrm{top}}`$, or vice versa. Let us consider the results for each of the scenarios in turn:

* Minimal scenario: This is the $`\eta =0`$ scenario of DDG, which we include as a point of reference. The beta functions for this scenario are given in Eq. (2.5). Note that for $`\delta =1`$ and $`\mu _0=2`$ TeV the low-energy value of $`\alpha _3(m_z)`$ is $`\sim `$30 standard deviations above the experimental central value, $`0.1191\pm 0.0018`$ , and improves to $`\sim `$16 standard deviations if one assumes that the colored MSSM superpartners at the weak threshold are all at $`m_{\mathrm{top}}`$ while the noncolored sparticles are at $`\sim `$1 TeV. (Had we used the value of $`\alpha _3(m_z)`$ in the 1996 Review of Particle Physics, we would obtain results too high by about $`9.8`$ standard deviations; the experimental determination of $`\alpha _3`$ has since improved.) These results agree qualitatively with those in Ref. , where a different approximation for weak-scale threshold effects was used.

* Scenario 1: In this scenario, the gauge fields and leptons live in the bulk, while the Higgs fields and quarks live at orbifold fixed points. The beta functions for this scenario were given in Eq. (2.8). Notice that our previous observation, that this scenario satisfies the relation $`B_{12}/B_{13}=B_{13}/B_{23}=1`$ more accurately than the minimal case, does indeed translate into a better prediction for $`\alpha _3(m_z)`$. For $`\delta =1`$ and $`\mu _0=2`$ TeV, and assuming the same choice of weak-scale threshold corrections applied to the minimal scenario above, we find agreement with the experimental value of $`\alpha _3(m_z)`$ at the $`4`$ standard deviation level.

* Scenario 2: In this scenario, the SU(2) gauge multiplet and the Higgs fields are confined to the fixed point, while precisely one generation of right-handed up and down quarks, and three generations of right-handed leptons, live in the bulk. The KK beta function contributions are given by $$\stackrel{~}{b}_i=(28/5,0,-4).$$ (3.10) This scenario achieves unification much more precisely than the minimal one. In the case where $`\mu _0=2`$ TeV and $`\delta =1`$, the predicted value of $`\alpha _3(m_z)`$ is only 1.8 standard deviations below the experimental central value, ignoring all threshold corrections.
Allowing the superparticle spectrum to vary as in Table 4, we find this prediction varies between $$0.105<\alpha _3(m_z)<0.126.$$ (3.11) Thus, unification can be achieved in this model without any string scale threshold corrections. • Scenario 3: In this example, the SU(3) gauge multiplet is confined to the orbifold fixed point, and hence we are only allowed to place leptons and Higgs fields in the bulk. If we let $`\eta _E`$ and $`\eta _L`$ represent the number of right-handed and left-handed KK lepton excitations (including their chiral conjugate partners), then the constraint $`B_{12}=B_{13}=B_{23}`$ implies that $`\eta _E=(3\eta _L-13)/2`$, which has no solution for only three generations of left-handed lepton fields. However, there is a simple way to circumvent this problem. Notice that if we add SU(5) 5+$`\overline{\mathrm{𝟓}}`$ pairs in which only the lepton components live in the bulk, then the condition on $`\eta _E`$ and $`\eta _L`$ given above will hold, since differences in zero mode beta function pairs will remain unchanged. A solution may then be obtained by choosing $`\eta _L=5`$ and $`\eta _E=1`$, which implies the existence of two such 5+$`\overline{\mathrm{𝟓}}`$ pairs, when one generation of right-handed and three generations of left-handed MSSM lepton fields are assigned to the bulk. The beta functions for this scenario are then given by $$b_i=(43/5,3,-1)\text{ and }\stackrel{~}{b}_i=(24/5,2,0).$$ (3.12) Notice in this case that the SU(3) gauge coupling only evolves logarithmically, since $`\stackrel{~}{b}_3=0`$. Unification may be achieved at a low scale by virtue of the power-law evolution of $`\alpha _1^{-1}`$ and $`\alpha _2^{-1}`$, as can be seen from Fig. 1. This scenario is about as successful as scenario 2, predicting $`\alpha _3(m_z)`$ only $`2.0`$ standard deviations below the experimental central value, ignoring all threshold corrections, and assuming that the exotic matter fields have masses of $`m_{\mathrm{top}}`$. Allowing the sparticle mass spectrum to vary between $`m_{\mathrm{top}}`$ and 1 TeV, as in Table 4, we find that the scenario 3 prediction for $`\alpha _3(m_z)`$ varies between $$0.104<\alpha _3(m_z)<0.124,$$ (3.13) for $`\delta =1`$ and $`\mu _0=2`$ TeV. Again, unification is achieved without the need for threshold corrections at the high scale. Although this scenario does indeed involve some new matter fields, the choice is relatively minimal, and may be completely natural from the point of view of string theory. ## 4 Discussion What is interesting about the scenarios we’ve presented is that gauge unification can be achieved in so many different ways. Each of our scenarios unifies more precisely than the minimal DDG model, and none requires large (or in two cases any) threshold corrections at the unification scale. These models illustrate two other interesting points as well: (1) One can achieve unification when some of the standard model gauge groups are confined to a brane. (2) There are some models that unify more precisely than DDG that do not require any additional matter fields with exotic quantum numbers, beyond the vector-like KK towers of certain MSSM fields that are chosen to live in the bulk. Before concluding, we comment briefly on some of the other phenomenological implications of these scenarios when the compactification scale is low. For a more complete discussion of the phenomenology of standard model KK excitations in models with TeV scale compactification, we refer the reader to Refs. .
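Both the $`B_{ij}`$ ratios quoted for the individual scenarios and the lepton-tower counting constraint above are easy to verify with exact rational arithmetic. The sketch below (ours, not the authors' code) assumes the DDG definition $`B_{ij}=(\stackrel{~}{b}_i-\stackrel{~}{b}_j)/(b_i-b_j)`$, takes the minimal-scenario $`\stackrel{~}{b}_i=(3/5,-3,-6)`$ corresponding to Eq. (2.5) (not reproduced in this excerpt), and reconstructs the scenario-3 per-tower contributions from the text (an L tower contributes $`(3/5,1,0)`$, an E tower $`(6/5,0,0)`$, the Higgs pair acts like an L tower, and the bulk SU(2) gauge multiplet contributes $`-4`$ to $`\stackrel{~}{b}_2`$):

```python
# Exact-arithmetic cross-checks of the scenario beta functions quoted above.
from fractions import Fraction as F

B_MSSM = (F(33, 5), F(1), F(-3))     # MSSM zero-mode beta functions b_i

def B_ratios(b, bt):
    """B_ij = (bt_i - bt_j)/(b_i - b_j) for (i,j) = (1,2), (1,3), (2,3)."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    return tuple((bt[i] - bt[j]) / (b[i] - b[j]) for i, j in pairs)

scenarios = {
    "minimal":    (B_MSSM, (F(3, 5), F(-3), F(-6))),    # Eq. (2.5), assumed
    "scenario 1": (B_MSSM, (F(27, 5), F(-1), F(-6))),   # Eq. (2.8)
    "scenario 2": (B_MSSM, (F(28, 5), F(0), F(-4))),    # Eq. (3.10)
    "scenario 3": ((F(43, 5), F(3), F(-1)),             # Eq. (3.12)
                   (F(24, 5), F(2), F(0))),
}
for name, (b, bt) in scenarios.items():
    B12, B13, B23 = B_ratios(b, bt)
    print(f"{name:10s}  B12/B13 = {B12/B13}  B13/B23 = {B13/B23}")
# scenario 1 gives 128/133 and 19/20, as in Eq. (2.9); scenarios 2 and 3
# give exactly 1 for both ratios.

# Scenario-3 counting: scan eta_L and impose eta_E = (3*eta_L - 13)/2.
for eta_L in range(1, 8):
    if (3*eta_L - 13) % 2 or (3*eta_L - 13) < 0:
        continue                                        # eta_E must be a
    eta_E = (3*eta_L - 13) // 2                         # non-negative integer
    bt = (F(3, 5)*eta_L + F(6, 5)*eta_E + F(3, 5),      # L, E towers + Higgs
          eta_L + 1 - 4,                                # L + Higgs - gauge
          F(0))                                         # SU(3) on the brane
    print(f"eta_L = {eta_L}, eta_E = {eta_E}, B_ij = {B_ratios(B_MSSM, bt)}")
# the minimal solution eta_L = 5, eta_E = 1 gives bt = (24/5, 2, 0) and
# B12 = B13 = B23 = 1/2, consistent with Eq. (3.12).
```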
In scenarios 1 and 2 the gluon has a tower of KK excitations, which leads to a significant bound on the compactification scale. The KK gluon excitations are massive color octet vector mesons, with couplings both to zero mode gluons and to all the quarks. Thus, the KK gluons are in every way identical to flavor universal colorons, and are subject to the same bounds. Recall that in coloron models, one obtains a massive color octet from the spontaneous breaking of SU(3)$`\times `$SU(3) down to the diagonal color SU(3). In the case where the two SU(3) gauge couplings are equal, the coloron couples to quarks exactly like a gluon, or a KK gluon. The couplings of colorons or KK gluons to zero mode gluons are completely determined by SU(3) gauge invariance, and hence are also the same. Thus the relevant bound on the lowest KK gluon excitation is given by $`M_c>759`$ GeV at the 95% confidence level , which follows from consideration of the dijet spectrum at the Tevatron. This constraint places a lower bound on the scale for all the KK excitations in scenarios 1 and 2. We can obtain a similar bound on the compactification scale in scenario 3 from the production and hadronic decay of $`W`$ boson KK excitations, which have standard model couplings to the quarks: $`M_{W^{\prime }}>600`$ GeV . Other direct collider bounds on the compactification scale require a more detailed analysis, given the nonstandard $`W^{\prime }`$ and $`Z^{\prime }`$ couplings in our models. This issue will be considered elsewhere . Scenario 1 is particularly interesting when one takes into account that interaction vertices involving fields that all live in the higher dimensional bulk respect a conservation of KK number. (One can think of this as arising from the conservation of KK momentum following from translational invariance in the extra dimensions.) Hence, in scenario 1, the KK excitations of the electroweak gauge fields cannot couple to the lepton zero modes, and we obtain both $`Z^{\prime }`$ and $`W^{\prime }`$ bosons with otherwise standard couplings, that are naturally leptophobic! These states would likely be within the reach of the LHC for TeV scale compactifications. The other two scenarios present a more complicated phenomenology, since some generations of a given MSSM matter field live in the bulk while others live on the brane. It follows that the KK excitations of a standard model gauge field would have generation-dependent couplings to the zero mode matter fields, and may contribute to a variety of quark and lepton flavor-changing processes. Finally, it is worth pointing out that in scenario 3, the fact that the KK $`W`$ boson excitations can’t couple to zero mode left-handed lepton fields also leads to a leptophobic $`W^{\prime }`$. While the purpose of the present work was to focus on gauge unification in these nonminimal scenarios, a more quantitative discussion of the TeV scale phenomenology of the scenarios described here will be presented in a separate publication . Acknowledgments We are grateful to Keith Dienes for his comments on the manuscript, and thank Marc Sher and Carl Carlson for useful discussions. We thank the National Science Foundation for support under grant PHY-9800741.
# Temporal chaos in discrete one dimensional gravity model of traffic flow

Elman Mohammed Shahverdiev,<sup>1</sup><sup>1</sup>1e-mail:shahverdiev@lan.ab.az On leave from Institute of Physics, 33, H.Javid avenue, Baku 370143, Azerbaijan Department of Information Science,<sup>2</sup><sup>2</sup>2e-mail:elman@ai.is.saga-u.ac.jp Saga University, Saga 840, Japan Shin-ichi Tadaki, Department of Information Science,<sup>3</sup><sup>3</sup>3e-mail:tadaki@ai.is.saga-u.ac.jp Saga University, Saga 840, Japan There are mainly two approaches to traffic flow dynamics: at a microscopic level, the system can be described in terms of variables such as the position and velocity of each vehicle (optimal velocity model [1,2], cellular automaton model [3,4]); at a macroscopic level, important variables include the car density, average velocity, the rate of traffic flow, and the total number of trips between two zones (mean field theory [5-11], origin-destination (or the so-called gravity) model ). The gravity model originates from an analogy with Newton’s gravitational law . As a rule, traffic flow dynamics is nonlinear. Nowadays it is well known that some deterministic nonlinear dynamical systems, depending on the values of the system’s parameters, exhibit unpredictable, chaotic behavior; see, e.g., [14-16] and references therein. The main reason for such behavior is the instability of the nonlinear system. Such instability was in general considered undesirable not only in traffic flow dynamics, but also in dynamical systems in mechanics, engineering, etc., due to the frightening nature of unpredictability, as the instability could lead to chaos. The stability or instability of traffic flow dynamics is a highly valued concept in traffic management and planning. Since the pioneering papers [17,18] on chaos control theory, the attitude toward chaos has changed dramatically. Nowadays, in some situations chaotic behavior is even considered an advantage. In general, the main idea behind chaos control theory is to modify a nonlinear system’s dynamics so that previously unstable states (fixed points, periodic states, etc.) now become stable. In practice such modifications could be realized by changing the system’s parameters, through some feedback or nonfeedback mechanism, or even by changing the dynamical variables of the system in an “appropriate” manner at the “due” time (adaptively or nonadaptively), etc. The interest in chaos control theory is due to applications of this phenomenon in secure communication, in the modelling of brain activity and recognition processes, etc. Also, methods of chaos control may result in the improved performance of chaotic systems. (For the latest comprehensive review of chaos and its control see the Focus Issue and references therein; also see [20-22].) In this Brief Report we report on possible chaotic behavior in the one dimensional gravity model of traffic flow dynamics. The gravity model assumes that the number of trips between zones (origins and destinations) depends on the number produced at and attracted to each zone, and on the travel cost between zones. In the dynamic formulation of the gravity model the travel costs are a function of the number of trips between zones. According to , the discrete dynamic trip distribution gravity model takes the form $$x_{ij}(t+1)=f(c_{ij}(t)),$$ (1) where $`x_{ij}`$ is the relative number of trips from zone $`i`$ to zone $`j`$, normalised so that $`\sum _{ij}x_{ij}=1`$, and $`c_{ij}`$ is the travel cost from zone $`i`$ to zone $`j`$ given the trips $`x_{ij}`$.
$$c_{ij}(t)=c_{ij}^0(1+\alpha (\frac{x_{ij}}{q_{ij}})^\gamma ),$$ (2) where $`c_{ij}^0`$ is the uncongested travel cost, $`q_{ij}`$ is the relative capacity of the roads between origin and destination, and $`\alpha `$ and $`\gamma `$ are constants. $`f(c_{ij})`$ is a function which relates the number of trips to the travel costs. The following cost function $$f(c_{ij})=c_{ij}^\mu \mathrm{exp}(-\beta c_{ij}),$$ (3) where $`\mu `$ and $`\beta `$ are constants, is referred to as the combined cost function and unites both the power and exponential forms of cost functions. It is known that for continuous dynamical systems chaotic behavior requires three or more dynamical variables. For discrete systems chaos is possible even in one dimensional systems. According to , for the unconstrained (the model is unconstrained in the sense that it cannot guarantee that the number of trips originating from or terminating at a given zone has a predetermined value) and singly-constrained (the model is singly-constrained in the sense that it supposes that either the number of trips originating from or terminating at a given zone has a predetermined value) gravity models, the dynamics of the trip distribution model in the one dimensional case could be written as $$x(t+1)=Af(x(t))=A(c^{(0)})^\mu (1+\alpha (\frac{x(t)}{q})^\gamma )^\mu \mathrm{exp}(-\beta c^{(0)}(1+\alpha (\frac{x(t)}{q})^\gamma )),$$ (4) where $`A`$ is the normalizing constant factor; the definitions of the other constants are given above. The authors of claim that the one dimensional gravity model does not exhibit chaotic behavior. We will show that this model can be reduced to the one dimensional chaotic model known as the exponential map in ecology : $$x(n+1)=f(x(n))=x(n)\mathrm{exp}(r(1-x(n))),$$ (5) where $`r`$ is the positive control parameter of the chaotic mapping (5). Indeed, let us take $`\mu =\gamma =1`$. Then the mapping (4) could be written in the following form: $$x(t+1)=m_1(1+mx(t))\mathrm{exp}(-\beta c^{(0)}mx(t)),$$ (6) where $`m_1=Ac^{(0)},m=\frac{\alpha }{q}`$. Further, by the linear transformation of variables $`y=1+mx(t)`$ the mapping (6) could be related to the mapping $$y(t+1)=m_2y(t)\mathrm{exp}(\beta c^{(0)}(1-y(t))),$$ (7) where $`m_2=m_1\mathrm{exp}(\beta c^{(0)})`$. Comparing (5) and (7) one can see that in the gravity model $`\beta c^{(0)}`$ could be taken as a control parameter. Thus we have shown that temporal chaotic behavior is possible even in the discrete one dimensional gravity model for traffic flow dynamics. Moreover, this chaotic behavior could be controlled by the constant rate harvesting approach developed in for unimodal one dimensional mappings, including (5), or by some other methods for one dimensional dynamical systems. Acknowledgments The author thanks the JSPS for the Fellowship.
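To make the claim concrete, the short simulation below (our illustration, not part of the original Brief Report) iterates mapping (7) with $`m_2=1`$ and estimates the Lyapunov exponent, the orbit average of $`\mathrm{ln}|f^{}(y)|`$; positive values, which appear for $`\beta c^{(0)}`$ above roughly 2.7, signal the chaotic regime of the exponential map.

```python
# Lyapunov-exponent sketch for the map y -> m2 * y * exp(b * (1 - y)),
# with b playing the role of beta * c0 in Eq. (7).
import math

def step(y, b, m2=1.0):
    return m2 * y * math.exp(b * (1.0 - y))

def lyapunov(b, m2=1.0, y=0.5, n_skip=1000, n=20000):
    for _ in range(n_skip):                 # discard the transient
        y = step(y, b, m2)
    acc = 0.0
    for _ in range(n):
        # f'(y) = m2 * exp(b(1-y)) * (1 - b*y)
        acc += math.log(abs(m2 * math.exp(b*(1.0 - y)) * (1.0 - b*y)))
        y = step(y, b, m2)
    return acc / n

for b in (1.5, 2.0, 2.7, 3.0, 3.5):
    print(f"beta*c0 = {b:3.1f}  Lyapunov ~ {lyapunov(b):+.3f}")
```

Taking $`m_2\ne 1`$ shifts the fixed point and the bifurcation thresholds, but $`\beta c^{(0)}`$ retains its role as the control parameter.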
# Intrinsic Narrow Absorption Lines in HIRES/Keck Spectra of a Sample of Six Quasars<sup>*</sup><sup>*</sup>*Based in part on observations obtained at the W. M. Keck Observatory, which is jointly operated by the University of California and the California Institute of Technology. ## 1 Introduction Absorption by an ionized medium close to the central engine is now recognized as a fairly common feature in the spectra of active galactic nuclei (hereafter AGNs). In Seyfert 1 galaxies, the signature of the ionized gas takes the form of narrow UV resonance absorption lines from highly–ionized species, such as Nv and Civ (Crenshaw, Maran, & Mushotzky 1998) as well as soft X–ray absorption edges from highly ionized species such as Ovii and Oviii (Reynolds (1997); George et al. (1998)). Similar UV absorption lines<sup>1</sup><sup>1</sup>1Here, we consider only absorption lines which are close to the redshift of the quasar, i.e. $`z_{\mathrm{abs}}\approx z_{\mathrm{em}}`$, not lines at considerably lower redshifts than that of the quasar. are also observed in quasars (e.g. Foltz et al. (1986); Anderson et al. (1987); Young, Sargent, & Boksenberg 1982; Sargent, Boksenberg, & Steidel 1988; Steidel & Sargent (1991)). Quasars, however, are perhaps better known for their broad absorption lines, whose widths often reach 50,000 km s<sup>-1</sup> (e.g., Turnshek et al. (1988); Weymann et al. (1991)). The high ionization state of the absorbers and the often blueshifted absorption lines (relative to the broad emission lines) suggest that the absorber is a fairly tenuous medium outflowing from the AGN, most likely accelerated by radiation pressure (Arav et al. (1995)). The absorber could plausibly be associated with an accretion–disk wind (e.g., Murray et al. (1995); Krolik & Kriss (1995)) analogous to those observed in many cataclysmic variables (e.g., Drew (1990); Mauche et al. (1994)). This can only be a broad analogy, however, since a number of observational clues suggest that the picture in AGNs is neither as simple nor as “clean” as in cataclysmic variables. For example, the fact that absorption line depths often exceed either the net continuum level or the net emission–line level on which they are superposed suggests that the absorber does not lie between the broad emission–line region (BELR) and the more compact continuum source since it intercepts both continuum and emission–line photons. That is, it lies either within the BELR or away from both regions. The relation between the absorption–line region and the ubiquitous AGN emission–line regions is unknown, although it has been suggested that they may represent different phases or layers of the same medium (Shields, Ferland, & Peterson 1995; Hamann et al. (1998)). From an observational perspective, it is important to determine the geometry of the absorbing gas and its physical conditions because these constitute useful constraints on theoretical models of the ionization and acceleration of the outflow. Moreover, it is also interesting to ask whether there are any trends in the properties of the absorption lines with AGN subclass since such trends serve as constraints on scenarios of fundamental differences between such subclasses (e.g., the radio–loud/radio–quiet AGN dichotomy). It is, therefore, encouraging that some progress has been made in recent years in answering the above questions. First, observations of large samples of quasars indicate that broad absorption lines (BALs) are found preferentially among radio–quiet quasars (Stocke et al.
(1992)), while the preferred hosts of associated <sup>2</sup><sup>2</sup>2“Associated” absorption lines, according to the definition of Anderson et al. (1987), are those with velocities within 5000 km s<sup>-1</sup> of the quasar rest frame. In principle, associated absorption lines need not be intrinsic to the quasar. narrow absorption lines (NALs) are radio–loud quasars (Foltz et al. (1986); Anderson et al. (1987)). These trends are by no means well–established and have been called into question by the discovery of radio–loud BAL quasars (Becker et al. (1997); Brotherton et al. (1998)) and radio–quiet intrinsic NAL quasars (Hamann, Barlow, & Junkkarinen 1997a; Hamann et al. 1997b). We note, however, that the Foltz et al. (1986) and Anderson et al. (1987) results discuss the preference of strong associated NALs for radio–loud quasars, while also demonstrating that weak associated NALs occur in both radio–quiet and radio–loud quasars. Second, observations of selected NAL quasars at high spectral resolution have shown that the absorbers only partially cover the background BELR and/or continuum source (Barlow & Sargent (1997); Hamann et al. 1997a). In another quasar, the NALs were found to vary on a time scale of only a few months (Hamann et al. 1997b). These observations demonstrate that the NALs in these objects are intrinsic and also provide useful constraints on their geometry. In this paper, we present five associated NALs along the lines of sight toward six quasars observed at high spectral resolution. These were discovered serendipitously in a survey of intervening Mgii absorbers ($`z_{\mathrm{abs}}\ll z_{\mathrm{em}}`$; Churchill (1997)). In three of the five cases, the NAL systems exhibit the partial coverage signature of absorbing gas intrinsic to the quasar. In the process of analyzing the data, we have refined and expanded the method of Barlow & Sargent (1997) to treat the coverage fractions of the continuum source and the BELR separately. Although it is, in principle, impossible to determine the two coverage fractions independently using an absorption doublet, it is possible to place interesting constraints on them. In particular, we find that in two of the quasars in this small collection, the continuum source must be partially covered by the absorber. In §2 of this paper, we describe the selection of the original quasar sample, the observations and data reduction, and the properties of the associated NALs that we detected and other relevant properties of their hosts. In §3 and §4, we show that, in three of these systems, the relative strengths of the absorption lines provide evidence for partial coverage of the background source by the absorber. In §4, we determine the effective coverage fractions by applying the method of Barlow & Sargent (1997). We also develop and apply a refined version of this method in which we treat the coverage fraction of the continuum source and broad BELR separately. In §5, we discuss the implications of the observational results, possible partial coverage mechanisms, and the general picture of intrinsic absorbers in different types of active galaxies. In §6, we summarize our findings and present our final conclusions. ## 2 Sample of Objects, Observations, and Data The associated NAL systems reported here were found in the course of a large statistical study of intervening Mgii absorbers during which spectra of a sample of 25 quasars were obtained (Churchill (1997)).
In six objects from this sample, the spectra included the broad Civ emission line and revealed the presence of five associated NAL systems within $`5000`$ km s<sup>-1</sup> of the emission–line redshift. Another system at $`\mathrm{\Delta }v=-13,800`$ km s<sup>-1</sup> toward Q $`0450-132`$ could be “associated” (see Petitjean, Rauch, & Carswell 1994), but it does not satisfy the traditional velocity criterion. These six quasars are listed in Table 1 along with some of their basic properties. The emission redshifts and emission line fluxes for the quasars were taken from Sargent et al. (1988) and from Steidel & Sargent (1991). Because of the way the original sample was selected, the final collection of six quasars whose Civ emission lines were observed is unbiased towards the presence of associated Civ NALs. In two of the six objects the spectra include additional absorption lines, namely Siiv and Nv in Q $`0450-132`$ and Siiv in Q $`1213-003`$. Since the radio properties of these quasars are important to our later discussion, we have compiled information about their radio properties based on reports in the literature and included it in Table 1 along with the appropriate references. The observed 5 GHz radio flux was converted to the rest frame of the source using the measured spectral index $`\alpha `$, if available, or assuming $`\alpha =0.5`$ otherwise (where $`f_\nu \propto \nu ^{-\alpha }`$). The radio power was computed assuming a Hubble constant of 50 $`\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and a deceleration parameter of $`\frac{1}{2}`$, following Kellermann et al. (1994). The rest–frame 4400 Å optical flux was calculated from the observed $`V`$ magnitude assuming a flat optical–UV spectrum (i.e., $`\alpha =0`$; Elvis et al. (1994)). To classify an object as radio–loud or radio–quiet, we adopted the criteria of Kellermann et al. (1994) according to which radio–loud quasars have either a 5 GHz power of $`P_{\mathrm{5}\mathrm{GHz}}>10^{26}\mathrm{W}\mathrm{Hz}^{-1}`$ or a 5 GHz–to–4400 Å flux density ratio of $`R>10`$. According to these criteria, three of the six objects are radio–quiet and three are radio loud. The observations were carried out with the HIRES spectrometer (Vogt et al. (1994)) on the Keck I telescope. The spectra have a resolution of 6.6 km s<sup>-1</sup> with a sampling rate of 3 pixels per resolution element. They were reduced with the IRAF<sup>3</sup><sup>3</sup>3IRAF is distributed by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract to the NSF. apextract package for echelle data. The detailed steps for the reduction are outlined in Churchill (1997). The details of the observations are listed in Table 2 which also includes the parameters of the associated NAL systems that we detected along with the lowest measurable equivalent widths ($`5\sigma `$ limits) in the corresponding spectrum. In this table, we give the mean velocity of each absorption–line system relative to the peak of the broad emission line. NALs were found at both negative (blueshifted) and positive (redshifted) velocities relative to the Civ broad emission–line peak. A positive velocity, such as seen in PG $`1222+228`$ and PG $`1329+412`$, need not imply infall toward the quasar since the true velocity of the quasar is more accurately measured by narrow forbidden lines than by the broad emission line peaks.
For example, in a picture in which the NAL gas is outflowing from the central engine, the NAL redshift would imply a smaller outflow velocity of the absorbers relative to the broad emission–line gas. Figures 1–5 present the velocity aligned absorption profiles of the Civ doublets and of other detected transitions (Siiv and Nv) of the five associated systems that were covered in the HIRES/Keck I spectra. The Civ$`\lambda `$1550 profile of the $`\mathrm{\Delta }v=-60`$ km s<sup>-1</sup> system of Q $`1213-003`$ (Figure 3) is truncated near 0 km s<sup>-1</sup> because it falls off the edge of the CCD. In fact, because of the redshift of Q $`1213-003`$ its Civ emission line is in the observed wavelength range above 5100 Å, where the echelle orders are separated by gaps. As a result, there are several gaps in the coverage of the broad emission–line profile. None of the other objects suffer from this problem, however. The kinematic structure of the NAL profiles is varied and has multiple components. It is not distinguishable from the structure found in intervening absorbers. However, the strong Nv absorption in Q $`0450-132`$ is indicative of the typically high metallicities of associated systems (Petitjean et al. (1994)) in contrast to the lower metallicities of intervening systems. In §3 and §4, we demonstrate that in three of the five systems there is evidence that the absorbing clouds only partially cover the source, which then requires them to be in relatively close proximity to the source. It should be noted that the $`\mathrm{\Delta }v=-60`$ km s<sup>-1</sup> system of Q $`1213-003`$ has been classified as intrinsic by Sowinski, Schmidt, & Hines (1997), but we have not been able to find the relevant report in the literature. ## 3 Identification of Anomalous Optical Depths In three of the quasars displaying NALs we find that the optical depth ratio of the two members of the Civ doublet did not have the value expected from atomic physics, namely $`\frac{\tau _1}{\tau _2}=\frac{f_1\lambda _1}{f_2\lambda _2}`$ (Savage & Sembach (1991); where $`f_1`$ and $`f_2`$ are the oscillator strengths and $`\lambda _1`$ and $`\lambda _2`$ are the rest wavelengths of the doublet members). To demonstrate this discrepancy, we have fitted the weaker member of each doublet with a combination of Voigt profiles, scaled the model according to what atomic physics dictates for the stronger member of the same doublet, and compared it to the data. The results of this exercise are illustrated in Figures 1–5, where we show the models as solid lines superposed on the data. The bottom panel in each set shows the deviation of the data from the predicted profile of the strongest member of each doublet in each resolution element, scaled by the error bar. We see three obviously discrepant cases: the $`\mathrm{\Delta }v=-2011`$ km s<sup>-1</sup> system toward Q $`0450-132`$ (Figure 1), the $`\mathrm{\Delta }v=+1482`$ km s<sup>-1</sup> system toward PG $`1222+228`$ (Figure 4), and the $`\mathrm{\Delta }v=+314`$ km s<sup>-1</sup> system toward PG $`1329+412`$ (Figure 5). Neither of the associated systems toward Q $`1213-003`$ appear to deviate from the atomic physics prediction. We consider five explanations for the apparent violation of the optical depth scalings in these three associated systems: (1) an instrumental/reduction effect such as scattered light which mimics a source function (e.g., Crenshaw et al.
(1998)); (2) blends with other transitions at other redshifts; (3) a source function that fills in the line cores (Wampler, Chugai, & Petitjean 1995); (4) incomplete occultation of the continuum and/or emission–line source by the absorbers (Wampler, Bergeron, & Petitjean 1993; Wampler et al. (1995); Hamann et al. 1997a,b; Barlow & Sargent (1997)); and (5) scattering of background photons back into the light path. The first is an unlikely resolution to the problem since the optical depth scaling is obeyed by all 64 intervening absorption–line systems (including transitions from Civ, Mgii, and Feii) observed with the same instrumental setup and reduced in the same manner (Churchill et al. (1998)). In addition, we have searched the spectra of these quasars for an extensive set of transitions at the redshifts of all other known systems, and found that none of them affect the profiles studied here. This rules out the second possibility. Some forms of the source function explanation could be viable while others are not; we consider these further in §5.2. The fourth and fifth explanations have identical signatures, since we can think of scattering as a form of “effective partial coverage”. Intuitively, the partial coverage effect will cause models that assume full coverage to over–predict the strength of the stronger member of a doublet, relative to the weaker. Physically, the intensity observed in the core of an absorption profile is the composite of two types of light paths: (1) light that is not occulted by the absorbing clouds or is scattered back into the light path, and (2) the light that successfully passes through the absorbers. The unocculted fraction (in normalized units) is nearly the same for both members of the doublet provided the emission line is not too steep over the wavelength range covered by the absorption (see Appendix). This results in a larger increase in the flux, relative to the full coverage expectation, detected in the deeper blue member of the doublet relative to the red member. The most straightforward example is a situation in which the absorbers are optically thick such that the unattenuated flux is negligible at either member of the doublet. With partial coverage, one still observes a significant flux which is equal in the two members. ## 4 Partial Coverage ### 4.1 Effective Coverage Fraction $`C_\mathrm{f}`$ To pursue the partial coverage interpretation further we have computed the effective coverage fraction following Barlow & Sargent (1997) and Hamann et al. (1997a). The effective coverage fraction is computed by considering the fraction of all photons of a given wavelength that do not pass through the absorbing gas. This calculation assumes a single, extended continuum and emission–line source and a single effective optical depth appropriate to the absorbing clouds at the corresponding velocity. If $`\tau `$ is the effective optical depth of an absorbing cloud that occults a fraction $`C_\mathrm{f}`$ of the source, the observed residual intensity, $`R`$, in normalized units is: $$R(\lambda )=[1-C_\mathrm{f}(\lambda )]+C_\mathrm{f}(\lambda )e^{-\tau (\lambda )}.$$ (1) The first term on the right–hand side represents the photons traveling along unocculted lines of sight while the second term arises from photons that survive absorption due to the finite optical depth of the cloud.
For two multiplet transitions, the conjunction of the two residual intensities with the optical depth scaling yields the coverage fraction through the solution to: $$\left[\frac{R_\mathrm{r}-1+C_\mathrm{f}}{C_\mathrm{f}}\right]^{\frac{f_\mathrm{b}\lambda _\mathrm{b}}{f_\mathrm{r}\lambda _\mathrm{r}}}=\frac{R_\mathrm{b}-1+C_\mathrm{f}}{C_\mathrm{f}},$$ (2) where the subscripts “r” and “b” refer to the properties of the redder and bluer transitions, respectively. For the resonant UV doublets, such as Civ $`\lambda \lambda 1548,1550`$, $`\frac{f_\mathrm{b}\lambda _\mathrm{b}}{f_\mathrm{r}\lambda _\mathrm{r}}\approx 2`$ allowing an analytic solution: $$C_\mathrm{f}(v)=\frac{\left[R_\mathrm{r}(v)-1\right]^2}{R_\mathrm{b}(v)-2R_\mathrm{r}(v)+1},$$ (3) where we have converted the wavelength dependence of the coverage fraction to a velocity dependence. In the application of this method to the data, the calculation of $`C_\mathrm{f}`$ was performed in each resolution element along each of the resonant doublet profiles for the three associated systems with anomalous optical depth ratios. The results are illustrated as points with $`1\sigma `$ error bars in the bottom window of each panel of Figures 6–8. Points are only plotted if: (1) the stronger transition is detected, using the aperture method (Lanzetta, Turnshek, & Wolfe 1987), at the $`3\sigma `$ level; (2) the derived coverage fraction is physical ($`0<C_\mathrm{f}<1`$); (3) the $`1\sigma `$ error in $`C_\mathrm{f}`$, $`\sigma _{C_\mathrm{f}}<0.5`$; and (4) the fractional uncertainty in $`C_\mathrm{f}`$ is less than unity. A notable trend is that the coverage fraction always drops toward the wings of lines. This is an effect of the instrument’s line spread function which tends to “wash out” the wings and artificially cause anomalous optical depth ratios. We have explored this effect by simulating observed spectra with normal optical depth ratios and analyzing them in the same way as the data. As shown in Figure 9, synthetic spectra which are not convolved with the line spread function exhibit no variation in the coverage fraction across the single component Voigt profile. When convolved with the instrumental resolution function, even a profile with $`C_\mathrm{f}=1`$ will show $`C_\mathrm{f}<1`$ in its wings due to this convolution effect. The signal–to–noise ratio of the observed spectra was not high enough to merit an attempt to deconvolve the line spread function. Therefore it is only possible to interpret the derived $`C_\mathrm{f}`$ values in the cores of well–resolved lines. There is evidence for partial coverage ($`C_\mathrm{f}<1`$) over a significant range of velocity for the three systems. The values of $`C_\mathrm{f}`$, averaged over the velocity regions defined by the apparent location of kinematic components in the profile, are given in Table 3. The weighted mean values of $`C_\mathrm{f}`$, averaged over all resolution elements in the regions, are also shown graphically in the top windows of Figures 6–8, where they are represented by the level of the horizontal bars. The width of the horizontal bars depicts the velocity bin over which the average value of $`C_\mathrm{f}`$ was computed. (Note that the vertical bar does not represent an error bar but rather it indicates the coverage fraction derived separately for the BELR, as discussed in the next section.) We have also found that the effective coverage fraction varies with velocity component, as follows.
In the case of Q $`0450-132`$ (Figure 6), the spectrum includes absorption lines from three species, Civ, Siiv, and Nv. The coverage fraction was derived in five separate regions of the Civ profile and there is evidence that it varies from component to component. The Nv data are noisy, but consistent with the coverage fraction derived for Civ in regions of overlap. The $`\mathrm{\Delta }v=-2100\mathrm{km}\mathrm{s}^{-1}`$ component that shows strong Siiv absorption (presumably originating from a lower–ionization region of the absorber) also shows partial coverage consistent with the corresponding component of Civ. There are two distinct components in the PG $`1222+228`$ intrinsic Civ absorption profile (Figure 7), both of which yield a small coverage fraction: the $`\mathrm{\Delta }v=+1469`$ km s<sup>-1</sup> component gives $`C_\mathrm{f}\approx 0.7`$ while the $`\mathrm{\Delta }v=+1587`$ km s<sup>-1</sup> component gives a coverage fraction of about $`C_\mathrm{f}\approx 0.4`$. The effective coverage fraction was also determined in four separate regions of the intrinsic Civ profile of PG $`1329+412`$ (Figure 8); in two of which we found $`C_\mathrm{f}<1`$. ### 4.2 Partial Coverage of Continuum and Emission Line Sources The single number $`C_\mathrm{f}`$ gives the fraction of all photons at a given wavelength that pass through absorbers along the line of sight. However, at the position of these associated absorption lines in the spectrum, the absorbed photons have two significant sources, the continuum source and the BELR. The continuum source is likely to be significantly smaller than the BELR, but the geometry and relative position of the BELR is unknown. In this section, we continue to assume that the photons from these two regions pass through the same absorbers, i.e. that the optical depth $`\tau `$ is the same along the paths to the observer from the continuum source and from the BELR. However, we now consider the different coverage fractions, $`C_\mathrm{c}`$ and $`C_{\mathrm{elr}}`$, that can apply for the continuum source and the BELR. First we define $`W=F_{\mathrm{elr}}/F_\mathrm{c}`$ as the ratio of the broad emission–line flux to the continuum flux at the wavelength of the narrow absorption line. Then we can write the normalized flux as: $$R=1-\frac{(C_\mathrm{c}+WC_{\mathrm{elr}})(1-e^{-\tau })}{(1+W)}.$$ (4) By the same optical depth scaling argument as in the previous section, this reduces down to: $$\frac{C_\mathrm{c}+WC_{\mathrm{elr}}}{1+W}=\frac{[R_\mathrm{r}(v)-1]^2}{R_\mathrm{b}(v)-2R_\mathrm{r}(v)+1}=C_\mathrm{f},$$ (5) where we have assumed that underlying, unabsorbed continuum plus line flux is the same in both transitions, i.e., that the value of $`W`$ is the same for both members of the doublet. In the Appendix we calculate that $`C_\mathrm{c}`$ or $`C_{\mathrm{elr}}`$ will change by at most 15% if a doublet is on the steep slope of an emission line due to a differing $`W`$. Equation (5) shows that the effective coverage fraction, $`C_\mathrm{f}`$, can be considered as an average coverage fraction weighted according to the flux from each source. Of course, with the available information, the continuum source and BELR coverage fractions cannot be determined independently of each other. It is possible, nevertheless, to place interesting constraints on them. Given the value of $`C_\mathrm{f}`$ and $`W`$, and their $`1\sigma `$ uncertainties, equation (5) defines a region in the $`C_\mathrm{c}`$–$`C_{\mathrm{elr}}`$ parameter plane where the solution must lie.
Examples of such regions are illustrated in Figure 10. It is interesting to note that, if the upper boundary of this allowed region intersects one of the axes at a value below 1, then the value of the corresponding coverage fraction must be less than unity (at the $`1\sigma `$ confidence level). Such a situation can occur if the narrow absorption doublet happens to fall on the high–velocity wing of an emission line where the underlying flux is dominated by the continuum (i.e., when $`W`$ is small). Because of this technical requirement, we cannot draw any conclusions about the preferred velocity distribution of systems that cover the continuum source only partly. Using the observed values of $`W`$ from published low–resolution spectra (Sargent et al. (1988); Steidel & Sargent (1991)) and the measured values of $`C_\mathrm{f}`$ we have derived constraints on the coverage fractions of the two distinct sources. These limits, along with the ingredients needed to compute them, are listed in Table 3 for each system and transition showing the signature of partial coverage. In particular, in the last two columns of Table 3 we list the range of allowed values of $`C_\mathrm{c}`$ and $`C_{\mathrm{elr}}`$ as deduced from the range of allowed solutions of equation (5). We emphasize that the limits on $`C_\mathrm{c}`$ and $`C_{\mathrm{elr}}`$ do not hold independently of each other since the solution should also satisfy equation (5). In practice these limits correspond to the extreme corners of the solution boxes as illustrated in Figure 10. The constraints on $`C_\mathrm{c}`$ and $`C_{\mathrm{elr}}`$ are also represented graphically by the vertical extent of shaded boxes and vertical bars, respectively, in Figures 6–8. In most of the absorption components listed in Table 3, either the continuum source or the BELR could be fully covered. Notable exceptions are the Siiv line of Q $`0450-132`$ and the highest–velocity component of its Nv line (at $`\mathrm{\Delta }v=-1880`$ km s<sup>-1</sup>; see Figure 6), as well as the red Civ component in PG $`1222+228`$ (at $`\mathrm{\Delta }v=+1587`$ km s<sup>-1</sup>; see Figure 7). In these cases $`C_{\mathrm{elr}}`$ is not constrained, but the continuum source cannot be fully covered under any conditions. We derive $`0.59<C_\mathrm{c}<0.88`$ and $`C_\mathrm{c}<0.84`$, respectively, in the case of the former object and $`0.12<C_\mathrm{c}<0.52`$ in the case of the latter object. In the Appendix we consider the possibility that the photons from the continuum source and the BELR pass through different absorbers, i.e. $`\tau _c\ne \tau _{elr}`$. This could be possible depending on the sizes of absorbing structures and their spatial positions relative to the emitting regions, but we show that in the case of the redward component of PG $`1222+228`$ partial coverage of the continuum source is required regardless of the assumed $`\tau _c`$ and $`\tau _{elr}`$. The $`\mathrm{\Delta }v=+1469`$ km s<sup>-1</sup> component of the PG $`1222+228`$ system also requires a partially covered continuum source, with a larger continuum coverage fraction. It is interesting that its range of constraints, $`0.51<C_\mathrm{c}<0.92`$, does not overlap with that of the $`\mathrm{\Delta }v=+1587`$ km s<sup>-1</sup> component.
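For readers who wish to reproduce these constraints, the short sketch below (ours; the variable names and the numerical inputs are illustrative, not the measured values in Table 3) implements Eq. (3) for the effective coverage fraction and then scans Eq. (5) for the pairs ($`C_\mathrm{c}`$, $`C_{\mathrm{elr}}`$) allowed by a given $`W`$:

```python
# Partial-coverage bookkeeping, assuming the CIV doublet strength ratio
# f_b*lam_b / (f_r*lam_r) = 2 used in the text.
def effective_coverage(R_b, R_r):
    """Eq. (3): C_f from the blue/red normalized residual intensities."""
    return (R_r - 1.0)**2 / (R_b - 2.0*R_r + 1.0)

def elr_coverage(C_f, W, C_c):
    """Eq. (5) solved for C_elr given a trial continuum coverage C_c."""
    return (C_f * (1.0 + W) - C_c) / W

# Illustrative residual intensities and emission/continuum ratio:
R_b, R_r, W = 0.40, 0.55, 0.8
C_f = effective_coverage(R_b, R_r)
print(f"C_f = {C_f:.2f}")
for C_c in (0.0, 0.25, 0.5, 0.75, 1.0):
    C_elr = elr_coverage(C_f, W, C_c)
    tag = "ok" if 0.0 <= C_elr <= 1.0 else "excluded"
    print(f"C_c = {C_c:.2f} -> C_elr = {C_elr:+.2f}  ({tag})")
```

As in the text, for small $`W`$ the allowed band collapses toward $`C_\mathrm{c}\approx C_\mathrm{f}`$, which is why the continuum coverage fraction becomes well constrained when a doublet falls on an emission-line wing.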
### 4.3 Application of the Refined Method to Additional Quasars With Published Data There are three other quasars with intrinsic Civ narrow line absorbers observed with HIRES/Keck I and presented in the literature in sufficient detail to derive $`C_\mathrm{f}`$, $`C_\mathrm{c}`$, and $`C_{\mathrm{elr}}`$. These are PKS $`0123+237`$ (radio–loud; Barlow & Sargent (1997)), Q $`0150-203`$ (UM 675; radio–quiet; Hamann et al. 1997a), and Q $`2343+125`$ (radio–quiet; Hamann et al. 1997b). The results we obtain by applying the method of the previous section to these objects are listed in Table 4. The NAL systems of PKS $`0123+237`$ were discussed by Barlow and Sargent (1997). The velocity structure of the NAL profile of PKS $`0123+237`$ is similar to that of Q $`0450-132`$ but in the former object the effective coverage fraction is different for different transitions. In the case of Q $`2343+125`$ (Hamann et al. 1997b) the NALs are found at a very large velocity relative to the emission–line peak ($`\mathrm{\Delta }v=-24,000`$ km s<sup>-1</sup>) but still there is considerable evidence that they are intrinsic: in addition to showing the signature of partial coverage they also happen to be variable. The continuum coverage fraction can be constrained to be very small ($`<0.2`$) because the emission–line contribution at the velocity of absorption is small ($`W=0.1`$; see Table 4). Together with the NAL systems presented in the previous section, the systems analyzed here comprise the database of available high–resolution spectra of NALs. All six intrinsic NAL systems have Civ absorption profiles with velocity widths of $`100`$–$`400`$ km s<sup>-1</sup> and have obvious sub–structure (see, for example, Figure 1). The NAL profiles suggest that there are discrete absorbing structures (though not necessarily discrete “clouds”) at different positions in velocity space. ## 5 Discussion ### 5.1 Implications of the Observational Results Since the six quasars of Table 1 were drawn from a sample that was selected based on the properties of intervening Mgii absorbers, they can be regarded as a collection which is unbiased towards the detection of associated Civ absorption lines. As such we can use it to estimate roughly how frequently associated Civ NALs are found in radio–loud and radio–quiet quasars. Associated absorption lines were detected in all three radio–quiet objects, and, in two cases, the lines were shown to be intrinsic based on the signature of partial coverage. The two NAL systems in Q $`1213-003`$, which did not show the signature of partial coverage, could plausibly be intervening. One would expect to find 2 intervening systems within 5000 km s<sup>-1</sup> in a sample of six quasars, based on the known density of Civ systems at $`z=2`$ and the redshift path that we have observed. On the other hand, only one out of the three radio–loud objects showed an associated NAL system which also turned out to be intrinsic. Our results are consistent with previous studies (Foltz et al. (1986); Anderson et al. (1987)) that find that associated NALs are fairly common in both radio–loud and radio–quiet quasars. However, as Foltz et al. (1986) pointed out, there could be a systematic difference between the strengths of associated NALs found in radio–loud and radio–quiet quasars. Strong NALs ($`>1.5`$ Å) prefer radio–loud hosts, while weak NALs show no such preference. None of the five associated systems detected in our survey are strong, thus we cannot re–address this issue.
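The expected number of chance interlopers quoted above follows from a simple path-length estimate, sketched below (ours; the assumed Civ line density per unit redshift is illustrative, whereas the paper's value comes from the actual survey sensitivity and redshift path):

```python
# Back-of-the-envelope count of chance intervening CIV systems that would
# fall within +/- 5000 km/s of the emission redshift in a six-quasar sample.
C_KMS = 2.998e5

def delta_z(z_em, dv_kms=5000.0):
    """Redshift interval corresponding to +/- dv about z_em."""
    return 2.0 * (1.0 + z_em) * dv_kms / C_KMS

dn_dz = 3.0                 # assumed CIV line density per unit z at z ~ 2
n_quasars = 6
expected = n_quasars * delta_z(2.0) * dn_dz
print(f"expected chance 'associated' systems ~ {expected:.1f}")   # ~2
```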
It should be noted that although the associated systems that we report here are fairly weak, in three out of four quasars they are still within the detection limits of the Foltz et al. (1986) and Anderson et al. (1987) surveys. However, our more sensitive survey shows that in some cases (Q $`0002+051`$ and Q $`1421+331`$) NALs are not detected down to a $`5\sigma `$ limit of rest–frame equivalent width, $`REW<0.05`$ Å. This implies that there may be some lines of sight that do not pass through NAL gas. More than 10 intrinsic NAL systems reported in the literature appear in radio–quiet quasars (Hamann et al. 1997c; Barlow et al. (1997); Tripp, Lu, & Savage 1997). It is interesting that, although Foltz et al. (1986) and Anderson et al. (1987) find associated NALs in about 70% of radio–loud quasars, only very few of these systems have been proven rigorously to be intrinsic: two systems from the sample of Foltz et al. (1986) show signs of variability (3C $`205`$ and the “mini–BAL” in PHL $`1157+014`$; Aldcroft et al. (1997)). Moreover, some fraction of associated NAL systems are bound to be intervening systems by chance; in fact 2 out of the 12 associated NAL systems in the sample of Anderson et al. (1987) are expected a priori to be intervening systems. A description of the properties of intrinsic NALs in radio–loud and radio–quiet quasars has not yet been established. A systematic survey at very high spectral resolution is needed to clarify the situation. Another important observational result is that, in at least two quasars from our collection, the NAL gas can be demonstrated to cover the continuum source only partially. One of these quasars, Q $`0450-132`$, is radio–quiet, while the other, PG $`1222+228`$, is radio–loud. Partial coverage of the continuum source may be more common than these two cases suggest but we have no way of determining this yet. The implication of this result is not absolutely clear because the reason for the apparent partial coverage is not known. One possibility is that the absorbing medium is clumpy and the clump sizes are comparable to or smaller than the size of the continuum source. Another possibility, however, is that the continuum source is completely covered by the absorber but a scattering medium makes it possible for continuum photons to bypass the absorber and get to the observer without suffering any absorption. We expand on this issue in the next section where we consider possible explanations for the observed partial coverage. ### 5.2 Interpretation of Effective Partial Coverage Central to interpreting these results is knowledge of the mechanism that can give rise to effective coverage fractions less than unity. Following our preliminary discussion of this subject in §3, three mechanisms can fill in saturated, non–black troughs, or more generally produce discrepant optical depth ratios for doublet or multiplet transitions: 1) a source function, 2) scattering of background photons back into the light path, 3) true (i.e., geometric) partial coverage of the continuum and/or BEL source by the absorbing medium. The present data do not permit us to distinguish among these possibilities, but we discuss possible tests. These tests involve polarization measurements, consideration of the coverage fraction in different transitions, and correlating time variability with coverage fractions.
One possible source function explanation involves a non–occulted emission source which contributes additional photons to the light path after absorption of the continuum and BEL photons has occurred, essentially “diluting” the absorbed spectrum. Wampler et al. (1995) determined that an unabsorbed thermal continuum characterized by $`T=17,000`$ K could explain the residual intensity in the saturated troughs of various transitions in the BAL system of Q $`0059-2735`$. In fact, other intrinsic NAL systems also show variations in $`C_\mathrm{f}`$ from transition to transition, namely UM 675 (Hamann et al. 1997a) and PKS $`0123+257`$ (Barlow & Sargent (1997)). The mechanism for production of the additional source function photons is unknown, but its contribution should vary smoothly with wavelength to produce the observed effect. Another manifestation of a source function is line emission by the same gas that is responsible for the absorption. This version of the scenario is not viable, however, since it would not have the desired effect: if there were an emission component filling in the unresolved absorption troughs it would result in an effective coverage fraction greater than unity. Scattering of photons back into the light path is certainly a plausible explanation. An indirect argument for a scattering explanation is that it is known to be important in filling in the troughs of broad absorption lines. Spectropolarimetric observations of many BAL quasars (Ogle (1997); Brotherton et al. (1997); Cohen et al. (1995); Glenn, Schmidt, & Foltz 1994; Goodrich & Miller (1995); Hines & Wills (1995); Schmidt, Hines, & Smith (1997)) show that the polarization level in saturated non–black absorption troughs is higher than in the continuum. This by no means proves that scattered light makes a large contribution for all NAL absorbers, since the lines of sight through these absorbers could well be different than in BALs. Spectropolarimetric observations of partially covered NAL systems are needed to investigate the role of scattering. Until such observations are carried out, we can consider clues provided by the relationship between time variability and partial coverage. The implications of the observational clues, however, depend on the picture that one adopts for the scatterers. If the scatterers comprise an extended collection of clumps or a quasi–uniform medium enveloping the BELR, then the scattering contribution is unlikely to vary substantially over time scales of the order of the dynamical time of the BELR gas. The strength of absorption, on the other hand, can vary as a result of changes in ionization state of the absorber or motion of the absorbing gas across the line of sight. One alternative picture is that of a discrete clump acting as a mirror to scatter continuum and/or emission–line photons in the direction of the observer without their passing through the absorber. In this picture the amount of scattered light can vary on the same time scale as the absorbing column density making a unique interpretation of the variability time scale virtually impossible. Another alternative picture is that the scatterers are close to our line of sight (they could be mixed with the absorbers or they could form a tenuous, ionized atmosphere around individual clumps in the absorber). In this case the light is scattered in the forward direction and its fractional polarization does not change.
In other words, even though scattering is the culprit in this scenario, it does not betray itself by its polarization signature. At present there is one NAL system where variability provides strong evidence against a scattering mechanism. The $`\mathrm{\Delta }v=-24,000`$ km s<sup>-1</sup> Civ NAL system toward Q $`2343+125`$ has an effective coverage fraction that varies substantially from one epoch of observation to another, and the effective coverage fraction is smaller when the absorption lines are weaker (Hamann et al. 1997a). This argues against a scattering picture of the above type, in which scattered light would fill in a larger fraction of the trough of a weaker line. From the time variability and the partial coverage of the continuum source for this system, Hamann et al. (1997b) derived the requirement that the absorbing clouds are quite small ($`0.01`$ pc) and are located close to the continuum source. If we adopt this interpretation, then true partial coverage of the continuum source is the most plausible explanation. This suggests that time variability and true partial coverage occur together because both are related to small absorbers close to the continuum and/or BEL source. The variations can result from either motion of the clouds or from changes in the sizes of the multiphase layers of the absorber. Discriminating between these two possibilities is challenging and requires a geometric model of the absorbers and the sources of ionization, which can be constrained by observations of multiple transitions. ### 5.3 A General Picture of Intrinsic NALs Narrow absorption lines are ubiquitous, appearing in all types of broad–line AGNs, namely in Seyfert 1 galaxies and broad–line radio galaxies as well as in radio–loud and radio–quiet quasars. If we take the frequency of narrow associated absorption lines as a rough indication of the coverage fraction of the absorbers (as seen by the continuum and/or emission–line source) then this fraction appears to be very high. It is about 50% in Seyfert 1 galaxies (Crenshaw 1997), perhaps about 70% in radio–loud quasars (Foltz et al. 1986; Anderson et al. 1987), and apparently also very high in radio–quiet quasars based on our small collection. The observational clues available at the moment, although tantalizing, do not point clearly to a specific interpretation. Because NALs are seen in many different types of active galaxies and seem to be quite common, their relation to BALs is unclear. Unlike BALs, NALs are common in radio–loud quasars and in Seyfert 1’s. It is plausible that NALs with large velocities and those that are quite broad and without structure (“mini–BALs”) are related to the BAL phenomenon (Hamann et al. 1997c). This hypothesis is supported by the fact that large outflow velocities ($`>`$10,000 km s<sup>-1</sup>), such as in Q $`2343+125`$, Q $`0935+417`$ (Hamann et al. 1997c), and Q $`1700+64`$ (Tripp et al. (1997)), have only been observed in radio–quiet quasars, as have the majority of BALs. Radio–loud quasars on the other hand have mostly narrow, “associated” absorption lines. Finally, associated NALs found in Seyfert 1 galaxies show limited observational evidence for an intrinsic nature. Some have covering factors close to unity (Crenshaw (1997)), and in two cases, NGC 3783 and NGC 3516, the NALs vary on a short time scale (Koratkar et al. (1996); Shields & Hamann (1997)).
Interestingly enough, NALs in Seyfert 1 galaxies are always blueshifted relative to the BELs, unlike NALs in quasars which are sometimes redshifted relative to the BELs. Based on the above summary, a reasonable model or scenario for the NAL gas should explain why NALs are so common (in other words, why the absorbing gas covers such a large solid angle relative to the continuum source) and why they are sometimes observed to be redshifted and sometimes blueshifted relative to the peaks of the broad emission lines (i.e. apparent infall as well as outflow). It is obviously impossible to construct a unique unifying model to explain these observational facts. Nevertheless, we venture to speculate on how NALs can fit into the accretion–disk wind picture of Murray et al. (1995), which was originally meant to explain BALs. Their possible relatives, the high velocity NALs, could originate in a different phase of the same region or in an atmosphere just outside the wind. Sometimes both NALs and BALs arise in the same quasar, at different velocities, so it seems plausible that NAL phases could exist near the BAL region. The broader, smoother, mini–BALs may also be related to this type of wind flow. In this context, it is extremely interesting that in some cases the NALs are redshifted relative to the broad emission–line peak. Of course this relative redshift does not necessarily imply infall towards the central engine. The systemic redshift is best determined from the narrow emission lines, which, however, have not been observed in these particular objects. Because the peaks of the broad emission lines can be blueshifted relative to the systemic redshift, it is possible that the observed NALs are also blueshifted relative to the systemic redshift. At any rate, all four of the NAL systems in radio–loud quasars that have been demonstrated to be intrinsic, i.e., PG $`1222+228`$ (this paper), Q $`0123+237`$ (Barlow & Sargent (1997)), 3C $`205`$ (Aldcroft et al. (1997)), and PHL $`1157+014`$ (Aldcroft et al. (1997)), have NALs that appear to be redshifted with respect to the peaks of the broad emission lines. In Figure 11, we show schematically a side view of the geometry of the accretion–disk wind of Murray et al. (1995) and a plausible location for the NAL gas. For specific orientations of the observer (such that $`\beta <i`$, where $`i`$ is the inclination of the disk relative to the observer and $`\beta `$ is the opening angle of the wind, as shown in the figure), the NALs originating in the far side of the disk can appear redshifted, assuming that the NAL gas is outflowing. Since the NALs are sometimes deeper than the net emission–line level on which they are superposed, the absorbing gas must intercept continuum photons as well. To accommodate such an effect in this picture we must postulate that the outflowing NAL gas originates very close to the center of the disk so that it can cover the continuum source, i.e., it fills the region between the fast disk wind and the disk axis. We emphasize, however, that alternative pictures are also plausible. For example, the NAL gas could be located on the near side of the disk, in the direction of the observer. Because the wind is rotating as well as outflowing, the net velocity vector of the outflowing gas can point away from the observer depending on the azimuth, in which case the resulting absorption lines will also appear redshifted.
In the context of the speculative geometric picture outlined above, the difference between AGN subclasses may be related to the properties of the fast wind. For example, if the opening angle of the wind is very small, the wind effectively “hugs” the accretion disk and the likelihood of our line of sight going through it is very small. Hence BALs would not be observed in such objects but NALs could still appear. We speculate further that the difference between radio–quiet quasars on one hand and radio–loud quasars and Seyferts on the other is the opening angle of the wind, which may in turn be a consequence of the luminosity of the object (relative to the Eddington luminosity), of the shape of the ionizing continuum, or of the combination of these two properties. This could also affect the strength of NALs in the two cases as observed by Foltz et al. (1986) for radio–loud vs. radio–quiet quasars. Finally, we note that if any hot electrons happen to be located in the direction of the axis of the disk, they can act as scatterers and redirect continuum photons towards the observer. Through this effect, the absorption lines can be “diluted”, creating the impression that the absorber only partly covers the continuum source.

## 6 Summary and Conclusions

In this paper we reported the discovery of five narrow “associated” ($`|\mathrm{\Delta }v|<5000`$ km s<sup>-1</sup>) absorption–line systems in the spectra of four quasars. These quasars were observed in the course of a survey for intervening Mgii absorbers, in which the Civ emission lines of six objects happened to fall in the observed spectral range. As such, this “mini–sample” of six quasars is unbiased towards the presence of narrow, intrinsic Civ absorption lines. Three of the absorption systems were demonstrated to be intrinsic, because the relative strengths of the members of the Civ doublet required effective partial coverage of the continuum and/or emission–line sources. The two systems toward Q $`1213-003`$ could not be demonstrated to be intrinsic because their coverage fractions were consistent with unity. We carried out a detailed analysis of the coverage fractions of the three intrinsic systems that we have found: Q $`0450-132`$, PG $`1222+228`$, and PG $`1329+412`$. The analysis consisted of determining the effective coverage fraction of the background source(s) by the absorber according to the method of Barlow & Sargent (1997), as well as a refinement of this method to treat the coverage fractions of the background continuum and emission–line sources separately. The latter technique was also applied to three additional intrinsic NAL systems for which sufficient information was available in the literature: PKS $`0123+237`$, UM $`675`$, and Q $`2343+125`$. This provided a total of six intrinsic NAL systems studied at high resolution, and among the six there was a very wide range of properties. The derived Civ effective coverage fractions ranged from $`C_\mathrm{f}\sim 0.1`$ to nearly unity, and in some systems varied across the absorption profiles. The velocities of the absorbing clouds, relative to the quasar, ranged from outflow at $`-24,000`$ km s<sup>-1</sup> to apparent infall at $`+1500`$ km s<sup>-1</sup>. In three systems, Q $`0450-132`$ and PG $`1222+228`$ (this study) and Q $`2343+125`$ (Hamann et al. 1997b), the data require that the covering factor of the continuum source is less than unity ($`C_\mathrm{c}<1`$, according to our formulation presented in §4.2).
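As an aside for readers who wish to apply the Barlow & Sargent (1997) doublet method themselves, the effective coverage fraction follows from the two residual intensities in closed form. The sketch below assumes only the 2:1 optical-depth ratio of the Civ doublet and a single effective coverage fraction; the input residual intensities are illustrative placeholders, not measurements from this paper.

```python
import numpy as np

def effective_coverage(R_b, R_r):
    """Effective coverage fraction from doublet residual intensities.

    Assumes R = 1 - C_f * (1 - exp(-tau)) for each member, with
    tau_blue = 2 * tau_red (e.g. the Civ 1548/1550 doublet), which gives
    C_f = (1 - R_r)^2 / (1 + R_b - 2 R_r).
    """
    C_f = (1.0 - R_r) ** 2 / (1.0 + R_b - 2.0 * R_r)
    tau_r = -np.log(1.0 - (1.0 - R_r) / C_f)  # optical depth of the weaker line
    return C_f, tau_r

# Placeholder residual intensities:
C_f, tau_r = effective_coverage(R_b=0.40, R_r=0.55)
print(f"C_f = {C_f:.2f}, tau_red = {tau_r:.2f}")   # -> C_f = 0.68, tau_red = 1.10
```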
The two systems are quite different from each other: Q $`2343+125`$ is radio–quiet and its absorption system is at an outflow velocity of $`-24,000`$ km s<sup>-1</sup>, while PG $`1222+228`$ is radio–loud and its absorption system is redshifted relative to the BEL region by $`1500`$ km s<sup>-1</sup>. We have discussed mechanisms which can give rise to a covering factor that is less than unity, but we have not been able to select a favorite based on the information that is currently available from observations. We have also speculated that the NAL gas may be related to the fast outflows that manifest themselves as BALs. If this is indeed the case, then the fact that NALs are quite common in radio–loud quasars and in Seyfert 1 galaxies may imply that these types of AGN also harbor fast accretion–disk winds, whose absorption signature (BAL) is unobservable. Further observations, and more specifically systematic surveys of the various classes of active galaxies and quasars, are needed to test these ideas. Some of the key goals of these surveys should be to: 1) establish just how common intrinsic NALs are in radio–loud/quiet quasars and in Seyfert 1’s; and 2) check if non–unity covering factor and time variability occur together as a clue toward understanding the spatial distribution of the NAL clouds. In addition, spectropolarimetric observations are sorely needed to establish whether scattering or true partial coverage by small absorbers is responsible for effective covering factors less than unity.

###### Acknowledgements.

We thank Jules Halpern as well as the anonymous referee for useful suggestions. This work was supported by the National Science Foundation Grants AST–9529242 and AST–9617185, and by NASA Grant NAG5–6399. During the early stages of this work M.E. was based at the University of California, Berkeley and was supported by a Hubble Fellowship (grant HF–01068.01–94A from Space Telescope Science Institute, which is operated for NASA by the Association of Universities for Research in Astronomy, Inc., under contract NAS 5–26255). We are grateful to Steven S. Vogt for the great job he did building HIRES.

## APPENDIX

## Appendix A More General Forms of the Partial Coverage Calculation

Here we consider the consequences of two simplifying assumptions that we made in the calculation of the coverage fraction in §4.2: (1) the emission line flux is the same for both members of the doublet; and (2) the optical depths toward the continuum source and the BLR are the same.

### A.1 Underlying Emission–Line Flux Differs from Stronger to Weaker Transition

We have assumed that the ratio of the emission line to the continuum flux is the same in the blue and red components of a doublet, i.e. that $`W_\mathrm{b}=W_\mathrm{r}=W`$. This assumption may lead to inaccuracies in the derivation of the coverage fraction when the absorption falls on the steep part of an emission feature, so that $`W_\mathrm{b}`$ and $`W_\mathrm{r}`$ may differ by 10–15%. In this more general case we have $$\frac{(1-R_\mathrm{b})(1+W_\mathrm{b})}{C_\mathrm{c}+W_\mathrm{b}C_{\mathrm{elr}}}=\frac{2(1-R_\mathrm{r})(1+W_\mathrm{r})}{C_\mathrm{c}+W_\mathrm{r}C_{\mathrm{elr}}}-\left[\frac{(1-R_\mathrm{r})(1+W_\mathrm{r})}{C_\mathrm{c}+W_\mathrm{r}C_{\mathrm{elr}}}\right]^2,$$ (A1) which can be reduced to a quadratic equation in $`C_{\mathrm{elr}}`$ if $`C_\mathrm{c}`$ is specified (or in $`C_\mathrm{c}`$ if $`C_{\mathrm{elr}}`$ is specified).
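Equation (A1) is awkward to manipulate by hand, but for fixed $`C_\mathrm{c}`$ it is a one-dimensional root-finding problem in $`C_{\mathrm{elr}}`$. The following sketch solves it numerically; all input values are illustrative placeholders.

```python
from scipy.optimize import brentq

def solve_C_elr(R_b, R_r, W_b, W_r, C_c):
    """Solve eq. (A1) for C_elr at fixed C_c (0 < C_elr <= 1 assumed)."""
    Y = (1.0 - R_b) * (1.0 + W_b)   # blue-member combination in eq. (A1)
    X = (1.0 - R_r) * (1.0 + W_r)   # red-member combination in eq. (A1)

    def f(C_elr):
        D_b = C_c + W_b * C_elr
        D_r = C_c + W_r * C_elr
        return Y / D_b - 2.0 * X / D_r + (X / D_r) ** 2

    return brentq(f, 1e-6, 1.0)     # requires a sign change on the interval

# Placeholder inputs:
print(solve_C_elr(R_b=0.50, R_r=0.65, W_b=0.70, W_r=0.83, C_c=0.80))
```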
To see how the assumption used previously will affect our results, consider the quasar Q $`0450-132`$, which has its absorption on the steepest part of its emission feature. In this case, using $`W_\mathrm{r}=0.83`$ and $`W_\mathrm{b}=0.7`$ changes $`C_\mathrm{c}`$ or $`C_{\mathrm{elr}}`$ only by 12–15%.

### A.2 Different Absorbers Cover Continuum and BEL Sources

Here we examine the more general case of incomplete coverage of the two distinct sources of photons (continuum source and BEL) where the light from the two sources is allowed to pass through different absorbing clouds (or significantly different regions of the same cloud). Let $`\tau _\mathrm{c}`$ and $`\tau _{\mathrm{elr}}`$ be the optical depths along the lines of sight from observer to continuum source and from observer to BEL source. If the clouds occult a fraction $`C_\mathrm{c}`$ and a fraction $`C_{\mathrm{elr}}`$ of the lines of sight toward each source, then the normalized residual intensity is: $$R=\frac{(1-C_\mathrm{c})+C_\mathrm{c}e^{-\tau _\mathrm{c}}+(1-C_{\mathrm{elr}})W+C_{\mathrm{elr}}We^{-\tau _{\mathrm{elr}}}}{(1+W)},$$ (A2) where $`W`$ is the emission line flux in continuum units. In conjunction with optical depth scaling laws, this gives a system (for an $`n`$–transition multiplet) of $`3n-2`$ equations ($`n`$ residual intensity equations, $`n-1`$ continuum optical depth relations, $`n-1`$ ELR optical depth relations) and $`2(n+1)`$ unknowns ($`2`$ coverage fractions and $`2n`$ optical depths). This can, in principle, be solved for a multiplet with at least four transitions.

With this more general case in mind we now reconsider the issue of partial coverage of the continuum source in PG $`1222+228`$ to see if this interesting result still applies. At the position of the $`\mathrm{\Delta }v=+1587`$ km s<sup>-1</sup> component of the Civ absorption (Figure 4) the continuum contributes $`77`$% of the total flux (i.e., $`W=0.3`$), and the BELR only $`23`$%. The observed flux at that position is 0.6 in normalized units. Therefore it is not possible to have zero coverage of the continuum source, because then the observed flux would be at least 0.77 no matter what the values of $`C_{\mathrm{elr}}`$ and $`\tau _{\mathrm{elr}}`$. If $`C_\mathrm{c}=1`$ then $`0.24<\tau _\mathrm{c}(\lambda 1550)<0.73`$. The lower limit is provided by the requirement that the continuum photons alone (those that pass through the absorber) do not produce too much flux. The upper limit is the minimum contribution from unimpeded continuum photons, since the BEL can contribute at most a flux of 0.3. Over that range of $`\tau _\mathrm{c}`$, the continuum photons that pass through the absorber produce a flux ranging from 0.6 to 0.3 at the position of Civ $`\lambda `$1550 and from 0.48 to 0.18 at the position of Civ $`\lambda `$1548. In either case, the observed flux at 1548 Å is just slightly smaller than at 1550 Å (0.60 as compared to 0.62). To be consistent with these data, the contribution to the flux from the BEL photons would have to be larger at 1548 Å than at 1550 Å. The contribution from photons that do not pass through absorbers is the same for the two transitions, and the contribution from photons at 1550 Å that make it through the absorber should be equal to or larger than at 1548 Å. This is the opposite of what is required. Thus the continuum source cannot be fully covered at any $`\tau _\mathrm{c}`$.
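The bracketing of $`\tau _\mathrm{c}`$ in the full-coverage case can be reproduced with a few lines of arithmetic. This sketch assumes, as in the text, $`W=0.3`$ (so the continuum supplies 77% of the total flux and the BEL at most 23%) and an observed normalized flux of 0.6; it recovers the quoted bounds to rounding.

```python
import numpy as np

W = 0.3                    # BEL flux in continuum units
f_cont = 1.0 / (1.0 + W)   # continuum fraction of the total flux (~0.77)
f_bel = W / (1.0 + W)      # maximal BEL contribution (~0.23)
f_obs = 0.6                # observed normalized flux at the 1550 A component

# With C_c = 1 every continuum photon traverses the absorber:
# (i)  the transmitted continuum alone must not exceed f_obs  -> lower bound;
# (ii) it must supply at least f_obs - f_bel                  -> upper bound.
tau_min = np.log(f_cont / f_obs)
tau_max = np.log(f_cont / (f_obs - f_bel))
print(f"{tau_min:.2f} < tau_c(1550) < {tau_max:.2f}")   # ~0.25 and ~0.73
```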
Since $`C_\mathrm{c}\ne 0`$ and $`C_\mathrm{c}\ne 1`$, we conclude that the continuum source must be partially covered regardless of assumptions about $`\tau _\mathrm{c}`$ and $`\tau _{\mathrm{elr}}`$.
# A Compton Backscattering Polarimeter for Measuring Longitudinal Electron Polarization

## 1 Introduction

Stored polarized electron beams are now available at many laboratories. Such beams are produced either by radiative polarization of the stored beam or by injecting polarized electrons. Internal target experiments performed with such a beam require the electron polarization to be measured during the experiments. These measurements need to be done quickly, so that any changes in polarization can be accounted for, and parasitically, so that the measurement in no way affects the internal target experiment. One technique to do this is through the use of a Compton backscattering polarimeter. In this technique, a circularly polarized photon beam is scattered off a polarized electron beam. Due to the asymmetry in the spin-dependent Compton scattering cross section, the polarization of the electron beam can be determined by changing the helicity of the photon beam. For transversely polarized electrons the asymmetry is measured with respect to the orbital plane of the electrons, while for longitudinally polarized electrons the asymmetry is measured in the energy-dependent cross section.

To perform spin-dependent electron scattering experiments, the MEA/AmPS facility at NIKHEF has been upgraded to provide a longitudinally polarized electron beam for internal target experiments. An overview of the NIKHEF electron beam facility is shown in fig. 1. Polarized electrons are produced by a recently commissioned Polarized Electron Source (PES), consisting of a laser driven photocathode electron source, a Z-shaped spin manipulator to orient the electron spin in any arbitrary direction, a Mott polarimeter to measure the electron polarization at the source, and a post-accelerator to match the energy of the polarized electrons to the acceptance of the chopper/buncher system of the linac (400 keV). After the post-accelerator, the polarized electrons are injected into the linear medium-energy accelerator (MEA) and accelerated up to 750 MeV. The electrons are then injected into the AmPS storage ring. Due to the low current produced by the polarized source, only a few mA can be stored per injection, but through the use of stacking a current of more than 200 mA has been stored in the ring. A Siberian Snake has been installed in the ring to preserve the polarization of the stored beam and to maintain a longitudinal orientation of the spin at the location of the internal target.

In this paper the performance of a laser backscattering polarimeter is presented, which was designed and constructed for the measurement of the longitudinal polarization of the stored electron beam in the AmPS ring. While many laboratories use Compton backscattering polarimeters to measure the polarization of stored transversely polarized electron beams, the present polarimeter was the first to measure the longitudinal polarization of a stored polarized electron beam. In section 2, the physics of Compton scattering is discussed with emphasis on the energy range of the AmPS ring and on the technique for the measurement of longitudinal polarization. In section 3 the layout and the technical characteristics of the polarimeter are described, and in section 4 the results of the performance tests of the polarimeter are presented. Emphasis is put on the investigation of the systematic uncertainties of the polarimeter. Conclusions and a summary are given in section 5.
## 2 The Physics of Compton Scattering

The kinematics of Compton scattering in the lab frame is shown in fig. 2. The initial photon and electron energies are expressed as $`E_\lambda `$ and $`E_e`$, respectively, while the scattered photon energy is expressed as $`E_\gamma `$. The scattered electron is not shown. The positive z-axis is defined to be the direction of the incident electron. Unlike the kinematics of transversely polarized electrons, the reaction with longitudinally polarized electrons is symmetric about the azimuthal angle $`\phi `$, and thus only the angle $`\theta `$, between the incident electron and scattered photon, is shown. In the case of high-energy electrons, photons are backscattered in a narrow cone centered around the direction of the initial electron. Typical values of the scattering angle in the lab frame, corresponding to $`90^{\circ }`$ scattering in the electron rest frame, are given in table 1.

The polarization of the electron is specified in Cartesian coordinates by $`𝐏=P_e\widehat{𝐏}=(P_x,P_y,P_z)`$ and that of the incident photon by the Stokes vector $`𝐒=(S_0,S_1,S_2,S_3)`$ . The amount of linearly polarized light is given by $`\sqrt{S_1^2+S_2^2}`$ and that of circular polarization by $`|S_3|`$. The sign of $`S_3`$ indicates the helicity of the polarization: $`S_3>0`$ for left-handed helicity ($`S_{3L}`$) and $`S_3<0`$ for right-handed helicity ($`S_{3R}`$). For normalization of the Stokes vector, $`S_0`$ is taken to be unity. The cross section for Compton scattering of circularly polarized photons by longitudinally polarized electrons can be written as: $$\frac{d\sigma }{dE_\gamma }=\frac{d\sigma _0}{dE_\gamma }[1+S_3P_z\alpha _{3z}(E_\gamma )]$$ (1) where $`\alpha _{3z}(E_\gamma )`$ is the circular-longitudinal spin correlation function and $`\frac{d\sigma _0}{dE_\gamma }`$ is the energy-differential cross section for unpolarized electrons and photons. This cross section and the spin correlation function are shown in fig. 3. Exact formulas for these quantities can be found e.g. in refs. .

Equation 1 shows that an asymmetry measurement can be performed by switching the sign of $`S_3`$, which gives an asymmetry proportional to $`P_z`$. For a given $`E_\lambda `$ and $`E_e`$ this asymmetry can be written as, $$A(E_\gamma )=\frac{\frac{d\sigma }{dE_\gamma }_L-\frac{d\sigma }{dE_\gamma }_R}{\frac{d\sigma }{dE_\gamma }_L+\frac{d\sigma }{dE_\gamma }_R}=\mathrm{\Delta }S_3P_z\alpha _{3z}(E_\gamma )$$ (2) where $`\frac{d\sigma }{dE_\gamma }_L`$ ($`\frac{d\sigma }{dE_\gamma }_R`$) is the polarized cross section with incident left-handed (right-handed) helicity, and $`\mathrm{\Delta }S_3=\frac{1}{2}(S_{L3}-S_{R3})`$. The longitudinal polarization of the electron beam, $`P_z`$, can be determined by taking the quantity $`P_z`$ as a free parameter while fitting the measured asymmetry with eq. 2. The photon polarization term, $`\mathrm{\Delta }S_3`$, needs to be measured independently. Typical values of the total unpolarized Compton cross section, the maximum energy for backscattered photons, and the maximum value for the spin correlation function are given in table 1 for various electron beam energies that can be stored in the AmPS ring.

## 3 The NIKHEF Compton Polarimeter

A schematic layout of the Compton polarimeter is shown in fig. 4. The polarimeter consists of the laser system with its associated optical system and a detector for the detection of the backscattered photons.
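The maximum backscattered photon energies listed in table 1 follow from standard Compton kinematics for a head-on collision, $`E_\gamma ^{max}=4\gamma ^2E_\lambda /(1+4\gamma E_\lambda /m_ec^2)`$. A minimal numerical sketch (this formula is textbook kinematics, not taken from the table itself):

```python
M_E_C2 = 0.511e6   # electron rest energy [eV]

def e_gamma_max(E_e_MeV, lambda_nm=514.0):
    """Maximum energy of Compton-backscattered photons [MeV]."""
    E_lam = 1239.84 / lambda_nm          # laser photon energy [eV]
    gamma = E_e_MeV * 1e6 / M_E_C2       # electron Lorentz factor
    x = 4.0 * gamma * E_lam / M_E_C2     # electron recoil parameter
    return 4.0 * gamma**2 * E_lam / (1.0 + x) / 1e6

for E_e in (440.0, 615.0, 750.0):
    print(f"E_e = {E_e:5.0f} MeV -> E_gamma_max = {e_gamma_max(E_e):5.2f} MeV")
# -> 7.04, 13.66, and 20.22 MeV; the first two are consistent with the
#    7.0 and 13.7 MeV quoted later in the text for 440 and 615 MeV beams.
```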
Laser photons interact with stored electrons in the straight section ($`3`$ m long) after the first dipole (bending angle $`11.25^{\circ }`$) and before the second dipole after the Internal Target Facility (ITF). The distance of the Interaction Region (IR) from the internal target is 4.5 m. The Siberian snake preserves the longitudinal spin direction at the ITF, but the orientation of the spin precesses in the ring, according to $$\phi _p=\frac{E_e\text{[MeV]}}{440.65}\phi _e$$ (3) $$P_z=P_e\mathrm{cos}\phi _p$$ (4) where $`\phi _e`$ is the bending angle of the electrons and $`\phi _p`$ the precession angle of the spin. The polarimeter is only sensitive to the longitudinal component of the electron spin (see eq. 2). Therefore, the experimental asymmetry of the polarimeter reduces with $`\mathrm{cos}(\phi _p)`$. To minimize this reduction factor, the polarimeter has to be located close to the ITF. If for example the North or South straight (see fig. 1) were used for the polarimeter (90° rotation from the ITF), the spin would be exactly transverse at $`E_e`$ = 440 MeV, and the reduction factor would be zero. On the other hand, because of the internal target at the ITF, the residual gas pressure in the 32 m long west straight can be orders of magnitude higher than in any other place in the ring, resulting in a large bremsstrahlung background. The section between the first and second dipole after the ITF has been chosen as a compromise between spin precession and bremsstrahlung background. The backscattered photons leave the IR travelling in the same direction as the electrons of the beam and are separated from them by the second dipole. The $`\gamma `$-photons leave the vacuum system via a vacuum window and pass through a mirror of the optical system, a sweeping magnet, and a hole in a concrete shielding wall to the gamma detector.

### 3.1 The laser and the associated optical system

An Innova 100-25 Ar-ion laser system<sup>1</sup><sup>1</sup>1Coherent Benelux, Argonstraat 136, 2718 SP Zoetermeer, The Netherlands is used which can provide a 10 W continuous beam. The wavelength of the laser light is $`\lambda `$ = 514 nm, the divergence 0.4 mrad (full angle), and the diameter 1.8 mm. The initially linearly polarized light is converted to circularly polarized light with an anti-reflection coated $`\lambda /4`$-plate. Left- and right-handed light is obtained by switching the high voltage on an Inrad 214-090 Pockels cell<sup>2</sup><sup>2</sup>2Inrad, 181 Legrand Avenue, Northvale, NJ 07647, USA, positioned after the $`\lambda /4`$-plate, see fig. 4. The Pockels cell switches between zero-wave (if high voltage is off) and half-wave (if high voltage is on) retardation. The polarization of the laser light is better than 99.8% directly after the Pockels cell. We have observed a small degradation along the beam path, but the degree of polarization is well above 95% at the IR if the system is tuned carefully. After the first tests with a polarized electron beam (see section 4), we have added a $`\lambda /2`$-plate to our system, which reverses the sign of the laser polarization while keeping all steering signals and the high voltage on the Pockels cell constant. The $`\lambda /2`$-plate can be moved into the optical path by means of a compressed-air driven translation stage and is located directly after the Pockels cell. It has been used to investigate false asymmetries (see section 4).
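Equations (3) and (4) translate directly into the reduction factor that dilutes the measured asymmetry. A short sketch, evaluating $`\mathrm{cos}\phi _p`$ at the polarimeter location (11.25° after the ITF) and at a hypothetical 90° location:

```python
import numpy as np

def cos_phi_p(E_e_MeV, phi_e_deg):
    """Longitudinal spin projection after a bend of phi_e degrees (eqs. 3-4)."""
    phi_p = (E_e_MeV / 440.65) * np.radians(phi_e_deg)
    return np.cos(phi_p)

for E_e in (440.0, 615.0):
    print(f"E_e = {E_e:.0f} MeV: cos(phi_p) = "
          f"{cos_phi_p(E_e, 11.25):.3f} at 11.25 deg, "
          f"{cos_phi_p(E_e, 90.0):.3f} at 90 deg")
# At 440 MeV the 90-deg value is ~0.002: the spin there is indeed
# almost exactly transverse, as stated above.
```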
To facilitate the transport of the laser beam and in order to prevent the Pockels cell from damage by high intensity light, a Galilean expander consisting of two lenses is placed in front of the $`\lambda /4`$-plate. The focal lengths of the lenses are -2.5 cm and 8.0 cm with almost coinciding focal points. This results in an expansion of the laser beam by a factor 3.2 and a reduction of the divergence by the same factor. The focal point of the expander is placed in the IR. The lenses are diffraction-limited and have anti-reflection coatings optimized for 514 nm. In order to protect both the Ar-ion laser and the Pockels cell from radiation damage they are installed in a niche in the NW-curve of AmPS.

The laser light is guided to the IR, over a total path of approximately 12 m, by a system of six dielectric mirrors. All mirrors are optimized for 90° scattering at 514 nm. The first mirror reflects the beam from the niche in the direction of the ITF. A set of two mirrors reflects the beam into the beam positioning system. Two mirrors are used at this location in order to account for the 11.25° bend caused by the first dipole after the ITF, while maintaining 90° reflections on all mirrors. The positioning system consists of two mirrors connected to two translation stages and a gimbal mount. Both translation stages and the gimbal mount can be controlled with DC-motors. The positioning system gives full control over both angle and position of the laser beam in all directions and is used to optimize the overlap of the electron and laser beam. Initial alignment was done manually, centering the beam on the entrance and exit vacuum windows of the IR. The final alignment of the laser beam is performed with the DC-motors, while electrons are circulating in AmPS. The laser is aligned by optimizing the rate of backscattered photons, see fig. 5. A sixth mirror is used to reflect the beam into the IR.

The laser beam enters and leaves the IR via Kodial ND-40 CF-F vacuum windows<sup>3</sup><sup>3</sup>3Balzers-Pfeiffer GmbH, Postfach 1280, D-35608 Asslar, Germany (diameter 25 mm). A system of two mirrors reflects the beam into a beam analysis system after the second vacuum window. It consists of a power meter, a linear polarizer, and a $`\lambda /2`$-plate. The polarizer and $`\lambda /2`$-plate are mounted on a motorized translation stage and can be taken out of the beam path to measure the total transported laser power. When the $`\lambda /2`$-plate and polarizer have been inserted into the beam, the $`\lambda /2`$-plate can be rotated with a DC-motor and the laser polarization can be determined by measuring the power as a function of the orientation of the $`\lambda /2`$-plate. Laser beam polarization measurements done with the beam analysis system are only used as a monitor of the system. The polarization used in the extraction of the electron polarization from an asymmetry measurement is measured manually immediately before and after the vacuum windows of the IR. This ensures the absence of systematic errors introduced by the two mirrors between the second vacuum window and the beam analysis system.

A chopper mounted immediately after the laser is used to block the laser light for $`1/3`$ of the time to measure the background. The chopper, operating at a frequency of 75 Hz, is also used to generate the driving signal for the Pockels cell. This ensures that the background measurement and the flipping of the laser polarization have exactly the same frequency.
The small fraction of the laser light transmitted through the mirrors is utilized to monitor the position of the laser beam using cameras which are located directly before and after the IR.

### 3.2 The gamma detector

The gamma detector of the polarimeter, situated $`\sim `$12 m downstream of the IR, must be capable of detecting $`\gamma `$-rays with an energy of up to 30 MeV (see table 1), handling rates up to 1 MHz, and it must be radiation resistant. Total absorption shower counters made of dense inorganic scintillating crystals were chosen. For the commissioning experiments with an unpolarized stored electron beam a cylindrical BGO scintillator crystal (102 mm diameter and 100 mm long), optically coupled to a Hamamatsu R1250 photomultiplier<sup>4</sup><sup>4</sup>4Hamamatsu Photonics Deutschland GmbH, Arzbergerstr. 10, D-8036 Herrsching am Ammersee, Germany, was used to test and tune our laser positioning system. A disadvantage of the BGO crystal is the long decay time (300 ns) of the scintillation light, which limits the maximum rate. The BGO crystal was later replaced by a rectangular ($`24\times 10\times 10`$ cm<sup>3</sup>) pure CsI crystal. This crystal was optically coupled to a XP4312/B photomultiplier<sup>5</sup><sup>5</sup>5Philips Photonics, BP 520, F-19106 Brive, France. Due to the short decay times of the pure CsI crystal (35 and 6 ns), an active base on the photomultiplier tube, and dedicated electronics, the total setup is able to handle rates up to 1 MHz. A 10 cm thick lead cylinder surrounds the scintillator crystal for shielding purposes. A plastic scintillator (NE102) placed in the front of the gamma detector can be used to veto charged particles.

### 3.3 The Control and the Data Acquisition System

The optical system is controlled with a computer code developed with LabVIEW<sup>6</sup><sup>6</sup>6National Instruments, 6504 Bridge Point Parkway, Austin, TX 78730-5039, USA on a SUN workstation, see fig. 6. The communication between the workstation and the optical system is realised via GPIB (IEEE-488.2), which is connected to the workstation via Ethernet. The four DC-motors used for the steering of the laser beam are controlled by a PI804 motor controller<sup>7</sup><sup>7</sup>7Physik Instrumente GmbH, Polytec-Platz 5-7, D-76333 Waldbronn, Germany. A resolution of 2 $`\mu `$m and 60 $`\mu `$rad can be obtained with the PI804 and the DC-motors. Control signals for the movement of the $`\lambda /2`$-plate after the Pockels cell, and of the $`\lambda /2`$-plate and translation stage of the beam analysis system, are also generated in the PI804.

The data acquisition is done with a single VME module, designed and constructed at NIKHEF. The module accepts the analog output of the photomultiplier connected to the gamma detector and digitises this signal using an 8-bit flash ADC. Furthermore, it accepts logic signals representing the state of the chopper and the Pockels cell. Based on those signals, the module constructs four separate energy spectra, for laser blocked or not and for left and right circularly polarized light. The maximum rate the module can handle is 2.5 MHz. The energy spectra are read out typically every 30 s. The VME module can generate its own trigger for the ADC or it can accept an external trigger. When the external trigger is used, more sophisticated triggering systems can be made, while the internal trigger makes any external logic modules superfluous, resulting in a very compact system.
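The sorting performed by the VME module (one ADC spectrum per combination of chopper state and Pockels-cell state) is easy to emulate offline. The sketch below is illustrative bookkeeping, not the module's firmware:

```python
import numpy as np

N_BINS = 256   # 8-bit flash ADC

def sort_spectra(adc, laser_on, left):
    """Accumulate the four energy spectra from per-trigger ADC codes.

    adc: array of ADC codes; laser_on, left: boolean arrays giving the
    chopper and Pockels-cell states sampled at each trigger.
    """
    masks = {
        ("laser", "L"): laser_on & left,
        ("laser", "R"): laser_on & ~left,
        ("bkg", "L"): ~laser_on & left,
        ("bkg", "R"): ~laser_on & ~left,
    }
    return {k: np.histogram(adc[m], bins=N_BINS, range=(0, N_BINS))[0]
            for k, m in masks.items()}

# Toy data: the chopper blocks the light 1/3 of the time.
rng = np.random.default_rng(0)
n = 100000
spectra = sort_spectra(rng.integers(0, N_BINS, n),
                       rng.random(n) > 1.0 / 3.0,
                       rng.random(n) > 0.5)
print({k: int(v.sum()) for k, v in spectra.items()})
```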
An extra advantage of the external trigger circuit is the possibility to connect the trigger not only to the VME module, but also to a scaler. The scaler information can then be used for the optimization of the electron and laser beam overlap and to determine dead-time corrections for all energy spectra. Both modes of operation have been used.

Determining the electron polarization from the energy spectra is done via eq. 2. All energy spectra are normalized to their integrated luminosities, taking into account dead-time effects and measuring times. Background spectra are subtracted, after which the experimental asymmetry ($`A_{exp}`$) is constructed, via: $$A_{exp}(E_\gamma )=\frac{n_L-n_R}{n_L+n_R}=\mathrm{\Delta }S_3P_e\mathrm{cos}\phi _p\alpha _{3z}^{exp}$$ (5) where $`n_L`$ ($`n_R`$) are the energy spectra for left (right) polarized laser light, after normalization and background subtraction. The experimental spin correlation function $`\alpha _{3z}^{exp}`$ is obtained with Monte Carlo simulations, performed with the computer code GEANT, taking into account the electron and laser beam phase space along the IR and the characteristics of the gamma detector. $`\mathrm{\Delta }S_3`$ is measured separately and $`\mathrm{cos}\phi _p`$ is calculated from the electron beam energy. $`P_e`$ is extracted from $`A_{exp}`$ via a fit with eq. 5 with $`P_e`$ as a free parameter. The results of the Monte Carlo simulations have also been used for the energy calibration of the gamma detector.

## 4 The performance of the polarimeter

The polarimeter has been tested with polarized electrons at a low beam current of approximately 5 mA. These preliminary tests were the first direct measurements of longitudinal polarization in an electron storage ring. They are described in section 4.1. The accuracy of the electron polarization enters directly in the accuracy of all experiments performed with the polarized electron beam. Therefore, it is of the utmost importance that the statistical and systematic uncertainty in $`P_e`$ is small and well known. A statistical accuracy of 1% can be obtained in 1500 s and will not influence the accuracy of the internal target experiments significantly. We have performed extensive investigations of the systematic uncertainty of the polarimeter, in order to minimize and understand these errors. The results of these investigations are presented in section 4.2. Finally, the polarimeter has been used to monitor the electron polarization during the first internal target experiment at NIKHEF that made use of the polarized beam. This is presented in the last paragraph of this section.

### 4.1 Preliminary tests

Preliminary tests with a polarized electron beam have been done with a beam current of 2–5 mA and 3 W laser power, to avoid rate problems in the gamma detector and to minimize the contribution of background events. Because of the extremely low current in the accelerator and a very poor injection efficiency, multiple injections were stacked to obtain the beam current mentioned. The electron energy was 615 MeV, resulting in backscattered photons with a maximum energy of $`E_\gamma ^{max}=13.7`$ MeV. 88% of all detected photons originated from Compton scattering, with a rate normalized to the beam current of 3.5 kHz/mA, see fig. 7. The polarization of the electrons was measured with the Mott polarimeter, located between PES and MEA, to be 0.41.
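The extraction of $`P_e`$ from eq. (5) is a one-parameter linear fit across the energy bins. A minimal sketch follows; the analyzing-power values and count levels are invented placeholders, not the GEANT results:

```python
import numpy as np

def fit_Pe(n_L, n_R, alpha_exp, dS3, cos_phi_p):
    """Weighted one-parameter fit of eq. (5): A(E) = dS3*cos(phi_p)*alpha*P_e.

    n_L, n_R: normalized, background-subtracted spectra per energy bin;
    alpha_exp: Monte Carlo spin correlation function per bin.
    """
    A = (n_L - n_R) / (n_L + n_R)
    var = (1.0 - A**2) / (n_L + n_R)      # approximate counting variance of A
    model = dS3 * cos_phi_p * alpha_exp   # A = model * P_e
    P_e = np.sum(model * A / var) / np.sum(model**2 / var)
    err = 1.0 / np.sqrt(np.sum(model**2 / var))
    return P_e, err

# Placeholder inputs: 9 energy bins, true P_e = 0.39
rng = np.random.default_rng(1)
alpha = np.linspace(0.02, 0.10, 9)
A_true = 0.95 * 0.98 * alpha * 0.39
n_tot = 1.0e5
n_L = rng.poisson(0.5 * n_tot * (1.0 + A_true)).astype(float)
n_R = rng.poisson(0.5 * n_tot * (1.0 - A_true)).astype(float)
print(fit_Pe(n_L, n_R, alpha, dS3=0.95, cos_phi_p=0.98))
```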
The orientation of the electron spin at the injector was not optimized for maximum polarization in AmPS, reducing the polarization in AmPS to 0.39<sup>8</sup><sup>8</sup>8The difference in the polarization as measured with the Mott polarimeter cited in originates from a recalibration of the Mott polarimeter. During the experiment, an energy shift was observed between the energy spectra for left- and right-handed polarized laser light, due to ground currents (see next section). This shift of 22 keV ($`10^{-3}`$ of the full scale) was corrected. The asymmetry $`A_{exp}`$ was measured and the electron polarization $`P_e`$ extracted, see fig. 8. In the extraction of the electron polarization, we only used the values of $`A_{exp}`$ measured in the energy range from 5 to 14 MeV. The lower limit of the range is determined by the energy threshold of the detector. As the energy shift described above occurs before the trigger is applied, the asymmetry gets a systematic offset at the threshold (see the lowest points in fig. 8). The upper limit is chosen such that the real to background ratio is better than 4:1, to avoid sensitivity to the background subtraction. We measured $`P_e=0.38\pm 0.04`$ ($`P_e=0.42\pm 0.06`$) for electrons with positive (negative) helicity. These numbers are in good agreement with the result from the Mott polarimeter. We also determined $`A_{exp}`$ for unpolarized electrons and found $`P_e=0.01\pm 0.04`$, indicating the absence of false asymmetries in our measurements. This first result obtained with the Compton polarimeter proves that it is possible to accelerate polarized electrons and inject them into a storage ring, even if stacking is required. It also shows that it is possible to operate a Compton polarimeter to determine the absolute degree of polarization of a longitudinally polarized stored electron beam.

### 4.2 Systematic checks

We have performed a series of measurements to investigate the systematic errors of the polarimeter. False asymmetries result in errors on the determination of the electron polarization. Possible sources of false asymmetries are: a) inaccuracies in the ratio of the integrated luminosities for the two laser polarization states; b) inaccuracies in the determination of the background contribution; and c) any signal in phase with the asymmetry measurement. The first type of false asymmetry will give an energy-independent contribution, while type b will depend on $`E_\gamma `$. Type c can be either energy dependent or independent, based on how the signal influences the asymmetry measurement.

During these measurements, the storage ring could only be operated with a 10% partial snake. Therefore, it was necessary to perform all measurements at an electron beam energy of 440 MeV, resulting in a maximum energy for the Compton photons of 7.0 MeV. This beam energy is lower than the design specification of the polarimeter (500–900 MeV), resulting in a poor energy resolution. To reduce background at this rather low energy, we performed all measurements with beam currents smaller than 15 mA, resulting in Compton rates $`\leq `$ 120 kHz (8.0 kHz/mA). An advantage of the low beam energy for our measurements is an enhanced sensitivity to false asymmetries, because of the relatively small Compton asymmetry (see table 2). The polarization measured with the Mott polarimeter was 0.69. The settings of the Z-shaped spin manipulator were optimized for maximum polarization in the AmPS ring.
The measurements showed an energy-independent asymmetry of the order of $`0.5\times 10^{-3}`$. It disappeared if we disabled the switching of the Pockels cell. After we switched the Pockels cell, the rate changed by about 10% in 10 s, and then stabilized. If we moved the laser beam by 0.1 mm simultaneously with the switching of the Pockels cell, the equilibrium rate was the same for both states. From this, we concluded that the Pockels cell is heated by applying the high voltage. The effect is a small steering of the laser beam, of the order of 10 $`\mu `$rad between the two equilibrium states, or 15 nrad during normal operation, resulting in a false asymmetry of type a. The exact value of this false asymmetry changed over a time scale of days, though the order of magnitude was constant. It can be explained by temperature drifts of the laser, because the size of the false asymmetry depends strongly on the exact position of the laser beam with respect to the electron beam.

False asymmetries from inaccuracies in the background contributions are negligible, because of the good real to background ratio in the energy range used for the polarization measurements. Furthermore, the background has to be related to the laser polarization in order to introduce false asymmetries. In the energy range used, no physics process can contribute significantly to the asymmetry. The background subtraction could also introduce an error in the size of the asymmetry, if the background subtraction were not performed accurately. Also this effect is negligible, because we measured the background simultaneously, the background is small compared to the real signal, and we correct all energy spectra for dead-time effects.

The only signal in phase with the asymmetry measurements is the driving signal of the Pockels cell and electronics. We have observed a false asymmetry of the order of $`2\times 10^{-3}`$, related to the driving signal of the Pockels cell and electronics. It is proportional to the derivative of the energy spectrum. This indicates that the signal from the gamma detector is shifted by the driving signal of the Pockels cell, before digitisation. This can happen either between the detector and the electronics, or inside the VME module. The asymmetry disappears if we generate the driving signals for the electronics separately, indicating that the shift of the analog signal happens before the signal reaches the electronics. This is confirmed by the fact that the false asymmetry also disappears if the signal from the gamma detector is disconnected from the electronics. The shift of the analog signal results in a shift of the energy spectrum of the order of $`1\times 10^{-3}`$ of the full energy scale.

Both false asymmetries mentioned above can be corrected for, if they are known during real polarization measurements. Therefore, we have performed all measurements in sets of six. Three measurements were done with different electron polarizations injected into the ring (positive helicity, unpolarized, and negative helicity). These measurements were repeated with the $`\lambda /2`$-plate inserted in the path of the laser beam, which reverses the sign of the measured Compton asymmetry by a change of sign of the laser polarization. The measurements with unpolarized electrons were used to determine the false asymmetries. This is only valid if the false asymmetries do not change on the time scale of the 2 times 3 measurements. To check this, one measurement of such a set was repeated nine times.
To exclude sensitivity to variations in the polarization of the injected electrons or in the spin life time, we chose to use unpolarized electrons. The total measurement time was approximately 90 minutes, while a full set of six measurements normally takes about 60 minutes. The results are shown in fig. 9 and show good stability on this time scale, indicating that we can use measurements with unpolarized electrons to correct for false asymmetries.

The systematic uncertainty of the polarimeter is not only determined by false asymmetries, but also by the analysis parameters. The main contribution comes from the energy calibration of the detector. Smaller effects come from the uncertainty in the energy of the electron beam, the laser polarization, and the parametrisation of the theoretical analysing power, folded with the parameters of the detector, like the energy resolution. The laser polarization measured is the average polarization of the whole laser beam. The laser beam is larger than the electron beam and therefore only its central part interacts with the electrons. We have measured the laser polarization in a range of transverse positions and did not observe any variation. Table 2 shows an overview of all sources of systematic errors for electrons of 440 MeV with a polarization of 0.60. The systematic error will decrease for higher beam energy, because the Compton asymmetry increases.

### 4.3 Polarization monitoring

During the first data run of experiment 94-05, the polarimeter has been used to monitor the polarization of the electrons stored in the AmPS ring. This data run followed immediately after the measurements described in the previous section. These measurements were also performed with a partial snake at a beam energy of 440 MeV. The polarization could not be measured simultaneously with the experimental data taking, because the background rate due to <sup>3</sup>He gas leaking into the IR from the internal target was too high. Therefore, the electron polarization was measured once a day when no gas was fed into the internal target. The long-term stability was investigated from these measurements. They are sensitive not only to variations of the polarimeter, but also to any other time-dependent effect such as a degradation of the photo-cathode used at PES. No trend is observed in the polarization of the electrons (see fig. 10), indicating a good long-term stability for all components, including the polarimeter. The average polarization for all measurements and for negative and positive electron helicity, determined with the $`\lambda /2`$-plate in (out of) the laser beam, was measured to be $`0.615\pm 0.009\pm 0.027`$ ($`0.595\pm 0.009\pm 0.027`$). This is less than the value measured with the Mott polarimeter (0.69). No polarization loss was observed during the first measurement at 615 MeV (see section 4.1).

The differences between the two measurements were the beam energy, the settings of the snake, and the orientation of the spin in the accelerator. The beam energy has no direct effect on the depolarization in the accelerator. Depolarization can occur in the focussing solenoids in the first sections of the accelerator. Small energy differences of the electrons can cause differences in the spin precession in the lenses, which will result in loss of coherence of the transverse component of the electron spin. At 615 MeV the spin was longitudinal in the accelerator and thus was not sensitive to this effect.
The electron spin in the measurements at 440 MeV was completely transverse and so had maximum sensitivity to depolarization due to the lenses. The difference might also be explained by polarization losses during injection, due to the partial snake. A full snake ensures that the spin tune is 0.5, so that all possible spin resonances are far away. The spin tune with the partial snake was 0.05. If this was close to a depolarizing resonance, it is possible that during injection the spin life time was shorter due to larger synchrotron oscillations, causing a loss of polarization during the damping of the beam. We have measured the spin life time of the damped beam to be well over 3600 s, which cannot explain the losses we have observed.

## 5 Summary - Conclusions

We have successfully designed, constructed, and operated a Compton polarimeter to measure the longitudinal polarization of a stored electron beam. We have measured the polarization of a stored beam at beam energies of 440 MeV and 615 MeV. The absolute systematic uncertainty has been determined to be 0.027 for electrons at 440 MeV with a polarization of 0.60. The systematic uncertainty will decrease at higher beam energies. It has been shown that the polarimeter can be operated routinely and reliably during the first experiment at the internal target facility that made use of polarized electrons. Extra pumping capacity will be installed in the future to make polarization measurements possible during operation of an internal target.

We would like to thank C. L. Morris (LANL) for the supply of the CsI crystal and O. Hausser (DESY) for advice on the optical system. We would also like to thank the NIKHEF Electronic workshop for the design and construction of the data acquisition module. We would like to thank the accelerator group for their assistance with the operation of the facility, the PES group for the polarized beam, and the 94-05 collaboration for the time and manpower they made available to perform the experiments discussed. This work was supported in part by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), and by HCM Grant Nrs. ERBCHBICT-930606 and ERB4001GT931472.
# Starburst99: Synthesis Models for Galaxies with Active Star Formation

## 1 Introduction

Synthesis models for comparison with observed galaxy properties, and in particular with spectral energy distributions, have become increasingly popular in recent years. This is partly due to improvements in stellar libraries and models. An overview of the latest achievements can be found in the conference volume edited by Leitherer, Fritze-v. Alvensleben, & Huchra (1996b). At least equally important is the possibility of electronic dissemination of the data, either by CD-ROM or via the internet. Most groups who provide such models to the community make them available electronically. See Leitherer et al. (1996a) for a compilation of a suite of models.

This paper concerns a particular set of synthesis models: models which are optimized to reproduce properties of galaxies with active star formation. In the absence of an active galactic nucleus, most radiative properties of such galaxies are determined by their massive-star content. The most extreme examples are referred to as starbursts (Weedman et al. 1981; Weedman 1987; Moorwood 1996), but the models in this paper are often applicable to less extreme star-forming regions and galaxies, like 30 Doradus and M33, as well.

Leitherer & Heckman (1995; hereafter LH95) published a homogeneous grid of synthetic starburst models. These models, together with those of Bressan et al. and Bruzual & Charlot (both published on the CD-ROM of Leitherer et al. 1996a), Fioc & Rocca-Volmerange (1997), and Cerviño & Mas-Hesse (1994), are widely used to aid the interpretation of galaxy observations. Although the LH95 models are still reasonably up-to-date and no major error has been found, we decided to generate and release a new generation of models. There are two reasons for this. First, stellar modeling is continuously advancing, and some known shortcomings of the stellar atmosphere and evolution models used in LH95 have since been remedied. Specifically, we have included the new model atmosphere grid of Lejeune, Buser, & Cuisinier (1997) and the latest Geneva evolutionary tracks in our code. Other modifications include the possibility to perform isochrone synthesis, a technique which was pioneered by Charlot & Bruzual (1991). The second motivation for an update to LH95 concerns the distribution method. While the LH95 data set is unofficially available via our website, the principal publication is on paper. Many users found this inconvenient and encouraged us to publish fully electronically. Therefore we decided to include no tables with model predictions and only a subset of the figures in the hardcopy version of this paper. The full data set with all the figures is available in the electronic version of the journal and via our webpage, which is an integral part of this publication. The reader should visit the website at http://www.stsci.edu/science/starburst99/ to view the figures and to download the tables. Moreover, this website offers the opportunity to run tailored models remotely and to access our source code. The reader should use this paper as a guide when visiting our webpage.

The organization of the paper is as follows: In § 2 we outline our computation technique, the model assumptions, and the improvements over LH95. This section also explains how to reach our website and how to navigate. The spectral energy distributions are presented in § 3. Spectral line profiles in the ultraviolet (UV) are discussed in § 4. In § 5 we provide numbers for the stellar inventory.
Luminosities and colors are in § 6 and § 7, respectively. Properties related to the far-UV are presented in § 8. Some other useful diagnostic lines are in § 9. § 10 covers the mass and energy return of a stellar population. We conclude with § 11.

## 2 Model assumptions and computational technique

The present model set is an extension of our previous work, which was published in several papers. Most of the earlier tables are in LH95. The energy distributions used in LH95 were made available electronically in Leitherer et al. (1996a) and on our website. Ultraviolet spectra between 1200 Å and 1850 Å at 0.75 Å resolution were originally published in Leitherer, Robert, & Heckman (1995) and distributed electronically in Leitherer et al. (1996a). A set of spectra around O VI $`\lambda `$1035 and Ly$`\beta `$ was discussed by González Delgado, Leitherer, & Heckman (1997). All mentioned models have been re-computed and included in the present database.

The covered parameter space is similar to the one in LH95. Since arguments are given in LH95 for the choice of these particular parameters, we will be brief and only address in more detail those issues which differ from LH95. We consider two limiting cases for the star-formation law: an instantaneous burst of star formation and star formation proceeding continuously at a constant rate. The instantaneous models (also referred to as a single stellar population) are normalized to a total mass of $`10^6`$ M<sub>⊙</sub>. The star-formation rate of the continuous model is 1 M<sub>⊙</sub> yr<sup>-1</sup>. These normalizations were chosen to produce properties which are typical for actual star-forming regions in galaxies.

As we did in LH95, we offer three choices for the stellar initial mass function (IMF). The reference model is a power law with exponent $`\alpha =2.35`$ between low-mass and high-mass cut-off masses of $`M_{\mathrm{low}}`$ = 1 M<sub>⊙</sub> and $`M_{\mathrm{up}}`$ = 100 M<sub>⊙</sub>, respectively. This approximates the classical Salpeter (1955) IMF. Most observations of star-forming and starburst regions are consistent with a Salpeter IMF — although the uncertainties can be large (Leitherer 1998; Scalo 1998). For comparison, we also show the results for an IMF with $`\alpha =3.3`$ between 1 and 100 M<sub>⊙</sub>, which has a higher proportion of low-mass stars. Finally, we computed models with a truncated Salpeter IMF: $`\alpha =2.35`$, $`M_{\mathrm{low}}`$ = 1 M<sub>⊙</sub>, $`M_{\mathrm{up}}`$ = 30 M<sub>⊙</sub>.

The models can easily be re-normalized to other $`M_{\mathrm{low}}`$ values. The lower mass cut-off is often only a scaling factor since low-mass stars have a negligible contribution to the properties of the stellar population, except for the mass locked into stars. Absolute quantities (such as, e.g., the number of ionizing photons) can be rescaled to other $`M_{\mathrm{low}}`$ values by multiplying our results by factors of 0.39, 1.00, 1.80, 3.24 for $`M_{\mathrm{low}}`$ = 0.1, 1.0, 3.5, 10 M<sub>⊙</sub>; a short numerical check follows below. Relative quantities (like colors) should, of course, not be scaled. Note that 10 M<sub>⊙</sub> stars are no longer negligible for many model predictions, and the models should be recalculated for this cut-off mass, rather than scaling the existing data. Common sense is necessary when applying this scaling factor to instantaneous models in particular. The lifetime of a 2 M<sub>⊙</sub> star is only somewhat above 1 Gyr. Therefore burst models at 1 Gyr are actually only represented by a population of 1–2 M<sub>⊙</sub> stars. Clearly, even a small change of the lower mass cut-off will drastically alter the model predictions.
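The quoted rescaling factors follow directly from the Salpeter mass integral at fixed total mass; a quick check that reproduces 0.39, 1.00, 1.80, and 3.24 to rounding:

```python
def mass_integral(m_low, m_up=100.0, alpha=2.35):
    """Total mass per unit IMF normalization: integral of m * m**(-alpha) dm."""
    p = 2.0 - alpha
    return (m_up**p - m_low**p) / p

# Factor by which absolute quantities scale when M_low is changed
# at fixed total mass (reference: M_low = 1 solar mass):
for m_low in (0.1, 1.0, 3.5, 10.0):
    print(f"M_low = {m_low:4.1f}: factor = "
          f"{mass_integral(1.0) / mass_integral(m_low):.2f}")
```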
<sup>2</sup><sup>2</sup>2A typographical error was introduced in eq. (3) of LH95: the numerator should be 0.80 instead of –0.35.

We have implemented the new set of stellar evolution models of the Geneva group — as opposed to the Maeder (1990) models in LH95. For masses of 12–25 M<sub>⊙</sub> (depending on metallicity) and above we are using the tracks with enhanced mass loss of Meynet et al. (1994). The tracks of Schaller et al. (1992), Schaerer et al. (1993a, 1993b), and Charbonnel et al. (1993) with standard mass loss are used between 12 and 0.8 M<sub>⊙</sub>. We have favored the enhanced mass-loss models over the standard ones because they are a better representation of most Wolf-Rayet (WR) star properties, except for the mass-loss rate ($`\dot{M}`$) itself. The high mass-loss rates of WR stars are in conflict with observations (Leitherer, Chapman, & Koribalski 1997). This may suggest the lack of one or another ingredient in the evolution models, like rotation-induced mixing processes (Maeder 1995). The evolutionary tracks do not take into account binary evolution. Inclusion of Roche-lobe overflow in binary systems can modify some of the predictions, in particular during the WR phase. Models taking these effects into account were computed by Cerviño, Mas-Hesse, & Kunth (1997), Vanbeveren, Van Bever, & de Donder (1997), Schaerer & Vacca (1998), and Dionne (1999). The Geneva models include the early asymptotic-giant-branch (AGB) evolution until the first thermal pulse for masses $`>`$ 1.7 M<sub>⊙</sub>. Stellar evolution models in the AGB phase are rather dependent on input assumptions, and therefore quite uncertain. A recent comparison between different sets of AGB models was made by Girardi & Bertelli (1998).

Five metallicities are available: $`Z=0.040`$, 0.020 (= Z<sub>⊙</sub>), 0.008, 0.004, and 0.001. These are the metallicities of the evolutionary models. All element and isotope ratios are independent of metallicity. Note that we do not treat chemical evolution self-consistently: each stellar generation has the same metallicity during the evolution of the population. The error introduced by this simplification is negligible as long as star formation proceeds for less than about 1 Gyr; otherwise self-consistent models are needed, such as those by Möller, Fritze-v. Alvensleben, & Fricke (1997).

The models cover the age range $`10^6`$ to $`10^9`$ yr. In principle we could have evolved the populations up to a Hubble time, but we decided not to do so. The evolutionary models are optimized for massive stars and become less reliable for older ages. Many properties can still be safely predicted beyond 1 Gyr whereas others (such as, e.g., near-infrared colors) become increasingly uncertain. We will comment on uncertainties when the individual figures are discussed.

Atmosphere models in LH95 were from Kurucz (1992) for most stars, supplemented by the Schmutz, Leitherer, & Gruenwald (1992) models for stars with strong winds, and black bodies for the coolest stars. Lejeune et al. (1997) have produced a new, homogeneous grid of atmospheres covering the entire Hertzsprung-Russell diagram (HRD) populated by the evolutionary models, including the coolest stars. This new grid has been implemented in our synthesis code. We used the corrected grid, whose fluxes produce colors in agreement with empirical color-temperature relations. The original atmosphere models do not include metallicities of 0.008 and 0.004. We interpolated between adjacent metallicities to obtain grids whose metallicities match those of the evolutionary models; one plausible scheme is sketched below.
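One plausible interpolation scheme for the missing grids, not necessarily the exact prescription used in the code, is linear interpolation of the fluxes in $`\mathrm{log}Z`$ between bracketing grid metallicities; the bracketing values and fluxes below are placeholders:

```python
import numpy as np

def interp_flux_logZ(flux_lo, flux_hi, Z_lo, Z_hi, Z_target):
    """Interpolate two model-atmosphere flux grids linearly in log Z."""
    t = (np.log10(Z_target) - np.log10(Z_lo)) / (np.log10(Z_hi) - np.log10(Z_lo))
    return (1.0 - t) * flux_lo + t * flux_hi

# Placeholder grids bracketing a target of Z = 0.008:
wave = np.geomspace(91.0, 1.6e6, 1200)    # Angstrom
flux_lo = np.full_like(wave, 0.8)         # grid at hypothetical Z_lo = 0.006
flux_hi = np.full_like(wave, 1.2)         # grid at hypothetical Z_hi = 0.013
flux_008 = interp_flux_logZ(flux_lo, flux_hi, 0.006, 0.013, 0.008)
```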
No interpolation was required for 0.040, 0.020, and 0.001. As in LH95, the Schmutz et al. (1992) extended model atmospheres are used for stars with strong mass loss. The prescription for the switch between extended and plane-parallel atmospheres is the same as in LH95. The extended atmospheres are assigned to all stars with surface temperature $`>`$25,000 K and hydrogen content on the surface $`<`$0.4. We decided not to use extended model atmospheres for stars closer to the main sequence since the benefit of improved wind treatment would be more than compensated by the limited metallicity range available for such models.

The nebular continuum has been added to all spectrophotometric quantities, except to the energy distributions in Figures 7a-e, 8a-e, 9a-e, 10a-e, 11a-e, and 12a-e. We have assumed that all the stellar far-UV photons below 912 Å are converted into free-free and free-bound photons at longer wavelengths (case B). However, the stellar far-UV flux has not been removed in the spectral energy distributions, as this would probably not be desirable for the user. We alert the user that the hydrogen and helium emission coefficients of Ferland (1980) do not extend longward of 4.5 $`\mu `$m. Therefore the nebular continuum at these wavelengths is undefined in our code. In practice, this is hardly relevant since dust emission often dominates at $`\lambda >3`$ $`\mu `$m (Carico et al. 1988) and our models become inapplicable anyway.

All models shown here have been computed with the isochrone synthesis method, as opposed to the classical evolutionary synthesis method in LH95. Isochrone synthesis was introduced by Charlot & Bruzual (1991) as a technique to overcome the discrete appearance of the model predictions at late evolutionary stages, when the resolution in mass of the evolutionary models becomes inadequate. Instead of binning stars in mass and assigning them to a specific track, continuous isochrones are calculated by interpolating between the tracks in the HRD on a variable mass grid, using a subroutine originally provided by G. Meynet. This scheme generally produces smooth output products, the only exception being quantities related to supernovae (see below). The time resolution of the model series is 0.1 Myr. Spectral energy distributions, however, are given only at time steps during which significant changes occur. The integration step is illustrated schematically below.

The model predictions are discussed in the following sections. A summary of all the figures is in Table 1. Each figure has five panels (a-e) for each of the five metallicities. Exceptions are Figures 13 through 36, which show UV line spectra. These spectra are for Z<sub>⊙</sub> only, as the library stars are currently available only for this metallicity. The models will be extended as soon as non-solar library stars become available (Robert 1999). The data files which were used for the generation of the plots can be viewed and downloaded (see instructions on the website). The files with spectra (Figures 1 through 36) contain data at more time steps than plotted. For clarity, we included only a subset of all time steps in the plots. Only the figures for solar metallicity models are reproduced in the hardcopy version of the journal (panels ‘b’ of Figures 1 through 12 and 37 through 120, and Figures 13 through 36). The full data set for all metallicities is included in the electronic version of the journal and can be accessed on our website at http://www.stsci.edu/science/starburst99/.
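The isochrone-synthesis integration itself reduces to interpolating a track quantity at fixed age onto a fine mass grid and weighting it with the IMF. The sketch below uses an invented mass-luminosity relation as placeholder track data, purely to illustrate the bookkeeping:

```python
import numpy as np

def isochrone_integral(track_m, track_q, alpha=2.35, n_grid=10000):
    """IMF-weighted integral of a per-star quantity q(m) at fixed age.

    track_m: initial masses of the tracks (ascending);
    track_q: the quantity (e.g. luminosity) of each track at the chosen age.
    Returns the quantity per unit total mass of the population.
    """
    m = np.geomspace(track_m[0], track_m[-1], n_grid)          # fine mass grid
    q = np.interp(np.log10(m), np.log10(track_m), track_q)     # isochrone
    xi = m ** (-alpha)                                         # IMF number density
    return np.trapz(q * xi, m) / np.trapz(m * xi, m)

# Placeholder tracks: L ~ m^3.5 at some fixed age (illustrative only)
masses = np.array([1.0, 2.0, 5.0, 12.0, 25.0, 60.0, 100.0])
print(isochrone_integral(masses, masses**3.5))
```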
## 3 Spectral energy distributions

Spectral energy distributions for all five metallicities, the three IMF parameterizations, and the two star-formation laws were calculated. The wavelength coverage and spectral resolution are identical to those of the model atmosphere set of Lejeune et al. (1997). The minimum and maximum wavelengths are 91 Å and 160 $`\mu `$m, respectively. The spectral resolution is wavelength dependent. It is typically about 10 to 20 Å in the UV to optical range. The energy distributions were calculated at intervals of 1, 10, and 100 Myr between 1 and 20 Myr, 20 and 100 Myr, and 100 and 1000 Myr, respectively. The standard IMF case ($`\alpha =2.35`$; $`M_{\mathrm{up}}`$ = 100 $`M_{\odot}`$) is in Figure 1. Panels a-e show the energy distributions for the five considered metallicities. Not all time steps are plotted in the figures but they are included in the ASCII table file which can be downloaded. The wavelength range has been restricted to 100 – 10,000 Å in the figures, but the full range is covered in the tables. The most dramatic changes occur in the far-UV below 912 Å, and even more below 228 Å in the He<sup>++</sup> continuum. O stars dominate during the first few Myr, followed by a brief period at $`\sim `$4 Myr when WR stars contribute with their strong far-UV flux. Eventually the burst population fades away. Metallicity enters both via the continuum and line opacity (compare the 2600 Å region at low and high metallicity) and via stellar evolution. The latter effect can most readily be seen in the strength of the He<sup>++</sup> continuum. It is strongest at high metallicity when WR stars preferentially form, and it is weakest at low metallicity (see also the discussion in § 8). The continuous star formation case for the same IMF is in Figure 2a-e. Most of the prior discussion holds for this figure as well. Once the stellar types responsible for the photon contribution to a wavelength interval have reached equilibrium between birth and death, the particular wavelength region becomes time independent. We move on to different IMFs. The instantaneous and continuous cases, with $`\alpha =3.3`$, $`M_{\mathrm{up}}`$ = 100 $`M_{\odot}`$ are in Figures 3a-e and 4a-e, respectively. The truncated IMF, $`\alpha =2.35`$, $`M_{\mathrm{up}}`$ = 30 $`M_{\odot}`$ is covered in Figures 5a-e and 6a-e. The spectral energy distributions produce a softer radiation field as compared with the standard IMF, in particular if stars with $`M>30`$ $`M_{\odot}`$ are suppressed. The radiative properties shown later in the paper were obtained from these energy distributions. For instance, colors were derived by convolving the spectra with the filter profiles. Users who wish to compute colors in different filter systems can do so with the spectra in Figures 1 through 6 (a schematic example of such a convolution follows below). Another application would be to use the spectral energy distributions as input to a photoionization code. In this case, Figures 1 through 6 should not be used because the nebular continuum was added to the stellar flux. We computed an additional set of stellar spectra with exactly the same parameters as before, but without including the nebular continuum. This set may also be useful to compute magnitudes and colors for comparison with objects which have no nebular continuum contribution. An example could be a background subtracted star cluster, where the background subtraction removed the nebular emission as well.
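For orientation, a schematic example of such a filter convolution follows. The photon weighting and the zero-point treatment (a reference star with zero colours, as in § 7) are assumptions of this illustration and not necessarily the exact conventions of the code.

```python
import numpy as np

def synth_flux(wl, f_lambda, T_filter):
    # photon-weighted mean flux through a filter transmission curve T(lambda);
    # all arrays are assumed to share the wavelength grid wl [A]
    return np.trapz(f_lambda * T_filter * wl, wl) / np.trapz(T_filter * wl, wl)

def colour(wl, f_lambda, T_1, T_2, f_ref):
    """A colour such as (B-V), zero-pointed on a reference spectrum f_ref."""
    m1 = -2.5 * np.log10(synth_flux(wl, f_lambda, T_1) / synth_flux(wl, f_ref, T_1))
    m2 = -2.5 * np.log10(synth_flux(wl, f_lambda, T_2) / synth_flux(wl, f_ref, T_2))
    return m1 - m2
```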
There may also be cases where the nebular emission of a galaxy or H II region is more extended than the ionizing cluster, and part of the nebular flux is outside the instrument aperture. Finally, the H II region may not be ionization-bounded. Such cases may be easier to model using energy distributions without nebular continuum, and applying an empirical correction, e.g., from the observed H$`\alpha `$ flux in the aperture. The models without nebular continuum are in Figure 7a-e ($`\alpha =2.35`$, $`M_{\mathrm{up}}`$ = 100 $`M_{\odot}`$, instantaneous star formation), Figure 8a-e ($`\alpha =2.35`$, $`M_{\mathrm{up}}`$ = 100 $`M_{\odot}`$, continuous star formation), Figure 9a-e ($`\alpha =3.3`$, $`M_{\mathrm{up}}`$ = 100 $`M_{\odot}`$, instantaneous star formation), Figure 10a-e ($`\alpha =3.3`$, $`M_{\mathrm{up}}`$ = 100 $`M_{\odot}`$, continuous star formation), Figure 11a-e ($`\alpha =2.35`$, $`M_{\mathrm{up}}`$ = 30 $`M_{\odot}`$, instantaneous star formation), and Figure 12a-e ($`\alpha =2.35`$, $`M_{\mathrm{up}}`$ = 30 $`M_{\odot}`$, continuous star formation). The spectra in Figures 7 through 12 and the curves in Figures 45, 46, 75, and 76 are the only models in this paper which do not include the nebular continuum.

## 4 Ultraviolet line profiles

Two spectral regions with useful diagnostic lines were computed at higher resolution. The wavelength range from 1200 Å to 1850 Å has the strong resonance lines of C IV $`\lambda `$1550, Si IV $`\lambda `$1400, and N V $`\lambda `$1240. At shorter wavelengths, O VI $`\lambda `$1035 and Ly$`\beta `$ are of interest. The spectra described in this section were computed with a normalized library of stellar UV spectra and absolute fluxes derived from the energy distributions shown in the previous section. We did not use the energy distributions directly since they are not featureless. Rather we derived a featureless continuum by fitting a spline through line-free sections of the model atmospheres.

### 4.1 The 1200 Å to 1850 Å region

The 1200 Å to 1850 Å region is readily accessible to many satellites, like IUE or HST. We synthesized spectra from an IUE high-dispersion library of O and WR spectra and a low-dispersion library for B stars. The spectral library and the method are discussed in Robert, Leitherer, & Heckman (1993) and Leitherer et al. (1995). The resolution is 0.75 Å, adequate for typical spectral data of extragalactic objects. This resolution is only reached during phases dominated by O- and WR stars, as is the case for young ($`t<7`$ Myr) bursts and for the continuous case at any time. Otherwise the effective spectral resolution approaches the resolution of IUE low-dispersion spectra (6 Å). The upgrade of the B-star library from low- to high-dispersion has been completed and will be implemented in the future (de Mello, Leitherer, & Heckman, in preparation; Robert, Leitherer, & Heckman, in preparation). Only solar metallicity models are shown since the library stars have solar or somewhat sub-solar metallicity. A spectral library of massive stars in the Magellanic Clouds is being developed and will be available in the future (Robert 1999; Robert, Leitherer, & Heckman 1999, in preparation). Figure 13 shows rectified spectra between 1 and 20 Myr for an instantaneous burst. A standard IMF was used. The evolution of the strongest stellar-wind features of C IV $`\lambda `$1550, Si IV $`\lambda `$1400, N V $`\lambda `$1240, N IV $`\lambda `$1720, and He II $`\lambda `$1640 can be readily seen.
These features disappear or become photospheric lines after about 7 Myr when the transition from an O-star to a B-star dominated population occurs. A purely photospheric line is, e.g., S V $`\lambda `$1502. The library stars have strong interstellar lines as well. One of the strongest lines is C II $`\lambda `$1335. The continuous case for the same IMF is shown in Figure 14. The same basic effects are there as well but the time-dependence is much weaker, and a quasi-equilibrium state is reached after about 5 Myr. Figures 15 and 16 show spectra with the same parameters as in Figures 13 and 14, but in absolute luminosity units. As we did for the spectral energy distributions in the previous section, we re-computed the UV spectra for the two other IMFs, a steeper IMF with $`\alpha =3.3`$ and a truncated IMF with $`M_{\mathrm{up}}`$ = 30 $`M_{\odot}`$. These spectra are in Figures 17 through 24. The general trend is a weakening of all stellar-wind lines, most notably C IV $`\lambda `$1550 and Si IV $`\lambda `$1400, as these lines are due to massive O stars. These lines are very sensitive IMF tracers. Their detection immediately indicates an O-star population and therefore star-formation over the past $`\sim `$10 Myr.

### 4.2 The 1015 Å to 1060 Å region

The wavelength region shortward of Ly$`\alpha `$ in local starburst galaxies is accessible to, e.g., the FUSE mission (Sahnow et al. 1996) and can be observed in star-forming galaxies at high redshift from the ground. One of the strongest stellar-wind lines is O VI $`\lambda `$1035 (Walborn & Bohlin 1996). Synthetic spectra for the region between 1015 Å and 1060 Å were presented by González Delgado et al. (1997) and were regenerated for the Starburst99 package. They are based on an empirical Copernicus library of O and B stars and have a resolution of 0.2 Å. A few library stars were observed with HUT at a spectral resolution of about 3 Å. Only solar metallicity models are considered, as we have no library stars with non-solar metallicity. Available time steps are 1 to 20 Myr at $`\mathrm{\Delta }t=1`$ Myr and 20 to 100 Myr at $`\mathrm{\Delta }t=10`$ Myr. Not every time step is plotted in the figures. We discuss the most notable features using the standard IMF, instantaneous burst, rectified model (Figure 25). The strongest lines are O VI $`\lambda `$1035, Ly$`\beta `$, C II $`\lambda `$1036, and the Lyman and Werner bands of H<sub>2</sub>, like the one at 1050 Å. In the first few Myr of the starburst, O VI $`\lambda `$1035 has a strong P Cygni profile, indicating winds from massive O stars. O VI $`\lambda `$1035 becomes weaker with age and gradually blends with Ly$`\beta `$ at 1026 Å. Ly$`\beta `$ is mostly stellar, with some interstellar contribution. The line increases in strength with increasing B-star fraction. C II $`\lambda `$1036 follows Ly$`\beta `$ in its temporal behavior. It is a strong B-star line with some interstellar contribution and can be used to estimate the starburst age (González Delgado et al. 1997). The continuous case with the same IMF parameters behaves accordingly (Figure 26). The counterparts of Figures 25 and 26 in luminosity units are in Figures 27 and 28. The remaining figures for O VI $`\lambda `$1035 are organized the same way as the figures for the 1200 Å to 1850 Å region. They are intended to highlight IMF variations (Figures 29 through 36). An IMF biased against massive O stars suppresses O VI $`\lambda `$1035 and strengthens C II $`\lambda `$1036.
Even at the earliest ages the absorption trough around 1025 Å is mostly due to Ly$`\beta `$, and not O VI $`\lambda `$1035.

## 5 Massive-star inventory

In this section we give number predictions for massive stellar types that can be easily “counted” even in distant galaxies: O stars, WR stars, and supernovae (SNe). O-star numbers are in Figures 37a-e and 38a-e for the instantaneous and continuous case, respectively. Panels a, b, c, d, and e are for metallicities 0.040, 0.020, 0.008, 0.004, and 0.001, respectively. All remaining figures in this paper have the same structure. The O-star numbers include stars with spectral types O3 to O9.5 of all luminosity classes. The adopted spectral-type versus effective temperature ($`T_{\mathrm{eff}}`$) and luminosity ($`L`$) relation is from Schmidt-Kaler (1982). The chosen star-formation histories produce up to about $`10^3`$ to $`10^4`$ O stars at any time, depending on the age of the population. A flatter IMF increases the O-star number. WR stars are the evolved descendants of massive O stars. We define them as stars with $`\mathrm{log}T_{\mathrm{eff}}>4.4`$ and surface hydrogen abundance less than 0.4 by mass. The mass limits for the formation of WR stars are taken from Maeder & Meynet (1994). WR stars have very extended atmospheres producing strong emission lines which can be detected in distant starburst populations (Conti 1991). The quantity of interest for comparison with models is the WR/O ratio, which is plotted in Figures 39a-e and 40a-e for the instantaneous and continuous case, respectively. This ratio is very metallicity dependent: metal-rich starbursts are predicted to have more WR stars at any time and to have longer phases when WR stars are present. This behavior is generally in agreement with observations (Meynet 1995). The panels with sub-solar metallicity have no graph for the IMF with $`M_{\mathrm{up}}`$ = 30 $`M_{\odot}`$. This results from the absence of WR stars in these models: at sub-solar metallicity the minimum initial mass for WR formation (Maeder & Meynet 1994) lies above 30 $`M_{\odot}`$, so the truncated IMF produces none. The detailed shape of the graphs in Figures 39a-e and 40a-e is very model dependent and future revisions of the evolutionary models may change the WR/O ratio. Dedicated evolutionary synthesis models with particular attention to the WR phase were published by Meynet (1995) and Schaerer & Vacca (1998). The binary channel to form WR stars in stellar populations was investigated by Cerviño & Mas-Hesse (1997), Vanbeveren et al. (1997), Schaerer & Vacca (1998), and Dionne (1999). WR stars come in two main subclasses: nitrogen-rich WN stars and carbon-rich WC stars. We adopt the classification schemes of Conti, Leep, & Perry (1983) for WN stars and of Smith & Hummer (1988) and Smith & Maeder (1991) for WC stars. The WC/WN ratio measures the relative lifetimes spent in the two phases (Figures 41a-e and 42a-e). However, the calculations are affected by uncertainties in the interpolation of the relatively sparse tracks, as discussed in Schaerer & Vacca (1998). In this respect, the present models are identical to those of Schaerer & Vacca, who predict fewer WC stars than Meynet (1995) despite the use of the same tracks. Most of the comments on the WR/O ratio apply to these figures as well. The WC/WN ratio varies very little with the IMF exponent: the $`\alpha =2.35`$ and 3.3 cases are almost indiscernible in the figures. This indicates that both stellar types have progenitors of very similar mass. As a result, applying different weights to different mass intervals does not affect the ratio.
The same argument applies to several other quantities discussed further below, like, e.g., the CO index (Figure 95). All massive stars with initial masses above 8 $`M_{\odot}`$ are assumed to explode as SNe. This leaves open the question if a critical mass exists above which stars directly form a black hole rather than producing a core-collapse SN (Maeder 1992). If so, the predictions of Figures 43a-e and 44a-e would hardly be changed because the SN rate is strongly weighted towards the lowest progenitor masses. The user should be aware of a computational issue which can be seen in Figures 43a-e and 44a-e. The graphs show discontinuities at the factor-of-2 level where there should be none (e.g., around 10 Myr in Figure 43). A simple argument suggests that the SN rate should be a smooth and almost constant function of time. The rate scales with the product of the IMF and the mass-age relation: the number of SNe per unit time is $`\varphi (m)|\mathrm{d}m/\mathrm{d}t|`$. For a Salpeter IMF ($`\varphi (m)\propto m^{-\alpha }`$ with $`\alpha =2.35`$) and a mass-age relation $`t(m)\propto m^{-\gamma }`$ with $`\gamma \approx 1.2`$ (Schaerer et al. 1993b), this scales as $`t^{(\alpha -1)/\gamma -1}\approx t^{0.1}`$, so the SN rate of an instantaneous burst becomes essentially time-independent (Shull & Saken 1995). In other words, there are more and more SN events for lower masses since there are more progenitors available, but at the same time it takes longer for a star to turn into a SN. Obviously the discontinuities in Figure 43 are not physical. They result from a peculiarity of the isochrone synthesis interpolation technique. The code attempts to optimize the mass interval size for the interpolation. This works extremely well, except when only stars in an infinitesimal mass interval are relevant, as is the case for the SN rate. The same applies to all quantities which are directly dependent on the SN rate, like the energy release from SNe. We decided not to artificially smooth the curves since this would be a subjective process and we did not want to treat SN-related quantities differently from the other predictions. We remind the user to apply common sense when interpreting the figures. The correct supernova rate between 5 and 35 Myr for an instantaneous burst in Figure 43 is about $`10^{-3}`$ yr<sup>-1</sup> (LH95).

## 6 Luminosities

We give the total and monochromatic luminosities at selected wavelengths. We have chosen the $`V`$, $`B`$, and $`K`$ passbands, as well as a UV wavelength at 1500 Å since these are the most interesting cases for comparison with observations. Luminosities for other bands can be obtained from the spectral energy distributions in Figures 1 through 6. The bolometric magnitude ($`M_{\mathrm{Bol}}`$) is defined such that the Sun has $`M_{\mathrm{Bol}}=4.75`$. The results in Figures 45a-e and 46a-e were obtained by integrating the spectral energy distributions without the nebular continuum. The total luminosity of the combined stellar and nebular continuum is smaller than the pure stellar continuum since only a fraction of the absorbed stellar ionizing flux is re-emitted in the nebular continuum. The curves are smooth without strong discontinuities and reflect the behavior of the most massive stars which provide most of the radiative energy output. The absolute magnitudes $`M_\mathrm{V}`$, $`M_\mathrm{B}`$, and $`M_\mathrm{K}`$ are in Figures 47a-e (instantaneous) through 52a-e (continuous). The absolute magnitude $`M_\mathrm{V}`$ was calculated from the bolometric luminosity and the bolometric correction, which is 0.00 for a solar metallicity star with $`T_{\mathrm{eff}}`$ = 7000 K and $`\mathrm{log}g=1.0`$. The Sun has a bolometric correction of $`-0.19`$ in this system.
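A minimal sketch of this conversion, under the convention $`M_{\mathrm{Bol}}=M_\mathrm{V}+BC`$ (the sign convention is an assumption of this illustration, as is the origin of the $`BC`$ values, which would come from the atmosphere grid):

```python
import numpy as np

def M_bol(L_over_Lsun):
    # bolometric magnitude on the scale where the Sun has M_Bol = 4.75
    return 4.75 - 2.5 * np.log10(L_over_Lsun)

def M_V(L_over_Lsun, BC):
    # assumes M_Bol = M_V + BC; BC = 0.00 at T_eff = 7000 K, log g = 1.0, solar Z
    return M_bol(L_over_Lsun) - BC

# the Sun, with BC = -0.19 in this system: M_V(1.0, -0.19) -> 4.94
```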
The absolute magnitudes $`M_\mathrm{B}`$ and $`M_\mathrm{K}`$ follow from $`M_\mathrm{V}`$ and the $`(B-V)`$ and $`(V-K)`$ colors. All three luminosities have conspicuous variations around 10 Myr when red supergiants (RSG) appear. The effect is strongest in the $`K`$ band, which is closest to the energy peak of RSGs. The RSG feature is very metallicity-dependent. The RSG contribution in Figure 51a-e is strongest at $`Z=0.040`$ and weakest at $`Z=0.001`$. This apparent metallicity dependence results from the failure of the evolutionary models to predict correct RSG properties at $`Z\lesssim 0.008`$. This effect was pointed out before by Mayya (1997) and Origlia et al. (1998). Low-metallicity RSG models have too high surface temperatures and too short RSG lifetimes. Uncertainties in the mixing processes at low metallicity are a possible explanation (Langer & Maeder 1995). Currently there are no self-consistent stellar evolution models available which correctly predict the variations of blue-to-red supergiants with metallicity. Therefore our (and most other) synthesis models are incorrect during phases when RSGs are important. An empirically adjusted set of synthesis models was prepared by Origlia et al. (1998) but these adjustments were not made for the model set in this paper. Luminosities at 1500 Å (Figures 53a-e and 54a-e) were computed by averaging over the wavelength interval 1490 Å – 1510 Å. This wavelength becomes observable from the ground at redshifts larger than $`\sim `$2, and the luminosity at 1500 Å is a useful indicator of the star-formation rate or, in connection with the H$`\alpha `$ luminosity, of dust obscuration (Pettini et al. 1997). The 1500 Å luminosity is a very robust prediction, with few uncertainties since the continuum at this wavelength comes from well-understood late-O/early-B stars.

## 7 Colors

Optical and near-infrared (IR) colors were calculated by convolving the spectral energy distributions with the filter profiles. The filters are in the Johnson (1966) system and are the same as in LH95. The zero point is defined by a star with $`Z=0.020`$, $`T_{\mathrm{eff}}`$ = 9400 K, and $`\mathrm{log}g=3.95`$, which has zero colors in all passbands. Colors in other photometric systems can be obtained by convolving the spectral energy distributions with the desired filter profiles. These colors are available: $`(U-B)`$ (Figures 55a-e and 56a-e), $`(B-V)`$ (Figures 57a-e and 58a-e), $`(V-R)`$ (Figures 59a-e and 60a-e), $`(V-I)`$ (Figures 61a-e and 62a-e), $`(V-J)`$ (Figures 63a-e and 64a-e), $`(V-H)`$ (Figures 65a-e and 66a-e), $`(V-K)`$ (Figures 67a-e and 68a-e), and $`(V-L)`$ (Figures 69a-e and 70a-e). As a reminder, the continuous nebular emission is included in the colors, but not the line emission. The $`R`$ filter is the most likely passband to suffer from nebular line contamination since H$`\alpha `$ is included. A first check of the expected degree of contamination can be made by comparing the H$`\alpha `$ equivalent widths of Figures 83a-e and 84a-e with the width of the $`R`$ filter (about 2000 Å). Line strengths of other strong lines in H II regions can be estimated from the photoionization models of Stasińska & Leitherer (1996). The RSG issue raised before applies to some of the color plots for low metallicities as well. The strong metallicity dependence of the RSG feature around $`10^7`$ yr is related to the effects discussed in § 6. Rather than colors, we give continuum slopes in the UV.
We define the slope $`\beta `$ as the spectral index of the spectral energy distribution: $`F_\lambda \propto \lambda ^\beta `$. Two indices are shown, one for the average slope between 1300 Å and 1800 Å (Figures 71a-e and 72a-e), and one for the 2200 Å to 2800 Å region (Figures 73a-e and 74a-e). The slopes were simply derived by fitting a first-order polynomial to the spectra through the wavelength intervals 1280 Å – 1320 Å and 1780 Å – 1820 Å for $`\beta _{1550}`$, and through 2180 Å – 2220 Å and 2780 Å – 2820 Å for $`\beta _{2500}`$. This should serve as an approximate guide for the variation of the slope with time but becomes increasingly meaningless if the actual spectrum deviates from a power law. Figures 1 through 6 suggest that spectral energy distributions with ages less than $`\sim `$200 Myr are indeed well approximated by a power law in the UV but that this assumption is no longer correct at older ages. The UV slopes are quite independent of evolution and IMF effects for the first 30 Myr. This property makes them very useful for deriving UV extinctions in galaxy spectra (Calzetti, Kinney, & Storchi-Bergmann 1994). Therefore the spectral slope of the restframe ultraviolet spectra of star-forming galaxies at high redshift can be used to estimate the effects of dust obscuration in the early universe (Calzetti & Heckman 1999, and references therein).

## 8 Far-ultraviolet properties

The predictions in this section rely on our capabilities to model the stellar far-UV continuum below 912 Å since this spectral region is generally not accessible to direct observations. A comparison between different models for hot stars in this spectral region has been made by Schaerer & de Koter (1997). The internal consistency, as judged from differences between the models, is within 0.1 dex between 912 Å and 504 Å, which is relevant for the flux ionizing neutral hydrogen ($`N(\mathrm{H}^0)`$). Uncertainties become larger towards shorter wavelengths, in particular below 228 Å, where the emergent flux becomes strongly dependent on stellar-wind properties. This of course leaves open the question of the external, absolute uncertainties. Model atmospheres generally do a good job above 228 Å when combined with photoionization models and compared to H II region spectra (García-Vargas 1996) although there are some properties which are sensitive to the flux distribution in the neutral He continuum (Stasińska & Schaerer 1997). This makes errors in the integrated photon fluxes by more than 0.3 dex unlikely. The region below 228 Å has not yet been tested in such detail and the uncertainties are potentially large. Far-UV models for stars cooler than about 30,000 K (types B and later) are lagging behind. Cassinelli et al. (1995) found discrepancies by two orders of magnitude between the observed extreme-UV flux of the B2 star $`ϵ`$ CMa and model predictions. Even slight errors in the adopted wind parameters can produce huge far-UV flux variations at these relatively low temperatures (Schaerer & de Koter 1997). We define the Lyman break as the ratio of the average flux in the wavelength interval 1080 Å – 1120 Å over that between 870 Å and 900 Å. These wavelength intervals are relatively free of line-blanketing so that we measure mostly temperature rather than line opacity. The Lyman break is about a factor of 2 to 3 in O-star dominated phases and drops thereafter (Figures 75a-e and 76a-e). The warnings about model uncertainties for the Lyman continuum of cooler stars apply to the instantaneous case.
Figure 75a-e should not be overinterpreted after about 50 Myr since B stars like $`ϵ`$ CMa (see previous paragraph) could dominate. The Lyman break for a continuous population is always dominated by hot stars so that Figure 76a-e can be trusted over the entire range plotted. The number of photons capable of ionizing neutral hydrogen ($`N(\mathrm{H}^0)`$), neutral helium ($`N(\mathrm{He}^0)`$), and ionized helium ($`N(\mathrm{He}^+)`$) were calculated by integration of the spectra below 912 Å, 504 Å, and 228 Å, respectively. They are shown in Figures 77a-e and 78a-e ($`N(\mathrm{H}^0)`$), Figures 79a-e and 80a-e ($`N(\mathrm{He}^0)`$), and Figures 81a-e and 82a-e ($`N(\mathrm{He}^+)`$). As stated before, the numbers become increasingly uncertain with shorter wavelength. $`N(\mathrm{He}^+)`$ is almost entirely produced by WR stars, and uncertainties in the wind properties enter. We also give equivalent widths of several popular hydrogen recombination lines. The continuum is taken from the model atmospheres and does not include underlying stellar absorption. The nebular continuum is of course taken into account. The individual plots are for H$`\alpha `$ (Figures 83a-e and 84a-e), H$`\beta `$ (Figures 85a-e and 86a-e), Pa$`\beta `$ (Figures 87a-e and 88a-e), and Br$`\gamma `$ (Figures 89a-e and 90a-e). The transformation relations from $`N(\mathrm{H}^0)`$ to the line luminosities are in LH95.

## 9 Other diagnostic lines

In this section we present a few more diagnostics that can be useful to isolate a stellar population in a galaxy spectrum. They are all related to stars off the main-sequence: WR stars, RSGs and SNe. In Figures 81 and 82 we predicted $`N(\mathrm{He}^+)`$, which can immediately be converted into the emission-line flux of nebular He II $`\lambda `$4686. Generally, a hot-star population capable of producing nebular He II will also show broad, stellar He II $`\lambda `$4686 (Schaerer & Vacca 1998). Our model predictions for the stellar line are in Figures 91a-e and 92a-e. The feature is commonly referred to as the “WR bump”. It includes only He II and none of the other nearby spectral features like C III, N III, \[Ar IV\], and \[Fe III\]. Note that our code predicts other WR lines from the list of Schaerer & Vacca but they are not given here. \[Fe II\] $`\lambda `$1.26 $`\mu `$m is useful to count supernovae in starbursts (Figures 93a-e and 94a-e). The supernova shock wave destroys interstellar dust grains, thereby releasing iron atoms and ions which had condensed on the dust grains. We adopted the scaling relation of Calzetti (1997) to convert the supernova rates into \[Fe II\] line luminosities. The remaining figures in this section are related to RSG properties. We repeat that the predicted properties of post-main-sequence stars are not reliable and that all phases dependent on RSG properties at sub-solar metallicity are suspect. We begin with the spectroscopic CO index at 2.2 $`\mu `$m (Figures 95a-e and 96a-e). We follow the definition of Doyon, Joseph, & Wright (1994) who expressed the CO strength as a function of temperature for dwarfs, giants, and supergiants. The index is set to 0 for $`T_{\mathrm{eff}}`$ $`>`$ 6000 K. We also computed the strength of the calcium triplet at $`\lambda \lambda `$8498 Å, 8542 Å, 8662 Å (Figures 97a-e and 98a-e).
The equivalent width of the sum of all three lines is related to gravity ($`\mathrm{log}g`$) and metallicity by the relation $`W(\mathrm{CaT})=10.21-0.95\mathrm{log}g+2.18\mathrm{log}(Z/Z_{\odot})`$ (Díaz, Terlevich, & Terlevich 1989; García-Vargas, Mollá, & Bressan 1998). The predicted values take into account neither nebular emission nor stellar absorption of higher Paschen lines which fall in this wavelength region. Observations must be corrected for these lines, if present, before a comparison is made. Evolutionary synthesis models for the first and second overtones of CO at 2.29 $`\mu `$m and 1.62 $`\mu `$m and for Si I at 1.59 $`\mu `$m were presented by Origlia et al. (1998). The models in Figures 99a-e and 100a-e (CO $`\lambda `$1.62 $`\mu `$m), Figures 101a-e and 102a-e (CO $`\lambda `$2.2 $`\mu `$m), and in Figures 103a-e and 104a-e (Si I $`\lambda `$1.59 $`\mu `$m) are expanded versions of the unmodified case discussed by Origlia et al. In that paper, the effect of modifying the evolutionary tracks was studied and several sets of IR-line models were published. To be consistent with the other quantities shown here, we opted for including only the standard models. The models for Si I $`\lambda `$1.59 $`\mu `$m were not covered by Origlia et al. and are shown here for the first time. The theoretical library of Origlia et al. (1993) which was used for Si I is less reliable than that for CO so that care is required when using the Si I models.

## 10 Mass and energy return

The previous sections cover stellar numbers and radiative properties. Here we turn to non-radiative properties of the stellar population. The input physics is discussed in greater detail in Leitherer et al. (1992). The figures in this section show the mass and energy input by stars and SNe. Only core-collapse SNe are considered. They are assumed to release $`10^{51}`$ erg per event in the form of kinetic energy, independent of metallicity. We do not address the efficiency of thermalization, which would require hydrodynamical modeling. The simulations of Thornton et al. (1998) suggest that about 10% of the available kinetic energy can actually be used to pressurize the interstellar gas. The remaining 90% are radiated away. The rate of mass return of stellar-wind and SN material is plotted in Figures 105a-e and 106a-e. The adopted mass-loss rates are not those of the evolutionary models but those favored by Leitherer et al. (1992) and LH95. We prefer this approach over simply using the evolutionary mass-loss rates. In our opinion, the evolutionary mass-loss rates are too high (see Section 2) and should be considered only as an adjustable parameter in evolution models but are not directly related to the observed mass-loss rate. Slight differences between Figures 105 and 106 and the corresponding figures in LH95 are not due to a different mass-loss parameterization but because of different stellar parameters ($`T_{\mathrm{eff}}`$, $`L`$) in the revised tracks. The individual contributions from stellar winds and SNe are broken down in Figures 107a-e and 108a-e for the standard IMF case. Stellar winds are generally more important for young bursts whereas SNe take over at later times. The total mass return from winds and SNe is in Figures 109a-e and 110a-e. The quantity plotted is $`\int \dot{M}dt`$, i.e. the integral of the curves in Figures 107 and 108 over time. This quantity is useful to evaluate the exhaustion of the gas supply in a galaxy or the degree of chemical pollution by wind and supernova material.
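A sketch of how such a cumulative return can be evaluated on a discrete model time grid (the arrays here are hypothetical placeholders):

```python
import numpy as np

def cumulative_return(t, mdot):
    """Trapezoidal running integral of mdot [M_sun/yr] over t [yr]."""
    out = np.zeros_like(mdot)
    out[1:] = np.cumsum(0.5 * (mdot[1:] + mdot[:-1]) * np.diff(t))
    return out
```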
Figures 111a-e and 112a-e give the mechanical luminosity $`L_{\mathrm{mech}}`$ released by winds and supernovae. The curves are similar to those for the mass return in Figures 105 and 106. The relative contributions to $`L_{\mathrm{mech}}`$ from stellar winds and SNe are in Figures 113a-e and 114a-e. A further break-down into individual stellar-wind components is given in Leitherer et al. (1992). Generally, most of the wind power comes from WR stars, with some contribution from O stars. All other stellar phases are negligible since wind velocities of cool stars are lower by two orders of magnitude. The energy return ($`\int L_{\mathrm{mech}}dt`$) from winds and SNe is in Figures 115a-e and 116a-e. It is instructive to perform a differential comparison between the radiative and the non-radiative energy output of young stellar populations. This is done in Figures 117a-e and 118a-e for the ratio of the ionizing ($`<`$912 Å) over the bolometric luminosity, and in Figures 119a-e and 120a-e for the ratio of the mechanical ($`L_{\mathrm{mech}}`$) over the bolometric luminosity. The non-radiative energy input into the interstellar medium becomes significant in comparison with ionizing radiation once the strong WR winds are turned on. For a single population, non-thermal and ionizing energy input become equally important around $`\sim `$10 Myr.

## 11 Conclusions

We have computed a large grid of predictions for observable properties of galaxies with active star formation. The distribution of the models is purely web-based. We believe the community will find this method more useful than a hardcopy publication. All the figures discussed in this paper are at http://www.stsci.edu/science/starburst99/. This webpage provides links to other spectrophotometric databases as well. It is worthwhile to recall the most important shortcomings and uncertainties of the models: Chemical evolution is not treated self-consistently. Each stellar generation has the same chemical composition. This becomes a concern for models which are evolved over times during which significant changes in the metallicity of the ISM occur. In this case spectra will have a wavelength dependent metallicity. Generally, light at shorter wavelengths is produced by more massive and younger stars, which are chemically more evolved than older stars. Binary evolution has been neglected. Although about 50% of all stars form in binaries, it is not clear if a significant fraction of these binaries has an evolutionary history that differs substantially from single-star evolution. There is disagreement in the literature on this point. Under these circumstances we took the approach of implementing the simpler of the two alternatives, as nature itself often prefers to do. Mass loss and mixing processes in stellar evolution are still poorly understood. Stellar phases, like WR stars or RSGs, are particularly affected by such uncertainties. The main culprit is the lack of a self-consistent theory which makes the introduction of adjustable parameters necessary. These parameters then are sometimes extrapolated into a regime for which they were not calibrated, such as metallicity. The situation is particularly disturbing for red supergiants whose properties are badly reproduced at low metallicity. There is no easy work-around for the user of evolutionary models, except for an empirical adjustment of the tracks. We have discussed such an adjusted model set. Clearly, a strong effort on the stellar evolution modeling side is called for.
Our models put most of the emphasis on early evolution phases. Later phases, like AGB stars or white dwarfs, are covered only crudely or not at all. While improvements to our code can be made, our prime goal and expertise are related to massive, hot stars, and we decided to optimize this stellar species first. Despite the warnings, the model set should turn out to be useful for the interpretation of observations of star-forming galaxies. For maximum benefit, the user is encouraged to compare our model predictions with those of other groups, such as those mentioned earlier in the paper. To provide maximum flexibility, we offer the user the possibility of running tailored models at our website. Instructions on how to run the code and how to access the results are given at the website. The Fortran code is distributed freely and can be retrieved from the website as well. Harry Payne helped us trouble-shoot numerous bugs and pitfalls we encountered during the construction of our webpage. D. Foo Kune acknowledges financial support from the STScI Summer Student Program. Salary support for D. Devost and D. Schaerer came from the STScI Director’s Discretionary Research Fund. We appreciate advice on electronic publishing and data maintenance from Bob Hanisch. Partial support for this work was provided by NASA through grant number NAG5-6903, from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
# Dust Emissivity in the Far-Infrared

## 1 Introduction

Assessing the quantity of dust in spiral galaxies is of primary importance in both understanding the intrinsic properties of galaxies themselves and interpreting observations of the distant universe: large quantities of dust can modify the optical appearance of galactic structures like spiral arms (Trewhella 1998); if the distribution of dust is extended, a large fraction of the radiation from the distant universe can be blocked (Ostriker & Heisler 1984); star formation as determined from UV fluxes could be severely underestimated, thus altering our knowledge of the star formation history of the universe (Hughes et al. 1998). Dust mass can be retrieved from extinction or from emission in the FIR. In the former case information about the star-dust relative geometry is needed and the method can only be applied to nearby edge-on galaxies, where the dust distribution can be inferred from extinction features (Xilouris et al. 1997, 1998). In the latter case there are no such limitations, and the wealth of data in the FIR and Sub-mm from instruments like the Sub-mm camera SCUBA and from the satellites ISO and COBE can be used to measure dust mass. Unfortunately, the determination of dust mass is entangled with that of dust temperature and they both rely on knowledge of the dust emissivity (Hildebrand 1983), the form of which is currently highly uncertain. The emissivity (or emission efficiency, i.e. the ratio between the emission cross section and the geometric cross section), $`Q_{\mathrm{em}}(\lambda )`$, is usually described by a function of the form $$Q_{\mathrm{em}}(\lambda )=Q_{\mathrm{em}}(\lambda _0)\left(\frac{\lambda _0}{\lambda }\right)^\beta $$ (1) where $`Q_{\mathrm{em}}(\lambda _0)`$ is the value of the emissivity at the reference wavelength $`\lambda _0`$, and $`\beta `$ is the wavelength dependence index. While a value $`\beta =1`$ seems to be plausible for $`\lambda <100`$ $`\mu `$m (Hildebrand 1983; Rowan-Robinson 1992), there is observational evidence for a steeper emissivity at longer wavelengths. The difference in emissivity is not unexpected, since emission in the Mid-Infrared (25-60 $`\mu `$m) is dominated by transiently heated grains, while at $`\lambda >100`$ $`\mu `$m grains emit at thermal equilibrium (Whittet 1992). Sub-mm observations of spiral galaxies (Bianchi et al. 1998; Alton et al. 1998b) show that it is not possible to use an emissivity with $`\beta =1`$ to fit the 450 and 850 $`\mu `$m emission. Reach et al. (1995) came to a similar conclusion. They used the spectrum of the Galactic plane observed by the spectrophotometer FIRAS on board the satellite COBE to find that the data are well fitted by an emissivity $$Q_{\mathrm{em}}(\lambda )\propto \frac{\lambda ^{-2}}{\left[1+\left(\lambda _1/\lambda \right)^6\right]^{1/6}},$$ (2) for the range 100 $`\mu `$m to 1 cm. Eq. (2) behaves like (1) with $`\beta =1`$ at small $`\lambda `$ ($`\lambda \ll \lambda _1`$) and $`\beta =2`$ at large $`\lambda `$ ($`\lambda \gg \lambda _1`$) (they set $`\lambda _1=200`$ $`\mu `$m). Masi et al. (1995) measure a value $`\beta =1.54`$ by fitting a single temperature grey-body spectrum to Galactic plane data in four bands between 0.5 and 2 mm taken by the balloon-borne telescope ARGO.
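To make the fitting procedure concrete, a hedged sketch of such a single-temperature grey-body fit is given below; the band wavelengths, amplitude and noise are placeholders, not the ARGO data.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23          # SI units

def planck_nu(wl_m, T):
    nu = C / wl_m
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def greybody(wl_m, A, T, beta):
    # I_nu proportional to lambda^(-beta) * B_nu(T)
    return A * wl_m**(-beta) * planck_nu(wl_m, T)

np.random.seed(0)
wl = np.array([0.5e-3, 0.8e-3, 1.2e-3, 2.0e-3])    # four bands, 0.5-2 mm
I_obs = greybody(wl, 1e-9, 20.0, 1.5) * (1 + 0.03 * np.random.randn(4))
popt, _ = curve_fit(greybody, wl, I_obs, p0=(1e-9, 20.0, 1.5))
print("T = %.1f K, beta = %.2f" % (popt[1], popt[2]))
```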
Reach et al. (1995) suggest that a single temperature fit may bias towards lower values of $`\beta `$ (see also Wright et al. 1991); over the whole FIRAS spectral range, a two temperature grey-body with $`\beta =2`$ at large $`\lambda `$ provides a significantly better fit than a single temperature spectrum with $`\beta \approx 1.5`$. At long wavelengths theoretical calculations for crystalline substances constrain $`\beta `$ to be an even integer number (Wright 1993). For amorphous materials $`\beta `$ depends on the temperature: Agladze et al. (1996) find $`1.2<\beta <2`$ for amorphous silicate grains at a temperature of 20 K. A value for the emissivity at a specific wavelength $`Q_{\mathrm{em}}(\lambda _0)`$, normalised to the extinction efficiency in the optical, can be determined by carrying out an energy balance in a reflection nebula, comparing the energy absorbed from the central star with the FIR output from the surrounding dust. Alternatively, the extinction measured toward the star can be directly compared to the optical depth in the FIR (Whitcomb et al. 1981; Hildebrand 1983; Casey 1991). These methods are complicated by the unknown nebular geometry and by temperature gradients in the dust; as an example, Casey (1991) found that the extinction method usually retrieves higher values than the energy balance. In this paper we use the extinction method, comparing the Galactic extinction to FIR emission: in this case the same column density of dust is responsible both for emission and extinction and a reliable result can be obtained.

## 2 The method

Schlegel et al. (1998; hereafter SFD) have presented a new map of Galactic extinction. After removing emission from zodiacal light and a cosmic infrared background, they have combined the 100 $`\mu `$m map of Galactic emission taken by the DIRBE experiment on board the COBE satellite with the 100 $`\mu `$m large-area ISSA map from the satellite IRAS, to produce a map of Galactic emission with the quality calibration of DIRBE and the high resolution of IRAS. The dust temperature has been retrieved using the DIRBE maps at 100 $`\mu `$m and 240 $`\mu `$m assuming $`\beta `$=2. Knowing the temperature, the 100 $`\mu `$m map has been converted into a dust column density map and subsequently calibrated to E(B−V) using colours and Mg<sub>2</sub>-index of elliptical galaxies. We would like to stress that the colour excess has been derived from the 100 $`\mu `$m emission without any assumption about the value of the emissivity at any wavelength. Moreover, the choice of $`\beta `$ does not affect significantly their results: when $`\beta `$=1.5 is used, the dust column density map varies only by 1%, aside from an overall multiplicative factor that is taken account of when calibrating with the colour excess. We have accessed the electronic distribution of this remarkable dataset to retrieve the 9.5 arcmin/pixel maps of the intensity at 100 $`\mu `$m, I(100 $`\mu `$m), the temperature and the colour excess E(B−V) for the north and south Galactic hemispheres.
When the same dust grains are responsible for emission and extinction, the ratio between the extinction coefficient in the V-band and the emissivity at 100 $`\mu `$m is equivalent to the ratio of the optical depths $$\frac{Q_{\mathrm{ext}}(V)}{Q_{\mathrm{em}}(\text{100 }\mu \text{m})}=\frac{\tau (V)}{\tau (\text{100 }\mu \text{m})}.$$ (3) The above formula is correct if all of the dust grains are identical. In a mixture of grains of different sizes and materials, the ratio of emissivities in Eq. (3) can still be regarded as a mean value characteristic of diffuse galactic dust, if the dust composition is assumed to be the same on any line of sight. The optical depth at 100 $`\mu `$m, in the optically thin case, is measured using $$\tau (\text{100 }\mu \text{m})=\frac{I(\text{100 }\mu \text{m})}{B(\text{100 }\mu \text{m},T_\mathrm{d})},$$ (4) where $`B(\text{100 }\mu \text{m},T_\mathrm{d})`$ is the value of the Planck function at $`100\mu \text{m}`$ for a dust temperature $`T_\mathrm{d}`$, both the intensity $`I(\text{100 }\mu \text{m})`$ and $`T_\mathrm{d}`$ coming from the maps of SFD. The optical depth in the V-band can be found from the colour excess E(B−V) maps, $$\tau (V)=\frac{A(V)}{1.086}=2.85E(B-V),$$ (5) where we have used a mean galactic value $`A(V)/E(B-V)`$=3.1 (Whittet 1992). Reach et al. (1995) suggest that dust emitting in the wavelength range 100-300 $`\mu `$m traces interstellar extinction. Since the FIR optical depth in eq. (4) has been measured using data at 100 and 240 $`\mu `$m, it is then justified to compare it with extinction as in eq. (5) to find the ratio of the extinction coefficient and emissivity. Knowing the optical depths from (4) and (5), we can compute a map of the ratio as in eq. (3); we obtain a mean value of $$\frac{Q_{\mathrm{ext}}(V)}{Q_{\mathrm{em}}(\text{100 }\mu \text{m})}=760\pm 60$$ for both hemispheres. This value is included, together with other multiplicative factors, in the calibration coefficient $`p`$ as in eq. (22) in SFD. As pointed out by the referee, an estimate for $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}`$ can be easily derived from that equation, if the DIRBE colour correction factors, which depend only weakly on T, are omitted. In this way we obtain a value of 765.5. SFD give an error of 8% for $`p`$ and this is the value quoted here. Since most ($`\sim `$ 90%) of the elliptical galaxies used to calibrate colour excess maps have galactic latitude $`b>20^{\circ }`$, one may argue that the measured value is characteristic only of high latitude dust. Reach et al. (1995) find that the emissivity (eq. 2) is best determined by fitting the FIRAS spectrum on the Galactic plane. They say that high latitude data have a smaller signal-to-noise ratio and can be fitted satisfactorily with $`\beta =2`$ (eq. 1), although the same emissivity as on the plane cannot be excluded. Under the hypothesis that the same kind of dust is responsible for the diffuse emission in the whole Galaxy, we have corrected SFD temperatures using the Reach et al. emissivity (eq. 2). The new temperatures are a few degrees higher than those measured with $`\beta =2`$ (as an example, we pass from a mean value of 18 K in a 20$`^{\circ }`$ diameter region around the north pole to a new estimate of 21 K). It is interesting to note that the difference between the two estimates of temperature is of the same order as the difference between the temperatures of warm dust at high and low Galactic latitude in Reach et al. (1995), and this may only be a result of the different emissivity used to retrieve the temperature.
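A schematic version of this temperature re-derivation is sketched below: the observed 100/240 $`\mu `$m ratio fixes $`T_\mathrm{d}`$ once an emissivity law is chosen, and switching from eq. (1) with $`\beta =2`$ to eq. (2) raises the inferred temperature roughly as quoted above. The implementation details are ours, not SFD's.

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23           # SI units

def planck_nu(wl_m, T):
    nu = C / wl_m
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def q_beta2(wl_um):                                # eq. (1), beta = 2
    return (100.0 / wl_um)**2

def q_reach(wl_um):                                # eq. (2), lambda_1 = 200 um
    return (100.0 / wl_um)**2 / (1.0 + (200.0 / wl_um)**6)**(1.0 / 6.0)

def T_from_ratio(R, q):
    f = lambda T: q(100.0) * planck_nu(100e-6, T) / (q(240.0) * planck_nu(240e-6, T)) - R
    return brentq(f, 5.0, 100.0)

# a ratio corresponding to 18 K under beta = 2 implies ~21 K under eq. (2):
R = q_beta2(100.0) * planck_nu(100e-6, 18.0) / (q_beta2(240.0) * planck_nu(240e-6, 18.0))
print(T_from_ratio(R, q_beta2), T_from_ratio(R, q_reach))
```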
When the correction is applied $$\frac{Q_{\mathrm{ext}}(V)}{Q_{\mathrm{em}}(\text{100 }\mu \text{m})}=2390\pm 190.$$ The new ratio is about three times higher, and this is a reflection of the change of temperature in the black body emission in (4): for a higher temperature, a lower emissivity in the FIR is required to produce the same emission. Uncertainties in $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\lambda _0)`$ are thus greatly affected by assumptions about the emissivity spectral behaviour.

## 3 Comparison with other measurements

We now compare our emissivity for $`\beta =2`$ with literature results derived under the same hypothesis. Since no emissivity has been derived to our knowledge assuming eq. (2), we do not attempt any comparison with that result. All the data are scaled to $`\lambda _0=100`$ $`\mu `$m. Studying the correlation between gas and dust emission from FIRAS and DIRBE, Boulanger et al. (1996) derived an emissivity $`\tau /N_H=1.0\times 10^{-25}`$ cm<sup>2</sup> at 250 $`\mu `$m for dust at high galactic latitude; assuming the canonical $`N_H=5.8\times 10^{21}E(B-V)`$ cm<sup>-2</sup> mag<sup>-1</sup> and $`A(V)/E(B-V)`$=3.1 (Whittet 1992), this is equivalent to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=790. Quite similar values are found in the Draine & Lee (1984) dust model, which has a $`\beta =2`$ spectral dependence in this wavelength range. At 125 $`\mu `$m the optical depth is $`\tau /N_H=4.6\times 10^{-25}`$ cm<sup>2</sup>, which corresponds to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=680. Sodroski et al. (1997) find a value for the ratio at 240 $`\mu `$m, using literature data identifying a correlation between B-band extinction and 100 $`\mu `$m IRAS surface brightness in high latitude clouds, assuming a dust temperature of 18 K. Converted to our notation, using a standard extinction law, the ratio is $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=990. The measurement by Whitcomb et al. (1981) on the reflection nebula NGC 7023 is the most commonly quoted value for the emissivity (Hildebrand 1983). Their value derived at 125 $`\mu `$m for $`\beta =2`$ is only marginally consistent with our result. Following our notation, their result is equivalent to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})=`$ 220 and 800, using the energy balance and the extinction method, respectively (Whitcomb et al. (1981) and Casey (1991) originally presented values for $`Q_{\mathrm{ext}}(UV)/Q_{\mathrm{em}}(FIR)`$; we have corrected to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(FIR)`$ using the provided $`\tau (UV)=2\tau (V)`$). The values obtained by Casey (1991) on a sample of five nebulae using the energy balance method are a factor of 3 smaller than ours (corresponding to $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})=`$ 80-400). In Fig. 1 we show the literature data (plotted at the wavelengths at which they were derived in the original papers) together with our derived emissivity laws. We have added the value for the Draine & Lee (1984) model at 250 $`\mu `$m.

## 4 Gas-to-dust ratio of external spiral galaxies

We now exploit the FIR emissivity derived in this work by determining dust masses for nearby spiral galaxies.
Following Hildebrand (1983) dust masses can be measured from FIR emission using $$M_{\mathrm{dust}}=\frac{F(\lambda )D^2}{B(\lambda ,T_\mathrm{d})}\frac{4a\rho }{3Q_{\mathrm{em}}(\lambda )},$$ (6) where $`F(\lambda )`$ is the total flux at the wavelength $`\lambda `$, $`D`$ the distance of the object, $`B(\lambda ,T_\mathrm{d})`$ the Planck function, $`a`$ the grain radius (0.1 $`\mu `$m) and $`\rho `$ the grain mass density (3 g cm<sup>-3</sup>). The emissivity $`Q_{\mathrm{em}}(\lambda )`$ is derived from the ratio $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\lambda )`$ assuming $`Q_{\mathrm{ext}}(V)`$=1.5 (Casey 1991; Whittet 1992). Alton et al. (1998a) provide total fluxes at 100 $`\mu `$m and 200 $`\mu `$m from IRAS and ISO for a sample of spiral galaxies. We have derived dust temperatures and masses using $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})`$=760 and 2390, for the $`\beta `$=2 and Reach et al. (1995) emissivities, respectively. Using literature values for gas masses, we have computed the gas-to-dust ratios. Values of gas masses, temperatures and gas-to-dust ratios are presented in Table 1. The mean value of the gas-to-dust ratio for the sample is 100 using eq. (1), 110 using eq. (2). Mean temperatures go from 18 K with the $`\beta =2`$ emissivity to 21 K when the Reach et al. (1995) behaviour is assumed (as for the north galactic pole in Sect. 2). Alton et al. (1998a) pointed out that ISO 200 $`\mu `$m fluxes could be overestimated by about 30%; correcting for this we obtain a mean gas-to-dust ratio of 220-240 (for the $`\beta `$=2 and Reach et al. (1995) emissivity, respectively). As shown above, dust masses obtained with the two methods are quite similar. This can be explained by substituting eqs. (3) and (4) into (6). For $`\lambda =\text{100 }\mu \text{m}`$ we can derive $$M_{\mathrm{dust}}\propto \frac{B(\text{100 }\mu \text{m},T_\mathrm{d}^\mathrm{G})}{B(\text{100 }\mu \text{m},T_\mathrm{d})},$$ where $`T_\mathrm{d}^\mathrm{G}`$ is the mean temperature of dust in the Galaxy. From the equation it is clear that the dust mass determination is insensitive to the emissivity law used, as long as the dust temperatures in external galaxies and in our own are similar. Our range of values for the gas-to-dust ratio (100–230) encompasses the Galactic value of 160 (Sodroski et al. 1994).
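As an illustration of eq. (6) with the emissivity normalisation derived here ($`Q_{\mathrm{em}}(\text{100 }\mu \text{m})=Q_{\mathrm{ext}}(V)/760`$ with $`Q_{\mathrm{ext}}(V)`$=1.5), a sketch follows; the example flux, distance and temperature are placeholders, not values from Table 1.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI units
M_SUN, PC = 1.989e30, 3.086e16             # kg, m

def planck_nu(wl_m, T):
    nu = C / wl_m
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def dust_mass(F_jy, D_mpc, T_d, wl_m=100e-6, Q_em=1.5 / 760, a=0.1e-6, rho=3000.0):
    """Eq. (6): dust mass in M_sun from a flux F [Jy] at distance D [Mpc]."""
    F = F_jy * 1e-26                       # W m^-2 Hz^-1
    D = D_mpc * 1e6 * PC                   # m
    return F * D**2 / planck_nu(wl_m, T_d) * 4 * a * rho / (3 * Q_em) / M_SUN

# e.g. a hypothetical 10 Jy source at 10 Mpc with T_d = 20 K gives a few 10^6 M_sun
print("%.2e" % dust_mass(10.0, 10.0, 20.0))
```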
As a comparison, the mid-value of Whitcomb et al. (1981) would have given dust-to-gas ratios larger by a factor 1.5.

## 5 Conclusion

We have derived the dust emissivity $`Q_{\mathrm{em}}`$ in the FIR using the wavelength dependence derived from the FIR Galactic spectrum (Reach et al. 1995). The emissivity has been normalised to the extinction efficiency in the V band using dust column density maps calibrated to Galactic extinction (SFD). $`Q_{\mathrm{em}}`$ depends strongly on the assumed wavelength dependence. For a $`\beta =2`$ emissivity index we obtained $$Q_{\mathrm{em}}(\lambda )=\frac{Q_{\mathrm{ext}}(V)}{760}\left(\frac{\text{100 }\mu \text{m}}{\lambda }\right)^2.$$ This result is consistent with other values derived from FIR Galactic emission (Boulanger et al. 1996; Sodroski et al. 1997) and with the Draine & Lee (1984) dust model. The widely quoted emissivities of Whitcomb et al. (1981; Hildebrand 1983), derived from the reflection nebula NGC 7023, are only marginally consistent with our values, while the emissivities measured by Casey (1991) on a sample of five nebulae are smaller by a factor of 3. This may suggest a different grain composition for dust in the diffuse interstellar medium compared to reflection nebulae. When the wavelength dependence derived by Reach et al. (1995) on the Galactic plane is used, we obtain $$Q_{\mathrm{em}}(\lambda )=\frac{Q_{\mathrm{ext}}(V)}{2390}\left(\frac{\text{100 }\mu \text{m}}{\lambda }\right)^2\frac{2.005}{\left[1+\left(\text{200 }\mu \text{m}/\lambda \right)^6\right]^{1/6}}.$$ We have used the derived emissivities to measure dust masses from 100 $`\mu `$m and 200 $`\mu `$m fluxes of a sample of six spiral galaxies (Alton et al. 1998a). We have retrieved similar dust masses with both the spectral dependences. The gas-to-dust ratios of our sample (100-230) are close to the Galactic value of 160 (Sodroski et al. 1994).

###### Acknowledgements.

We thank M. Edmunds and S. Eales for stimulating discussions, and an anonymous referee for useful comments that have improved the paper.
# Measurement of the W Mass from LEP2

## I Introduction

The success of the Standard Model (SM) over the last two decades should not obscure the importance of thoroughly investigating the weak interaction. It is interesting to consider that 15 years ago, when neutrino scattering experiments had measured $`\mathrm{sin}^2\theta _\mathrm{W}=0.217\pm 0.014`$, the following SM constraints were available: $`\mathrm{M}_\mathrm{W}(\mathrm{indirect})`$ $`=`$ $`83.0\pm 2.8\mathrm{GeV}`$ (1) $`\mathrm{M}_{\mathrm{Z}^0}(\mathrm{indirect})`$ $`=`$ $`93.8\pm 2.3\mathrm{GeV}`$ (2) Tree level deviations could be accommodated in those errors! Today we have measured $`\mathrm{sin}^2\theta _\mathrm{W}`$ to $`0.0002`$, $`\mathrm{M}_{\mathrm{Z}^0}`$ to $`0.002`$ GeV, and $`\mathrm{M}_\mathrm{W}`$ to $`0.07`$ GeV — the success of the SM is so thorough that it can only be wrong at the quantum loop level, and even then, beyond leading order. Despite this rousing success, it is still necessary to test the SM by confronting experimental observations with theoretical predictions as any deviations might point to new physics. As a fundamental parameter of the SM, the mass of the W boson, $`\mathrm{M}_\mathrm{W}`$, is of particular importance. Aside from being an important test of the SM in its own right, the direct measurement of $`\mathrm{M}_\mathrm{W}`$ can be used to set constraints on the mass of the Higgs boson, $`\mathrm{M}_\mathrm{H}`$, by comparison with theoretical predictions involving radiative corrections sensitive to $`\mathrm{M}_\mathrm{H}`$. The constraints imposed using $`\mathrm{M}_\mathrm{W}`$ are complementary to the constraints imposed by the asymmetry ($`\mathrm{A}_{\mathrm{FB}}^\mathrm{b}`$, $`\mathrm{A}_{\mathrm{FB}}^{\ell}`$, $`\mathrm{A}_{\mathrm{LR}}`$,…) and width ($`\mathrm{R}_{\ell}`$, $`\mathrm{R}_\mathrm{b}`$, $`\mathrm{R}_\mathrm{c}`$,…) measurements. For example, the very precise asymmetry measurements presently yield the tightest constraints on $`\mathrm{M}_\mathrm{H}`$, but are very sensitive to the uncertainty in the hadronic contribution to the photon vacuum polarisation, $`\mathrm{\Pi }_{\mathrm{had}}^{\gamma \gamma }`$. In contrast, the constraint afforded by a direct measure of $`\mathrm{M}_\mathrm{W}`$ is comparably tight but with a much smaller sensitivity to $`\mathrm{\Pi }_{\mathrm{had}}^{\gamma \gamma }`$, and is presently dominated by statistical uncertainties.

### A WW Production at LEP

At LEP W bosons are predominantly produced in pairs through the reaction $`\mathrm{e}^+\mathrm{e}^-\rightarrow \mathrm{W}^+\mathrm{W}^-`$, with each W subsequently decaying either hadronically ($`\mathrm{q}\overline{\mathrm{q}}`$), or leptonically ($`\ell \overline{\nu }`$, $`\ell =e`$, $`\mu `$, or $`\tau `$). This yields three possible four-fermion final states, hadronic ($`\mathrm{W}^+\mathrm{W}^-\rightarrow \mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$), semi-leptonic ($`\mathrm{W}^+\mathrm{W}^-\rightarrow \mathrm{q}\overline{\mathrm{q}}\ell \overline{\nu }`$), and leptonic ($`\mathrm{W}^+\mathrm{W}^-\rightarrow \ell ^-\overline{\nu }_{\ell }\ell ^{\prime +}\nu _{\ell ^{\prime }}`$), with branching fractions of $`45\%`$, $`44\%`$, and $`11\%`$ respectively. The $`\mathrm{W}^+\mathrm{W}^-`$ production cross-section varies from $`3.6`$ pb at $`\sqrt{s}=161`$ GeV to $`16.7`$ pb at $`\sqrt{s}=189`$ GeV.
These can be contrasted with the production cross-sections for the dominant backgrounds $`\sigma \left(\mathrm{e}^+\mathrm{e}^-\to \mathrm{Z}^{\ast }/\gamma ^{\ast }\to \mathrm{q}\overline{\mathrm{q}}\right)\sim 100`$ pb, $`\sigma \left(\mathrm{e}^+\mathrm{e}^-\to \mathrm{Z}^0\mathrm{e}^+\mathrm{e}^-\right)\sim 2.8`$ pb, $`\sigma \left(\mathrm{e}^+\mathrm{e}^-\to (\mathrm{Z}^{\ast }/\gamma ^{\ast })(\mathrm{Z}^{\ast }/\gamma ^{\ast })\right)\sim 0.6`$ pb, and $`\sigma \left(\mathrm{e}^+\mathrm{e}^-\to \mathrm{W}\mathrm{e}\overline{\nu }\right)\sim 0.6`$ pb. Aside from the $`\mathrm{Z}^{\ast }/\gamma ^{\ast }\to \mathrm{q}\overline{\mathrm{q}}`$ process, which falls from $`\sim 150`$ pb at $`\sqrt{s}=161`$ GeV, these background cross-sections vary slowly for $`\sqrt{s}<185`$ GeV, when the $`\mathrm{e}^+\mathrm{e}^-\to \mathrm{ZZ}`$ process begins to turn on. ### B LEP Measurement Techniques There are two main methods available for measuring $`\mathrm{M}_\mathrm{W}`$ at LEP2. The first exploits the fact that the $`\mathrm{W}^+\mathrm{W}^-`$ production cross-section is particularly sensitive to $`\mathrm{M}_\mathrm{W}`$ for $`\sqrt{s}\approx 2\mathrm{M}_\mathrm{W}`$. In this threshold (TH) region, assuming SM couplings and production mechanisms, a measure of the production cross-section yields a measure of $`\mathrm{M}_\mathrm{W}`$. In early 1996 the four LEP experiments collected roughly $`10\mathrm{pb}^{-1}`$ of data at $`\sqrt{s}=161`$ GeV, resulting in a combined determination of the W boson mass of $`\mathrm{M}_\mathrm{W}(\mathrm{TH})=80.40\pm 0.20(\mathrm{exp})\pm 0.03(\mathrm{E}_{\mathrm{bm}})`$ GeV, where the uncertainties correspond to experimental and LEP beam energy respectively . The second method uses the shape of the reconstructed invariant mass distribution to extract a measure of $`\mathrm{M}_\mathrm{W}`$. This method is particularly useful for $`\sqrt{s}\gtrsim 170`$ GeV, where the $`\mathrm{W}^+\mathrm{W}^-`$ production cross-section is larger and phase-space effects on the reconstructed mass distribution are smaller. Each experiment collected roughly $`10\mathrm{pb}^{-1}`$ at $`\sqrt{s}=172`$ GeV in late 1996, and in 1997, roughly $`55\mathrm{pb}^{-1}`$ at $`\sqrt{s}=183`$ GeV. Since most of the LEP2 data has been collected at center-of-mass energies well above the $`\mathrm{W}^+\mathrm{W}^-`$ threshold, the LEP2 $`\mathrm{M}_\mathrm{W}`$ determination is dominated by these direct reconstruction (DR) methods. For this reason, the rest of this article will concentrate on the details of this method. ## II Direct Reconstruction of $`𝐌_𝐖`$ To measure $`\mathrm{M}_\mathrm{W}`$ using direct reconstruction techniques one must 1. Select $`\mathrm{W}^+\mathrm{W}^-\to \mathrm{f}\overline{\mathrm{f}}\mathrm{f}\overline{\mathrm{f}}`$ events. 2. Obtain the reconstructed invariant mass, $`\mathrm{m}_{\mathrm{rec}}`$, for each event. 3. Extract a measure of $`\mathrm{M}_\mathrm{W}`$ from the $`\mathrm{m}_{\mathrm{rec}}`$ distribution. Each of these steps is discussed in detail in the sections below and in Reference . It should be noted that none of the LEP experiments presently exploits the $`\mathrm{W}^+\mathrm{W}^-`$$`\to `$$`\ell ^-\overline{\nu }_\ell `$$`\ell ^{\prime +}\nu _{\ell ^{\prime }}`$ final state in the DR methods <sup>*</sup><sup>*</sup>*A measure of $`\mathrm{M}_\mathrm{W}`$ can be obtained from the $`\mathrm{W}^+\mathrm{W}^-`$$`\to `$$`\ell ^-\overline{\nu }_\ell `$$`\ell ^{\prime +}\nu _{\ell ^{\prime }}`$ channel by using the lepton energy spectrum.
However, it is estimated to be a factor of 4–5 less sensitive than the measurements available from the other $`\mathrm{W}^+\mathrm{W}^-`$ final states; it is therefore discussed no further. ### A Event Selection The expected statistical error on $`\mathrm{M}_\mathrm{W}`$ varies as $`\mathrm{\Delta }\mathrm{M}_\mathrm{W}(\mathrm{stat})\propto \frac{1}{\sqrt{\mathrm{N}_{\mathrm{WW}}}}\frac{1}{\sqrt{\mathrm{Purity}}}`$, so that high efficiency, high purity selections are important. The $`\mathrm{W}^+\mathrm{W}^-`$ selection efficiencies and purities are given in Table I for each of the four LEP experiments. For the data taken at $`\sqrt{s}=183`$ GeV, these efficiencies and purities give approximately 700 $`\mathrm{W}^+\mathrm{W}^-`$ candidate events per experiment, about 100 of which are non-$`\mathrm{W}^+\mathrm{W}^-`$ background. The selection efficiencies have a total uncertainty of about $`1\%`$ (absolute) and have a negligible effect ($`<1`$ MeV) on the $`\mathrm{M}_\mathrm{W}`$ determination. The accepted background cross-sections have a total uncertainty of $`10`$–$`20\%`$ (relative) and affect the $`\mathrm{M}_\mathrm{W}`$ determination at the $`10`$–$`15`$ MeV level (cf. Section IV). ### B Invariant Mass Reconstruction There are several methods available for reconstructing the invariant mass of a $`\mathrm{W}^\pm `$ candidate. The best resolution is obtained by using a kinematic fit which exploits the fact that the center-of-mass energy of the collision is known a priori (strictly speaking, this is not true since any initial state radiation (ISR) reduces the collision energy to less than twice the beam energy; the kinematic fits assume no ISR, and the effect of ISR uncertainties is incorporated in the total systematic error discussed in Section IV). There are two “flavours” of kinematic fit: 1. 4C-fit: Enforces $`\mathrm{\Sigma }(𝐏,\mathrm{E})=(\mathrm{𝟎},\sqrt{s})`$ constraints; yields two reconstructed masses per event, $`(\mathrm{m}_{\mathrm{rec}_1},\mathrm{m}_{\mathrm{rec}_2})`$, one for each $`\mathrm{W}^\pm `$ in the final state. 2. 5C-fit: In addition to the four constraints above, ignores the finite width of the $`\mathrm{W}^\pm `$ and requires that $`\mathrm{m}_{\mathrm{rec}_1}=\mathrm{m}_{\mathrm{rec}_2}`$; yields a single reconstructed mass per event. The type of fit used depends on the final state. For instance, in the $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{e}\overline{\nu }`$ and $`\mathrm{q}\overline{\mathrm{q}}`$$`\mu \overline{\nu }`$ channels, because the prompt neutrino from the leptonic $`\mathrm{W}^\pm `$ decay takes three degrees-of-freedom ($`dof`$), $`𝐏_\nu `$, the fits effectively become 1C and 2C fits respectively. For the $`\mathrm{q}\overline{\mathrm{q}}`$$`\tau \overline{\nu }`$ channel, high energy neutrinos from the $`\tau `$-decay itself remove at least one additional $`dof`$ and so require that all 5 constraints be used, thus yielding a 1C fit (such a fit is possible only if one assumes that the $`\tau `$-lepton direction is given by the direction of the visible decay products associated with the $`\tau `$). In the $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{q}\overline{\mathrm{q}}`$ channel, since there are (nominally) four jets, there exist three possible jet-jet pairings. This pairing ambiguity gives rise to a combinatoric background unique to the $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{q}\overline{\mathrm{q}}`$ channel. Each LEP experiment employs a different technique for choosing the best combination(s).
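Before turning to the per-experiment pairing algorithms, a rough numerical cross-check of the event yields quoted above may be useful. The cross-section, efficiency, and purity values below are our illustrative stand-ins (Table I is not reproduced here; the 183 GeV cross-section is not quoted in the text either):

```python
import math

# Illustrative inputs (assumed, not from the text): sigma_WW at 183 GeV,
# and a typical selection efficiency and purity standing in for Table I.
sigma_ww_pb = 15.7        # assumed WW cross-section at sqrt(s) = 183 GeV
lumi_pb = 55.0            # integrated luminosity per experiment (from the text)
eff, purity = 0.70, 0.86  # assumed

n_signal = sigma_ww_pb * lumi_pb * eff
n_candidates = n_signal / purity        # selected candidates = signal / purity
print(f"candidates per experiment ~ {n_candidates:.0f}")   # ~700, cf. text

# Statistical sensitivity scaling quoted above:
# Delta M_W(stat) proportional to 1/sqrt(N_WW * Purity)
print(f"relative stat. figure of merit ~ {1.0 / math.sqrt(n_candidates * purity):.3f}")
```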
L3 uses the 5C-fit probabilities (the equal mass constraint yields a different fit $`\chi ^2`$ for each combination) to choose the two best combinations per event. At the cost of some additional combinatorics, this algorithm has the correct combination among those chosen about $`90\%`$ of the time. Opal, Delphi, and Aleph employ a 4C-fit and exploit kinematic information to choose the best combination. The algorithms employed by Aleph and Opal choose a single combination per event; this combination corresponds to the correct combination approximately $`85\%`$ of the time at no additional cost in combinatorics. Delphi uses all combinations and weights each according to the likelihood that it corresponds to the correct combination. ### C Extracting $`𝐌_𝐖`$ The ensemble of selected events yields a $`\mathrm{m}_{\mathrm{rec}}`$ distribution from which a measure of $`\mathrm{M}_\mathrm{W}`$ is extracted. There are several methods available for extracting $`\mathrm{M}_\mathrm{W}`$. Aleph, L3, and Opal all employ a traditional maximum-likelihood comparison of data to Monte Carlo (MC) spectra corresponding to various $`\mathrm{M}_\mathrm{W}`$. In addition to its simplicity, this method has the advantage that all biases (i.e. from resolution, ISR, selection, etc.) are implicitly included in the MC spectra. The disadvantage of this method is that it does not make optimal use of all available information. Delphi employs a convolution technique, which makes use of all available information; in particular, events with large fit errors are de-weighted relative to those with small fit errors. The convolution has the limitations that it requires various approximations (i.e. the resolution is often assumed to be Gaussian) and often requires an a posteriori correction, as the fit procedure does not account for all biases, notably from ISR and selection. ## III Results The results from each LEP experiment, using data collected at $`\sqrt{s}=183`$ GeV, are given in Table II for the $`\mathrm{q}\overline{\mathrm{q}}`$$`\ell \overline{\nu }`$ channel and in Table III for the $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{q}\overline{\mathrm{q}}`$ channel<sup>§</sup><sup>§</sup>§From these results, only the Opal numbers are final while the rest are the latest available preliminary results. Also included is the mass obtained when combining all four measurements (note that, since the Opal numbers have changed since the last “official” LEP combination, the combinations given here are the author’s own). For the LEP combinations, the ISR, hadronization, LEP beam energy, and color-reconnection/Bose-Einstein (CR/BE) uncertainties are taken as completely correlated between the four experiments. The errors given correspond to the observed statistical and the total systematic (including that associated with the LEP beam energy) uncertainties respectively. For the $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{q}\overline{\mathrm{q}}`$ channel, the error associated with CR/BE uncertainties is given separately and is taken as a $`60`$ MeV common error. Also shown in Tables II and III is the expected statistical error, $`\widehat{\sigma }_{\mathrm{stat}}`$, for each experiment. As an example, the Opal fits are shown in Figure 1.
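To make the “compare data to MC spectra” idea concrete, here is a toy version of such a likelihood fit. Everything in it (a non-relativistic Breit-Wigner smeared by a Gaussian resolution, the event count, the fit window) is our simplification, not any experiment’s actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
GAMMA_W, RESOL, N_EVT, TRUE_M = 2.1, 2.0, 600, 80.4   # GeV; illustrative
GRID = np.linspace(70.0, 90.0, 2001)

def smeared_bw_pdf(m, mw):
    """Breit-Wigner of width GAMMA_W at mass mw, convolved with a Gaussian
    resolution RESOL and normalised numerically on the fit window."""
    bw = 1.0 / ((GRID - mw) ** 2 + GAMMA_W ** 2 / 4.0)
    kern = np.exp(-0.5 * ((GRID - GRID[GRID.size // 2]) / RESOL) ** 2)
    smeared = np.convolve(bw, kern, mode="same")
    smeared /= np.trapz(smeared, GRID)
    return np.interp(m, GRID, smeared)

# Pseudo-data: Cauchy (= Breit-Wigner) deviates plus Gaussian smearing.
m_rec = TRUE_M + rng.standard_cauchy(N_EVT) * GAMMA_W / 2 + rng.normal(0, RESOL, N_EVT)
m_rec = m_rec[(m_rec > 70.0) & (m_rec < 90.0)]

# Scan -ln L versus the assumed mass, template-style.
scan = np.arange(79.6, 81.2, 0.02)
nll = [-np.sum(np.log(smeared_bw_pdf(m_rec, mw))) for mw in scan]
print(f"fitted M_W ~ {scan[int(np.argmin(nll))]:.2f} GeV (true: {TRUE_M})")
```

In the real analyses the templates are full detector-level MC spectra, so that resolution, ISR, and selection biases are built in automatically, exactly as described above.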
Using data taken at $`\sqrt{s}=172`$ and $`183`$ GeV, the preliminary LEP combined $`\mathrm{M}_\mathrm{W}`$ using DR methods for the $`\mathrm{q}\overline{\mathrm{q}}`$$`\ell \overline{\nu }`$ and $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{q}\overline{\mathrm{q}}`$ channels separately is: $`\mathrm{M}_\mathrm{W}(\mathrm{q}\overline{\mathrm{q}}\ell \overline{\nu })`$ $`=`$ $`80.33\pm 0.09(\mathrm{stat})\pm 0.03(\mathrm{syst})\mathrm{GeV}`$ (3) $`\mathrm{M}_\mathrm{W}(\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}})`$ $`=`$ $`80.39\pm 0.09(\mathrm{stat})\pm 0.04(\mathrm{syst})\pm 0.06(\mathrm{CR})\mathrm{GeV}`$ (4) Note that these results are statistically consistent with each other. ## IV Systematic Errors The systematic errors for a typical LEP experiment are given in Table IV. It should be noted that for all four LEP experiments the errors associated with ISR, hadronization, and four-fermion interference uncertainties are limited by the statistics of the comparison. Uncertainties associated with the selection efficiencies and accepted backgrounds are included in the line labeled “fit procedure”. For the $`\mathrm{q}\overline{\mathrm{q}}`$$`\ell \overline{\nu }`$ channel the largest single contribution to the systematic uncertainty is due to detector effects (e.g. energy scales, resolutions, and modelling). These errors are expected to decrease as more data is collected. For the $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{q}\overline{\mathrm{q}}`$ channel the dominant systematic uncertainty is due to CR/BE effects. There has been recent progress in experimentally constraining the available CR models by comparing event shape and charged particle multiplicity distributions as predicted by various MC models (both including and excluding CR effects) with those observed in the data. On the basis of these studies, some of the models have been excluded as they fail to adequately describe the data . In particular, the VNI model, which predicted systematic shifts in the measured $`\mathrm{M}_\mathrm{W}(\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}})`$ on the order of $`100`$ MeV, is excluded. The surviving models are used to estimate the systematic uncertainty associated with the modelling of CR effects and yield estimates in the range of $`20`$–$`55`$ MeV. For a more complete discussion, see Reference . Additional data should help to further constrain the remaining CR models and thus improve these errors. ## V Conclusions Using approximately $`10\mathrm{pb}^{-1}`$ of data collected at $`\sqrt{s}=161`$ and 172 GeV and $`55\mathrm{pb}^{-1}`$ at $`\sqrt{s}=183`$ GeV, the LEP experiments have measured the mass of the W boson. The LEP combined result, assuming the Standard Model relation between the W decay width and mass, is $`\mathrm{M}_\mathrm{W}=80.38\pm 0.07(\mathrm{exp})\pm 0.03(\mathrm{CR}/\mathrm{BE})\pm 0.02(\mathrm{E}_{\mathrm{bm}})`$ GeV, where the errors correspond to experimental, color-reconnection/Bose-Einstein, and LEP beam energy uncertainties respectively. This value ($`80.38\pm 0.08`$ GeV) is consistent with the direct measurement from the TeVatron ($`80.41\pm 0.09`$ GeV) , and the indirect determinations from NuTeV ($`80.26\pm 0.11`$ GeV) and SM fits to precision electroweak data ($`80.37\pm 0.03`$ GeV). During 1998 LEP delivered approximately $`180\mathrm{pb}^{-1}`$ per experiment at $`\sqrt{s}\approx 189`$ GeV. This additional data increased the presently available statistics for the DR method by more than a factor of two.
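The channel combination can be illustrated with a small covariance-weighted (BLUE-style) average of the results (3) and (4). The split into correlated and uncorrelated pieces below is our illustrative guess; as stated above, the official combination treats the ISR, hadronization, beam-energy, and CR/BE errors as fully correlated and uses the complete error breakdown:

```python
import numpy as np

m = np.array([80.33, 80.39])          # qqlv, qqqq central values (GeV)
stat = np.array([0.09, 0.09])
syst_unc = np.array([0.02, 0.03])     # assumed uncorrelated systematics
cr = np.array([0.00, 0.06])           # 60 MeV CR/BE error, qqqq only
common = 0.02                         # assumed fully correlated part (e.g. E_bm)

# Covariance matrix: diagonal uncorrelated pieces plus a common block.
V = np.diag(stat**2 + syst_unc**2 + cr**2) + common**2 * np.ones((2, 2))
w = np.linalg.solve(V, np.ones(2))
w /= w.sum()
err = 1.0 / np.sqrt(np.ones(2) @ np.linalg.solve(V, np.ones(2)))
print(f"weights = {w.round(2)}, combined M_W = {w @ m:.3f} +/- {err:.3f} GeV")
```

With these assumed inputs the toy average lands near 80.35 GeV; it is close to, but not identical with, the official 80.38 GeV quoted above, which also folds in the threshold measurement.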
Incorporating this data should yield a statistical error for the LEP combined determination of $`\mathrm{M}_\mathrm{W}`$ of $`40`$–$`50`$ MeV and will allow for tighter experimental constraints on various color-reconnection and Bose-Einstein models in the $`\mathrm{q}\overline{\mathrm{q}}`$$`\mathrm{q}\overline{\mathrm{q}}`$ final state. ## Acknowledgements Many thanks to my colleagues in the LEP Electroweak working group for their comments and suggestions.
Comment on “Entropy Generation in Computation and the Second law of Thermodynamics”, by S. Ishioka and N. Fuchikami (chao-dyn/9902012 17 Feb 1999). In the above cited paper, the authors claim that a more precise expression of Landauer’s principle, namely: “\[L\] erasure of information is accompanied by heat generation to the amount of $`kT\mathrm{ln}2`$/bit;” should be: “\[IF\] erasure of information is accompanied by entropy generation $`k\mathrm{ln}2`$/bit.” However, as the authors are probably unaware, Landauer’s statement about heat dissipation in computation was derived a priori from phase-space contraction arguments which can be stated equivalently in terms of entropy. Hence, \[L\] and \[IF\] are equivalent, even from Landauer’s viewpoint, and there is no need to argue for a difference. To clarify this point, let us consider the example of the bistable potential (cf. the cited paper). According to Ishioka and Fuchikami, entropy is generated when a particle, initially trapped in one side of the bistable potential, is brought to a state where it can move freely between the two wells by lowering the potential barrier. Logically, this leads to the erasure of the information contained in the “position” bit (0 or 1 depending on the initial state in the double well), and, physically, to an increase of the entropy of the particle from $`S_i=0`$ (definite memory state) to $`S_f=\mathrm{ln}2`$ (unknown final position). Hence, their conclusion \[IF\]. In Landauer’s analysis of the problem, on the other hand, a similar conclusion is reached with the difference that the erasure action is somewhat reversed. Landauer considers a particle in a bistable potential which is used to record the position of a particle in another bistable potential. From an outside point of view, the outcome of one measurement of the position generates a random bit that is stored in the recording bistable system. Hence, in the process, the recording particle goes from a definite state of zero entropy (the ready-for-measurement state) to a record state which is 0 with some probability $`p`$ and 1 with probability $`1-p`$, thereby increasing the entropy of the recording device. The erasure of the information then proceeds by putting the recording particle back into its ready-for-measurement state. This leads to a dissipation of entropy conveyed as a dissipation of heat (\[L\]). The two viewpoints presented above differ only in the role of the recording device and the position taken in the act of erasure. In Zurek’s terminology (Phys. Rev. A 40(8) 4731, 1989), Ishioka and Fuchikami analyze their system from an inside point of view which assigns no probabilities to the recording state, whereas the second analysis considers the measured system and the recording system from the same probabilistic perspective. Evidently, both are correct and lead to the same result, namely, that entropy is generated in the erasure process. The dissipation of heat is just an a posteriori conclusion which follows from the connection with thermodynamics. To conclude, let us also note that the “writing process” discussed in the article of Ishioka and Fuchikami can be performed without doing any work. In Szilard’s engine, for example, one can insert the partition after waiting until the particle goes to the side that corresponds to the bit to be registered. Hugo Touchette, MIT, htouchet@mit.edu, 02/19/1999
# Calculation of exciton densities in SMMC ## Abstract We develop a shell-model Monte Carlo (SMMC) method to calculate densities of states with varying exciton (particle-hole) number. We then apply this method to the doubly closed-shell nucleus <sup>40</sup>Ca in a full $`0s`$-$`1d`$-$`0f`$-$`1p`$ shell-model space and compare our results to those found using approximate analytic expressions for the partial densities. We find that the effective one-body level density is reduced by approximately 22% when a residual two-body interaction is included in the shell model calculation. 1. Introduction Particle-hole, or exciton, level densities enter into the description of partial decay rates in nuclear preequilibrium emission. These level densities have been modeled using analytic expressions that describe nuclear excitations in terms of the number of particles, $`p`$, and holes, $`h`$, measured from the Fermi surface, with the exciton number $`N_e=(p+h)/2`$. For a single species of particles, Williams derived an expression for the partial density of states given by $$\rho _{N_e}(E)=g\frac{\left(gE-G\right)^{2N_e-1}}{p!h!\left(2N_e-1\right)!},$$ (1) where $`E`$ is the excitation energy measured above the ground-state configuration, $`g`$ is the single-particle density of states, and $`G/g`$ plays the role of an effective Pauli energy with $`G=(p^2+h^2)/4+(p-h)/4-h/2`$. There exist more complicated expressions that distinguish between protons (with single-particle density $`g_p`$) and neutrons ($`g_n`$) in a given nucleus, but we will not quote them here. In the most naive picture, a uniform spacing of single-particle states, $`d`$, is assumed, in which case the single-particle level density is $`g=1/d`$, measured in units of MeV<sup>-1</sup>. Equation 1 and its neutron/proton counterpart suffer from several deficiencies including the assumption of an unlimited number of single-particle states, an inexact treatment of the Pauli principle, and the assumption of a uniform single-particle level spacing. Extensions to the basic model that ameliorate some of these effects have also been pursued. As examples, we mention Bogilia et al. , in which the energy dependence of the single-particle level spacing was included in a general way; using the equidistant single-particle picture, Kalbach and Zhang and Yang considered Pauli principle corrections to the state densities; and De and Hua considered the effects of pairing in addition to the Pauli blocking on the state densities. These effects were combined and extended to non-uniform level spacings by, for example, Harangozo et al. . A further difficulty with the simple formula is that the residual two-body interaction, which is present beyond the nuclear mean field, and which includes important contributions beyond $`J=0`$ pairing, is not incorporated. While some progress has been made to approximate the effects of the residual interaction on the partial level densities, no interacting shell-model calculations have been performed in large model spaces that would indicate the effect of the two-body interaction on the single-body density parameter, $`g`$, nor have there been any partial density-of-states calculations in the interacting shell model. In this article, we describe calculations that study the effects of the residual two-body interaction on the partial level densities, and present results for partial level densities in <sup>40</sup>Ca.
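A direct transcription of eq. (1) is short enough to serve as a worked example; the Pauli term $`G`$ is written as reconstructed above, and the value of $`g`$ used in the example anticipates the fitted $`g_{\mathrm{eff}}`$ quoted in section 3 (a sketch, not part of the original analysis):

```python
from math import factorial

def rho_partial(E, p, h, g):
    """Williams-type partial density of states (per MeV), eq. (1):
    p particles, h holes, single-particle level density g (per MeV),
    excitation energy E (MeV)."""
    n = p + h                                   # = 2*N_e when p = h
    G = (p**2 + h**2) / 4.0 + (p - h) / 4.0 - h / 2.0
    x = g * E - G
    if x <= 0.0:                                # Pauli-blocked region
        return 0.0
    return g * x ** (n - 1) / (factorial(p) * factorial(h) * factorial(n - 1))

# Example: 2p2h (N_e = 2) density at E = 25 MeV with g_eff = 2.6 per MeV
print(f"rho(2p2h, 25 MeV) ~ {rho_partial(25.0, 2, 2, 2.6):.2e} per MeV")
```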
Our approach is to study a related quantity, $`Y_{N_e}(\beta )`$, which is the ratio of the particle-hole partition function, $`Z_{N_e}(\beta )`$, to the full partition function, $`Z_A(\beta )`$, as a function of the inverse temperature $`\beta `$ (measured in MeV<sup>-1</sup>) of the system. We perform our calculations in a full $`0s`$-$`1d`$-$`0f`$-$`1p`$ model space using shell-model Monte Carlo (SMMC) techniques , and an interaction that describes reasonably well the low-lying spectral properties of nuclei in the $`sd`$-$`fp`$ region. We will compare our results with those obtained from eq. (1) and its proton/neutron counterparts. Finally, we will show partial densities of states for several exciton numbers. In section 2, we give an overview of our calculational method. We present results in section 3, and conclude with a brief summary in section 4. 2. Calculation of Excitons in SMMC Investigations into both ground-state and thermal properties of nuclei have been described using the SMMC technique . This method offers an alternative description of nuclear structure properties in the shell-model context that is complementary to direct diagonalization. SMMC is designed to give thermal or ground-state expectation values for various one- and two-body observables. Indeed, for larger nuclei, SMMC may be the only way to obtain information on the thermal properties of the system from a shell-model perspective. In this method, we make use of the imaginary time many-body propagator $`\widehat{U}=\mathrm{exp}(-\beta \widehat{H})`$ to calculate the expectation values. For example, the excitation energy of a nucleus is $`E(\beta )=\langle \widehat{H}\rangle (\beta )-\langle \widehat{H}\rangle (\mathrm{\infty })`$, where $`\langle \widehat{H}\rangle (\mathrm{\infty })`$ is the ground-state energy. In order to find the excitation energy of a nucleus with particle number $`A`$, we must then calculate $$\langle \widehat{H}\rangle =\frac{\text{Tr}\widehat{P}_A\widehat{U}\widehat{H}}{\text{Tr}\widehat{P}_A\widehat{U}}\equiv \frac{\text{Tr}_A\widehat{U}\widehat{H}}{\text{Tr}_A\widehat{U}},$$ (2) where $`\widehat{P}_A=\delta (\widehat{N}-A)`$ projects the trace over all many-body states in the system onto those states that have the desired particle number. Two-body terms in $`\widehat{H}`$ are linearized through the Hubbard-Stratonovich transformation, which introduces auxiliary fields over which one must integrate to obtain physical answers. Since $`\widehat{H}`$ contains many two-body terms that do not commute, one must discretize $`\beta =N_t\mathrm{\Delta }\beta `$. The method can be summarized as $`Z_A`$ $`=`$ $`\text{Tr}_A\widehat{U}=\text{Tr}_A\mathrm{exp}(-\beta \widehat{H})\approx \text{Tr}_A\left[\mathrm{exp}(-\mathrm{\Delta }\beta \widehat{H})\right]^{N_t}`$ (3) $`\approx `$ $`{\displaystyle \int 𝒟[\sigma ]G(\sigma )\text{Tr}_A\underset{n=1}{\overset{N_t}{\prod }}\mathrm{exp}\left[-\mathrm{\Delta }\beta \widehat{h}(\sigma _n)\right]},`$ (4) where $`\sigma _n`$ are the auxiliary fields at a given imaginary time-step $`\mathrm{\Delta }\beta `$ (there is one $`\sigma `$-field for each two-body matrix-element in $`\widehat{H}`$ when the two-body terms are recast in quadratic form), $`𝒟[\sigma ]`$ is the measure of the integrand, $`G(\sigma )`$ is a Gaussian in $`\sigma `$, and $`\widehat{h}`$ is a one-body Hamiltonian. Thus, the shell-model problem is transformed from the diagonalization of a large matrix to one of large dimensional quadrature. Dimensions of the integral can reach up to 10<sup>5</sup> for systems of interest, and it is thus natural to use Metropolis random walk methods to sample the space.
Such integration is most efficiently performed on massively parallel computers. Further details are discussed in Koonin et al. . In order to obtain density-of-states information, we calculate in SMMC the expectation of the energy and integrate the thermodynamic relationship $$E(\beta )=-\frac{d\mathrm{ln}Z_A(\beta )}{d\beta }$$ (5) to obtain $$\mathrm{ln}Z_A(\beta )=-\int _0^\beta d\beta ^{}E(\beta ^{})+\mathrm{ln}Z_A(0),$$ (6) where $`Z_A(0)=\text{Tr}_A\mathbf{1}`$ is the total number of $`A`$-particle states in the system. $`Z_A(\beta )`$ and $`\rho (E)`$ are related by the inverse Laplace transform $$Z_A(\beta )=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}dE\mathrm{exp}(-\beta E)\rho (E),$$ (7) which can be solved in a saddle point approximation to yield $`\rho (E)`$ $`=`$ $`{\displaystyle \frac{\mathrm{exp}(S)}{\sqrt{2\pi \beta ^2C}}}`$ (8) $`S`$ $`=`$ $`\beta E+\mathrm{ln}Z(\beta );\beta ^2C=-{\displaystyle \frac{dE}{d\beta }}.`$ (9) In this expression $`C`$ is the heat capacity of the system. For our discussion, we study <sup>40</sup>Ca. The non-interacting ground state is a filled $`sd`$-shell with no particles in the $`fp`$-shell. Our excitons are then enumerated with respect to the filled $`sd`$-shell. We may excite both protons ($`\pi `$) and neutrons ($`\nu `$) so that $`N_e=(p_\pi +h_\pi +p_\nu +h_\nu )/2=N_{fp}`$, where $`N_{fp}`$ ($`N_{sd}`$) gives the number of particles in the $`fp`$\- ($`sd`$-) shell. For example, $`N_e=2`$ includes the following particle-hole excitations: ($`0p_\pi 0h_\pi `$, $`2p_\nu 2h_\nu `$), ($`1p_\pi 1h_\pi `$, $`1p_\nu 1h_\nu `$), and ($`2p_\pi 2h_\pi `$, $`0p_\nu 0h_\nu `$). Furthermore, $`0\le N_e\le 24`$ since, at most, 24 particles can be excited from the $`sd`$-shell into the $`fp`$-shell. The ratio of the partition function for $`N_e`$ excitons to the full partition function for the $`A`$-particle system may be found by introducing a second number projection operator, $$\widehat{P}_{N_e}=\delta (N_{sd}-\widehat{N}_{sd})\delta (N_{fp}-\widehat{N}_{fp}),$$ (10) provided that $`A=N_{sd}+N_{fp}`$. In reality we perform this projection for both protons and neutrons simultaneously, but this only complicates notation and will not be discussed here. We calculate the ratio of partition functions as $$Y_{N_e}(\beta )=\frac{Z_{N_e}(\beta )}{Z_A(\beta )}=\frac{\text{Tr}\widehat{P}_{N_e}\widehat{U}}{\text{Tr}\widehat{P}_A\widehat{U}}.$$ (11) Therefore, $`\sum _{N_e}Y_{N_e}(\beta )=1`$, which we use as a convenient numerical check. We also extract the energy of the particle-hole excitations, $`E_{N_e}`$, as $$E_{N_e}(\beta )=-\frac{d\mathrm{ln}Y_{N_e}(\beta )}{d\beta }+E(\beta ).$$ (12) We may now employ eq. (9) for the partial density of states: $`\rho _{N_e}(E)`$ $`=`$ $`{\displaystyle \frac{\mathrm{exp}(S_{N_e})}{\sqrt{2\pi \beta ^2C_{N_e}}}}`$ (13) $`S_{N_e}`$ $`=`$ $`\beta _{N_e}E_{N_e}+\mathrm{ln}Z_{N_e}(\beta );\beta _{N_e}^2C_{N_e}=-{\displaystyle \frac{dE_{N_e}}{d\beta }}.`$ (14) Here $`\beta _{N_e}=\beta _{N_e}(E_{N_e})`$ is determined by inverting the relation $`E_{N_e}=E_{N_e}(\beta _{N_e})`$, and $`C_{N_e}`$ is the heat capacity for the particular exciton number. 3. Results We now turn to a description of our <sup>40</sup>Ca calculation in the $`0s`$-$`1d`$-$`0f`$-$`1p`$ shell model space. Our starting point for an appropriate interaction is taken from ref. . In order to obtain a microscopic effective interaction, we begin with a free nucleon-nucleon interaction which is appropriate for a description of low-energy nuclear structure. The choice made in ref.
was to work with the charge-dependent version of the Bonn nucleon-nucleon potential model as found in ref. . Standard perturbation techniques, as discussed in ref. , were employed to obtain an effective interaction in the full $`sd`$-$`fp`$ model space. Finally, the interaction was modified in the monopole terms using techniques developed by Zuker and co-workers . SMMC calculations for realistic interactions typically have a Monte Carlo sign problem which can be overcome by an extrapolation technique discussed in ref. , and successfully applied to the $`sd`$-$`fp`$ region in . This extrapolation technique was also applied to thermal properties of nuclei , but the statistical error inherent in the energy upon extrapolation prevents a full description of the density of states unless one has good justification to spend the computational resources to reduce the statistical error. It was recently demonstrated that a good reproduction of the experimental density of states could be obtained for nuclei in the $`0f1p`$-$`0g_{9/2}`$ shell using a pairing-plus-quadrupole interaction that was free from the sign problem. In this work, we fit our realistic two-body interaction discussed above to a pairing-plus-multipole interaction given by $`\widehat{H}_2=-g_0\pi \widehat{P}_{00}^{\mathrm{\dagger }}\widehat{P}_{00}+4\pi {\displaystyle \underset{\nu }{\sum }}\chi _\nu :{\displaystyle \underset{\mu }{\sum }}(-)^\mu \widehat{Q}_{\nu \mu }\widehat{Q}_{\nu -\mu }:,`$ (15) where $`::`$ denotes normal ordering and $`\widehat{P}_{\lambda \mu }^{\mathrm{\dagger }},\widehat{Q}_{\nu \mu }`$ are pair and quadrupole operators given by $`\widehat{P}_{\lambda \mu }^{\mathrm{\dagger }}`$ $`=`$ $`{\displaystyle \underset{ab}{\sum }}(-)^{\mathrm{}_b}\langle j_a\|𝒴_\nu \|j_b\rangle [\widehat{a}_{j_a}^{\mathrm{\dagger }}\times \widehat{a}_{j_b}^{\mathrm{\dagger }}]_{\lambda \mu }`$ (16) $`\widehat{Q}_{\nu \mu }`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2\nu +1}}}{\displaystyle \underset{ac}{\sum }}\langle j_a\|r^\nu 𝒴_\nu \|j_c\rangle [\widehat{a}_{j_a}^{\mathrm{\dagger }}\times \widehat{\stackrel{~}{a}}_{j_c}]_{\nu \mu }.`$ (17) In eq. (16) $`a\equiv n\mathrm{}j`$ denotes a single-particle orbit and $`\widehat{\stackrel{~}{a}}_{jm}=(-)^{j+m}\widehat{a}_{j-m}`$. We fit $`g_0`$ and the $`\nu =2,4,6`$ multipoles to the realistic interaction from . A least-squares fit gives an interaction which indeed has a good Monte Carlo sign. After some minor adjustments to the pairing strength in order to obtain a better gap between ground states and first excited states in several light nuclei, we use the following parameter set: $`g_0=0.63`$ MeV, $`\chi _2=0.047`$ MeV fm<sup>-4</sup>, $`\chi _4=0.001`$ MeV fm<sup>-8</sup>, and $`\chi _6=0.17\times 10^{-3}`$ MeV fm<sup>-12</sup>. (Large enhancements of the $`\langle a\|r^\nu \|b\rangle `$ matrix elements as $`\nu `$ increases are the reason for the decrease in the $`\chi _\nu `$ values, although contributions to two-body matrix elements arising from the higher multipoles are significant.) Our single-particle energies are 0.0, 5.36, 0.64, 8.21, 14.21, 10.14, and 12.07 MeV for the $`0d_{5/2}`$, $`0d_{3/2}`$, $`1s_{1/2}`$, $`0f_{7/2}`$, $`0f_{5/2}`$, $`1p_{3/2}`$, and $`1p_{1/2}`$ orbitals, respectively. We do not correct for center-of-mass motion in these calculations, although such a contamination to the $`Y_{N_e}`$ should be fairly small in this system. Furthermore, we do not include odd multipoles which upon fitting were found to give coefficients that cause Monte Carlo sign problems. Thus our negative parity states are probably less well described by this choice of Hamiltonian.
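Before showing the results, the saddle-point machinery of eqs. (5)–(9) can be sketched numerically. The energy curve below is a toy stand-in for real SMMC output (an assumed exponential approach to the ground state), used only to show the mechanics:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Toy E(beta) standing in for SMMC data (assumed, not real output).
E_max, c = 60.0, 0.8                       # MeV and MeV^-1 scales (assumed)
lnZ0 = np.log(5.095e16)                     # ln Z_A(0) = ln(number of states)
beta = np.linspace(1e-3, 6.0, 2000)         # inverse temperature, MeV^-1
E = E_max * np.exp(-c * beta)

lnZ = lnZ0 - cumulative_trapezoid(E, beta, initial=0.0)   # eq. (6)
S = beta * E + lnZ                                        # eq. (9)
beta2C = c * beta**2 * E                                  # -dE/dbeta, analytic here
rho = np.exp(S) / np.sqrt(2.0 * np.pi * beta2C)           # eq. (8)

for i in (300, 900, 1600):   # a few (E, rho) pairs, parametrised by beta
    print(f"E = {E[i]:6.2f} MeV  ->  rho ~ {rho[i]:.3e} per MeV")
```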
We use the results of the non-interacting case to demonstrate the validity of our technique for finding the partial partition functions. In order to show this, we calculate by enumeration the total number of many-body states for each $`N_e`$. This can most easily be done by using eq. (6) to find $`Z_A(\beta =0)`$. We also find $`Y_{N_e}(0)`$ by an extrapolation from small, but finite, $`\beta `$. We show in fig. 1 our results for the number of states as a function of the exciton number. The SMMC results are compared to an exact counting of the number of states of a given exciton number. The agreement is excellent. The total number of calculated SMMC states is 3.834$`\times `$10<sup>16</sup> as compared to the exact value of 5.095$`\times `$10<sup>16</sup>. We show in fig. 2 a comparison of the non-interacting (left) and interacting (right) calculations. The $`N_e=0`$ calculation gives some indication of the thermal freeze-out of the ground state. The non-interacting calculation requires fairly large $`\beta `$ to fully reach the ground state, since the first excited state is only 0.64 MeV above the ground state. We pursued these calculations to $`\beta =4.0`$ MeV<sup>-1</sup>, for which $`\langle \widehat{H}\rangle `$ is 0.139 MeV from the ground-state value. Since it takes more thermal energy to overcome the pairing interaction and to excite particles from the ground-state configuration, the interacting curves representing different $`N_e`$ excitations are compressed in $`\beta `$ relative to the non-interacting curves. (The excitation energy of the first excited state is approximately 3.5 MeV.) However, the same features remain. Clearly, as thermal energy is decreased, and $`\beta `$ becomes larger (temperature decreases), it is more difficult to produce large particle-hole excitations in the system. The converse is also true at higher temperatures, where it is difficult to obtain only $`N_e=2`$, for example. In the interacting case, the low-temperature tail of the $`N_e=2`$ exciton tends to spread further in $`\beta `$ than does the $`N_e=1`$ tail. As we shall see, this has direct consequences on the partial densities of states. Furthermore, since $`\sum _{N_e}Y_{N_e}(\beta )=1`$, we can interpret $`Y_{N_e}`$ as a measure of the likelihood to find $`N_e`$ excitons at a given temperature. Since the excitation energy is a monotonic function of the temperature, one expects the density of states to be dominated by excitons of a particular type in a given energy range. As we shall see, this is indeed the case. We compare our results to those obtained from eq. (1) and its proton/neutron equivalent by adjusting $`g`$ to obtain a fit to the calculated $`Y_{N_e}`$ curves. This is demonstrated for the $`N_e=4`$ curve in fig. 3. We fit each of our SMMC curves to eq. (1) with an effective one-body density parameter $`g_{\mathrm{eff}}=g_p+g_n`$, and the effective level density parameter is $`a=(\pi ^2/6)g_{\mathrm{eff}}`$. A uniform Fermi gas yields $`a\approx A/15`$ (=2.67 for A=40) MeV<sup>-1</sup>, a harmonic oscillator potential yields $`a\approx A/10`$ MeV<sup>-1</sup> (=4.0), and the empirical value is $`A/8`$ MeV<sup>-1</sup>. We obtain $`a=5.43`$ ($`g_{\mathrm{eff}}=3.3`$) MeV<sup>-1</sup> for the non-interacting case, and $`a=4.27`$ ($`g_{\mathrm{eff}}=2.6`$) MeV<sup>-1</sup> in the interacting case, rather independent (within 0.02 MeV<sup>-1</sup>) of $`N_e\ge 2`$. For $`N_e=1`$ the comparison cannot be made as the Blann-Williams formula breaks down. Thus, $`g_{\mathrm{eff}}`$ is reduced by 22% in the presence of an interaction.
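Both the exact state count and the level-density parameters quoted above are easy to cross-check. With 12 sd and 20 fp single-particle states per nucleon species, the number of ways to promote $`p`$ nucleons of one species is C(12,p)·C(20,p), and the Vandermonde identity sums this to C(32,12) per species; the small difference from the quoted 5.095$`\times `$10<sup>16</sup> presumably reflects a detail of the counting convention (a sketch, ours):

```python
from math import comb, pi

per_species = sum(comb(12, p) * comb(20, p) for p in range(13))
assert per_species == comb(32, 12)             # Vandermonde identity
print(f"total non-interacting states = {per_species**2:.4g}")   # ~5.10e16

# a = (pi^2/6) g_eff for the two fits, versus the rule-of-thumb values:
for label, g in (("non-interacting", 3.3), ("interacting", 2.6)):
    print(f"{label}: a = {pi**2 / 6.0 * g:.2f} per MeV")
print(f"A/15 = {40/15:.2f}, A/10 = {40/10:.2f}, A/8 = {40/8:.2f} per MeV")
```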
The decomposition of the $`N_e=4`$ case into the various proton-neutron components is also shown in fig. 3. The ($`0p_\pi 0h_\pi `$, $`4p_\nu 4h_\nu `$) configuration plus its proton counterpart carries very little weight here, while the ($`2p_\pi 2h_\pi `$, $`2p_\nu 2h_\nu `$) component of $`Y_4`$ carries the most weight. As expected, in all $`N_e`$ cases the largest component of $`Y_{N_e}`$ is the one in which the number of excited neutrons equals the number of excited protons. Finally, we present our results for the calculation of $`\rho _{N_e}`$ derived by using $`Z_{N_e}(\beta )`$ in eq. (6) and $`E_{N_e}`$ from eq. (12). The natural log of $`\rho _{N_e}`$ is shown in fig. 4 for the $`N_e`$=1–6 excitations. We also include in the figure the total state density as a function of the excitation energy in the system. The saddle point approximation breaks down in regions where there are very few states, which makes it difficult to describe well-separated states in the low-lying spectrum ($`E^{\ast }<3`$ MeV) for the full density or for the individual exciton densities. We also propagated our statistical error bars through the calculation of $`\rho _{N_e}`$, but, as can be seen, they are very small except in the case of the $`N_e=1`$ excitons. Note that the majority of states, for example at $`E^{\ast }=25`$ MeV, are $`N_e=4`$ states, while at $`E^{\ast }=35`$ MeV the $`N_e=5`$ states contribute most. This localization in excitation energy of excitons was reflected in our earlier discussion of the behavior of $`Y_{N_e}`$. Interestingly, the $`N_e=1`$ density of states begins at an energy slightly above that of the 2p2h states. Recall that experimentally the first excited state of <sup>40</sup>Ca is a 0<sup>+</sup> 2p2h state (at 3.3 MeV), and that the first negative parity state (a 3<sup>-</sup>) occurs at a slightly higher energy of 3.7 MeV. Our Hamiltonian gives fairly closely the correct relative starting energies for these two exciton configurations, although, due to the breakdown of the saddle-point approximation for low state densities, we cannot precisely determine the excitation energy of the first excited 0<sup>+</sup> level. We also note an interesting pairing effect that shows up in the partial densities. Note that the $`N_e=1`$ state density starts about 0.6 MeV above the $`N_e=2`$ case. The $`N_e=3`$ state density begins approximately 2 MeV above the $`N_e=4`$ case. This is a manifestation of pairing in the system. It takes more excitation energy to produce an odd particle-hole excitation than it does to produce an even particle-hole excitation, since energy must be expended to break pairs for odd excitations. This effect was already apparent from the discussion of the low-temperature behavior of the $`Y_{N_e}`$, as indicated above. 4. Conclusion We have discussed in this article how one may obtain information on particle-hole excitations using SMMC methods for calculations of a nuclear system. Our technique uses the ratio of the particle-hole partition function to the full partition function of the system. The method incorporates exact Pauli blocking and non-equidistant single-particle energies, and gives the exact partial densities for a given nuclear effective interaction, within statistical errors. It also has a well-defined energy scale. One drawback of the present calculation is that the space size is limited to two major oscillator shells, although this can be rather easily overcome.
The projection operator introduced in this work may be applied in any Monte Carlo technique where ratios of partition functions are needed. Our results also indicate that the effective $`g`$-parameter used in eq. (1) should be reduced by approximately 22% to account for the inclusion of the two-body interaction, which acts to correlate the nucleus beyond the simple mean-field or pairing prescription. The method we have described here could be further advanced in two ways. One may increase the model space used, thus allowing for a broader range of energies and excitation modes to be explored. The method is also applicable to $`mp`$-$`nh`$ excitations if we extend our studies to open-shell nuclei such as, e.g., <sup>42</sup>Ca. This may be pursued in future work. ###### Acknowledgements. This work was supported in part through grant DE-FG02-96ER40963 from the U.S. Department of Energy. Oak Ridge National Laboratory (ORNL) is managed by Lockheed Martin Energy Research Corp. for the U.S. Department of Energy under contract number DE-AC05-96OR22464. We also acknowledge support from the U.S. National Science Foundation under Grants No. PHY-9722428, PHY94-12818 and PHY94-20470.
# Absorption cross section and Hawking radiation in two-dimensional AdS black hole ## Abstract We calculate the absorption coefficient of a scalar field on the background of the two-dimensional AdS black hole, which is of relevance to Hawking radiation. For the massless scalar field, we find that there does not exist any massless radiation. preprint: SOGANG-HEP 255/99 hep-th/9902093 There has been much attention to the lower-dimensional black holes to study the final state of black holes at the end of evaporation without encountering some complexities of realistic four-dimensional gravity. The evaporation of the black hole is essentially based on Hawking radiation . The Hawking radiation of the Callan-Giddings-Harvey-Strominger (CGHS) model is given by the vacuum expectation value of the energy-momentum tensors, which means that the black hole is no longer black but rather a grey hole after quantization. Therefore the absorption coefficient is less than one and the reflection coefficient should be nontrivial. On the other hand, the two-dimensional anti-de Sitter (AdS<sub>2</sub>) geometry, which has a constant curvature scalar, appears in the CGHS model with the help of quantum back reaction of the geometry on a constant dilaton background , where the AdS<sub>2</sub> black hole geometry is given by $$ds^2=-\left(\frac{r^2}{\ell ^2}-M\right)dt^2+\left(\frac{r^2}{\ell ^2}-M\right)^{-1}dr^2$$ (1) where the horizon is $`r_H=\sqrt{M}\ell `$. This metric (1) is asymptotically nonflat and has a constant negative curvature scalar $`R=-\frac{2}{\ell ^2}`$ with a negative cosmological constant $`\mathrm{\Lambda }=-\frac{1}{\ell ^2}`$, which is drastically different from the asymptotically flat CGHS model. So one might wonder whether conventional Hawking radiation appears or not in such an AdS<sub>2</sub> black hole. The recent study shows that the AdS black hole may be quantum-mechanically stable without Hawking radiation as far as we consider massless radiation from the black hole. In Ref. , the energy-momentum tensor for the N massless scalar fields on the AdS<sub>2</sub> black hole background yields the remarkably vanishing result $$T_{--}^{\mathrm{Qt}}(\sigma ^+,\sigma ^-)=T_{--}^{\mathrm{Bulk}}+T_{--}^{\mathrm{boundary}}=0$$ (2) where the energy-momentum tensor induced by quantum correction $`T_{--}^{\mathrm{Qt}}`$ is conveniently decomposed into bulk and boundary contributions, $`T_{--}^{\mathrm{Bulk}}(\sigma ^+,\sigma ^-)`$ $`=`$ $`{\displaystyle \frac{\kappa M}{4\ell ^2}},`$ (3) $`T_{--}^{\mathrm{boundary}}(\sigma ^+,\sigma ^-)`$ $`=`$ $`-{\displaystyle \frac{\kappa M}{4\ell ^2}}`$ (4) in the conformal gauge, $`g_{+-}=-\frac{1}{2}e^{2\rho (\sigma ^+,\sigma ^-)}`$, $`g_{\pm \pm }=0`$, and $`\kappa =\frac{N}{12}`$. For the asymptotically flat CGHS black hole, however, the Hawking radiation has been determined by only the boundary contribution since the bulk part vanishes at spatial infinity. In our AdS case, the bulk contribution remains nonvanishing due to the nontrivial asymptotic behavior of the geometry. One might still wonder how the Hawking radiation of the AdS<sub>2</sub> black hole vanishes. So, in this Brief Report, we shall study a massless scalar field as a test probe on the AdS<sub>2</sub> black hole background and calculate the absorption (or reflection) coefficient, which provides an independent test of whether Hawking radiation appears in the AdS<sub>2</sub> black hole.
Let us now consider the wave equation for the massless scalar field on the AdS<sub>2</sub> black hole background (1), which is given by $$\mathrm{\Box }\mathrm{\Psi }(t,r)=0$$ (5) with $`\mathrm{\Psi }(t,r)=e^{-i\omega t}\mathrm{\Phi }(r)`$, and it yields explicitly $$\left(r^2-r_H^2\right)\partial _r^2\mathrm{\Phi }(r)+2r\partial _r\mathrm{\Phi }(r)+\frac{\omega ^2\ell ^4}{r^2-r_H^2}\mathrm{\Phi }(r)=0.$$ (6) By changing the variable $`r`$ in terms of $`z=\frac{r-r_H}{r+r_H}`$ ($`0\le z<1`$), Eq.(6) becomes $$z(1-z)\partial _z^2\mathrm{\Phi }(z)+(1-z)\partial _z\mathrm{\Phi }(z)+\frac{\omega ^2\ell ^4}{4r_H^2}\left(\frac{1}{z}-1\right)\mathrm{\Phi }(z)=0,$$ (7) and it is exactly solved as $$\mathrm{\Phi }(r)=C_1\left(\frac{r-r_H}{r+r_H}\right)^{-\alpha }+C_2\left(\frac{r-r_H}{r+r_H}\right)^\alpha ,$$ (8) where $`\alpha =\frac{i\omega \ell ^2}{2r_H}`$. This solution (8) contains the ingoing and outgoing wave modes together, which is easily seen by rewriting Eq.(8) as $$\mathrm{\Phi }(r)=C_1e^{-\frac{i\omega \ell ^2}{2r_H}\mathrm{ln}(\frac{r-r_H}{r+r_H})}+C_2e^{\frac{i\omega \ell ^2}{2r_H}\mathrm{ln}(\frac{r-r_H}{r+r_H})}.$$ (9) Thus the coefficients $`C_1`$ and $`C_2`$ are identified with the ingoing and outgoing amplitudes, respectively. The wave equation is exactly solved on the AdS<sub>2</sub> black hole background, and we do not need any matching procedure, in contrast to the higher-dimensional analyses of greybody factors . We now choose the boundary condition in the near-horizon region and simply set $`C_2=0`$, which means that the outgoing mode does not exist in the near-horizon limit . On the other hand, the solution in the far region naturally contains only the ingoing mode, and mode mixing does not occur. Once the ingoing mode starts at the far region, it does not change its direction in the massless case. This feature is drastically different from that of the wave solution of the three-dimensional AdS black hole, the so-called Bañados-Teitelboim-Zanelli (BTZ) black hole, in that mode mixing certainly appears there and is responsible for the Hawking radiation. Now, the flux is defined by the conserved current as $$F=\frac{2\pi }{i}\left(\frac{r^2-r_H^2}{\ell ^2}\right)(\mathrm{\Phi }^{\ast }\partial _r\mathrm{\Phi }-\mathrm{\Phi }\partial _r\mathrm{\Phi }^{\ast }),$$ (10) and it is simply given by $$F=-4\pi \omega |C_1|^2,$$ (11) by the use of Eq.(9). As easily seen, the flux does not depend on the outgoing amplitude everywhere outside the horizon, and the absorption coefficient defined by $$A=\frac{F_{\mathrm{near}}}{F_{\mathrm{far}}},$$ (12) is exactly given as $$A=1.$$ (13) From Eq.(13), we find that our black hole is a (perfect) black body rather than a grey hole. The physical meaning of this result is remarkable in that there is no massless radiation in the AdS<sub>2</sub> spacetime. This result was already shown in Ref. by a different method, the direct calculation of the stress tensor on the AdS<sub>2</sub> black hole background. Finally we compare our result with the three-dimensional calculation of the absorption coefficient of the BTZ black hole in Ref. . For the limiting case of vanishing azimuthal angular momentum of the scalar field in the nonrotating BTZ black hole, there is still Hawking radiation, which is in contrast with the AdS<sub>2</sub> black hole case. This is because the radial equation in the BTZ black hole is different from Eq. (6), and the reflection coefficient is nonzero.
In three dimensions, in fact, the ingoing mode in the near horizon is decomposed into ingoing and outgoing modes in the far region, and this is the essential reason why the Hawking radiation appears. However, in our AdS<sub>2</sub> case, the ingoing mode in the near horizon is still an ingoing mode in the far region, which reflects the absence of massless Hawking radiation in the AdS<sub>2</sub> black hole. Acknowledgments W.T. Kim would like to thank Mariano Cadoni for helpful discussion on Hawking radiation, Sergey N. Solodukhin for bringing his papers related to AdS<sub>2</sub> to my attention, and Seungjoon Hyun and Julian Lee for enlightening discussion and hospitality during the course of visiting Korea Institute of Advanced Study. We also thank Myungseok Yoon for valuable comments. This work was supported by Korea Research Foundation, No. BSRI-1998-015-D00074.
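As an independent cross-check of the closed-form results above, one can verify symbolically that the mode (8) solves the radial equation (6) and that the flux (10) is independent of r, which is what forces A = 1. A sketch (ours, using sympy; we set C₁ = 1 and assume r > r_H so the complex conjugate can be written by flipping the sign of the exponent):

```python
import sympy as sp

r, rH, w, ell = sp.symbols("r r_H omega ell", positive=True)
lam = w * ell**2 / (2 * rH)
z = (r - rH) / (r + rH)
Phi = z ** (-sp.I * lam)         # ingoing mode, C1 = 1
Phic = z ** (+sp.I * lam)        # its complex conjugate for r > r_H

# Radial equation (6): the residual should simplify to 0.
ode = (r**2 - rH**2) * sp.diff(Phi, r, 2) + 2 * r * sp.diff(Phi, r) \
      + w**2 * ell**4 / (r**2 - rH**2) * Phi
print(sp.simplify(sp.powsimp(ode / Phi, force=True)))   # expect 0

# Flux (10): should reduce to -4*pi*omega with no r dependence left.
F = (2 * sp.pi / sp.I) * (r**2 - rH**2) / ell**2 \
    * (Phic * sp.diff(Phi, r) - Phi * sp.diff(Phic, r))
print(sp.simplify(sp.powsimp(F, force=True)))           # expect -4*pi*omega
```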
Spacetime translational invariance can be broken spontaneously if any local operators acquire spacetime-dependent vacuum expectation values. This situation, however, seems implausible because a configuration which minimizes a potential is, in general, independent of the spacetime coordinates, and further because a spacetime-dependent vacuum configuration would produce nonzero kinetic energy, so that energetically such a configuration would be unfavorable. One might expect that even if some of the space dimensions are compactified, the translational invariance of the compactified space (if it exists) could not be broken spontaneously. The purpose of this paper is to propose a mechanism to break the translational invariance of compactified spaces spontaneously. To illustrate our mechanism, let us consider a real $`\varphi ^4`$ model in $`D`$ dimensions $$S=\int d^Dx\left\{-\frac{1}{2}\partial _M\varphi \partial ^M\varphi -V(\varphi )\right\},$$ (1) where the index $`M`$ runs from 0 to $`D-1`$ and $$V(\varphi )=\frac{\lambda }{8}\left(\varphi ^2-2\frac{\mu ^2}{\lambda }\right)^2.$$ (2) We should note that the action has a $`Z_2`$ symmetry $$\varphi \to -\varphi .$$ (3) It turns out that the existence of global symmetries is crucial to our mechanism and that the above $`Z_2`$ symmetry plays an important role in this model. One may conclude that the ground state would be given by $`\varphi =\pm \sqrt{\frac{2}{\lambda }}\mu `$, at which the scalar potential $`V(\varphi )`$ takes its minimum value and the $`Z_2`$ symmetry is broken. This is, however, a hasty conclusion, as we will see below. Let us suppose that one of the space coordinates, say, $`y\equiv x^{D-1}`$, is compactified on a circle $`S^1`$ whose radius is $`R`$. Since $`S^1`$ is multiply-connected and the action has the $`Z_2`$ symmetry, we can impose the following nontrivial boundary condition associated with the $`Z_2`$ symmetry for the field $`\varphi `$: $$\varphi (x^\mu ,y+2\pi R)=-\varphi (x^\mu ,y),$$ (4) where $`x^\mu `$ denote the coordinates of the uncompactified spacetime. Thanks to the $`Z_2`$ symmetry, the action (1) is still single-valued even with the boundary condition (4). An important consequence of the nontrivial boundary condition (4) is that any vacuum expectation value of $`\varphi (x^\mu ,y)`$ cannot be a ($`y`$-independent) nonzero constant, which would be inconsistent with the boundary condition (4). In other words, any nonzero vacuum expectation value of $`\varphi (x^\mu ,y)`$ has to have a $`y`$ dependence in order to satisfy the boundary condition (4), i.e. $$\langle \varphi (x^\mu ,y)\rangle \neq 0\Longrightarrow \frac{\partial }{\partial y}\langle \varphi (x^\mu ,y)\rangle \neq 0.$$ (5) It immediately follows that if the vacuum is translationally invariant the vacuum expectation value of $`\varphi (x^\mu ,y)`$ has to vanish, or conversely that if $`\varphi (x^\mu ,y)`$ acquires a nonzero vacuum expectation value, which implies a $`y`$ dependence of $`\varphi (x^\mu ,y)`$, then the translational invariance for the $`S^1`$-direction is spontaneously broken. In order to find a vacuum configuration, one might try to minimize the potential $`V(\varphi )`$. This is, however, wrong in the present model. To find a vacuum configuration, we should take account of kinetic terms in addition to potential terms, since the translational invariance could be broken and then the vacuum configuration might be coordinate-dependent.
Since the translational invariance of the (uncompactified) $`(D-1)`$-dimensional Minkowski spacetime is expected to be unbroken, finding a vacuum configuration of the model may be equivalent to solving a minimization problem of the following functional <sup>§</sup><sup>§</sup>§The $`\mathcal{E}[\varphi ,R]`$ may be regarded as a potential from a viewpoint of the $`(D-1)`$-dimensional Minkowski spacetime.: $$\mathcal{E}[\varphi ,R]\equiv \int _0^{2\pi R}dy\left\{\frac{1}{2}(\partial _y\varphi )^2+V(\varphi )\right\}.$$ (6) In this paper, our analysis will be restricted to the tree level. In the following, we shall ignore the $`x^\mu `$ dependence of $`\varphi `$ since we are interested in a vacuum configuration, for which the translational invariance of the $`(D-1)`$-dimensional Minkowski spacetime is assumed to be unbroken. It should be emphasized that $`\varphi (y)`$ cannot be an arbitrary function but has to obey the (antiperiodic) boundary condition $$\varphi (y+2\pi R)=-\varphi (y).$$ (7) If the translational invariance for the $`S^1`$-direction is unbroken, the vacuum expectation value of $`\varphi `$ has to vanish and then the functional $`\mathcal{E}[\varphi ,R]`$ becomes $$\mathcal{E}[\varphi =0,R]=\frac{\pi R\mu ^4}{\lambda }.$$ (8) If there exists any configuration of $`\varphi (y)`$ such that $$\mathcal{E}[\varphi ,R]<\mathcal{E}[\varphi =0,R]=\frac{\pi R\mu ^4}{\lambda },$$ (9) then the configuration $`\varphi =0`$ is no longer a vacuum configuration and hence the translational invariance for the $`S^1`$-direction has to be broken, as discussed previously. Before minimizing the functional $`\mathcal{E}[\varphi ,R]`$, we would like to make a comment about solutions to the field equation for $`\varphi (y)`$ $`0`$ $`=`$ $`{\displaystyle \frac{\delta \mathcal{E}[\varphi ,R]}{\delta \varphi (y)}}`$ (10) $`=`$ $`-{\displaystyle \frac{d^2\varphi (y)}{dy^2}}+{\displaystyle \frac{\lambda }{2}}\varphi (y)\left((\varphi (y))^2-2{\displaystyle \frac{\mu ^2}{\lambda }}\right).`$ We first note that $`\varphi =0`$ is a trivial solution to eq.(10), while the other trivial (would-be) solutions $`\varphi =\pm \sqrt{\frac{2}{\lambda }}\mu `$ have to be excluded due to the boundary condition (7). Using the above field equation to eliminate the term $`\frac{1}{2}(\partial _y\varphi (y))^2`$ in eq.(6), we find $`\mathcal{E}[\varphi ,R]|_{\frac{\delta \mathcal{E}}{\delta \varphi }=0}`$ $`=`$ $`{\displaystyle \frac{\pi R\mu ^4}{\lambda }}-{\displaystyle \int _0^{2\pi R}}dy{\displaystyle \frac{\lambda }{8}}(\varphi (y))^4`$ (11) $`\le `$ $`\mathcal{E}[\varphi =0,R].`$ Since the equality on the last line holds only when $`\varphi =0`$, we have thus arrived at an important conclusion: if there appear nontrivial solutions $`\varphi `$ to the field equation (10), then $`\mathcal{E}[\varphi ,R]`$ is lower than $`\mathcal{E}[0,R]`$, so that the translational invariance for the $`S^1`$-direction is broken spontaneously with the $`Z_2`$ symmetry breaking. Let us now proceed to find a vacuum configuration, which minimizes the functional $`\mathcal{E}[\varphi ,R]`$. To this end, we shall first construct all solutions to the field equation (10) with the boundary condition (7), which are the candidates for a vacuum configuration. We shall then determine which configuration gives the lowest value of $`\mathcal{E}[\varphi ,R]`$ (if there exist several solutions). The field equation (10) has been studied before in a quite different context , though with the boundary condition imposed to be periodic rather than antiperiodic. It turns out that most of the results given in ref.
are useful for our purposes and that the nontrivial solutions to our problem will be given by $$\varphi (y)=\frac{2k\omega }{\sqrt{\lambda }}\mathrm{sn}(\omega (y-y_0),k),$$ (12) where $$\omega \equiv \frac{\mu }{\sqrt{1+k^2}}.$$ (13) Here, $`\mathrm{sn}(u,k)`$ is the Jacobi elliptic function whose period is $`4K(k)`$, where $`K(k)`$ denotes the complete elliptic integral of the first kind. Since the integration constant $`y_0`$ in eq.(12), which in fact reflects the translational invariance of the equation of motion, is irrelevant, we shall set $`y_0`$ equal to zero in the following analysis. The antiperiodic boundary condition (7) requires that the parameter $`k`$ ($`0\le k<1`$) and the radius $`R`$ should be related mutually through $$R=(2n-1)\frac{K(k)}{\pi \omega }$$ (14) for some positive integer $`n`$. (For the periodic boundary condition, $`2n-1`$ in eq.(14) should be replaced by $`2n`$.) We may denote a solution specified by eq.(14) with an integer $`n`$ by $`\varphi _n(y)`$. We note that as $`k`$ runs from zero to one the right-hand side of eq.(14) increases monotonically from $`R_n^{\ast }\equiv (n-\frac{1}{2})/\mu `$ to infinity. Thus, $`\varphi _n(y)`$ is a solution only when $`R\ge R_n^{\ast }`$. For $`0<R\le R_1^{\ast }`$, there exists only one solution to the field equation (10), i.e. the trivial solution $`\varphi =0`$. Thus, the vacuum configuration is given by the trivial solution, and hence the translational invariance is unbroken for $`0<R\le R_1^{\ast }`$. For $`R_1^{\ast }<R\le R_2^{\ast }`$, there exist two solutions to eq.(10), i.e. the trivial one and $`\varphi _1(y)`$. It follows from eq.(11) that the trivial solution $`\varphi =0`$ is no longer the vacuum configuration. Since $`\varphi _1(y)`$ depends on $`y`$, the translational invariance for the $`S^1`$-direction is spontaneously broken. For $`R_n^{\ast }<R\le R_{n+1}^{\ast }`$, there exist $`n+1`$ solutions to eq.(10), i.e. the trivial one and $`\varphi _m(y)`$ for $`m=1,2,\dots ,n`$. Since $`\mathcal{E}[\varphi _m,R]<\mathcal{E}[0,R]`$ for every $`m`$, the trivial solution is no longer the vacuum configuration, and hence the translational invariance for the $`S^1`$-direction is spontaneously broken for $`R_n^{\ast }<R\le R_{n+1}^{\ast }`$. Therefore, we have found that for $`0<R\le R_1^{\ast }`$ the translational invariance for the $`S^1`$-direction is unbroken, while for $`R>R_1^{\ast }`$ it is broken spontaneously with the $`Z_2`$ symmetry breaking. Let us next discuss the problem of which solution $`\varphi _n(y)`$ is the true vacuum configuration, i.e. which solution minimizes the functional $`\mathcal{E}[\varphi ,R]`$. For $`R>R_n^{\ast }`$, $`\varphi _n(y)`$ becomes a solution for which we obtain the rather complicated expression $$\mathcal{E}[\varphi _n,R]=\frac{(2n-1)\mu ^3}{3\lambda (1+k^2)^{3/2}}\left\{-(1-k^2)(5+3k^2)K(k)+8(1+k^2)E(k)\right\},$$ (15) where $`E(k)`$ is the complete elliptic integral of the second kind. Although we could directly compare $`\mathcal{E}[\varphi _n,R]`$ for $`n=1,2,3,\dots `$, we shall here take another approach to solve the problem. It is not difficult to show $$\frac{d\mathcal{E}[\varphi _n,R]}{dR}=\frac{\pi \mu ^4}{\lambda }\left(\frac{1-k^2}{1+k^2}\right)^2\ge 0,$$ (16) which implies that $`\mathcal{E}[\varphi _n,R]`$ is a monotonically increasing function of $`R`$. At $`R=R_n^{\ast }`$ $`(k=0)`$ and $`R\to \mathrm{\infty }`$ $`(k\to 1)`$, $`\mathcal{E}[\varphi _n,R]`$ takes the values $`\mathcal{E}[\varphi _n,R_n^{\ast }]`$ $`=`$ $`(2n-1){\displaystyle \frac{\pi \mu ^3}{2\lambda }},`$ $`\mathcal{E}[\varphi _n,\mathrm{\infty }]`$ $`=`$ $`(2n-1){\displaystyle \frac{4\sqrt{2}\mu ^3}{3\lambda }},`$ (17) respectively.
It follows from eqs.(16) and (17) that $$(2n-1)\frac{\pi \mu ^3}{2\lambda }\le ℰ[\varphi _n,R]<(2n-1)\frac{4\sqrt{2}\mu ^3}{3\lambda }$$ (18) and especially $$ℰ[\varphi _1,R]<\frac{4\sqrt{2}\mu ^3}{3\lambda }.$$ (19) The above observations will be enough to show that for $`R\ge R_n^{*}`$ ($`n\ge 2`$) $$ℰ[\varphi _1,R]<ℰ[\varphi _n,R]<ℰ[0,R].$$ (20) Therefore, we have found the vacuum expectation value of $`\varphi `$ to be $$\varphi (x^\mu ,y)=\{\begin{array}{cc}0\hfill & \text{for }R\le R_1^{*}\hfill \\ \varphi _1(y)\hfill & \text{for }R>R_1^{*}.\hfill \end{array}$$ (21) It may be instructive to reanalyze the model from the viewpoint of the Fourier expansion. It follows from the boundary condition (7) that $`\varphi (y)`$ may be expanded in a Fourier series for the $`S^1`$-direction as $$\varphi (y)=\frac{1}{\sqrt{\pi R}}\sum _{l=1}^{\infty }\left\{a^{(2l-1)}\mathrm{cos}\left((2l-1)\frac{y}{2R}\right)+b^{(2l-1)}\mathrm{sin}\left((2l-1)\frac{y}{2R}\right)\right\},$$ or equivalently, $$\varphi (y)=\frac{1}{\sqrt{2\pi R}}\sum _{l\in Z}\phi ^{(2l-1)}e^{i(2l-1)\frac{y}{2R}}$$ (22) with $`\phi ^{(2l-1)}`$ = $`\frac{1}{\sqrt{2}}(a^{(2l-1)}-ib^{(2l-1)})`$ = $`\phi ^{(-2l+1)*}`$. A key observation is that a constant zero mode is excluded in the above expansion due to the nontrivial boundary condition. Inserting eq.(22) into eq.(6), we have, up to the quadratic terms with respect to $`\phi ^{(2l-1)}`$, $$ℰ_0[\phi ,R]=\sum _{l=1}^{\infty }m_{(2l-1)}^2|\phi ^{(2l-1)}|^2,$$ (23) where $$m_{(2l-1)}^2\equiv -\mu ^2+\left(\frac{2l-1}{2R}\right)^2.$$ (24) The second term in eq.(24) is the Kaluza-Klein mass, which comes from the “kinetic” term $`\frac{1}{2}\left(\partial _y\varphi (y)\right)^2`$, and which gives a positive contribution to the squared mass term. We can easily see that for $`R\le R_1^{*}`$ all the squared masses $`m_{(2l-1)}^2`$ are positive semi-definite because of the induced Kaluza-Klein masses $`\left(\frac{2l-1}{2R}\right)^2`$. On the other hand, for $`R>R_1^{*}`$ it seems that negative squared masses appear. This is a signal of a phase transition and is consistent with the results obtained before. It may be interesting to point out that the translational invariance for the $`S^1`$-direction can be reinterpreted as a global $`U(1)`$ symmetry, which is in fact possessed by the theory after the compactification. To see this, we note that an infinitesimal translation $`y\to y+a`$ in eq.(22) can equivalently be realized, in terms of the Fourier modes, by the following transformation: $$\phi ^{(2l-1)}\to e^{i(\frac{2l-1}{2R})a}\phi ^{(2l-1)},$$ (25) from which we may assign a $`U(1)`$ charge $`\frac{2l-1}{2R}`$ to $`\phi ^{(2l-1)}`$. Thus, spontaneous breakdown of the translational invariance for the $`S^1`$-direction may be interpreted as that of the $`U(1)`$ symmetry. One might then ask about a Nambu-Goldstone mode associated with spontaneous breakdown of the translational invariance or the $`U(1)`$ symmetry.
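The sign structure of eq.(24) can be made concrete with a few lines of arithmetic (a sketch under the conventions above; the tachyonic $`-\mu ^2`$ term is the one restored in eq.(24)):

```python
mu = 1.0
print("R_1^* =", 0.5 / mu)        # R_1^* = 1/(2 mu)

def m2(l, R):
    # eq (24): m^2 = -mu^2 + ((2l-1)/(2R))^2
    return -mu**2 + ((2*l - 1) / (2.0 * R))**2

for R in (0.4, 0.5, 0.6, 1.0):
    print(R, [round(m2(l, R), 3) for l in (1, 2, 3)])
# all m^2 >= 0 for R <= 0.5, while for R > 0.5 the l = 1 mode has m^2 < 0:
# the phase transition at R = R_1^*, seen here in the Fourier picture
```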
It turns out that, to answer the question, the following mode expansion is more suitable than the Fourier mode expansion for $`R>R_1^{*}`$: $$\varphi (y)=\sum _{l=1}^{\infty }\left\{a^{\prime (2l-1)}Ec^{2l-1}(\omega y,k)+b^{\prime (2l-1)}Es^{2l-1}(\omega y,k)\right\},$$ (26) where $`Ec^{2l-1}(u,k)`$ and $`Es^{2l-1}(u,k)`$ are eigenfunctions of the so-called Lamé equation with $`N=2`$, $$\left[-\frac{d^2}{du^2}+N(N+1)k^2\mathrm{sn}^2(u,k)\right]\mathrm{\Psi }(u,k)=\mathrm{\Omega }(k)\mathrm{\Psi }(u,k),$$ (27) with the boundary condition $$\mathrm{\Psi }(u+2K(k),k)=-\mathrm{\Psi }(u,k).$$ (28) The $`Ec^{2l-1}(u,k)`$ and $`Es^{2l-1}(u,k)`$ may further be supplemented by the following conditions (the eigenfunctions $`Ec`$ and $`Es`$ are defined differently from those in ref.): $`Ec^{2l-1}(-u,k)`$ $`=`$ $`+Ec^{2l-1}(u,k),`$ $`Es^{2l-1}(-u,k)`$ $`=`$ $`-Es^{2l-1}(u,k),`$ (29) and $`Ec^{2l-1}(u,k)`$ $`\to `$ $`{\displaystyle \frac{1}{\sqrt{\pi R}}}\mathrm{cos}\left((2l-1)u\right)\text{ as }k\to 0,`$ $`Es^{2l-1}(u,k)`$ $`\to `$ $`{\displaystyle \frac{1}{\sqrt{\pi R}}}\mathrm{sin}\left((2l-1)u\right)\text{ as }k\to 0.`$ (30) In the expansion (26), $`a^{\prime (2l-1)}`$ and $`b^{\prime (2l-1)}`$ correspond to normal modes around the background $`\varphi =\varphi _1(y)`$. If we set $`\mathrm{\Omega }(k)=(1+k^2)(1+\frac{m^2}{\mu ^2})`$, $`m^2`$ may correspond to a squared mass in ($`D-1`$)-dimensional Minkowski spacetime. The lowest five eigenvalues and eigenfunctions for the Lamé equation with $`N=2`$ are exactly known, and the eigenfunctions are given by the so-called Lamé polynomials. Only two of them satisfy the desired boundary condition (28), and are given by $`Ec^1(u,k)\propto \mathrm{cn}(u,k)\mathrm{dn}(u,k)`$ and $`Es^1(u,k)\propto \mathrm{sn}(u,k)\mathrm{dn}(u,k)`$ with $`m^2=0`$ and $`m^2=\left(\frac{3k^2}{1+k^2}\right)\mu ^2`$, respectively. Noting that $`Ec^1(\omega y,k)\propto \frac{d\varphi _1(y)}{dy}`$, we know that the mode $`a^{\prime (1)}`$ is really the massless Nambu-Goldstone mode associated with spontaneous breakdown of the translational invariance or the $`U(1)`$ symmetry. We have shown that the translational invariance for the $`S^1`$-direction is spontaneously broken in the model (1) with the boundary condition (7) when the radius $`R`$ becomes larger than the critical radius $`R_1^{*}`$. Our mechanism for breaking the translational invariance is not specific to this model at all. Let us briefly discuss a general strategy for constructing models in which the translational invariance of compactified spaces can be broken spontaneously. Suppose that some of the space dimensions are compactified on a manifold with translational invariance. Let $`V(\varphi _i)`$ be a scalar potential. Our mechanism may require $`V(\varphi _i)`$ to satisfy the following two conditions: 1. The origin $`\varphi _j=0`$ is not the minimum of the potential $`V(\varphi _i)`$. 2. Let $`\overline{\varphi _j}`$ be a configuration which minimizes $`V(\varphi _i)`$. Then, some of the $`\varphi _j`$ with $`\overline{\varphi _j}\ne 0`$ have to be non-singlets under some global symmetries of the theory. A key ingredient of our mechanism is to impose nontrivial boundary conditions on the non-singlet fields $`\varphi _j`$ with $`\overline{\varphi _j}\ne 0`$, which have to be consistent with the global symmetries of the theory. We would have a variety of models, since we have a wide choice of potentials, compactified spaces and boundary conditions.
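Returning to the Lamé problem (27), the identification of the Nambu-Goldstone mode can be checked numerically: for $`N=2`$ the function $`\mathrm{cn}(u,k)\mathrm{dn}(u,k)`$ should be an eigenfunction with eigenvalue $`\mathrm{\Omega }=1+k^2`$, i.e. $`m^2=0`$. A minimal finite-difference check (our sketch, assuming scipy, whose `ellipj` takes $`m=k^2`$):

```python
import numpy as np
from scipy.special import ellipj   # Jacobi sn, cn, dn; argument m = k^2

k = 0.7
m = k**2
u = np.linspace(-1.5, 1.5, 2001)
du = u[1] - u[0]
sn, cn, dn, _ = ellipj(u, m)
psi = cn * dn                      # candidate Ec^1, proportional to d(phi_1)/dy

# Lame operator (27) with N = 2, applied via a second-order finite difference
lhs = -(psi[2:] - 2*psi[1:-1] + psi[:-2]) / du**2 + 6*m*sn[1:-1]**2 * psi[1:-1]
residual = lhs - (1 + m) * psi[1:-1]   # eigenvalue Omega = 1 + k^2, i.e. m^2 = 0
print(np.abs(residual).max())          # ~1e-6: psi is indeed the Goldstone mode
```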
A general feature of our models will be that the translational invariance of compactified spaces is expected to be unbroken when the scales of the compactified spaces are sufficiently small, and to be broken spontaneously, together with some global symmetries, when the scales become large enough. Finally, we would like to make some comments on the vacuum structures of our models and on an application to supersymmetric field theories. In the limit $`R\to \infty `$ ($`k\to 1`$), $`\varphi (y)`$ in eq.(12) reduces to $`\sqrt{\frac{2}{\lambda }}\mu \mathrm{tanh}\left(\frac{\mu }{\sqrt{2}}(y-y_0)\right)`$. This is nothing but a (static) single kink solution in $`D=2`$ dimensions. So, the model considered here may be regarded as a real $`\varphi ^4`$ model on a single kink background sitting on a line in the limit $`R\to \infty `$. This observation suggests that models based on our breaking mechanism might be regarded as quantum field theories on (topologically) nontrivial backgrounds in a broken phase of the translational invariance. The second comment is that the vacuum structures of our models are expected to be quite nontrivial, in general. To see this, let us consider a simple extension of the model (1) obtained by replacing the real field $`\varphi `$ by a complex one $`\mathrm{\Phi }`$ with a $`U(1)`$ symmetry, and the boundary condition (7) by $`\mathrm{\Phi }(y+2\pi R)=e^{i2\pi \alpha }\mathrm{\Phi }(y)`$. One might expect that the vacuum structure is similar to that of the original real $`\varphi ^4`$ model. This is not, however, the case. In fact, as studied in ref., this model admits a rich set of solutions to the field equation for $`\mathrm{\Phi }(y)`$, and the vacuum configuration is quite different from that of the real $`\varphi ^4`$ model (since in ref. the field equation has been solved with the periodic boundary condition, we should reanalyze the field equation with the proper boundary condition). The final comment is that it would be worth applying our mechanism to supersymmetric field theories. In ref., it has been shown that our mechanism can be used to break supersymmetry spontaneously and that this SUSY breaking mechanism is new, that is, it is different from the O’Raifeartaigh and Fayet-Iliopoulos mechanisms. It would be of great importance to use this new SUSY breaking mechanism to construct phenomenologically interesting supersymmetric models. ACKNOWLEDGMENTS We would like to thank H. Hatanaka and C. S. Lim for useful discussions. K.T. would like to thank the I.N.F.N., Sezione di Pisa, for hospitality.
## 1 Introduction In this note, we consider the Ising quantum chain with ferromagnetic exchange couplings $`\epsilon _j>0`$ which follow an, in general, non-periodic sequence of finitely many different values. For convenience, we choose the (constant) transversal field to be equal to one, and consider the following Hamiltonians $$H^{(\pm )}=-\frac{1}{2}\left(\sum _{j=1}^{N}\epsilon _j\sigma _j^x\sigma _{j+1}^x+\sum _{j=1}^{N}\sigma _j^z\right)$$ (1.1) of (anti-)periodic approximants ($`\sigma _{N+1}^x=\pm \sigma _1^x`$) obtained by truncating the sequence of exchange couplings to the first $`N`$ elements. Here, $`\sigma _j^\alpha `$ denotes the Pauli matrix $`\sigma ^\alpha `$ acting on site $`j`$ of an $`N`$-fold tensor product space. Note that $`H^{(\pm )}`$ commutes with the operator $$Q=\prod _{j=1}^{N}\sigma _j^z,Q^2=Id,$$ (1.2) which has eigenvalues $`\pm 1`$. We denote the projectors onto the corresponding eigenspaces (sometimes also called sectors) by $$P_\pm =\frac{1}{2}(Id\pm Q).$$ (1.3) In ref., the Hamiltonian (1.1) is formulated in terms of fermionic operators by means of a Jordan-Wigner transformation and diagonalized by a suitable Bogoljubov-Valatin transformation. To be more precise, it is in fact the so-called mixed sector Hamiltonian $`\stackrel{~}{H}^{(+)}`$ defined by $$\stackrel{~}{H}^{(\pm )}=H^{(\pm )}P_++H^{(\mp )}P_{}$$ (1.4) which is considered. This is necessary since the Jordan-Wigner transformation of $`H^{(\pm )}`$ produces non-local boundary terms in the fermionic operators whereas the above Hamiltonians (1.4) correspond to periodic (resp. antiperiodic) boundary conditions in terms of the fermions. Let us cite three results of ref. which are central to our subsequent discussion. The system described by the Hamiltonian (1.1) resp. (1.4) is critical at $`\mu =0`$ with $$\mu =\lim _{N\to \infty }\frac{1}{N}\sum _{j=1}^{N}\mathrm{log}(\epsilon _j).$$ (1.5) The dispersion relation for the low-energy one-particle excitations with energy $`\mathrm{\Lambda }`$ takes the following form $$\mathrm{\Lambda }^2=v^2(q^2+\mu ^2),$$ (1.6) where $`q`$ denotes the momentum, and $`v`$, the velocity of elementary excitations, is given by $$\frac{1}{v^2}=\lim _{N\to \infty }\frac{1}{N^2}\sum _{j=1}^{N}\sum _{k=1}^{N}\prod _{\ell =1}^{k}\epsilon _{j+\ell -1}^{-2},$$ (1.7) provided the latter limit exists. In what follows, we consider sequences of coupling constants which are obtained by substitution rules, focussing on the case of two-letter substitution rules (although most properties can be generalized quite easily to the $`n`$-letter case). ## 2 Substitution Rules and Matrices We consider two-letter substitution rules $$\varrho :\begin{array}{ccc}a& \to & w_a\\ b& \to & w_b\end{array}$$ (2.1) where $`w_a`$ and $`w_b`$ are words in $`a`$ and $`b`$ (we do not allow for inverses of $`a`$ or $`b`$ here). If one defines multiplication of words through concatenation, the action of $`\varrho `$ is extended to arbitrary words in $`a`$ and $`b`$ via the homomorphism property $`\varrho (w_aw_b)=\varrho (w_a)\varrho (w_b)`$, for details see ref. and references included therein.
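To make the homomorphism property concrete, here is a minimal sketch (ours) of how such a rule acts on words, using the Silver Mean rule $`a\to aab`$, $`b\to a`$ that appears in Table 1 below; since $`w_a`$ begins with $`a`$, each iterate is a prefix of the next:

```python
def apply_rule(rule, word):
    # extend the substitution to whole words via the homomorphism property
    return "".join(rule[letter] for letter in word)

rho = {"a": "aab", "b": "a"}   # the Silver Mean rule of Table 1 below
w = "a"
for n in range(1, 5):
    w_next = apply_rule(rho, w)
    print(n, len(w_next), w_next.startswith(w))  # lengths 3, 7, 17, 41; prefix property
    w = w_next
```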
To $`\varrho `$ we associate a $`2\times 2`$ matrix $`𝑹_\varrho `$ $$𝑹_\varrho =\left(\begin{array}{cc}\mathrm{\#}_a(w_a)\hfill & \mathrm{\#}_a(w_b)\hfill \\ \mathrm{\#}_b(w_a)\hfill & \mathrm{\#}_b(w_b)\hfill \end{array}\right)$$ (2.2) whose elements count the number of $`a`$’s and $`b`$’s in the words $`w_a`$ and $`w_b`$, respectively. Note that we use the transpose matrix in comparison with ref. because we need only the statistical eigenvectors in our discussion. They are then the right-eigenvectors of $`𝑹_\varrho `$. With this convention, one also has $`𝑹_{\varrho \sigma }=𝑹_\varrho 𝑹_\sigma `$. By using the substitution rule $`\varrho `$ of (2.1) iteratively on an initial word $`w(0)`$, say $`w(0)=a`$ for definiteness, one obtains a sequence of words $`w(n)=\varrho (w(n-1))`$. Since we are interested in sequences which have a unique limit word $`w`$, we restrict ourselves to substitution rules where $`w_a`$ begins with the letter $`a`$. In this case, the sequence $`w(n)=\varrho (w(n-1))`$ obviously commences with $`w(n-1)`$ and thus each iteration only appends letters to the previous word. The length (i.e., the number of letters) of the word $`w(n)`$ is given by $`f_\varrho (n)`$ defined as follows $`e_\varrho (n)`$ $`=`$ $`\left(\begin{array}{c}e_\varrho ^{(a)}(n)\hfill \\ e_\varrho ^{(b)}(n)\hfill \end{array}\right)=𝑹_\varrho e_\varrho (n-1),e_\varrho (0)=\left(\begin{array}{c}1\hfill \\ 0\hfill \end{array}\right),`$ (2.7) $`f_\varrho (n)`$ $`=`$ $`e_\varrho ^{(a)}(n)+e_\varrho ^{(b)}(n).`$ (2.8) We denote the eigenvalues of $`𝑹_\varrho `$ in (2.2) by $`\lambda _\varrho ^{(\pm )}`$ where $`\lambda _\varrho ^{(+)}`$ stands for the Perron-Frobenius eigenvalue. The corresponding statistically normalized eigenvector is determined by $$𝑹_\varrho \left(\begin{array}{c}p_a\hfill \\ p_b\hfill \end{array}\right)=\lambda _\varrho ^{(+)}\left(\begin{array}{c}p_a\hfill \\ p_b\hfill \end{array}\right),p_a+p_b=1.$$ (2.9) The eigenvalue $`\lambda _\varrho ^{(+)}`$ determines the asymptotic inflation factor for one substitution, whereas $`p_a`$ ($`p_b`$) is the frequency of the letter $`a`$ (resp. $`b`$) in the limit word. Let us now discuss under which conditions $`\lambda ^{(-)}`$ contains some information about fluctuations. ## 3 Fluctuations Consider the truncated sequences $`w^{(N)}`$ of the first $`N`$ letters of the limit word $`w`$ obtained from the substitution rule (2.1) with initial word $`a`$. To measure the fluctuation, we define $$g(N)=\mathrm{\#}_a(w^{(N)})-p_aN,g_n=g(f_\varrho (n)),$$ (3.1) and $$h(N)=\underset{M\le N}{\mathrm{max}}|g(M)|,h_n=h(f_\varrho (n)).$$ (3.2) Note that it does not matter whether we look at fluctuations in the frequency of the letter $`a`$ or $`b`$ since $`\mathrm{\#}_a(w^{(N)})+\mathrm{\#}_b(w^{(N)})=N`$ and $`p_a+p_b=1`$. Hence, $`g(N)`$ just changes sign if one replaces $`a`$ by $`b`$ in Eqs. (3.1) and (3.2). Of course, if $`\lambda _\varrho ^{(+)}`$ is not degenerate, $`\lim _{N\to \infty }(g(N)/N)=0`$. The behaviour of $`g(N)`$ for words of length $`N=f_\varrho (n)`$ which correspond to proper (or complete) iteration steps is governed by the second largest eigenvalue $`\lambda ^{(-)}`$. In fact, writing the starting vector $`e_\varrho (0)`$ (2.8) as a linear combination of the eigenvectors of $`𝑹_\varrho `$ (2.2), one easily verifies $`g_n\propto (\lambda ^{(-)})^n`$. Hence $`|\lambda ^{(-)}|<1`$ implies that $`g_n`$ converges to zero for $`n\to \infty `$. On the other hand, if $`|\lambda ^{(-)}|>1`$ then $`g_n`$ in general diverges.
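The role of $`\lambda ^{(-)}`$ is easy to see numerically. A small sketch (ours, assuming numpy) for the Period-Doubling rule $`a\to ab`$, $`b\to aa`$ of Table 1, whose substitution matrix has $`\lambda ^{(+)}=2`$ and $`\lambda ^{(-)}=-1`$, so that $`|g_n|`$ stays constant, the marginal case discussed below:

```python
import numpy as np

# substitution matrix (2.2) for the Period-Doubling rule a -> ab, b -> aa
R = np.array([[1, 2],
              [1, 0]])
lam, vec = np.linalg.eig(R)
i = np.argmax(lam.real)             # Perron-Frobenius eigenvalue lambda^(+)
p = vec[:, i].real
p /= p.sum()                        # statistical normalization, eq (2.9)
print(lam.real, p)                  # eigenvalues 2 and -1; (p_a, p_b) = (2/3, 1/3)

# g_n = #_a(w(n)) - p_a f(n) at complete iteration steps, eqs (2.7) and (3.1)
e = np.array([1, 0])
for n in range(1, 9):
    e = R @ e
    print(n, e[0] - p[0] * e.sum())  # alternates +-1/3: |g_n| is constant
```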
In the limiting case $`|\lambda ^{(-)}|=1`$, $`|g_n|`$ is constant and therefore bounded away from zero and infinity. This observation brings along, once again, the concept of Pisot-Vijayaraghavan numbers (PV-numbers for short). They are real algebraic integers $`\vartheta >1`$ all of whose algebraic conjugates (except $`\vartheta `$ itself) lie inside the unit circle. If the characteristic polynomial of $`𝑹_\varrho `$ is irreducible over the integers and if the Perron-Frobenius eigenvalue is larger than 1, the PV-property really is what determines the conformal nature of the critical point. The same seems still to be true if the characteristic polynomial is reducible but all eigenvalues except the largest one lie inside the unit circle – in which case we say that the underlying substitution has the bounded fluctuation property. However, the reducible case requires some care, as we will demonstrate by an example. To understand why this is so important, one has to realize that these considerations only apply to words which are obtained by proper iteration steps. In between, fluctuations can behave quite differently, especially in the so-called marginal case $`|\lambda ^{(-)}|=1`$. There, depending on the actual substitution rule (and not on the substitution matrix alone), one can have the situation that $`h(N)`$ is bounded or that $`h(N)`$ diverges logarithmically with $`N`$ (or, in other words, $`h_n`$ diverges linearly with $`n`$). If $`|\lambda ^{(-)}|>1`$, $`h(N)`$ can diverge like a power law. Again, polynomials reducible over the integers are to be treated carefully, in particular for generalizations to the $`n`$-letter case. As an illustrative example, consider the substitution rules that have the substitution matrix $$𝑹_\varrho =\left(\begin{array}{cc}2\hfill & 1\hfill \\ 1\hfill & 2\hfill \end{array}\right)$$ (3.3) with eigenvalues $`\lambda ^{(+)}=3`$ and $`\lambda ^{(-)}=1`$, and $`p_a=p_b=1/2`$. To obtain a unique limit word, we want $`w_a`$ to commence with $`a`$, which leaves us with six different substitution rules: $$\begin{array}{ccc}\varrho _1:\begin{array}{ccc}a& \to & aab\\ b& \to & abb\end{array}\hfill & \varrho _2:\begin{array}{ccc}a& \to & aab\\ b& \to & bab\end{array}\hfill & \varrho _3:\begin{array}{ccc}a& \to & aab\\ b& \to & bba\end{array}\hfill \\ \varrho _4:\begin{array}{ccc}a& \to & aba\\ b& \to & abb\end{array}\hfill & \varrho _5:\begin{array}{ccc}a& \to & aba\\ b& \to & bab\end{array}\hfill & \varrho _6:\begin{array}{ccc}a& \to & aba\\ b& \to & bba\end{array}\hfill \end{array}$$ (3.4) Of these, $`\varrho _5`$ is special in the sense that it leads to the periodic sequence $`ababab\mathrm{\dots }`$ which means that in this case the fluctuations are certainly bounded (since in any finite part the numbers of $`a`$’s and $`b`$’s differ at most by one). As it turns out, this is only true for this special sequence; in all five other cases $`h_n`$ grows linearly with $`n`$. More precisely, one observes $$2h_n=\{\begin{array}{cc}n+1\hfill & \text{for }\varrho _1\text{, }\varrho _2\text{, and }\varrho _3\hfill \\ n\hfill & \text{for }\varrho _4\hfill \\ 1\hfill & \text{for }\varrho _5\hfill \\ \mathrm{max}(1,n-1)\hfill & \text{for }\varrho _6\hfill \end{array}$$ (3.5) for $`n\ge 0`$. In Fig. 1, we show the different behaviour of the fluctuations $`g(N)`$ for four typical substitution rules, namely the Thue-Morse sequence (a), the Silver Mean sequence (b) (sometimes also called “Octonacci” sequence), the Period-Doubling sequence (c), and the Binary non-Pisot sequence (d).
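Eq.(3.5) is easy to reproduce by direct enumeration. A small sketch (ours) that builds $`w(n)`$ for each of the six rules and measures $`h_n`$ along the word:

```python
def substitute(rule, word, n):
    for _ in range(n):
        word = "".join(rule[c] for c in word)
    return word

rules = {                   # the six rules of eq (3.4), all with matrix (3.3)
    1: {"a": "aab", "b": "abb"}, 2: {"a": "aab", "b": "bab"},
    3: {"a": "aab", "b": "bba"}, 4: {"a": "aba", "b": "abb"},
    5: {"a": "aba", "b": "bab"}, 6: {"a": "aba", "b": "bba"},
}
pa = 0.5
for i, rule in rules.items():
    w = substitute(rule, "a", 5)          # complete iteration step n = 5
    g, h = 0.0, 0.0
    for c in w:
        g += (1 - pa) if c == "a" else -pa   # running #_a(M) - pa * M
        h = max(h, abs(g))
    print(i, 2 * h)   # compare eq (3.5) at n = 5: 6, 6, 6, 5, 1, 4
```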
The substitution rules which define these sequences, together with their statistical properties, are summarized in Table 1. Fig. 2 shows the quantities $`|g_n|`$ (3.1) and $`h_n`$ (3.2) for the four different sequences. The marginal case of the Period-Doubling sequence ($`\lambda ^{(-)}=-1`$) clearly shows the linear divergence of $`h_n`$ in $`n`$ whereas $`|g_n|`$ is constant.

Table 1: Four typical substitution rules and the statistical properties of the corresponding sequences

sequence | substitution rule | eigenvalues $`\lambda ^{(\pm )}`$ | values of $`p_a`$ and $`p_b`$ | $`p_a/p_b`$
Thue-Morse | $`\varrho _{tm}:a\to ab,\ b\to ba`$ | $`2`$, $`0`$ | $`\frac{1}{2}`$, $`\frac{1}{2}`$ | $`1`$
Silver Mean | $`\varrho _{sm}:a\to aab,\ b\to a`$ | $`1\pm \sqrt{2}`$ | $`1-\frac{\sqrt{2}}{2}`$, $`\frac{\sqrt{2}}{2}`$ | $`\sqrt{2}-1`$
Period-Doubling | $`\varrho _{pd}:a\to ab,\ b\to aa`$ | $`2`$, $`-1`$ | $`\frac{2}{3}`$, $`\frac{1}{3}`$ | $`2`$
Binary non-Pisot | $`\varrho _{bnp}:a\to ab,\ b\to aaa`$ | $`\frac{1\pm \sqrt{13}}{2}`$ | $`\frac{5-\sqrt{13}}{2}`$, $`\frac{-3+\sqrt{13}}{2}`$ | $`\frac{1+\sqrt{13}}{2}`$

## 4 Critical Point and Fermion Velocity We now come back to the Hamiltonian (1.4). We relate the coupling constants $`\epsilon _j`$ to a substitution rule $`\varrho `$ (2.1) in the following way: $$\epsilon _j=\{\begin{array}{cc}\epsilon _a\hfill & \text{if the }j\text{-th letter in }w\text{ is }a\hfill \\ \epsilon _b\hfill & \text{if the }j\text{-th letter in }w\text{ is }b\text{.}\hfill \end{array}$$ (4.1) The condition $`\mu =0`$ (1.5) for criticality now reads $$p_a\mathrm{log}(\epsilon _a)+p_b\mathrm{log}(\epsilon _b)=0$$ (4.2) which has the following one-parameter solution $$\epsilon _a=r^{p_b},\epsilon _b=r^{-p_a},$$ (4.3) where $`r>0`$ is any positive real number. As is well known (compare and references therein), the finite-size scaling limit of the periodic Ising quantum chain (which corresponds to $`r=1`$) is described by the $`c=1/2`$ conformal field theory of a free Majorana fermion. To obtain a conformally invariant scaling limit one has to have a linear dispersion relation at criticality, which means that the limit in Eq. (1.7) must exist. This in turn is only to be expected if the fluctuations $`g(N)`$ remain bounded, i.e., $`h_n<S(\varrho )`$ for all $`n`$, since at criticality one obtains from Eqs. (3.1) and (4.3) $$\prod _{j=1}^{N}\epsilon _j=r^{g(N)}$$ (4.4) and all products of this type enter in Eq. (1.7). On the other hand, if the limit in Eq. (1.7) does exist, the scaling limit will be the same as for the periodic quantum Ising chain. Although we are not going to use it in this paper, we should add that there is a very elegant and efficient way to investigate this kind of system by means of the corresponding trace map (compare and references therein). In this context, the critical point is obtained from a unique one-parameter family of bounded orbits in the accessible phase space region of the trace map. ## 5 Fermion and Conformal Spectrum We now consider the spectrum of the Hamiltonian (1.4) at criticality. It is described in terms of $`N`$ fermion frequencies $`\mathrm{\Lambda }_k\ge 0`$. In Fig.
3, we present the integrated density of the $`\mathrm{\Lambda }_k`$ (which we divided by their largest value for convenience) for coupling constants defined by $`r=2`$ (4.3) and several sizes of the chain. In general there are no exact degeneracies in the spectrum, in contrast to the periodic case. However, as the size of the system increases, the frequencies tend to accumulate, which creates the nearly vertical steps in Fig. 3, especially close to the maximal frequency. The plots show characteristic gaps in the fermion spectrum, the locations of which (on the vertical axis) are in accordance with the general gap labeling theorem. We should mention though that the Thue-Morse chain does not show closed gaps here, in contrast to the situation with electronic spectra of Schrödinger operators, a phenomenon that deserves further exploration. For models whose continuum limit is described by a conformal field theory, conformal invariance specifies the behaviour of the low-energy excitations in the infinite size limit $`N\to \infty `$. Essentially, they have to show a leading $`1/N`$ behaviour and the level and degeneracy structure of the spectrum (after appropriate overall scaling of the gaps) is described by representations of the Virasoro algebra with central extension $`c`$ which is the central charge of the conformal field theory (see e.g.). This of course means that it is the lower part of the integrated fermion density shown in Fig. 3 which is important for the conformal spectra. To have a closer look at this part, we consider the scaled energy gaps $$\overline{E_j}=\frac{N}{2\pi v}\left(E_j-E_0\right)$$ (5.1) of the Hamiltonian $`\stackrel{~}{H}^{(+)}`$ (1.4), where $`E_0`$ denotes the ground-state energy and $`v`$ is the velocity of elementary excitations. For the Thue-Morse and the Silver Mean sequences, $`v`$ is finite (in the limit $`N\to \infty `$ and for finite $`r>0`$) and given by $$v=\{\begin{array}{cc}\left(\frac{r^{1/2}+r^{-1/2}}{2}\right)^{-2}\hfill & \text{for the Thue-Morse chain}\hfill \\ \left(\frac{r-r^{-1}}{2}\right)^{-1}\mathrm{log}(r)\hfill & \text{for the Silver Mean chain.}\hfill \end{array}$$ (5.2) In fact, the second value applies generally to all quasiperiodic sequences which can be obtained from a certain section through a higher-dimensional periodic structure via the dualization method. On the other hand, the symmetric random dimer chain yields the same result as the Thue-Morse chain. It thus represents a disordered model (with bounded fluctuations) which nevertheless shows the same critical behaviour as the periodic Ising chain and, in particular, leads to a conformally invariant continuum limit. From our discussion of the fluctuations and from Eq. (1.7) above it is clear that for the Period-Doubling and Binary non-Pisot sequences the fermion velocity $`v`$ should vanish. In Fig. 4, the scaled spectrum of low-energy excitations is shown for our four exemplary sequences. For the Thue-Morse and Silver Mean chains, the fermion velocity (5.2) is used, whereas for the other two sequences we simply normalize the first gap to $`1/2`$. (We did not systematically investigate the scaling laws here, although this might give independent access to the critical exponents calculated in ref.)
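For the normalization used in Fig. 4, the velocities from eq.(5.2) at $`r=2`$ are easily evaluated (our sketch, using the signs as restored in eq.(5.2)):

```python
import numpy as np

r = 2.0
v_tm = ((np.sqrt(r) + 1/np.sqrt(r)) / 2.0) ** -2   # Thue-Morse chain
v_sm = np.log(r) / ((r - 1/r) / 2.0)               # Silver Mean chain
print(v_tm, v_sm)   # ~0.889 and ~0.924; both approach 1 in the periodic limit r -> 1
```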
The formation of the so-called conformal towers can be seen clearly for the spectra of the Thue-Morse and Silver Mean chain (a look at the actual data confirms that the degeneracies are those predicted from conformal invariance), whereas the spectra of the other two systems do not show any apparent regularities. As expected, the normalization factors grow with the size of the system in these cases. As a consequence, the nature of the critical point is quite different, compare , and conformal invariance is lost. ## 6 Concluding Remarks The interest in conformally invariant phase transitions and the almost immediate study of quasiperiodic systems of the Fibonacci type (see e.g. ) originally has led to the somewhat misleading conclusion that conformal invariance is robust with respect to many sorts of order or even disorder. Although this is true for quasiperiodic Ising quantum chains (because that implies the PV-property or the bounded fluctuation property ), many other ordered structures can be defined which destroy conformal invariance at the critical point. Even more, one can consider the bounded fluctuation situation to be somewhat exceptional (e.g., PV-numbers are nowhere dense in $`[1,\mathrm{})`$, see ) and thus conclude that the general case of a non-periodic Ising quantum chain does not lead to a conformally invariant scaling limit. ## Acknowledgements It is a pleasure to thank Dieter Joseph for critically reading the manuscript. M. B. would like to thank Paul A. Pearce and the Department of Mathematics, University of Melbourne, for hospitality, where part of this work was done. This work was supported by Deutsche Forschungsgemeinschaft. Figure 1: Fluctuations $`g(N)`$, compare (3.1), for the following sequences: (a) Thue-Morse, (b) Silver Mean, (c) Period-Doubling, and (d) Binary non-Pisot. The lengths of sequences which correspond to complete iteration steps are indicated. Figure 2: Fluctuations $`|g_n|`$ (3.1) (Figure 2A) and $`h_n`$ (3.2) (Figure 2B) for the following sequences: (a) Thue-Morse, (b) Silver Mean, (c) Period-Doubling, and (d) Binary non-Pisot. Figure 3: Integrated density of normalized fermion frequencies for the Hamiltonian $`\stackrel{~}{H}^{(+)}`$ (1.4). The coupling constants are given by Eq. (4.3) with $`r=2`$ and (a1)–(a3) the Thue-Morse sequence with $`n=5,6,7`$, (b1)–(b3) the Silver Mean sequence with $`n=4,5,6`$, (c1)–(c3) the Period-Doubling sequence with $`n=5,6,7`$, and (d1)–(d3) the Binary Non-Pisot sequence with $`n=4,5,6`$, where $`n`$ denotes the number of iterations. The spectra do not have true degeneracies, and the integrated density reaches 1 in all cases shown. Figure 4: Scaled low-energy spectra of $`\stackrel{~}{H}^{(+)}`$ (1.4) for couplings obtained from Eq. (4.3) with $`r=2`$ and (a1)–(a3) the Thue-Morse sequence with $`n=6,7,8`$, (b1)–(b3) the Silver Mean sequence with $`n=5,6,7`$, (c1)–(c3) the Period-Doubling sequence with $`n=6,7,8`$, and (d1)–(d3) the Binary Non-Pisot sequence with $`n=5,6,7`$, where $`n`$ denotes the number of iterations. Normalization of (a) and (b) is taken from (5.2), while for (c) and (d) the first gap is normalized to $`1/2`$.
# Far-Infrared and Submillimeter Observations of High Redshift Galaxies ## Introduction Key scientific questions about the Universe that have been raised at this meeting and elsewhere include: • What is the history of star formation in the Universe? • What is the history of metallicity and dust content in the Universe? • What is the origin of the extragalactic background observed by the DIRBE experiment on COBE? • What are the relative contributions of stars and of active galactic nuclei to the luminosity of the Universe, and how do they vary with redshift? In this paper, I will argue that observations in the far-infrared and submillimeter wavelength region (40 – 1000 $`\mu `$m, corresponding to rest wavelengths in the range 5 – 300 $`\mu `$m for galaxies at $`z=2`$ – $`5`$) offer a unique probe of the high redshift Universe that will address these questions. The happy coincidence of several astronomical facts makes far-infrared and submillimeter observations particularly powerful. First, galaxies are extremely luminous at far-infrared rest wavelengths. The spectrum of our own Milky Way galaxy, for example, shown in Figure 1, exhibits two distinct peaks, the first at around 1 $`\mu `$m resulting from the integrated emission from stars, and the second at around 100 $`\mu `$m resulting from interstellar dust emission. The representation given here (in which equal areas correspond to equal rates of photon emission) shows that most of the Galaxy’s photons emerge in the far-infrared region. The Milky Way is entirely unremarkable in this regard: indeed, in many starburst galaxies even the energy output is dominated by far-infrared radiation. The strength of the far-infrared emission from galaxies simply reflects the fact that the average visual extinction is sufficient to allow a significant fraction of the starlight to be reprocessed by interstellar dust. Second, the opacity of interstellar dust is a strongly decreasing function of wavelength, allowing embedded regions of star formation and nuclear activity that are invisible at optical and even near-infrared wavelengths to be detected in the far-infrared and submillimeter spectral regions. A second implication of the strong wavelength-dependence of the dust opacity is that the submillimeter region provides a unique cosmological window to the high redshift Universe, the background caused by dust emission from $`z\approx 0`$ (local galaxies) dropping rapidly with increasing wavelength longward of $`\sim 200\mu `$m and the background from $`z=1500`$ (the CMB) dropping rapidly with decreasing wavelength shortward of $`\sim 800\mu `$m. Third, the 5 – 300 $`\mu `$m (rest) wavelength range is extremely rich in atomic and molecular diagnostics that can serve as powerful probes of the physics and chemistry of interstellar gas and dust. The remarkable richness of the mid- and far-infrared spectral region is demonstrated by the Infrared Space Observatory (ISO) spectrum of the Orion region (van Dishoeck et al. 1998) in Figure 2, which shows numerous rotational lines of H<sub>2</sub> and H<sub>2</sub>O, fine structure emissions from a wide variety of atomic ions, as well as several broader features associated with interstellar dust. In the next section, I will discuss the potential importance of such spectral features in constraining the properties of high-redshift galaxies. ## Emission mechanisms at far-infrared and submillimeter wavelengths Interstellar dust is the dominant source of far-infrared and submillimeter radiation in galaxies.
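The correspondence between the observed 40 – 1000 $`\mu `$m band and the rest-frame coverage quoted in the Introduction is just $`\lambda _{\mathrm{rest}}=\lambda _{\mathrm{obs}}/(1+z)`$; a two-line sketch of the arithmetic:

```python
# rest wavelengths probed by the 40-1000 micron band at the quoted redshifts
for z in (2, 5):
    print(z, 40.0 / (1 + z), 1000.0 / (1 + z))   # z=2: 13-333 um; z=5: 7-167 um
```

Taken together over $`z=2`$ – $`5`$, this roughly reproduces the 5 – 300 $`\mu `$m rest-frame range quoted above.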
Recent SCUBA observations of dust continuum radiation – reported, for example, at this meeting by Ian Smail – have already demonstrated the power of submillimeter observations to probe the Universe at high redshift. Some – although by no means all – of the identified SCUBA sources are galaxies at redshifts $`z>2`$ (e.g. Ivison et al. 1998), and the $`25\%`$ of SCUBA sources for which no optical counterpart can be found may well be sources at very high redshift (or alternatively low-redshift galaxies or AGN that are very heavily extinguished). Although the field is currently in its infancy, observations of dust continuum radiation will ultimately allow the the effects of dust absorption to be corrected for quantitatively in models for the luminosity history of the Universe, and will elucidate the relative contribution of sources at different redshifts to the extragalactic background detected by the DIRBE experiment on COBE (Hauser et al. 1998). Although it shows a continuum spectrum, emission from dust is not featureless. It exhibits several broad features of large equivalent width, most notably the silicate feature at 9.7 $`\mu `$m and several bands in the 3.3 – 11.3 $`\mu `$m range that have been attributed to polycyclic aromatic hydrocarbons (Allamandola, Tielens & Barker 1985), and these features may allow redshifts to be estimated from far-infrared observations of very modest spectral resolving power. Interstellar gas emits a rich spectrum of atomic and molecular line radiation in the 5 – 300 $`\mu `$m range, which – although a negligible contribution to the overall far-infrared and submillimeter emission – dominates the cooling of the interstellar gas and provides valuable diagnostics of the physical and chemical conditions. Fine-structure emissions from the low-ionization species C<sup>+</sup> and O dominate the cooling of neutral atomic gas clouds. The C<sup>+</sup> $`{}_{}{}^{2}P_{3/2}^{}^2P_{1/2}`$ line at 158 $`\mu `$m has an upper state energy ($`E_u/k`$) corresponding to only 92 K and is therefore readily excited in cold atomic clouds. In most galaxies, the C<sup>+</sup> 158 $`\mu `$m line is the strongest source of line emission and accounts for 0.2 – 1$`\%`$ of the total far-infrared luminosity (Malhotra et al. 1997), this percentage representing the fraction of the absorbed radiant energy from stars that is deposited in the interstellar gas rather than the dust.<sup>1</sup><sup>1</sup>1Note, however, that the relative strength of the C<sup>+</sup> 158 $`\mu `$m line is considerably smaller in those galaxies that show the strongest far-infrared continuum emission. Malhotra et al. (1997) have argued that this effect likely arises because the larger ultraviolet fluxes incident upon cold clouds within such galaxies lead to larger positive charges on the interstellar dust grains and a resultant decrease in the efficiency of grain photoelectric emission that is the primary mechanism for heating the gas. Spectroscopic observations of the C<sup>+</sup> 158 $`\mu `$m line along with the O 63$`\mu `$m and 145$`\mu `$m lines ($`{}_{}{}^{3}P_{1}^{}^3P_2`$ and $`{}_{}{}^{3}P_{0}^{}^3P_1`$ with $`E_u/k=227`$ and 326 K respectively) from high-redshift galaxies will yield reliable redshifts and will allow the heating rate for the interstellar gas to be determined. In dense regions of the interstellar medium that are well shielded from starlight, the gas is primarily molecular and its emission is dominated by rotational emissions from molecules. 
Gas temperatures in molecular regions range from $`10\mathrm{K}`$ in quiescent clouds to several hundred Kelvin in gas that has been heated by a nearby star or protostar, or even several thousand Kelvin in shocked regions. The radiative cooling of molecular clouds is an essential feature of the star formation process, because cloud collapse involves the conversion of gravitational potential energy to thermal energy and can proceed only if the latter is efficiently removed. Theoretical calculations (e.g. Neufeld, Lepp & Melnick 1995) predict that over a wide range of physical conditions the radiative cooling of molecular gas is dominated by rotational transitions of the molecules H<sub>2</sub>, CO and H<sub>2</sub>O in the 7 – 600 $`\mu `$m region. At low temperatures, submillimeter transitions of CO are the primary coolant, while at higher temperatures pure rotational lines of H<sub>2</sub> (e.g. the S(0), S(1), S(2), S(3), S(4) and S(5) lines at 28.3, 17.0, 12.3, 9.66, 8.03, 6.91 $`\mu `$m) and of H<sub>2</sub>O (many lines in the 40 - 600 $`\mu `$m region) are expected to dominate the cooling. This prediction is corroborated by recent ISO observations of H<sub>2</sub> and H<sub>2</sub>O emissions from nearby regions of star formation (e.g. van Dishoeck et al. 1998, see Figure 2; Harwit et al. 1998) and well as by extensive ground-based observations of CO carried out previously toward both nearby and high-redshift galaxies (e.g. Omont et al. 1996). Measurements of line ratios permit the density, temperature, and molecular abundances within the molecular gas to be constrained. In addition to probing cold atomic and molecular gas, far-infrared and submillimeter observations of high redshift galaxies also promise to yield invaluable information about photoionized regions. Many galaxies are luminous sources of mid IR fine structure emissions from NeII (12.8 $`\mu `$m), OIII (52, 88 $`\mu `$m), NeIII (15.6, 36.0 $`\mu `$m), NeV (14.3, 24.2 $`\mu `$m) and several other ions that result from photoionization by radiation shortward of the Lyman limit. Such mid-IR lines provide unique information about the metallicity and gas density in ionized regions, as well the spectral shape of the ionizing radiation field (e.g. Voit 1992). These transitions show several important advantages over the optical wavelength lines traditionally used to study HII regions: they are not heavily extinguished by interstellar dust; their luminosities are only weakly dependent on temperature and therefore provide model-independent estimates of metallicity; and they provide line ratios that are useful diagnostics of density over a wide dynamic range (e.g. Spinoglio & Malkan 1992). The availability of rare gas elements (e.g. Ne, Ar) allows the metallicity to be determined without the complicating effects of interstellar depletion, and the availability of a wide range of ionization states (e.g. NeII and NeV) provides an excellent discriminant between regions that are ionized by hot stars and those that are ionized by a harder source of radiation such as an AGN. The power of that discriminant has been demonstrated by the recent ISO observations shown in Figure 3 (Moorwood et al. 1996). Here the otherwise similar spectra of the starburst galaxy M82 and the Circinus galaxy (which contains an active nucleus) are distinguished by the presence in Circinus of a variety of highly ionized species such as NeV and Ne VI that can only result from a very hard source of ionizing radiation. 
Figure 4, from the review paper of Hollenbach & Tielens (1997), is a schematic representation of the interaction between starlight and the interstellar medium that summarizes the various far-infrared and submillimeter emission mechanisms described above. Most of the radiant energy from starlight is deposited at moderate visual extinctions, $`A_V1`$. Roughly 99$`\%`$ of the starlight heats the interstellar dust and is reprocessed as far-infrared continuum radiation, while very roughly $`1\%`$ heats the gas and is reprocessed as line radiation (primarily the C<sup>+</sup> 158 $`\mu `$m line). At visual extinctions $`A_V>3`$ (rightmost region), the gas is primarily molecular, and the gas cooling is dominated by infrared and submillimeter rotational lines of molecules, particularly H<sub>2</sub>, CO, and H<sub>2</sub>O. In unshielded regions of high ultraviolet flux (leftmost region), the gas is highly ionized, and the cooling is dominated by optical and ultraviolet line emission. In this zone, mid-infrared fine structure lines, although not the major coolant, are powerful diagnostic probes. ## Observational capabilities in the next decade The next decade (2001 – 2010) promises substantial improvements in observational capabilities in the far-infrared and submillimeter spectral regions, thanks to several new observatories that are expected to begin operations. My goal in this article is not to give a comprehensive review of all these new facilities but rather to discuss briefly selected observatories that will offer capabilities most directly relevant to the study of galaxies at high redshift. The Space Infrared Telescope Facility (SIRTF)<sup>2</sup><sup>2</sup>2The SIRTF home page is at http://sirtf.jpl.nasa.gov, scheduled for launch at the end of 2001, will deploy a liquid helium cooled 85 cm telescope capable of carrying out observations of extremely high sensitivity. The Multiband Imaging Photometer (MIPS) instrument on board SIRTF will offer diffraction-limited imaging using sensitive detector arrays at wavelengths of 24, 70 and 160 $`\mu `$m, as well as very low resolution ($`\lambda /\mathrm{\Delta }\lambda 10`$) spectroscopy in the 50 – 100 $`\mu `$m region. The principal limitation of SIRTF for the detection of galaxies at far-infrared wavelengths is the large size of the diffraction-limited beam: particularly at 160 $`\mu `$m, MIPS will be source confusion limited for observations of relatively short duration. The Infrared Spectrograph (IRS) instrument will be capable of moderate resolution spectroscopy ($`\lambda /\mathrm{\Delta }\lambda 600`$) over the 10 – 37 $`\mu `$m range, with a large spectral multiplex advantage that will allow full, high-quality spectra to be obtained very much more quickly than was possible with ISO. While the wavelength coverage of IRS does not quite reach the 40 – 1000 $`\mu `$m range that is the subject of this article, IRS deserves mention here because of its capability for detecting mid-infrared line emission from ions in HII regions. The Far Infrared and Submillimeter Telescope (FIRST)<sup>3</sup><sup>3</sup>3The FIRST home page is at http://astro.estec.esa.nl/SA-general/Projects/First/first.html will be a space observatory with a much larger ($`350`$ cm) primary mirror, but one that is not actively cooled. Current plans call for the launch of FIRST in 2007, with instrumentation capable of carrying out broad band photometry, imaging spectroscopy, and high-resolution heterodyne spectroscopy. 
The wavelength coverage will extend to much longer wavelengths than SIRTF, allowing a far wider range of atomic and molecular line emissions to be studied spectroscopically. Again, the relatively large diffraction limit at these wavelengths for any single dish instrument of reasonable size means that source confusion will be significant except for observations of short duration (e.g. Blain, Ivison & Smail 1998). Thus interferometers will be critical for the study of all but the most luminous galaxies at high redshift, and the most important impact of SIRTF and FIRST on studies of high redshift galaxies is likely to be in measuring spectra of low redshift galaxies that can be used as templates for understanding future interferometric observations. The Millimeter Array (MMA)<sup>4</sup><sup>4</sup>4The MMA home page is at http://www.mma.nrao.edu will have an extremely powerful interferometric capability, providing spatial resolution as good as $`0.01^{\prime \prime }`$, wavelength coverage down to 350 $`\mu `$m, and high spectral resolution. MMA promises to allow large numbers of high redshift sources to be detected routinely and associated unambiguously with optical counterparts. It will make use of $`36`$ antennae of diameter $`10`$ m that can be deployed over baselines of several kilometers on a high plateau site in Chile. An observatory of more modest collecting area – the Smithsonian Astrophysical Observatory’s Submillimeter Array (SMA) – will operate in a Northern Hemisphere site (Mauna Kea). The primary limitations of these ground-based facilities are those imposed by Earth’s atmosphere, which only permits observations in a series of submillimeter windows all longward of 300$`\mu `$m, and by the fundamental sensitivity limits set by heterodyne receivers and warm telescopes. ## The longer term: far-infrared and submillimeter interferometry from space The ideal instrument for the study of far-infrared and submillimeter emissions from high redshift galaxies would combine (1) full wavelength coverage; (2) HST-like spatial resolution; (3) sensitivity approaching the fundamental limit imposed by photon-counting statistics; (4) high spectral resolution ($`\lambda /\mathrm{\Delta }\lambda `$ of at least $`10^4`$). The first and third of these capabilities require a space observatory; the second requires interferometry; and the third requires a cooled telescope (barely warmer than the CMB) equipped with a new (not presently existing) generation of incoherent detectors rather than heterodyne receivers; and the fourth can be accomplished by means of a Fabry-Perot or Michelson interferometer. In a recent white paper (Mather et al. 1998), we have presented a preliminary study of such an instrument – dubbed the Submillimeter Probe of the Evolution of Cosmic Structure, SPECS<sup>5</sup><sup>5</sup>5The SPECS home page is at http://www.gsfc.nasa.gov/astro/specs – in which we envisaged a Michelson interferometer providing spatial and spectral interferometry with three, cold, free-flying elements of diameter $`3`$ m deployable over baselines $`1\mathrm{km}`$. Although such a facility may lie significantly beyond what could be built today, Mather et al. 1998 have emphasized the importance of developing key technologies over the next decade to make such an instrument feasible in the decade 2011 – 2020; those technologies include formation flying, active cooling of large mirrors, and the development of sensitive incoherent detector arrays. 
In particular, photon-counting incoherent detectors – which do not yet exist at these wavelengths but would likely be some type of superconductive device – would offer enormous sensitivity advantages for faint sources both over current bolometers and relative to the fundamental limit of a heterodyne receiver.<sup>6</sup><sup>6</sup>6A perfect photon-counting detector is more sensitive than a perfect heterodyne receiver by a factor $`(\mathrm{\Delta }\nu /R)^{1/2}`$, where $`\mathrm{\Delta }\nu `$ is the bandwidth and $`R`$ is the photon arrival rate, a factor much larger than unity for faint extragalactic sources. I gratefully acknowledge the support of a grant from NASA’s Long Term Space Astrophysics Research Program. I thank Ewine van Dishoeck and David Hollenbach for making available Figures 2 and 4. It is a pleasure also to acknowledge helpful discussions with Mark Voit, Harvey Moseley and John Mather.
# Height representation, critical exponents, and ergodicity in the four-state triangular Potts antiferromagnet ## I Introduction There has been considerable interest over the last few years in the properties of classical spin systems possessing highly degenerate ground states. Many such models, including ice models, the triangular Ising antiferromagnet, and dimer models have been found to have ground state ensembles which display critical properties such as algebraically decaying spin–spin correlations and divergent fluctuations in the order parameter. It is now known that these properties are associated with the existence of interface or “solid-on-solid” representations for the models, in which sites can be assigned heights $`h`$ which vary smoothly over the lattice and which can be mapped onto the states of the spins or other microscopic variables in a simple way. If we assume that this height field behaves as a Gaussian surface, which is justified if the model is in its rough phase, then the critical behavior follows in a straightforward fashion. The values of the critical exponents are related to the stiffness $`K`$ of the surface and the wavelength of the appropriate operator on the height lattice. A few models have been studied for which $`h`$ is vector rather than scalar and the ideas above generalize to this case also. In this paper we look at one particular model of this type, the four-state antiferromagnetic Potts model on the triangular lattice. This turns out to be an especially lucid example of a model with a vector height, being defined on a simple Bravais lattice and, as we will show, possessing a very straightforward height representation. It is clear that the four-state antiferromagnet does indeed have a highly degenerate ground state, since the triangular lattice is three-colorable. By taking a three coloring and introducing a finite density of the fourth color we can see that the ground state must have an extensive entropy. In this paper, we study the properties of the ground state ensemble by considering first the behavior of defects in the model at zero temperature. We show that there are six distinct types of non-trivial defects and from the conservation laws that govern their collisions we deduce that they have vector charges $`\frac{1}{3}\pi `$ apart. We use this observation to derive a Burgers vector for the model and hence show that when no defects are present the system has a two-dimensional height representation. The defects then correspond to screw dislocations on a $`2+2`$ dimensional lattice and we predict that pairs of them will be attracted or repelled with an entropic Coulomb force proportional to the dot product of their charges. We use our height representation to deduce a number of facts about the four-state model. First we show that it is equivalent to the three-state Potts antiferromagnet on the bonds of a hexagonal lattice. This equivalence has also been derived using a different approach by Baxter, but the derivation given here is nonetheless instructive because it respects the symmetries of the system under permutation of states in a way that Baxter’s does not. By a simple geometrical construction we show further that the model is equivalent to the $`q=3`$ antiferromagnet on the Kagomé lattice, a model which has previously been studied by Huse and Rutenberg. And employing results due to Kondev and Henley, we show that our model is equivalent to the fully-packed loop model on the honeycomb lattice with a loop fugacity of 2. 
We generalize these equivalences to several other cases, including one related to loop models on the square lattice. The existence of a height representation also implies, as mentioned above, that the ground state ensemble is critical and we have verified this by Monte Carlo simulation. Simulation of this model is not trivial, since no single-spin-flip algorithm is ergodic and the best-known cluster algorithm, that of Wang, Swendsen and Kotecký, is believed not to be ergodic under toroidal boundary conditions and has not been proved to be ergodic for any other case. (For other models, however, it is known to be ergodic, particularly for models defined on bipartite lattices.) Here we make use of our height representation to prove for the first time that the algorithm is in fact ergodic for the $`q=4`$ Potts model on any lattice with triangular plaquets satisfying certain topological conditions. This includes lattices with free boundary conditions or with the topology of a sphere or a projective plane. To take advantage of this result we have performed our simulations on the projective plane. The idea of changing the lattice’s large-scale topology in this way to make a sampling algorithm ergodic appears to be new. The outline of the paper is as follows. In Section II we introduce the model and study the types of defects which occur in it and their interactions. In Section III we derive the height representation of the model and thereby demonstrate the model’s equivalence to various others. In Section IV we show how the scaling exponents characterizing the large-$`r`$ behavior of correlation functions can be calculated from the height representation and in Section V we demonstrate the existence of entropic Coulomb forces between defects. In Section VI we describe the Monte Carlo algorithm we use to simulate the model and in Section VII give the results of our simulations. Finally, in Section VIII we give our conclusions. ## II Defects in the four-state triangular Potts antiferromagnet The $`q`$-state Potts model is a generalization of the Ising model in which a lattice is populated with spins $`s_i`$, one on each vertex $`i`$, which can take integer values $`s_i=1,\dots ,q`$. The spin states are also sometimes referred to as “colors,” and we will occasionally make use of this metaphor. The energy of a configuration is defined to be proportional to the number of pairs of adjacent sites with the same state $$H=-J\sum _{\langle ij\rangle }\delta _{s_is_j}.$$ (1) In this paper we consider the Potts model with $`q=4`$ on the triangular lattice in two dimensions and with $`J<0`$, which makes the model antiferromagnetic so that similarly-colored pairs of adjacent sites are energetically unfavorable. We refer to such pairs as “defects.” This model can also be thought of as a discretization of the classical Heisenberg model in which each site has a three-dimensional unit vector spin $`𝐒_i`$: $$H=-J\sum _{\langle ij\rangle }𝐒_i\cdot 𝐒_j.$$ (2) This is equivalent to (1) up to a rescaling and an additive constant if the $`𝐒_i`$ are restricted to the corners of a tetrahedron, since then the dot product of two spins depends only on whether they are the same or different. The ground state entropy per site for this model can be calculated analytically or closely bounded with series approximations; its exact value is $`3\mathrm{\Gamma }\left(\frac{1}{3}\right)^3/4\pi ^2\approx 1.460998`$. Consider the behavior of the model under a single-spin-flip dynamics at zero temperature.
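For readers who want to experiment, here is a minimal sketch (ours, not the authors' simulation code) of the energy (1) and defect count, representing the triangular lattice as a square grid with nearest-neighbor bonds plus one diagonal and periodic boundaries:

```python
import numpy as np

def defect_count(s):
    # number of like-colored adjacent pairs; triangular lattice realized as a
    # square grid with bonds along (1,0), (0,1) and the (1,1) diagonal
    return sum(int(np.sum(s == np.roll(np.roll(s, -dx, 0), -dy, 1)))
               for dx, dy in ((1, 0), (0, 1), (1, 1)))

def energy(s, J=-1.0):
    return -J * defect_count(s)   # eq (1): H = -J * sum of delta(s_i, s_j)

rng = np.random.default_rng(0)
s = rng.integers(1, 5, size=(12, 12))     # random q = 4 configuration
print(defect_count(s), energy(s))         # with J < 0 every defect costs -J > 0
```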
Such a dynamics allows existing defects to diffuse and interact in a variety of ways, but creates no new defects. We now show that the defects in the model fall into a number of different classes with well-defined properties. The simplest case is that of a defect of the form $`\begin{array}{c}b\\ \begin{array}{cc}a& a\end{array}\\ b\end{array}`$ in which the sites to either side of the defect pair both have the same state (here denoted $`b`$). As Figure 1 shows, these defects retain this same form when they move. Moreover, it is possible for a defect of this type to disappear entirely if it encounters a neighborhood of the right configuration. This is shown in the rightmost portion of the figure, where the center site in the hexagon can become a 4, resulting in a defect-free configuration. Since these defects can be annealed away with only local moves, and do not require interaction with any other defects, we call them “false”; their density can be expected to fall off exponentially fast in a quench to $`T=0`$ and so can be ignored where the long-time relaxation of the system is concerned. This leaves us with defects $`\begin{array}{c}b\\ \begin{array}{cc}a& a\end{array}\\ c\end{array}`$ whose neighboring sites are in different states $`b\ne c`$. As Figure 2 shows, these defects are persistent and an isolated defect cannot be annealed away by local moves alone. The rest of our analysis in this section will concentrate on these “true” defects. The true defects possess two properties which are conserved both during diffusion and in interactions with other defects. The first depends on the states $`a`$, $`b`$ and $`c`$ which make up the defect and its immediate neighborhood. Since $`a`$, $`b`$ and $`c`$ are all different for an isolated true defect, they divide the four spin states in the model into two pairs, where $`a`$ belongs to one pair, and the sites on either side belong to the other. In the case of the defect $`\begin{array}{c}2\\ \begin{array}{cc}1& 1\end{array}\\ 3\end{array}`$, for example, the defect sites belong to the pair $`\{1,4\}`$ and the adjoining sites to the pair $`\{2,3\}`$. There are three distinct ways of dividing up the states in this fashion. The other conserved property of a true defect is a handedness defined as follows. The states $`a`$, $`b`$ and $`c`$, in that order, describe a path which is either clockwise or counter-clockwise on the outside of one of the faces of a tetrahedron whose vertices are labelled with the four spin states as shown in Figure 3. In order that this property be correctly conserved we must in addition stipulate that it remains the same under $`120^{\circ }`$ rotations of a defect, but changes sign under $`60^{\circ }`$ or $`180^{\circ }`$ ones. Thus for example the defect $`\begin{array}{c}2\\ \begin{array}{cc}1& 1\end{array}\\ 3\end{array}`$ considered above has a counter-clockwise (or positive) handedness, while an inversion or $`60^{\circ }`$ rotation gives us a clockwise (or negative) handedness. Using these two conserved properties, we can divide the 72 possible defects into 6 equivalence classes which we label $`X`$, $`Y`$, $`Z`$ and $`\overline{X}`$, $`\overline{Y}`$, $`\overline{Z}`$. Here the letters denote the pairing of states and the bars (or absence of them) denote the handedness. Representative members of the classes are shown in Figure 3. Considering again a single-spin-flip dynamics, we show in Figure 4 a selection of possible collisions between various types of defects.
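The two conserved labels are easy to compute. Below is a sketch of a classifier (ours; the assignment of the letters $`X`$, $`Y`$, $`Z`$ to the three pairings and the choice of which tetrahedron orientation counts as positive are conventions we have picked, not necessarily those of Figure 3). It reads a horizontal defect as (defect state $`a`$, neighbor above $`b`$, neighbor below $`c`$) and uses the parity of the permutation $`(a,b,c,d)`$, with $`d`$ the remaining state, as the handedness; note that swapping $`b`$ and $`c`$ flips the parity while cyclic shifts of $`(a,b,c)`$ preserve it, matching the rotation rules above:

```python
from itertools import permutations

PAIRING = {frozenset([frozenset({1, 2}), frozenset({3, 4})]): "X",
           frozenset([frozenset({1, 3}), frozenset({2, 4})]): "Y",
           frozenset([frozenset({1, 4}), frozenset({2, 3})]): "Z"}

# even permutations of (1,2,3,4): one of the two handedness classes
EVEN = {p for p in permutations((1, 2, 3, 4))
        if sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2 == 0}

def classify(a, b, c):
    # a = repeated defect state; b, c = the two distinct neighboring states
    d = ({1, 2, 3, 4} - {a, b, c}).pop()
    letter = PAIRING[frozenset([frozenset({a, d}), frozenset({b, c})])]
    return letter if (a, b, c, d) in EVEN else letter + "-bar"

print(classify(1, 2, 3))   # the defect 1-1 with 2 above and 3 below: class Z here
```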
Although more complex collisions than these can occur, we always find (and it is proved below) that $`X+\overline{X}\to 0`$, $`X+Y\to \overline{Z}`$, and similarly for cyclic permutations. If we wish to assign a charge $`\chi `$ to each particle such that $`\chi (X)+\chi (Y)+\chi (Z)=0`$, $`\chi (X)=-\chi (\overline{X})`$, and so on, we can do this with six vectors $`\frac{1}{3}\pi `$ apart as in Figure 5. We adopt the convention that $`|\chi |=1`$, $`\chi (X)=(1,0)`$, and $`\chi (Y)=(-1/2,\sqrt{3}/2)`$. The proof that charge is locally conserved goes as follows. It turns out that the total charge within an area can be expressed quite simply as a sum over the plaquets making up that area. We start by writing the charge inside a diamond $`\begin{array}{c}a\\ \begin{array}{cc}b& c\end{array}\\ d\end{array}`$ as a sum of functions of the upward- and downward-pointing triangular plaquets, $`f(a,b,c)+g(d,c,b)`$. Since a $`180^{\circ }`$ rotation reverses the charges, we have $`g=-f`$, and since a $`120^{\circ }`$ rotation keeps them the same, $`f`$ is symmetric under cyclic permutations, $`f(a,b,c)=f(b,c,a)=f(c,a,b)`$. Furthermore, since a defect-free plaquet has zero charge, $`f(a,b,c)=0`$ if $`a,b,c`$ are all different, and since rotating the tetrahedron of Figure 3 around the corner corresponding to spin state $`a`$ rotates the charge plane of Figure 5 around the origin, $`f(a,a,a)=0`$ for any $`a`$. This just leaves the case where exactly two of $`a`$, $`b`$ and $`c`$ are equal. Solving the equations $`f(2,1,1)-f(3,1,1)=\chi (X)`$, $`f(3,1,1)-f(4,1,1)=\chi (Y)`$, and so on, gives $$f(a,a,b)=\{\begin{array}{cc}\alpha \hfill & \text{if }\{a,b\}=\{1,2\}\text{ or }\{3,4\}\hfill \\ \beta \hfill & \text{if }\{a,b\}=\{1,3\}\text{ or }\{2,4\}\hfill \\ \gamma \hfill & \text{if }\{a,b\}=\{1,4\}\text{ or }\{2,3\}\hfill \end{array}$$ (3) where $`\alpha `$ $`=`$ $`({\displaystyle \frac{1}{2}},{\displaystyle \frac{1}{2\sqrt{3}}}),`$ (4) $`\beta `$ $`=`$ $`(-{\displaystyle \frac{1}{2}},{\displaystyle \frac{1}{2\sqrt{3}}}),`$ (5) $`\gamma `$ $`=`$ $`(0,-{\displaystyle \frac{1}{\sqrt{3}}}),`$ (6) as shown in Figure 5. If the charge is conserved in all collisions as we have claimed, we should be able to write it as an integral of some quantity around the perimeter of the area we are interested in. It turns out that this is indeed possible if we define a tensor $`B`$ on each edge of the lattice equal to the outer product $`B=𝐭\otimes 𝐄`$ of two vectors, $`𝐭`$ and $`𝐄`$. The first of these points along the edge and gives it a direction as in Figure 6. This ensures that $`B`$ has the necessary change of sign under $`60^{\circ }`$ rotations. The second is a vector in the charge plane which depends symmetrically on the states $`a`$ and $`b`$ at the two ends of the edge thus: $$𝐄=\{\begin{array}{cc}\alpha /2\hfill & \text{if }\{a,b\}=\{1,2\}\text{ or }\{3,4\}\hfill \\ \beta /2\hfill & \text{if }\{a,b\}=\{1,3\}\text{ or }\{2,4\}\hfill \\ \gamma /2\hfill & \text{if }\{a,b\}=\{1,4\}\text{ or }\{2,3\}\hfill \\ 0\hfill & \text{if }a=b.\hfill \end{array}$$ (7) Then the charge inside a finite region of the lattice is an integral around a counter-clockwise perimeter $$\oint \mathrm{d}𝐬\cdot B=\sum (\mathrm{\Delta }𝐬\cdot 𝐭)\,𝐄.$$ (8) In Figure 6, we show this for a diamond around a single defect and for a defect-free triangle.
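The sign conventions just fixed can be verified numerically; the short sketch below (helper names ours) encodes Eqs. (3)–(6) and checks the difference relations that determine $`f`$, together with $`\alpha +\beta +\gamma =0`$.

```python
from math import sqrt

alpha = (0.5, 0.5 / sqrt(3))    # Eq. (4)
beta  = (-0.5, 0.5 / sqrt(3))   # Eq. (5)
gamma = (0.0, -1.0 / sqrt(3))   # Eq. (6)

VALUE = {frozenset({1, 2}): alpha, frozenset({3, 4}): alpha,
         frozenset({1, 3}): beta,  frozenset({2, 4}): beta,
         frozenset({1, 4}): gamma, frozenset({2, 3}): gamma}

def f(a, b, c):
    """Plaquet charge of Eq. (3): nonzero only when exactly two states coincide."""
    states = {a, b, c}
    return VALUE[frozenset(states)] if len(states) == 2 else (0.0, 0.0)

def diff(u, v):
    return (u[0] - v[0], u[1] - v[1])

print(diff(f(2, 1, 1), f(3, 1, 1)))   # chi(X) = (1, 0)
print(diff(f(3, 1, 1), f(4, 1, 1)))   # chi(Y) = (-1/2, sqrt(3)/2)
print(tuple(x + y + z for x, y, z in zip(alpha, beta, gamma)))  # (0.0, 0.0)
```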
Since all larger regions can be formed by gluing together diamonds and triangles like these, and since the integral cancels along the shared edges within the region because of the sign change imparted on $`B`$ by $`𝐭`$, it follows that the charge within any region is correctly given by Equation (8). This proves our contention that the charge is conserved in all defect collisions, since the value of the integral around a line completely enclosing any such collision does not change when the collision takes place. ## III Height representation and equivalence to other models The integral in Equation (8) defines a Burgers vector for a defect in our model. Around any defect-free region, the Burgers vector is zero, and hence on a lattice possessing no defects the integral of $`B`$ between any two points is path independent. This allows us to define a height representation for the model in which the height $`h`$ at any site is specified uniquely as the integral of $`\mathrm{d}𝐬\cdot B`$ from a single reference site of known height to the site of interest. The heights are thus, like the Burgers vector itself, two-dimensional vectors living on a triangular lattice with lattice vectors $`\alpha `$, $`\beta `$ and $`\gamma `$, multiplied by $`\pm \sqrt{3}`$ in order to make the lattice constant 1. In fact, it is straightforward to show that there is a unique mapping of heights onto spin states, which is a four-coloring of the height lattice as shown on the left-hand side of Figure 7. The particular permutation of the colors in this figure depends on the definition of $`\alpha `$, $`\beta `$ and $`\gamma `$ given in Equations (4) through (6) and on the choice of reference site. Once we have the height representation for the model, there are a number of results which follow. In this section, we use it to demonstrate the equivalence of the ground state ensemble to a number of other models, some of which have been studied previously. First, imagine coloring the edges of the triangular lattice in a defect-free configuration of the model with three colors $`\alpha `$, $`\beta `$ and $`\gamma `$ according to the height difference along them, as in Figure 8. If we define two edges as neighboring when they bound the same triangle, then neighboring edges must have different colors since otherwise two of the vertices of the triangle would have the same spin state. Thus the model is equivalent to a three-coloring of the bonds of the triangular lattice in which no two adjacent bonds are the same color. The reverse mapping is also possible. Since $`\alpha +\beta +\gamma =0`$, the change in height sums to zero around any triangle, and therefore around any closed curve, so the height and therefore the state of every site is well-defined once we have chosen the spin state of one reference site. As there are four choices for this reference state, every configuration of this bond-coloring model corresponds to four ground states of the four-state triangular Potts model. Since the edges of a lattice are in one-to-one correspondence with the edges of its dual lattice, we can also think of the model as a three-coloring of the edges of the honeycomb lattice, where two edges are neighbors if they share a vertex (see Figure 8). A simple extension of these mappings is to put a vertex at the midpoint of each edge on the triangular lattice (or the honeycomb lattice) and connect those vertices which fall on neighboring edges. The result, as shown in Figure 8, is a Kagomé lattice.
(In general, this construction is called the “medial graph”.) Thus the four-state triangular Potts antiferromagnet and the three-state one on the Kagomé lattice also have equivalent ground-state ensembles. A number of these results have appeared previously in one form or another. Huse and Rutenberg found a two-dimensional height representation for the $`q=3`$ antiferromagnet on the Kagomé lattice, which is equivalent to ours once the equivalence between models demonstrated above is taken into account. Baxter (see also Ref. ) demonstrated the equivalence of the four-state antiferromagnet and the bond-coloring model on the hexagonal lattice using an approach somewhat different from ours. He defined a cyclic ordering $`4\to 3\to 2\to 1\to 4`$, drawing arrows from higher states to lower ones (modulo 4) and leaving edges between states 1 and 3 or 2 and 4 blank. However, since the four-state antiferromagnet is invariant under all permutations of the four states, not just cyclic ones, we feel that the mapping given here better respects the symmetries of the system under such permutations. Kondev and Henley showed that the bond-coloring model on the honeycomb lattice is also equivalent to a fully packed loop (FPL) model on the honeycomb lattice, where loops are defined as sets of edges alternating between two of the three colors. The loops are then contours of the component of the height perpendicular to the direction corresponding to the third color. Each loop can have its colors exchanged without affecting the surrounding configuration. Such loops are said to have a fugacity $`n=2`$. (When $`n=1`$, the FPL model is equivalent to the triangular Ising antiferromagnet.) An exact solution for the ground state entropy of the FPL model on the honeycomb lattice for general $`n`$ has been given by Batchelor, Suzuki and Yung using a Bethe ansatz, and differs from Baxter’s solution for the entropy of the $`q=4`$ triangular Potts antiferromagnet by exactly $`\mathrm{log}4`$, as we would expect given the equivalence demonstrated above. The equivalence between four-state antiferromagnets, three-state bond-coloring models, and fully-packed loop models applies on other lattices as well. If a lattice has triangular plaquets and vertices with even coordination number, we can define an orientation on the bonds with vectors $`𝐭`$ which go either only clockwise or only counter-clockwise around each plaquet. Then we can define $`B=𝐭\otimes 𝐄`$ as before, and any path around a defect-free plaquet will have $`\mathrm{\Delta }h=\pm (\alpha +\beta +\gamma )=0`$, so that a consistent definition of $`h`$ exists. Furthermore, if we color the bonds with colors $`\alpha `$, $`\beta `$ and $`\gamma `$ then each vertex of the dual lattice has one bond of each color. Bonds of any two colors comprise a set of fully packed loops with fugacity 2, and these are contours of the component of $`h`$ perpendicular to the remaining color. As an example, the lattice shown on the left-hand side of Figure 9 has vertices of coordination number 4 and 8, and the orientation of the bonds can be defined as shown. The $`q=4`$ Potts antiferromagnet on this lattice is equivalent to a bond-coloring or fully-packed loop model on its dual lattice, the truncated square lattice, whose plaquets are squares and octagons.
If we take the trace over the sites with four neighbors, we are left with a model on the square lattice where plaquets of the form $`\begin{array}{cc}a& b\\ c& d\end{array}`$ are prohibited, while those of the form $`\begin{array}{cc}a& b\\ b& c\end{array}`$ and $`\begin{array}{cc}a& b\\ b& a\end{array}`$ have fugacities 1 and 2 respectively. This model was studied by Nienhuis and is also equivalent to a model on the square lattice where loops can collide but not cross. A similar model where the latter two types of plaquet have equal fugacity was studied by Burton and Henley, who found a five-dimensional height representation for it. We close this section by defining another model equivalent to the $`q=4`$ antiferromagnet. Let us define a chirality on each plaquet of the triangular lattice according to whether the states at its three corners are oriented like an interior or exterior face of the tetrahedron in Figure 3, or equivalently, whether the colors on its edges in the three-coloring model above cycle clockwise or anticlockwise. These plaquets correspond to the vertices of the honeycomb lattice, and it is not hard to see that 0, 3, or 6 of the vertices of each hexagon must have positive chirality. There are 12 configurations of the four-state triangular antiferromagnet for each state of this model, one for each choice of the states of two adjacent reference sites. ## IV Free energy and calculation of scaling exponents Consider the restricted entropy of the four-state triangular model for a particular value of some (unspecified) coarse-graining of the height field $`h`$. It is not hard to convince oneself that this entropy is lowest when $`|\nabla h|`$ is large, and highest for configurations that are macroscopically flat. For instance, a three-coloring of the lattice is flat since any two sites with the same spin state have the same height, and the set of configurations in the vicinity of such a three-coloring contributes a large entropy to the ground-state ensemble since every site has a choice of two colors. On the other hand, a four-coloring whose height increases linearly across the lattice corresponds to only one microstate, since no site has any choices at all. Building on considerations such as these we can derive expressions for the scaling exponents of the model. Our presentation follows that of Burton and Henley. If the model is in its rough phase, the arguments of the previous paragraph suggest that it has an effective free energy of the form $$G=\frac{1}{2}\int K^{\kappa \lambda }\,\nabla h_\kappa \cdot \nabla h_\lambda \,\mathrm{d}x\,\mathrm{d}y,$$ (9) where $`K^{\kappa \lambda }`$ is a stiffness tensor. However, since the model is invariant under permutations of the four spin states, and since these permutations are equivalent to rotations of the height lattice, $`K`$ must be a scalar and we have $$G=\frac{1}{2}K\int \left(|\nabla h_1|^2+|\nabla h_2|^2\right)dxdy,$$ (10) where $`h_1`$ and $`h_2`$ are components of the height field along any two perpendicular directions in the height space. (In our calculations we have taken $`h_1`$ and $`h_2`$ along the directions indicated in Figure 7.) In frequency space this free energy decouples into a sum over independent Gaussians, giving spatial correlations between the heights at points a distance $`r=|𝐫_2-𝐫_1|`$ apart of $$\langle |h(𝐫_2)-h(𝐫_1)|^2\rangle \simeq \frac{1}{\pi K}\mathrm{log}r+C$$ (11) for large $`r`$, where $`C`$ is a constant. Quantities such as spin and local magnetization are periodic functions $`f(h)`$ of the height and hence can be Fourier expanded in the height.
We can calculate the spatial correlations in any one such Fourier component $`f_g\mathrm{e}^{\mathrm{i}g\cdot h}`$, having frequency $`g`$ in height space, using the fact that $`h`$’s Fourier components are Gaussianly distributed. This gives us $`\langle f_{-g}(𝐫_1)f_g(𝐫_2)\rangle `$ $`\sim `$ $`\langle \mathrm{e}^{\mathrm{i}g\cdot [h(𝐫_2)-h(𝐫_1)]}\rangle `$ (12) $`=`$ $`\mathrm{e}^{-\frac{1}{2}g^2\langle |h(𝐫_2)-h(𝐫_1)|^2\rangle }\sim r^{-(d-2+\eta )},`$ (13) for large $`r`$ where $`\eta `$ is the anomalous dimension of the correlation function and $`d`$ is the dimensionality of the lattice. Given that $`d=2`$ in the present case and making use of Equation (11) we then find that $$\eta =\frac{g^2}{2\pi K}.$$ (14) If a quantity has several non-zero Fourier components, then the one with the smallest $`\eta `$ (i.e., the longest wavelength) will dominate for large $`r`$. If $`K`$ is not a scalar, these equations generalize to $$\langle (h_\kappa ^{\prime }-h_\kappa )(h_\lambda ^{\prime }-h_\lambda )\rangle =\frac{(K^{-1})_{\kappa \lambda }}{\pi }\mathrm{log}r+C_{\kappa \lambda }$$ (15) and $$\eta =\frac{g\,K^{-1}\,g^{\mathrm{T}}}{2\pi }.$$ (16) For instance, transforming a scalar $`K`$ to triangular coordinates where basis vectors are $`\frac{2}{3}\pi `$ apart rather than orthogonal gives a matrix $`K^{\prime }`$ $$K^{\prime }=K(\begin{array}{cc}1& -1/2\\ -1/2& 1\end{array})\text{ and }(K^{\prime })^{-1}=\frac{1}{K}(\begin{array}{cc}4/3& 2/3\\ 2/3& 4/3\end{array}).$$ (17) Using the equivalence of the bond-coloring model to the fully-packed loop model on the honeycomb lattice, and relating the vortex–antivortex correlation function to the probability that two sites lie on the same loop, Kondev and Henley have shown that these models are exactly at their roughening transition and have a stiffness of $`K=\frac{2}{3}\pi `$. Since our four-state model is equivalent to these models, it has the same value of $`K`$. We will use these results below to calculate the scaling exponents of various quantities for comparison with Monte Carlo experiments. ## V Forces between defects Since the energy of a pair of defects is 2 regardless of how far apart they are, there is no energy gradient to drive a force between them. However, there is an entropic force, driven by the fact that the presence of a free defect reduces the entropy within an area of radius $`r`$ by an amount proportional to $`\mathrm{log}r`$. In a model with a one-dimensional height representation, a defect with Burgers vector $`b`$ has an average field around it $$|\nabla h|=\frac{b}{2\pi r},$$ (18) giving a force between two defects with Burgers vectors $`b`$ and $`b^{\prime }`$ $$F=\frac{K}{\pi }\frac{bb^{\prime }}{r}.$$ (19) When coupled with a mobility $`\mathrm{\Gamma }`$, this gives an average velocity to the defects of $$v=\mathrm{\Gamma }F=\frac{\mathrm{\Gamma }K}{\pi }\frac{bb^{\prime }}{r}.$$ (20) Such forces have been measured numerically for the three-state Potts model on the square lattice. The generalization to a higher-dimensional height representation is straightforward. Since the free energy in Equation (10) is a sum of independent terms in $`h_1`$ and $`h_2`$, the force between two defects with Burgers vectors $`𝐛`$ and $`𝐛^{\prime }`$ will be proportional to $`b_1b_1^{\prime }+b_2b_2^{\prime }=𝐛\cdot 𝐛^{\prime }`$. Since the Burgers vector for a defect is equal to $`2\sqrt{3}`$ times the charge $`\chi `$ on that defect (see Section III), this gives us a force of $$F=\frac{12K}{\pi }\frac{\chi \cdot \chi ^{\prime }}{r}.$$ (21) In other words, an $`X`$ and a $`Y`$ will be attracted to each other, but only half as strongly as an $`X`$ and an $`\overline{X}`$, and an $`X`$ and a $`\overline{Y}`$ will be repelled half as strongly as an $`X`$ and another $`X`$.
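As a quick numerical reading of Eq. (21), the sketch below evaluates the entropic force for the pairs just mentioned, with $`K=\frac{2}{3}\pi `$ and unit charges $`\frac{1}{3}\pi `$ apart as in Section II; the sign convention (positive for repulsion) and the function name are ours.

```python
from math import pi, cos, sin

K = 2 * pi / 3                                  # stiffness quoted above

def chi(angle):                                 # unit charge at a given angle
    return (cos(angle), sin(angle))

X, Y = chi(0.0), chi(2 * pi / 3)
Xbar, Ybar = chi(pi), chi(-pi / 3)

def force(c1, c2, r):
    """Eq. (21): F = (12 K / pi) (chi . chi') / r; negative = attraction."""
    return 12 * K / pi * (c1[0] * c2[0] + c1[1] * c2[1]) / r

r = 10.0
print(force(X, Xbar, r))   # -0.8 : full-strength attraction
print(force(X, Y, r))      # -0.4 : attraction, half as strong
print(force(X, Ybar, r))   # +0.4 : repulsion, half as strong
print(force(X, X, r))      # +0.8 : full-strength repulsion
```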
## VI Monte Carlo simulation In order to simulate correctly the properties of the ground-state ensemble of a system, it suffices to find a set of update moves which take us from one ground state to another without introducing any defects, such that every ground state can be reached from every other in a finite number of moves on a finite lattice. An algorithm based on such a set of moves is said to be ergodic, and it can be shown that any ergodic algorithm will sample all ground states with equal frequency over the course of a long simulation (see Ref. for example). Unfortunately, it turns out to be quite difficult to find a suitable ergodic set of moves for the four-state Potts model considered here. To begin with, it is clear that no single-spin-flip dynamics can be ergodic because finite defect-free regions can be pinned under such a dynamics, and remain pinned no matter what happens outside them. For instance, in the hexagon $`\begin{array}{c}\begin{array}{cc}1& 2\end{array}\\ \begin{array}{ccc}3& 4& 3\end{array}\\ \begin{array}{cc}2& 1\end{array}\end{array}`$ every spin has at least one neighbor of each of the other states, so no spin can change state. Since there is a positive density of such clusters in a random ground state on a large lattice, single-spin-flip dynamics will only explore an exponentially small fraction of the possible configurations. So we are forced to turn to a cluster update algorithm to simulate this model. The algorithm we use in this paper is the zero-temperature limit of the Wang-Swendsen-Kotecký (WSK) cluster algorithm for Potts antiferromagnets, which is defined as follows. At each step in the simulation we choose two of the four colors on the lattice, identify all connected clusters containing only these two, and, in each cluster independently, either switch the two colors or leave them untouched with probability $`\frac{1}{2}`$. Clearly this preserves the property that neighbors have differing states, and, as we will show, it is computationally quite efficient. At finite temperature the WSK algorithm is trivially ergodic, as demonstrated by Wang et al. in their original paper. With a little more effort it is also possible to prove that it is ergodic at $`T=0`$ for all $`q>2`$ on the square lattice or more generally on bipartite lattices. The triangular lattice however is not bipartite, and in fact the algorithm is known not to be ergodic at $`T=0`$ on triangular lattices with toroidal boundary conditions, at least for certain lattice dimensions. In this section we make use of our height representation to prove that the algorithm is ergodic for certain other types of boundary conditions. Previous proofs of the ergodicity of the WSK algorithm at zero temperature have relied on defining a specific “target configuration” of the spins on the lattice and demonstrating that any given configuration can be transformed into this one in a finite number of reversible moves. For bipartite lattices the target configurations used have been checkerboard colorings. Here we use the same approach to prove the algorithm ergodic on the triangular lattice but, since the lattice is not bipartite and therefore does not permit checkerboard colorings, we use instead a three-coloring of the lattice as our target configuration. There are six possible three-colorings of the triangular lattice for each choice of 3 of the 4 colors. We illustrate one of them in Figure 10. We define a domain to be a connected set of sites whose colors coincide with one of these three-colorings. 
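Before turning to the proof, here is a minimal sketch of a single zero-temperature WSK move as just defined (our own illustration, not the authors' code; a toroidal array is used purely for brevity, even though, as discussed in this section, ergodicity on the torus is precisely what is in question).

```python
import random
from collections import deque

L, q = 12, 4
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]  # triangular lattice

def wsk_step(s):
    """Choose two colors, find each connected cluster made of those two
    colors only, and swap the colors within each cluster independently
    with probability 1/2.  Applied to a ground state, the move preserves
    the absence of defects."""
    c1, c2 = random.sample(range(1, q + 1), 2)
    seen = set()
    for i0 in range(L):
        for j0 in range(L):
            if (i0, j0) in seen or s[i0][j0] not in (c1, c2):
                continue
            cluster, queue = [(i0, j0)], deque([(i0, j0)])
            seen.add((i0, j0))
            while queue:                       # breadth-first cluster growth
                i, j = queue.popleft()
                for di, dj in NBRS:
                    n = ((i + di) % L, (j + dj) % L)
                    if n not in seen and s[n[0]][n[1]] in (c1, c2):
                        seen.add(n)
                        cluster.append(n)
                        queue.append(n)
            if random.random() < 0.5:          # swap c1 <-> c2 in this cluster
                for i, j in cluster:
                    s[i][j] = c1 + c2 - s[i][j]
```

One such move costs time proportional to the number of sites, which is what makes the algorithm computationally efficient.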
The three colors in such a domain fall on three triangular sublattices with lattice parameter $`\sqrt{3}`$ times that of the fundamental lattice. Our goal is to add new sites to the domain, singly or in groups, until the domain fills the entire lattice. If we can do this with reversible cluster moves from any given starting state, then our algorithm must be ergodic, since we can get from any state to the target and then from the target to any other state. (It makes no difference which three-coloring we take as our target configuration since we can get from any one to any other in at most three steps of the WSK algorithm: one to get the three colors right and at most two to put the colors on the correct sublattices.) Our approach is as follows. We choose a site which lies outside our domain, but is adjacent to it. The color of this site, call it $`a`$, differs from the color $`b`$ of the sites inside the domain on the same sublattice. If we can switch the colors $`a`$ and $`b`$ everywhere inside the domain, while leaving the color of the new site unchanged, it will now match the other sites on that sublattice and we will have added it to the domain. (Note that the domain now has a new three-coloring, resulting from our switching $`a`$ and $`b`$.) This can be done trivially in a single Monte Carlo move if we make the right choice of two colors for the move, but there is a catch. The problem is that there may be some cluster connecting the new site to a (possibly quite distant) site in the domain via sites of colors $`a`$ and $`b`$. If such a cluster exists, then the new site will get flipped whenever we change colors within the domain, so that its color will always differ from that of the sites on the same sublattice in the domain. An example of such a situation is shown in Figure 11. Happily, we can invoke the height representation to prove that any state in which such a path exists must contain at least one defect, and hence that such paths cannot exist in a ground state of the system. To see this we consider a closed loop of sites formed by the path outside the domain, completed with any path of our choice within the domain. To make things concrete, let us suppose that $`a`$ and $`b`$ are the states 4 and 3 as in Figure 11. The portion of our closed path outside the domain consists by definition of sites of only these two colors and hence the change in height $`\mathrm{\Delta }h`$ from one end to the other must be a multiple of, in this case, $`\alpha `$. Within the domain, on the other hand, the heights on all sites belonging to a particular sublattice are the same, so that $`\mathrm{\Delta }h`$ for the portion inside the domain is just equal to the change in height resulting when we change a 3 in the domain to a 4, or $`\beta -\gamma `$ in this case, which is perpendicular to $`\alpha `$. In general, $`\mathrm{\Delta }h`$ for changing some site in a three-coloring from one color to another is perpendicular to the $`\mathrm{\Delta }h`$ between neighboring sites of those colors. This means that the sum of the $`\mathrm{\Delta }h`$s for the two parts of the loop cannot be zero, and hence the Burgers vector around the loop is non-zero and the loop must contain a defect. Unfortunately, this does not quite prove the ergodicity of the algorithm. Certainly a configuration like that in Figure 11 can be ruled out, because there must be at least one defect within the loop, and hence the lattice cannot be in a ground state.
(The reader might like to populate the interior of the loop with spins just to check that this is indeed true.) However, there is another possibility. If we have some form of periodic boundary condition on our lattice, it may be possible for the loop to go off one side of the lattice and come back on another and rejoin the domain that way. It turns out that such a loop can have a non-zero Burgers vector even when the lattice is in a ground state. In essence, the lattice possesses a non-localized defect, without there being a defect anywhere in particular. The crucial difference between the two types of loops is that in the first case the loop is contractible, meaning that it can be shrunk to a point by shifting it one plaquet at a time. For a loop of this type, the situation depicted in Figure 11 applies, and the arguments given above are correct. If however the loop is non-contractible, then it is possible for there to be a non-localized defect and we cannot prove the ergodicity of the algorithm. To give an example, we show in Figure 12 a configuration of the model on a lattice with toroidal boundary conditions. This lattice has two fundamental non-contractible loops, one wrapping around the boundary conditions horizontally, and one vertically. For the configuration shown, these two loops have Burgers vectors of $`2\beta +4\gamma `$ and $`4\beta +2\gamma `$ respectively, even though there are no localized defects. In fact, the domain inside the dotted line shows how the algorithm can fail. Any attempt to add a new site to the domain by exchanging two colors inside the domain will change the color of the new site as well, because the new site is connected to the domain by paths which wrap around the boundary conditions. This does not actually prove that the WSK algorithm is not ergodic on this lattice, only that we cannot prove it to be so using arguments of the type given here in which domains gain sites but do not lose them. However, Salas and Sokal report that they found a configuration on a $`6\times 6`$ torus which can only be transformed into a small number of others, thus showing that the algorithm is not in general ergodic on the torus. Huse and Rutenberg, in their study of the three-state model on the Kagomé lattice, noted a similar lack of ergodicity in a loop-flipping algorithm on the torus. It seems, therefore, that it would be imprudent to conduct simulations solely on a toroidal or similar lattice. On lattices with no non-contractible loops, however, the algorithm is, by the arguments given above, ergodic. Examples of such lattices include the infinite lattice and any finite lattice with free boundary conditions, where opposite edges of the lattice have no connection to each other. Unfortunately, the first of these is impractical for computer simulations, and the second suffers rather dramatically from finite-size effects. A better solution is to perform our simulation on a lattice with periodic boundary conditions, but with a topology chosen so that there are no non-contractible loops. The simplest example is the sphere. Because its Euler characteristic is 2, there is no way to cover a sphere with a regular triangular lattice; some sites have to have less than six nearest neighbors. Fortunately, our proof works not just for the triangular lattice, but for any lattice where both the height representation and the target three-coloring can be defined. 
As we showed in the previous section, the height representation can be defined for any planar lattice with triangular plaquets and even coordination number, so that we can place vectors $`𝐭`$ on the bonds oriented so that they go only clockwise or only counter-clockwise around each plaquet. We can also define a three-coloring on such a lattice by having colors cycle upward or downward along $`𝐭`$. Thus, we can for instance perform a simulation on a lattice tiling the surface of an octahedron where 6 sites have four nearest neighbors, and the WSK algorithm will be ergodic. In Figure 13 we depict one such lattice and illustrate how the three-coloring behaves near its corners. Another possible topology is the projective plane: a hemisphere with diametrically opposing points on the equator identified. To create a lattice with this topology, we simply take the upper half of the octahedron depicted in Figure 13 and identify sites along its square equator as shown in Figure 14. Since the projective plane has only half as much curvature as the sphere (its Euler characteristic is 1) there are now just three sites with four neighbors rather than six. Thus this lattice is proportionately flatter than the sphere with the same number of sites. Note that the three-coloring is well-defined for any lattice size. Although the projective plane does possess non-contractible loops, every such loop has the property that going around it twice makes it contractible. Formally, the fundamental group $`\pi _1`$ for the space is $`\mathbb{Z}_2`$, the integers mod 2. This means that the Burgers vector of any such loop must satisfy $`\mathrm{\Delta }h+\mathrm{\Delta }h=0`$, so $`\mathrm{\Delta }h=0`$. (Kawamura and Miyashita show that the Heisenberg antiferromagnet on the triangular lattice has defects with a $`\mathbb{Z}_2`$ charge, so there are spin systems in which $`\mathrm{\Delta }h`$ can be nonzero even though $`2\mathrm{\Delta }h=0`$. This however is not the case with the present system.) Thus non-contractible loops are tolerable as long as they have finite order, i.e., as long as their $`n^{\mathrm{th}}`$ multiple is contractible for some finite $`n`$. The sphere and projective plane are the only finite two-dimensional manifolds satisfying this condition, since both tori and Klein bottles have non-contractible loops of infinite order. While Monte Carlo calculations performed on spheres and projective planes may seem outlandish, they are important in the present case in order to rule out the possibility that lack of ergodicity is introducing a bias into our results. The drawback is that we are forced to introduce a small number of atypical sites into the lattice (those with only four neighbors) which, for example, makes calculation of spatial correlation functions more difficult. In this paper we strike a compromise by performing some (probably non-ergodic) simulations on toroidal lattices, which give excellent statistics for correlation functions and other quantities, and some on the projective plane, which give poorer statistics, but are more trustworthy. In fact, we find that there are no physical measurements for which the two topologies disagree, so it is possible that the simulations on the torus are “sufficiently ergodic” for our purposes, although we cannot guarantee this.
Just as with the height representation, our proof of ergodicity will work for other lattices as well, whenever we can give each bond a direction such that the bonds around each triangular plaquet go either only clockwise or only counter-clockwise. It can also be used for some lattices which have a mixture of ferromagnetic and antiferromagnetic bonds, but which have no frustration. At $`T=0`$ vertices with a ferromagnetic bond between them can simply be identified, since they must have the same state. Then the WSK algorithm is ergodic on such lattices if the resulting graph fits the conditions above. ## VII Results of Monte Carlo simulations We have performed extensive simulations of the four-state triangular model using the WSK algorithm on lattices having the topology both of the projective plane (for which the algorithm is definitely ergodic) and of the torus (for which it probably is not). On the projective plane we simulated systems with linear dimension $`L`$ equal to a power of two from $`L=4`$ up to $`L=1024`$. On the torus the three-coloring of the lattice is only well defined for $`L`$ a multiple of three, so we simulated systems with $`L=6`$, $`12`$, $`24,\ldots ,768`$. In each case simulations ran up to one million Monte Carlo steps for the largest lattices. In order to allow accurate error estimation, and also to examine the efficiency of the algorithm, we first measured the (exponential) correlation time $`\tau `$ in Monte Carlo steps as a function of system size. Figure 15 shows the results for both topologies. In each case the dynamic exponent $`z`$, defined by $`\tau \sim L^z`$, takes the value $`0.74\pm 0.02`$, which is comparable with values for cluster algorithms for ferromagnetic Potts models. Correlation times for the projective plane are about a factor of $`1.3`$ larger than those for the torus with the same value of $`L`$, which is presumably because the projective plane has more sites on it. (There are $`L^2`$ sites on a torus of linear dimension $`L`$, but $`2L^2+1`$ on the projective plane.) In practical terms, since a single step of the WSK algorithm updates a number of spins which scales with the area $`L^d`$ of the lattice, the CPU time taken to generate a given number of independent lattice configurations scales as $`L^{d+z}\approx L^{11/4}`$. In order to measure the critical exponents for the model, we chose two different definitions for the order parameter and measured two-point correlations and fluctuations for each of them. From these results we extracted values for the correlation exponent $`\eta `$ by direct measurement, and for the susceptibility exponent $`\gamma /\nu `$ by finite size scaling. These exponents are not independent; we expect them to be related by the Fisher scaling law $`\gamma /\nu =2-\eta `$. Our two order parameters were defined as follows: 1. The simplest magnetization measure is just $`m_k=N_k-\frac{1}{4}N`$ where $`N_k`$ is the number of spins on the lattice in spin state $`k`$. The two-point connected correlation for this magnetization, averaged over $`k`$, is then $`G_c(i,j)=\langle \delta _{s_is_j}\rangle -\frac{1}{4}`$ and the susceptibility is $`\chi =\sum _k\langle m_k^2\rangle `$. 2. We also examined the “staggered magnetization” defined by Huse and Rutenberg for the three-coloring model on the hexagonal lattice, to allow direct comparison with their results.
In the context of the present model, this magnetization may be thought of as a complex number $`\sum _lc_lr_l`$ where the sum is over bonds on the triangular lattice, $`c_l`$ represents the color $`\alpha `$, $`\beta `$ or $`\gamma `$ of bond $`l`$ as defined in Equation (7), and $`r_l`$ is a static reference coloring corresponding to the color of bond $`l`$ in any of the 24 possible three-colorings of the lattice. The staggered magnetization $`m_i`$ on a site $`i`$ can be defined as the sum over the bonds $`l`$ connected to that site and the two-point correlation is then given by $`G_c(i,j)=\langle m_im_j^{\ast }\rangle `$ and the susceptibility by $`\chi =\sum _i\langle m_im_i^{\ast }\rangle `$. The staggered magnetization is in fact slightly the easier of these two to analyze, so we examine this case first. Huse and Rutenberg pointed out that the staggered magnetization has a wavelength $`\sqrt{3}`$ on the height lattice. Using Equation (17) to transform from Cartesian coordinates to triangular ones, Equation (16) then gives $`\eta =\frac{4}{3}`$ and $`\gamma /\nu =\frac{2}{3}`$. Figure 16 shows our simulation results for the susceptibility on the projective plane. A least-squares fit gives $`\gamma /\nu =0.71\pm 0.02`$, which is a little greater than the expected value, but, as Figure 17 shows, there is a clear downward trend in the value with increasing system size. Huse and Rutenberg saw similar behavior in their bond-coloring model (in fact their value for $`\gamma /\nu `$ is almost exactly the same as ours) and they attributed it to logarithmic corrections to the scaling forms which arise because the height representation is at its roughening transition (see Section IV). We have also measured spatial correlations in the staggered magnetization on both the torus and the projective plane (Figures 18 and 19, respectively). The results on the projective plane have larger statistical errors than those on the torus, because of the need to stay well away from sites with local curvature in performing the calculations. Fits to the data yield values of $`\eta =1.27\pm 0.01`$ on the torus and $`\eta =1.28\pm 0.09`$ on the projective plane, in reasonable agreement with theoretical predictions. The situation is a little more complicated for our other definition of magnetization. Looking at Figure 7, we see that the mapping from heights to spin states has wavelength 2 on the height lattice, which implies that both the correlation exponent for the spins and the corresponding susceptibility exponent should be equal to 1. Figure 20 shows our data for the susceptibility on the projective plane. A least-squares fit gives $`\gamma /\nu =0.84\pm 0.01`$, which is some way from the theoretical prediction, but, as Figure 21 shows, the value is increasing steadily with system size, so the discrepancy is again probably due to logarithmic corrections. However, when we look at the spin–spin correlation function, Figure 22, we find that $`\eta =0.35\pm 0.01`$, which is nowhere near 1. Our values for the exponents thus appear to violate the Fisher law. The explanation of this result is as follows. The correlation function shown in Figure 22 is measured, naturally enough, along one of the three principal directions on the toroidal lattice. If two sites lie along such a direction and their spins are in the same state, then their heights are constrained to particular sublattices of the height lattice. For instance, suppose that the distance between them is a multiple of 3, and that one site has state 1 and height $`(0,0)`$.
Then the height of the other site is a sum of multiples of three times $`\alpha `$, $`\beta `$ and $`\gamma `$, all with the same sign since $`𝐭`$ is constant along any of the lattice directions. Since $`\alpha +\beta +\gamma =0`$, we can cancel terms in threes until only two kinds of terms are left, say $`\alpha `$s and $`\beta `$s. Since the number of these is still a multiple of three, and since $`\alpha `$ and $`\beta `$ (or more precisely, the unit vectors in those directions on the height lattice) have a component of $`\frac{1}{2}`$ along the $`h_1`$ axis, the only heights we can end up with are ones with $`h_1=3k/2`$ for some integer $`k`$. The set of such heights which correspond to spin state 1 is shown in Figure 23. More generally, once we choose the distance between two sites and the state of the spin on one of them, the only heights the other site can have and still be in the same spin state are those on one of the three sublattices corresponding to that state. The wavelength of each such sublattice is $`2\sqrt{3}`$, and Equation 16 then gives $`\eta =1/3`$, in good agreement with our simulation results. This phenomenon, in which some constraint gives an operator a wavelength on the height lattice longer than that for the corresponding susceptibility, may apply to other operators and systems like this one. Therefore, it appears that the Fisher scaling relation cannot always be applied in a straightforward fashion. ## VIII Conclusions In this paper we have studied the four-state Potts antiferromagnet on the triangular lattice at zero temperature. By examining the spectrum of defect types which can appear in the model and identifying the conservation laws which govern their interactions, we have been able to define a Burgers vector for the model and thus show that the ground-state ensemble has a well-defined height representation. The height is, in this case, two-dimensional and may be the simplest example to date of a vector height. Using the height representation, we have been able to demonstrate a number of results. We have shown that the model is equivalent to a three-state Potts antiferromagnet on the bonds of either the triangular or hexagonal lattice, or on the sites of the Kagomé lattice, that pairs of defects feel entropic forces between them in proportion to the dot product of their topological charges, and that the spin–spin correlations in the ground-state ensemble must decay algebraically at large distances. We have calculated exact values for a variety of critical exponents. The scaling exponent $`\eta `$ for the spin–spin correlation function is of particular interest because the wavelength of the spin operator on the height lattice turns out to be longer than the fundamental periodicity of the height-to-spin mapping due to a constraint on paths connecting sites along a principal lattice direction. This gives a value of $`\eta =\frac{1}{3}`$ even though the corresponding susceptibility exponent $`\gamma /\nu =1`$—an apparent violation of the Fisher scaling relation. We have also used the height representation to prove for the first time that the Wang-Swendsen-Kotecký cluster Monte Carlo algorithm is ergodic for the four-state model on the triangular lattice. Our proof however requires that the lattice have a topology which possesses no non-contractible loops of infinite order, and this means that simulations on the torus (which has such loops) are probably not ergodic, calling previous simulations of this and related models into question. 
Simulations on lattices with free boundary conditions or lattices with the topology of the sphere or the projective plane are, on the other hand, provably ergodic, and we have performed extensive simulations on the projective plane. We find reasonable agreement between the values of the critical exponents measured in these simulations and both theoretical values and values from previous numerical studies. There are, however, significant logarithmic corrections to scaling associated with the fact that the height model is exactly at its roughening transition, and this means that we have to go to extremely large lattice sizes to see the expected behavior. ## Acknowledgements We wish to thank Alan Sokal and Jesus Salas for sharing with us their observation that the WSK algorithm is not ergodic on lattices with toroidal boundary conditions, and Michael Lachmann for useful discussions. C.M. also thanks Molly Rose and Spootie the Cat for their support. Note added. During the course of these investigations, we learned of an unpublished manuscript by Henley which also shows that the four-state antiferromagnet studied here is equivalent to the three-state one on the Kagomé lattice.
# QCD ## 1 Introduction These notes are not an introduction to Quantum Chromodynamics (QCD), the theory of strong interactions. Many excellent textbooks exist where the interested reader can find clear expositions of the subject. In fact we shall assume here that the reader is familiar with the basic technical tenets of perturbative QCD, such as Feynman diagrams, dimensional regularization, renormalization, the beta function and so on. We intend rather to give physical insight into the reasons behind a rather remarkable theoretical development which is nearly as old as QCD itself, namely deep inelastic scattering. Why is it so remarkable? QCD is, in a way, a rather simple theory (especially when compared to the intricacies of the electroweak part of the Standard Model). It is just a simple extension of good old Quantum Electrodynamics. Instead of matter fields carrying electric charge $`-1`$ (and $`+1`$ for the antiparticles), as is the case for electrons, the matter fields of QCD (the quarks) carry a new quantum number: color. Color can take three different values (and their corresponding anti-values). Furthermore the intermediate bosons (the gluons), unlike photons which cannot change the charge of a particle, change the color of a quark. They may for instance turn a red quark into a blue quark. This simple fact implies that the gauge group of QCD is much larger than the $`U(1)`$ of QED. Since every quark comes in three copies, whose labels get exchanged, there is an $`SU(3)`$ invariance<sup>1</sup><sup>1</sup>1The reason why the symmetry group is $`SU(3)`$ and not $`U(3)`$ (which would have nine gluons) has to do with the mathematical fact that $`ϵ_{\alpha \beta \gamma }`$ is not an invariant tensor of $`U(3)`$. This tensor is necessary to build combinations of three quarks which are antisymmetric, as required by Fermi statistics. For instance, $`\mathrm{\Delta }^{++}=|u^{\uparrow }u^{\uparrow }u^{\uparrow }\rangle `$. Being the lightest hadron with this quark content we expect to have the three quarks in the ground state, hence in a symmetric wave function. This is in contradiction with Fermi statistics. The contradiction can be solved if we admit the existence of a new quantum number $`\alpha `$ and $`|u^{\uparrow }u^{\uparrow }u^{\uparrow }\rangle =\frac{1}{\sqrt{6}}ϵ^{\alpha \beta \gamma }|u_\alpha ^{\uparrow }u_\beta ^{\uparrow }u_\gamma ^{\uparrow }\rangle `$.. Simple as this theory may seem, it is not an easy matter in QCD to relate theory and experiment. It is well known that the fields and particles we know how to compute with (with the simplest tool at our disposal, perturbation theory) are not those that are observed by experimentalists in their detectors due to the phenomenon of confinement. Quarks and gluons are real, but they cannot be detected as free particles as they are known to be confined inside hadrons. In view of this it is quite remarkable that there are theoretical techniques enabling us to put the theory to very stringent tests. The phenomenon of confinement comes about because the coupling constant of QCD (which is relatively small at large values of the momentum transfer, the value quoted by the Particle Data Group is $`\alpha _s(M_Z)=0.119\pm 0.002`$; notice the truly amazing precision, which may soon be reduced to a mere 1%) becomes strong as the energy decreases. The behaviour predicted by the renormalization-group is, at one loop, $$\alpha _s(Q)=\frac{\pi }{-\frac{\beta _1}{2}\mathrm{log}(Q^2/\mathrm{\Lambda }_{QCD}^2)}.$$ (1) where $`\mathrm{\Lambda }_{QCD}`$ is a renormalization-group invariant, but scheme dependent, quantity and $`\beta _1=-11/2+N_f/3`$.
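As a numerical illustration of Eq. (1), the sketch below evaluates the one-loop running coupling (function name and default values are ours). Since the $`\mathrm{\Lambda }_{QCD}`$ quoted next is defined beyond one loop, the output, about 0.136 at $`Q=M_Z`$, only roughly tracks the PDG value of 0.119.

```python
from math import log, pi

def alpha_s_one_loop(Q, Lam=0.219, nf=5):
    """One-loop alpha_s of Eq. (1); Q and Lam in GeV,
    with beta_1 = -11/2 + nf/3 in the convention used here."""
    beta1 = -11.0 / 2.0 + nf / 3.0
    return pi / (-beta1 / 2.0 * log(Q**2 / Lam**2))

print(alpha_s_one_loop(91.19))    # ~ 0.136 at the Z mass
print(alpha_s_one_loop(1.8))      # ~ 0.39: the coupling grows at low scales
```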
The preferred value is $`\mathrm{\Lambda }_{QCD}=219_{-23}^{+25}`$ MeV (5 flavours, $`\overline{MS}`$-scheme). The meaning of the scale-dependent, renormalized coupling constant is roughly the following: it is the “effective” coupling, relevant at the scale $`Q`$, namely, the one that (within the chosen scheme) minimizes further quantum corrections, in particular resumming all large logs. From (1) we see that precisely at the scale $`Q^2=\mathrm{\Lambda }_{QCD}^2`$, the effective coupling has a pole. Of course well before that scale is reached perturbation theory becomes completely unreliable, and the $`1/r`$ potential of the perturbative interaction changes to a stronger behaviour, possibly to a behaviour linear in $`r`$. When computing the production of any physical hadron (typically of mass of order a few $`\mathrm{\Lambda }_{QCD}`$), perturbation theory is completely useless. One instance where perturbative QCD can be applied is inclusive processes, provided that the characteristic momentum transfer is large enough. These will not be discussed in this lecture. The interested reader can look at the determination of $`\alpha _s`$ through $`R_{had}`$ and $`R_\tau `$, for instance in reference. A clear application of perturbative QCD is provided by deep inelastic scattering. The subject is over twenty years old and, by now, perturbative QCD has been tested to a high degree. Furthermore, the commissioning of HERA has opened a new kinematical region where it may be possible to study the onset of non-perturbative effects in a controlled fashion. The exploration of this region is a fascinating subject interesting in its own right. Due to the lack of time and space we have not included two sections that, in our view, should be in any general review of perturbative QCD and deep inelastic scattering. The first one concerns the so-called “spin of the proton” problem. Another topic that is not covered at all is the study of the photon structure functions. The list of references is very incomplete and those provided merely reflect personal tastes. ## 2 Logs in QCD Beyond tree level most Feynman diagrams are ultraviolet divergent. Take for instance one of the diagrams contributing at one loop to the gluon propagator. Neglecting external momenta, the integral over the momenta of the internal particles is of the form $$\int \frac{d^4k}{(2\pi )^4}\frac{k^\alpha k^\beta }{k^4}=\infty .$$ (2) To make sense of the theory and get a finite result we must introduce a cut-off $`\mathrm{\Lambda }`$ and counterterms. A possible method is to perform a subtraction at some $`q^2=\mu ^2`$. For instance, for the self-energy of the gluon propagator $$\mathrm{\Pi }(q^2)-\mathrm{\Pi }(\mu ^2)\equiv \mathrm{\Pi }_R(q^2)=\mathrm{finite}.$$ (3) Alternatively we can make sense of the integrals using dimensional regularization by continuing the dimensionality from 4 to $`n=4+2ϵ`$, $$\int \frac{d^4k}{(2\pi )^4}\to \int \frac{d^nk}{(2\pi )^n},$$ (4) and subtract just the poles in $`1/ϵ`$ (minimal subtraction, $`MS`$) or also the $`\gamma _E-\mathrm{log}4\pi `$ that always accompanies the singularity in $`1/ϵ`$ (improved minimal subtraction, $`\overline{MS}`$).
For instance, for the quark contribution to the gluon self-energy, one has the following result after computing the integral in $`n=4+2ϵ`$ dimensions $$\mathrm{\Pi }(q^2)=\frac{\alpha _s}{6\pi }\delta _{ab}(\frac{1}{ϵ}+\gamma _E+\mathrm{log}\frac{m^2}{4\pi \mu ^2}+\mathrm{\cdots }).$$ (5) Using the $`MS`$ and $`\overline{MS}`$ schemes one gets $$\mathrm{\Pi }_{MS}(q^2)=\frac{\alpha _s}{6\pi }\delta _{ab}(\gamma _E+\mathrm{log}\frac{m^2}{4\pi \mu ^2}+\mathrm{\cdots }),\qquad \mathrm{\Pi }_{\overline{MS}}(q^2)=\frac{\alpha _s}{6\pi }\delta _{ab}(\mathrm{log}\frac{m^2}{\mu ^2}+\mathrm{\cdots })$$ (6) The above expressions illustrate the appearance of ultraviolet logs $`\mathrm{log}q^2/\mu ^2`$ through the renormalization procedure. There are, however, other sources of logs in QCD. They are of the form $`\mathrm{log}q^2/\lambda ^2`$ where $`\lambda ^2`$ is some external momentum squared or a small (mass)² that we have given by hand to the (massless) gluon. While the former are associated with ultraviolet divergent integrals (integrals with a bad behaviour when the internal momentum is large), the latter are infrared logs and are related to Feynman diagrams with a bad behaviour when one or more external momenta vanish. Ultraviolet logs appear in any renormalizable field theory after renormalization. On the contrary, infrared logs appear whenever a theory has massless particles in the spectrum (such as photons or gluons). A given Feynman diagram can give rise to both types of singularities at the same time. There are actually two classes of infrared logs caused by massless particles. The so-called infrared divergences arise from the presence of a soft massless particle ($`k^\mu \to 0`$). For instance in the process $`e^+e^{-}\to \mu ^+\mu ^{-}`$ at the one loop level we have to compute the integral (figure 1) $$\int \frac{d^4k}{(2\pi )^4}\frac{1}{k^2[(p_1+k)^2-m^2][(p_2+k)^2-m^2]}.$$ (7) When $`p_1^2=p_2^2=m^2`$ the integral behaves for $`k^\mu \to 0`$ as $$\int \frac{d^4k}{(2\pi )^4}\frac{1}{k^4}.$$ (8) and diverges. This divergence is unphysical so it must be cancelled by something else. The Bloch-Nordsieck theorem states that in inclusive enough cross-sections the infrared logs cancel. What do we mean by ‘inclusive enough’? A detector will not be able to discern a ‘true’ muon from a muon accompanied by a soft enough photon (with $`\vec{k}\to 0`$). Therefore, we have to consider diagrams where a soft photon is radiated by the muon, square the modulus of the amplitude and integrate over the available phase space (which actually depends on the experimental cut). When this is done the result is infrared finite. The relevant diagrams are depicted in figure 2. The other type of infrared logs are called mass singularities. They occur in theories with massless particles because two parallel massless particles have an invariant mass equal to zero $$k^2=(k_1+k_2)^2=(\omega _1+\omega _2,0,0,\omega _1+\omega _2)^2=0.$$ (9) The appearance of such a mass singularity is illustrated in figure 3 $$\frac{1}{(p-k)^2}=\frac{1}{p^2+k^2-2k^0p^0+2k^0p^0\mathrm{cos}\theta },$$ (10) the denominator vanishes when we set all particles on shell ($`p^2=k^2=0`$) and $`\theta \to 0`$ (i.e. $`\vec{k}`$ is parallel to $`\vec{p}`$). Even if one of the two particles is massive there is a singularity, provided the 3-momenta are parallel. The Kinoshita-Lee-Nauenberg theorem ensures that for inclusive enough cross sections the mass singularities also cancel.
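As a toy numerical illustration of such a mass singularity (a sketch, not a full QED or QCD calculation), the snippet below integrates the collinear factor $`\mathrm{sin}\theta /(1-\mathrm{cos}\theta )`$ down to an angular resolution $`\delta `$ and compares with the analytic answer $`\mathrm{log}(2/(1-\mathrm{cos}\delta ))`$, which grows like $`-2\mathrm{log}\delta `$ as $`\delta \to 0`$; the function name is ours.

```python
from math import log, cos, sin, pi

def collinear_log(delta, n=100000):
    """Midpoint-rule integral of sin(t)/(1-cos(t)) over delta < t < pi."""
    h = (pi - delta) / n
    return sum(sin(delta + (i + 0.5) * h) / (1.0 - cos(delta + (i + 0.5) * h))
               for i in range(n)) * h

for delta in (0.1, 0.01, 0.001):
    print(delta, collinear_log(delta), log(2.0 / (1.0 - cos(delta))))
```

The logarithm diverges only as the resolution cut is removed, which is the trade-off made precise next.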
Both for mass singularities and for infrared divergences there is a trade-off between $`\lambda ^2`$, the infrared regulator of a massless particle, and the energy and angle resolution of the inclusive cross section $`\mathrm{\Delta }E`$, $`\mathrm{\Delta }\theta `$. In practice, it is better to regulate the infrared logs using dimensional regularization (introducing $`\lambda ^2`$ leads to difficulties with gauge invariance). Real gluon emission diagrams are regulated by performing the phase space integration in $`n`$ dimensions. There is in fact a lot of physical insight hidden in the infrared logs. Physical arguments tell us that the probability of finding a ‘bare’ isolated muon should be zero. We know this because detectors are unable to tell apart a muon from a muon plus one soft photon or indeed from a muon plus any number of soft photons. Infrared divergences in QED can be summed up and then one sees that the probability of finding an isolated muon is indeed zero and not infinite as the one loop diagram led us to believe. Whenever a Feynman diagram is infrared divergent it means that we have forgotten something relevant. Let us consider in QED the interaction of a charged fermion with an external source and let us expand in the number of virtual photons $`n`$ (i.e. in the number of loops). The total amplitude will be expressed as $$M(p,p^{\prime })=\sum _{n=0}^{\infty }M_n(p,p^{\prime })$$ (11) then a calculation shows that $$M_0=m_0,\qquad M_1=m_0\alpha B+m_1,\qquad M_2=m_0\frac{(\alpha B)^2}{2}+m_1\alpha B+m_2,\qquad \mathrm{\dots }$$ (12) The quantities $`m_n`$ are IR-finite, while $`B`$ is IR-divergent. The series in (11) can be summed up $$M=\mathrm{exp}(\alpha B)\sum _{n=0}^{\infty }m_n,\qquad m_n\sim \alpha ^n,$$ (13) and $`B`$ can be obtained just from the lowest order diagram. Introducing an IR cut-off $`\lambda `$, $`B\sim -\mathrm{log}(m^2/\lambda ^2)`$, which indeed shows that when we remove the cut-off the probability of finding an isolated charged fermion is zero in QED. The addition of soft photons changes that result, multiplying the total amplitude by a factor $`(\mathrm{\Delta }E/\lambda )^2`$. There is a trade-off between the infrared regulator and $`\mathrm{\Delta }E,\mathrm{\Delta }\theta `$. The latter are, of course, detector-dependent quantities. Although only partial results exist, it is believed that a similar exponentiation takes place in QCD. Due to the confinement subtleties it is unclear whether the suppression factor is compensated by radiation of soft gluons. Even if this compensation does actually take place, it would not disprove confinement; it would only mean that confinement has nothing to do with the structure of the infrared singularities of the theory. The previous discussion can be summarized in the following way: due to IR singularities one is forced to consider cross sections not of individual particles in the final state, but rather of bunches of particles, each ‘hard’ quark and gluon surrounded by a ‘soft’ cloud of gluons and, perhaps, quarks. These bunches are called ‘jets’. The Bloch-Nordsieck and Kinoshita-Lee-Nauenberg theorems guarantee the finiteness of the cross-sections. We have to define an energy and angle resolution. For instance, if $`p`$ is the momentum of a primary quark we can impose that the energy of each soft particle in its jet satisfies $`k_i^0<ϵp_0`$ and also that the angle between $`\vec{p}`$ and $`\vec{k}_i`$ is smaller than $`\delta `$.
We will get singularities of the form $`\alpha _s\mathrm{log}ϵ\mathrm{log}\delta `$ when $`ϵ,\delta \to 0`$. The specific details depend on the precise definition of the jet. ## 3 Free Parton Model The counterpart of having an effective coupling constant which grows at low energies is that the theory becomes simple at high energies, making perturbative calculations possible (at least some of them). Indeed a brilliant confirmation of the existence of nearly free constituents inside the nucleon was provided more than twenty years ago by a series of experiments carried out at SLAC. Then it became possible to scatter electrons off nucleons in fixed target experiments with a typical momentum transfer of 1–10 GeV², a kinematical range unexplored until that time. The kinematics of Deep Inelastic Scattering (DIS) processes is shown in fig. 4. The virtual intermediate boson is far off its mass-shell and scatters off a quark or gluon in a time of $`𝒪(1/\sqrt{-q^2})`$. Typically quarks and gluons are themselves off-shell by an amount of $`𝒪(\mathrm{\Lambda }_{QCD})`$. After the scattering the outgoing particles recombine into hadrons in a time of $`𝒪(1/\mathrm{\Lambda }_{QCD})`$. Thus deep inelastic scattering is a two-step process * Short distance scattering occurs with a large momentum transfer. Well described by perturbation theory. * Outgoing particles recombine. Not calculable in perturbation theory. However, the second part can be side-stepped altogether for fully inclusive rates. Then perturbation theory is adequate to describe many features of DIS. If we place ourselves in the center of mass of the hadron and virtual intermediate boson, both particles move very fast towards each other. Whatever components the hadron contains, they will all have momenta parallel to $`P^\mu `$, up to transversal motion of $`𝒪(\mathrm{\Lambda }_{QCD})`$. Let us write $$p^\mu =xP^\mu .$$ (14) The squared CM energy of the lepton and proton constituent will be $$\widehat{s}=(xP+k)^2\simeq 2xP\cdot k\simeq xs.$$ (15) We neglect masses (as well as the fact that constituents are off-shell by $`𝒪(\mathrm{\Lambda }_{QCD})`$). The final momentum of the constituent is $`xP+q`$. Therefore $$0\simeq (xP+q)^2\simeq 2xP\cdot q+q^2,$$ (16) so $`x=-q^2/2P\cdot q`$. If $`\nu `$ is the energy transfer in the LAB system, we can also write $$x=\frac{-q^2}{2\nu m_N}$$ (17) $`m_N`$ being the nucleon mass. It is convenient to introduce $$y=\frac{P\cdot q}{P\cdot k}=1-\frac{P\cdot k^{\prime }}{P\cdot k}$$ (18) In the lab frame $`y=\nu /E`$ and $`0\le y\le 1`$. $`y`$ is thus the relative energy loss of the colliding lepton. Let us for the time being ignore altogether QCD interactions and let us assume that constituents of the nucleons (which we will call partons) are free. DIS will then be described by an incoherent sum over elementary processes. The partonic differential cross sections in the LAB frame will be * $`\nu q`$, $`\overline{\nu }q`$-scattering $$\frac{d\widehat{\sigma }_\nu }{dy}=(\frac{g^2}{4\pi })^2\frac{\pi mE}{(q^2-M_W^2)^2}[g_L^2+g_R^2(1-y)^2],$$ (19) $$\frac{d\widehat{\sigma }_{\overline{\nu }}}{dy}=(\frac{g^2}{4\pi })^2\frac{\pi mE}{(q^2-M_W^2)^2}[g_R^2+g_L^2(1-y)^2].$$ (20) * $`eq`$-scattering $$\frac{d\widehat{\sigma }_e}{dy}=Q^2\frac{4\pi \alpha ^2mE}{q^4}[1+(1-y)^2].$$ (21) The neutral current sector is dominated by $`\gamma `$ exchange below $`q^2=M_Z^2`$, so we have not bothered to include $`Z`$ exchange. In (21) $`Q`$ is the quark electric charge (in units of $`e`$) and $`m`$ is the target mass. Since $`p^\mu =xP^\mu `$, we just take $`m=xm_N`$.
Then, for instance, $$\frac{d^2\widehat{\sigma }_e}{dxdy}=Q^2\frac{4\pi \alpha ^2xm_NE}{q^4}[1+(1-y)^2].$$ (22) Let $`u(x)dx,d(x)dx,\mathrm{\dots }`$ be the number of $`u,d,\mathrm{\dots }`$ quarks with momentum fraction between $`x`$ and $`x+dx`$ in a nucleon. Then $`xu(x),xd(x),\mathrm{\dots }`$ will be the fraction of the nucleon momentum carried by $`u,d,\mathrm{\dots }`$ quarks. We, of course, identify quarks with partons and, since we assume that they are free, proceed to sum incoherently over the different scattering possibilities. For instance in $`ep\to eX`$ $$\frac{d^2\sigma }{dxdy}=\frac{2\pi \alpha ^2}{s}\frac{1+(1-y)^2}{xy^2}\left[\frac{4}{9}(u(x)+\overline{u}(x))+\frac{1}{9}(d(x)+\overline{d}(x))+\frac{1}{9}(s(x)+\overline{s}(x))\right].$$ (23) (We neglect here the possible contribution from the sea of heavy quarks in the nucleon.) Other DIS processes weight quarks and antiquarks differently. For instance, in $`\nu p\to \mu X`$ if $`|q^2|\ll M_W^2`$ we have $$\frac{d^2\sigma }{dxdy}=x\frac{G_F^2s}{\pi }[c_c^2d(x)+s_c^2s(x)+\overline{u}(x)(1-y)^2],$$ (24) with $`c_c=\mathrm{cos}\theta _c`$, $`s_c=\mathrm{sin}\theta _c`$, the cosine and sine of the Cabibbo angle, respectively. The parton distribution functions (PDF) $`q(x)`$ are quantities which are not calculable within perturbative QCD, as we will see. Probably the first thing that one learns is that gluons are very important. From the SLAC-MIT data $$Q=U+D+S=\int _0^1dx\,x(u(x)+d(x)+s(x))\simeq 0.44,$$ (25) $$\overline{Q}=\overline{U}+\overline{D}+\overline{S}=\int _0^1dx\,x(\overline{u}(x)+\overline{d}(x)+\overline{s}(x))\simeq 0.07.$$ (26) The total fraction of momentum carried by quarks (and antiquarks) is only about 50%. The rest is carried by gluons (parametrized by a PDF $`g(x)`$), showing that although the naive quark model works very well, it is just a gross simplification as a model of hadrons, at least at large $`q^2`$. In fact we know the asymptotic values of (25) and (26) based on the equipartition of energy in a free theory (since, asymptotically, QCD is free) $$\int _0^1dx\,xq(x)\to \frac{3N_f}{16+3N_f},\qquad \int _0^1dx\,xg(x)\to \frac{16}{16+3N_f}.$$ (27) From the above limiting values we see that at higher energies the total momentum carried by constituent or valence quarks diminishes and that an equally important role is played by particles from the Dirac sea of the nucleon. Another example where the quark model fails to describe some basic features of hadrons is provided by the ‘spin of the proton’ problem. $`\mu `$-scattering on polarized targets shows that the fraction of the total spin of the proton that can naively be associated to constituent quarks is surprisingly small. We shall not dwell on this matter further here. Nevertheless, there are some obvious sum rules for the parton distribution functions which can ultimately be explained in terms of the quark model. For the proton $$\int _0^1dx\,(u(x)-\overline{u}(x))=2,$$ (28) $$\int _0^1dx\,(d(x)-\overline{d}(x))=1,$$ (29) $$\int _0^1dx\,(s(x)-\overline{s}(x))=0.$$ (30) On QCD grounds we expect that this free parton model description of the hadrons becomes more and more accurate when $`q^2\to \mathrm{\infty }`$, $`\nu \to \mathrm{\infty }`$, while keeping $`x`$ fixed. This limit is known as Bjorken scaling and in the strict $`q^2=\mathrm{\infty }`$ limit everything depends just on $`x`$. Let us now try to rederive the previous results in a more theoretical setting. Let us consider for instance $`\nu p`$ scattering.
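The asymptotic fractions in (27) are easy to tabulate; a one-line check for a few values of $`N_f`$:

```python
def asymptotic_momentum_fractions(n_f):
    """Asymptotic (q^2 -> infinity) momentum fractions of eq. (27)."""
    quarks = 3.0 * n_f / (16.0 + 3.0 * n_f)
    gluons = 16.0 / (16.0 + 3.0 * n_f)
    return quarks, gluons

for n_f in (3, 4, 5):
    q, g = asymptotic_momentum_fractions(n_f)
    print(f"N_f = {n_f}: quarks {q:.3f}, gluons {g:.3f}")
```

For $`N_f=3`$ this gives 0.36 for quarks and 0.64 for gluons; the limit is approached only logarithmically, so the measured values quoted above are not far off.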
Then $$\frac{d^2\sigma }{d(q^2)d\nu }=\frac{G_F^2m_N}{\pi s^2}L^{\mu \nu }H_{\mu \nu },$$ (31) where $$L^{\mu \nu }=\frac{1}{8}\mathrm{Tr}[\gamma ^\mu (1-\gamma _5)\gamma ^\alpha \gamma ^\nu (1-\gamma _5)\gamma ^\beta ]k_\alpha k_\beta ^{\prime }$$ (32) is the trace over the leptonic external lines, and $`H_{\mu \nu }`$ is given by $$\sum _X\langle P|J_\mu (0)|X(P^{\prime })\rangle \langle X(P^{\prime })|J_\nu (0)|P\rangle =\int d^4z\,e^{iqz}\langle P|J_\mu (z)J_\nu (0)|P\rangle ,$$ (33) which is just $`\mathrm{Im}\mathrm{\Pi }_{\mu \nu }(q)`$, with $$\mathrm{\Pi }_{\mu \nu }(q)=\int d^4z\,e^{iqz}\langle P|TJ_\mu (z)J_\nu (0)|P\rangle .$$ (34) We decompose $`H_{\mu \nu }`$ as $$H_{\mu \nu }=-g_{\mu \nu }F_1+\frac{P_\mu P_\nu }{\nu m_N}F_2+\frac{i}{2\nu m_N}ϵ_{\mu \nu \rho \sigma }P^\rho q^\sigma F_3$$ (35) (If we work with unpolarized targets, $`P`$ and $`q`$ are the only vectors at our disposal.) $`F_1`$, $`F_2`$ and $`F_3`$ are called the nucleon structure functions. Using the kinematical relations $`x=-q^2/2\nu m_N`$ and $`y=2m_N\nu /s`$ we get $$d(q^2)d\nu =\nu s\,dxdy,$$ (36) $$\frac{d^2\sigma }{dxdy}=\frac{G_F^2s}{2\pi }\left[F_1xy^2+F_2(1-y)+F_3xy\left(1-\frac{y}{2}\right)\right].$$ (37) Let us now compare with the free parton model. We see that (restoring the $`\nu p`$ index, to make apparent that the structure functions are process dependent) $$F_1^{\nu p}(x)=c_c^2(\overline{u}(x)+d(x))+s_c^2(s(x)+\overline{u}(x)),$$ (38) $$F_2^{\nu p}(x)=2xc_c^2(\overline{u}(x)+d(x))+2xs_c^2(s(x)+\overline{u}(x)),$$ (39) $$F_3^{\nu p}(x)=2c_c^2(d(x)-\overline{u}(x))+2s_c^2(s(x)-\overline{u}(x)).$$ (40) For other processes the actual expressions may vary but the structure functions are always linear combinations of the parton distribution functions, i.e. $`F_2(x)=x\sum _f\delta _fq_f(x)`$, etc. Note that in the free parton model $$F_L(x)=F_2(x)-2xF_1(x)=0.$$ (41) This is the Callan-Gross relation, which actually is not an exact one; it gets modified when the $`q^2`$ dependence is included, i.e. when we depart from the strict $`q^2=\mathrm{\infty }`$ limit. An exact sum rule, which is easily expressed in terms of the structure function $`F_2(x)`$, was given by Adler $$\int _0^1\frac{dx}{x}(F_2^{\overline{\nu }I}-F_2^{\nu I})=4I_3$$ (42) where $`I_3`$ is the third component of the isospin $`I`$ of the target. Other sum rules are not exact, except in the strict free parton model, but their violations are computable within perturbative QCD. Two instances are the Gross-Llewellyn-Smith sum rule $$\frac{1}{2}\int _0^1dx\,(F_3^{\nu p}+F_3^{\overline{\nu }p})=\int _0^1dx\,(u(x)-\overline{u}(x)+d(x)-\overline{d}(x))+𝒪(\alpha _s)=3+𝒪(\alpha _s),$$ (43) and the Gottfried sum rule $$\int _0^1\frac{dx}{x}(F_2^{\mu p}-F_2^{\mu n})=\frac{1}{3}+𝒪(\alpha _s)+\frac{2}{3}\int _0^1dx\,(\overline{u}(x)-\overline{d}(x)).$$ (44) Violations of the different sum rules are a theoretically clean way to extract $`\alpha _s`$. Of course in practice things are difficult because the sum rules involve an integral over all values of $`x`$, and the integrand is always poorly known in some range, so extrapolations are needed. If we assume that $`\alpha _s`$ is extracted from some other source, the Gottfried sum rule provides some interesting information on the sea contents of light antiquarks in the nucleon. The NMC collaboration has determined that for the proton $$\overline{U}-\overline{D}\equiv \int _0^1dx\,(\overline{u}(x)-\overline{d}(x))=-0.15\pm 0.036.$$ (45) In the proton quark sea there are many more $`d`$-type antiquarks than $`u`$-type.
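Inserting the NMC value (45) into the Gottfried sum rule (44), with the $`2/3`$ normalization of the sea term written there and neglecting the $`𝒪(\alpha _s)`$ piece, gives the worked estimate
$$\int _0^1\frac{dx}{x}(F_2^{\mu p}-F_2^{\mu n})\approx \frac{1}{3}+\frac{2}{3}\times (-0.15)\approx 0.23,$$
well below the naive quark-model expectation of $`1/3`$.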
This is in fact a large isospin violation, much larger than expected from the mass difference of the quarks, which, at least naively, goes in the opposite direction. The isospin violation is larger at low values of $`x`$ ($`x\lesssim 0.2`$) and also at low values of $`Q^2`$, which hints that long-distance physics (quite remote from perturbative QCD) is called for. The enhancement of $`d`$-type antiquarks is confirmed by other experiments. For instance NA51 finds at $`x=0.18`$ $`\overline{u}/\overline{d}=0.51\pm 0.09`$, while NuSea finds at $`Q^2=7.4`$ GeV<sup>2</sup> that $`\overline{U}-\overline{D}=-0.10\pm 0.024`$. Could this be due to the exclusion principle, as illustrated in figure 5, which makes it harder for $`u`$-type antiquarks to appear in the sea? It could be, but it is very difficult to come up with quantitative results. A partial understanding is provided by the chiral quark model, including pion exchange (see figure 5), but this type of physics is still very poorly understood.

## 4 Scaling Violations

It is quite clear from the data that there is some $`Q^2=-q^2`$ dependence in the structure functions. In other words, there are violations of Bjorken scaling and actually $`F_f=F_f(x,Q^2)`$. The free parton model is not completely correct (no big surprise, of course). Our job is to try to understand these violations in the framework of QCD. Let us assume that we have isolated a parton with initial momentum $`p=xP`$. The probability of finding such a parton is given by $`q(x)`$. At the parton level the structure function $`F_2`$ is just $`\widehat{F}_2=x\delta _f`$ ($`\delta _f`$ is the appropriate charge). At the proton level, however, this partonic cross-section has to be multiplied by the probability of finding the parton with momentum fraction $`x`$, i.e. by $`q(x)`$. We write this in the form $$F_2(x)=x\delta _f\int _0^1d\xi \,q(\xi )\delta (x-\xi )=x\delta _f\int _0^1\frac{d\xi }{\xi }q(\xi )\delta \left(1-\frac{x}{\xi }\right).$$ (46) At $`𝒪(\alpha _s)`$ many diagrams contribute. They are given in figure 6. We are looking for scaling violations and, therefore, we must look for logs. In other words we must investigate ultraviolet, infrared and mass singularities of any kind. It turns out that ultraviolet singularities are proportional to the free result and simply renormalize $`\delta _f`$. Infrared divergences cancel amongst all diagrams. Only the mass singularity present in diagram (d) when the momentum of the gluon is parallel to that of the quark survives. Keeping only logarithmic terms, the calculation at $`𝒪(\alpha _s)`$ amounts to the replacement $$\delta \left(1-\frac{x}{\xi }\right)\to \delta \left(1-\frac{x}{\xi }\right)+\frac{\alpha _s}{2\pi }P\left(\frac{x}{\xi }\right)\mathrm{log}\frac{Q^2}{\lambda ^2},$$ (47) where $`\lambda ^2`$ is an infrared regulator and $$P(z)=C_F\left[\frac{1+z^2}{(1-z)_+}+\frac{3}{2}\delta (1-z)\right].$$ (48) Only the logarithmic term is retained for this discussion. Now we understand the reason for writing things in apparently such a complicated way. First of all, scaling violations appear through the contribution of real soft particles. $`\xi `$ is the original momentum fraction of the nucleon carried by the parton, which is reduced to $`x\le \xi `$ after the emission of the soft gluon. The hard scattering takes place with the parton carrying fraction $`x`$. All along the discussion, the transverse motion of the partons inside the target is neglected, as are all masses.
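The $`(1-z)_+`$ distribution in (48) is the only delicate ingredient if one wants to evaluate such convolutions numerically. A minimal sketch of the standard subtraction trick (Python, midpoint rule, toy input distribution):

```python
import numpy as np

C_F = 4.0 / 3.0

def P_plus_convolution(f, x, n=2000):
    """Evaluate (P (x) f)(x) = int_x^1 dz P(z) f(x/z)/z for the LO kernel
    (48), P(z) = C_F[(1+z^2)/(1-z)_+ + (3/2) delta(1-z)], using the
    plus-prescription subtraction int g/(1-z)_+ = int (g(z)-g(1))/(1-z)."""
    z = x + (1.0 - x) * (np.arange(n) + 0.5) / n      # midpoints on (x, 1)
    G = (1.0 + z**2) * f(x / z) / z                   # regular part of integrand
    G1 = 2.0 * f(x)                                   # its z -> 1 limit
    integral = np.sum((G - G1) / (1.0 - z)) * (1.0 - x) / n
    integral += G1 * np.log(1.0 - x)                  # from -G(1)/(1-z) on (0, x)
    return C_F * (integral + 1.5 * f(x))              # delta(1-z) end-point term

# Example: a toy valence-like distribution f(u) = u^(-1/2) (1-u)^3
print(P_plus_convolution(lambda u: u**-0.5 * (1.0 - u)**3, 0.3))
```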
Then $$F_2(x)=x\delta _f\left[q_0(x)+\int _x^1\frac{d\xi }{\xi }q_0(\xi )\frac{\alpha _s}{2\pi }P\left(\frac{x}{\xi }\right)\mathrm{log}\frac{Q^2}{\lambda ^2}\right].$$ (49) In addition we have replaced $`q(x)`$ by $`q_0(x)`$, the bare PDF. If we now define the renormalized PDF by $$q(x,\mu ^2)=q_0(x)+\frac{\alpha _s}{2\pi }\int _x^1\frac{d\xi }{\xi }q_0(\xi )P\left(\frac{x}{\xi }\right)\mathrm{log}\frac{\mu ^2}{\lambda ^2},$$ (50) we can write $$F_2(x,Q^2)=x\delta _f\left[q(x,\mu ^2)+\int _x^1\frac{d\xi }{\xi }q(\xi ,\mu ^2)\frac{\alpha _s}{2\pi }P\left(\frac{x}{\xi }\right)\mathrm{log}\frac{Q^2}{\mu ^2}\right].$$ (51) No doubt the similarity with the usual renormalization process did not go unnoticed. Now the infrared regulator has been eliminated (hidden in the bare PDF) at the expense of introducing a renormalization-scale dependence. These are the sought after scaling violations.

## 5 Altarelli-Parisi Equations and $`\mathrm{\Lambda }_{QCD}`$

At this point it is convenient to introduce the variable $`t=\frac{1}{2}\mathrm{log}\mu ^2/\mathrm{\Lambda }_{QCD}^2`$. It then follows from (50) that $$\frac{\partial }{\partial t}q(x,t)=\frac{\alpha _s(t)}{\pi }\int _x^1\frac{d\xi }{\xi }q(\xi ,t)P\left(\frac{x}{\xi }\right),$$ (52) which immediately translates into differential equations for the structure functions themselves. These are the Altarelli-Parisi equations. They summarize the rate of change of the parton distribution functions with $`t`$. We define the moments of the PDF’s by $$q(n,t)=\int _0^1dx\,x^{n-1}q(x,t).$$ (53) Introducing the anomalous dimension $`\gamma _n`$ as $$\gamma _n=\int _0^1dx\,x^{n-1}P(x),$$ (54) the convolution over the fractional momentum $`\xi `$ transforms into a product $$\frac{\partial }{\partial t}q(n,t)=\frac{\alpha _s(t)}{\pi }\gamma _nq(n,t).$$ (55) This leads to the following scaling behaviour for the moments of the structure functions $$F_2(n,Q^2)=F_2(n,Q_0^2)\left(\frac{\alpha _s(Q_0)}{\alpha _s(Q)}\right)^{\frac{\gamma _n}{\beta _1}},$$ (56) which is our final expression. Experiments agree on the whole very nicely with the scaling violations predicted by QCD. Taking into account all the subtle points of Quantum Field Theory that have gone into the analysis, this provides a beautiful check of the theoretical framework. We have been considering $`F_2`$, but the same procedure can be repeated for any structure function. The expression (56) amounts to resumming the leading logs obtained by iteration of soft collinear gluons. The diagram is the one shown in figure 7. As a simplifying hypothesis we have neglected mixing. In fact, the evolution equation is a $`(2N_f+1)\times (2N_f+1)`$ matrix equation, involving quarks and gluons. In the flavour singlet case life is more complicated; there is mixing with gluon operators and therefore one must also consider gluon parton distribution functions $$\frac{\partial q(x,t)}{\partial t}=\frac{\alpha _s(t)}{\pi }\int _x^1\frac{dy}{y}\left[q(y,t)P_{qq}\left(\frac{x}{y}\right)+g(y,t)P_{gq}\left(\frac{x}{y}\right)\right]$$ (57) $$\frac{\partial g(x,t)}{\partial t}=\frac{\alpha _s(t)}{\pi }\int _x^1\frac{dy}{y}\left[g(y,t)P_{gg}\left(\frac{x}{y}\right)+q(y,t)P_{qg}\left(\frac{x}{y}\right)\right]$$ (58) The detailed form of the Altarelli-Parisi kernels at leading and next-to-leading order can be found in . No complete calculation exists yet at the NNLO to my knowledge, just some partial results. It is important to realize that the Altarelli-Parisi equations are not exact. They take into account the perturbative contribution only (and this up to a given order in perturbation theory). They also neglect transverse motion, which leads to corrections of $`𝒪(\mathrm{\Lambda }_{QCD}^2/Q^2)`$ to the leading results.
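A sketch of the moment evolution (56) with a one-loop running coupling; the value of $`\mathrm{\Lambda }_{QCD}`$ is illustrative, and $`\gamma _n`$, $`\beta _1`$ must be supplied in the conventions of the text:

```python
import math

def alpha_s(Q2, Lambda2=0.04, n_f=4):
    """One-loop running coupling; Lambda2 = Lambda_QCD^2 in GeV^2
    (an illustrative value corresponding to ~200 MeV)."""
    b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(Q2 / Lambda2))

def evolve_moment(F2_n_at_Q02, Q02, Q2, gamma_n, beta1):
    """Leading-order moment evolution, eq. (56); gamma_n and beta1 follow
    the sign conventions of the text."""
    return F2_n_at_Q02 * (alpha_s(Q02) / alpha_s(Q2)) ** (gamma_n / beta1)

# Example: evolve a unit moment from Q0^2 = 4 GeV^2 up to Q^2 = 100 GeV^2
print(evolve_moment(1.0, 4.0, 100.0, gamma_n=-1.78, beta1=1.0))
```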
These are more easily dealt with in the perhaps more rigorous (but more cumbersome) treatment based on the Operator Product Expansion, which will not be discussed here. Target mass corrections should equally be taken into account. They are particularly important near thresholds (such as the charm and bottom thresholds). For instance the proper treatment of thresholds is highly relevant for HERA, since a big chunk of the data comes from a region close to these thresholds. And, of course, mass corrections are important for the charm PDF from the sea, which has actually been measured recently. The analysis of Deep Inelastic Scattering based on the Altarelli-Parisi equations (or, alternatively, on the Operator Product Expansion) has been one of the clearest tests of perturbative QCD and traditionally the best way of determining $`\alpha _s`$, which, as we have seen, enters in the scaling violations. However, at present the value of $`\alpha _s(M_Z)`$ extracted from Z-physics is equally accurate, if not more so. For some time a discrepancy was claimed between the value of $`\alpha _s`$ obtained from the analysis of scaling violations and the Z-pole value. For instance the value quoted by CCFR was $`\alpha _s(M_Z)=0.111\pm 0.004`$, well below the world average, and quite far, for instance, from measurements based on event shapes at the $`Z`$ peak ($`\alpha _s(M_Z)=0.122\pm 0.007`$; for measurements of $`\alpha _s`$ at LEP, see e.g. ). The average $`\alpha _s`$ from DIS was given just two years ago to be $`\alpha _s(M_Z)=0.113\pm 0.005`$. Theoretical speculations were fuelled and it was claimed that non-perturbative corrections could be larger than originally thought. Deep inelastic scattering data have been reanalyzed recently and the quoted value for $`\alpha _s(M_Z)`$ from DIS from a global fit to all data is $`\alpha _s(M_Z)=0.118\pm 0.005`$. The agreement with other determinations is now almost perfect. The discrepancy was apparently due, it is claimed, to the energy calibration of the detector (in the case of CCFR, at least; the new CCFR value is $`\alpha _s(M_Z)=0.119\pm 0.005`$), to a better understanding of higher twist corrections (not discussed here), and to the treatment of the contribution from heavy quarks from the sea, charm in particular. We will see later, when we discuss in somewhat more detail the form of the PDF, that one must actually make a number of hypotheses before being able to extract $`\alpha _s(M_Z)`$. While we do not regard the issue as totally settled yet, because some of the older data sets are very poorly described by the now preferred value of $`\alpha _s(M_Z)`$, the most recent data coming from HERA (H1 and ZEUS) give values which agree nicely with each other and in fact fall well in the high $`\alpha _s`$ range, almost on top of the new average value. These ongoing experiments will become statistically more and more significant in the determination of $`\alpha _s(M_Z)`$ in the near future as more and more data points pile up.

## 6 Parton Distribution Functions

We do not know in general how to compute the parton distribution functions, even for $`q^2\to \mathrm{\infty }`$. Only their evolution can be reliably computed, either through the Operator Product Expansion or the use of the Altarelli-Parisi equations, and this for large enough values of $`q^2`$. The scaling behaviour is governed by the anomalous dimensions.
At leading order they are $$\gamma _{qq}(j)=C_F\left[-\frac{1}{2}+\frac{1}{j(j+1)}-2\sum _{k=2}^j\frac{1}{k}\right],$$ (59) $$\gamma _{qg}(j)=T_R\left[\frac{2+j+j^2}{j(j+1)(j+2)}\right],$$ (60) $$\gamma _{gq}(j)=C_F\left[\frac{2+j+j^2}{j(j^2-1)}\right],$$ (61) $$\gamma _{gg}(j)=2C_A\left[-\frac{1}{12}+\frac{1}{j(j-1)}+\frac{1}{(j+1)(j+2)}-\sum _{k=2}^j\frac{1}{k}\right]-\frac{2N_f}{3}T_R,$$ (62) where $`C_F,T_R`$ and $`C_A`$ are group-theoretical factors ($`C_F=4/3`$, $`T_R=1/2`$ and $`C_A=3`$ in QCD). An interesting issue is the behaviour of the parton distribution functions at the endpoints $`x=0`$ and $`x=1`$. The large $`n`$ behaviour of the moments probes the $`x\to 1`$ region. Since it is natural to expect that at the kinematical boundaries the parton distribution functions vanish, one can make the following ansatz for $`x\to 1`$ $$q(x,Q^2)\simeq A(Q^2)(1-x)^{\nu (\alpha _s(Q^2))-1}.$$ (63) Demanding that eq. (63) fulfill the $`q^2`$ evolution equation leads to $$A(Q^2)=A_0\frac{[\alpha _s(Q^2)]^{d_0}}{\mathrm{\Gamma }(1+\nu (\alpha _s(Q^2)))},\qquad \nu (\alpha _s)=\nu _0-\frac{16}{33-2N_f}\mathrm{log}\alpha _s(Q^2),$$ (64) $$d_0=\frac{16}{33-2N_f}\left(\frac{3}{4}-\gamma _E\right).$$ (65) Likewise, for the gluons we have $$g(x,Q^2)\simeq A_0^{\prime }\frac{[\alpha _s(Q^2)]^{d_0}}{\mathrm{\Gamma }(2+\nu (\alpha _s(Q^2)))}\frac{(1-x)^{\nu (\alpha _s(Q^2))}}{\mathrm{log}(1-x)}.$$ (66) The constants $`A_0,A_0^{\prime }`$ and $`\nu _0`$ are not calculable in perturbative QCD and depend on the specific operator. $`d_0`$ is universal. When $`x\to 1`$ the gluon distribution functions approach zero more rapidly than the quark ones. For large values of $`x`$ the quark content of nucleons is the relevant one. Second order corrections to this asymptotic behaviour can be derived in a similar way and are known. It turns out that the correction becomes arbitrarily large if one gets sufficiently close to $`x=1`$. This is because the collinear gluon is, in addition, soft in that exceptional configuration, thus giving rise to a $`\mathrm{log}(1-x)`$ singularity. Multiple emission is then kinematically favoured, since the log overcomes the $`\alpha _s`$ suppression. For small values of $`x`$ the opposite behaviour takes place, and the gluon distribution function eventually becomes dominant. At the LHC the cross-section will be greatly dominated by low-$`x`$ physics and the important process there will be gluon-gluon scattering. At the current Tevatron run the quark content of protons and antiprotons is still dominant. Let us see why gluons dominate completely at low $`x`$. The key point is the appearance of the singularity at $`j=1`$. Indeed, as $`j\to 1`$ $$\gamma _{gg}(j)\simeq \frac{2N}{j-1}.$$ (67) Then $$g(j,t)=g(j,t_0)\mathrm{exp}\left[\frac{N}{\pi \beta _1(j-1)}\mathrm{log}\frac{t}{t_0}\right].$$ (68) Then we proceed to evaluate $`g(x,t)`$ by performing an inverse Mellin transform $$xg(x,t)=\frac{1}{2\pi i}\int _Cdj\,x^{1-j}g(j,t).$$ (69) The integration contour is a line in the direction of the imaginary axis in the complex $`j`$ plane, to the right of the $`j=1`$ singularity. The saddle point method can now be used provided that $`\mathrm{log}(1/x)`$ is large, and this is the reason why this procedure gives only the small $`x`$ behaviour. Working things out we see that the gluon parton distribution function for low $`x`$ behaves as $$g(x)\simeq \frac{1}{x}\mathrm{exp}\sqrt{C(Q^2)\mathrm{log}\frac{1}{x}},$$ (70) where $`C(Q^2)`$ is calculable. Unfortunately, this answer is not totally satisfactory because something must stop the growth of $`g(x)`$ at low $`x`$, or else one runs into unitarity problems sooner or later, and thus eq. (70) is not credible all the way to $`x=0`$.
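The leading-order moments (59)-(62) are simple enough to tabulate directly. The sketch below (integer moments only, group factors as above) also makes explicit the momentum sum rule and the $`j\to 1`$ pole of $`\gamma _{gg}`$ that drives the low-$`x`$ growth:

```python
C_F, C_A, T_R = 4.0 / 3.0, 3.0, 0.5     # SU(3) group factors

def hsum(j):
    """sum_{k=2}^{j} 1/k for integer j >= 2 (0 below that)."""
    return sum(1.0 / k for k in range(2, int(j) + 1))

def gamma_qq(j):
    return C_F * (-0.5 + 1.0 / (j * (j + 1)) - 2.0 * hsum(j))

def gamma_gq(j):
    return C_F * (2.0 + j + j * j) / (j * (j * j - 1.0))

def gamma_qg(j):
    return T_R * (2.0 + j + j * j) / (j * (j + 1.0) * (j + 2.0))

def gamma_gg(j, n_f=4):
    return (2.0 * C_A * (-1.0 / 12.0 + 1.0 / (j * (j - 1.0))
            + 1.0 / ((j + 1.0) * (j + 2.0)) - hsum(j))
            - 2.0 * n_f * T_R / 3.0)

# Momentum conservation at j = 2:
#   gamma_qq(2) + gamma_gq(2) = 0  and  2*n_f*gamma_qg(2) + gamma_gg(2) = 0
print(gamma_qq(2) + gamma_gq(2))
# The j -> 1 pole of gamma_gg, eq. (67):
for j in (1.1, 1.01, 1.001):
    print(j, gamma_gg(j))
```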
Technically speaking, there must be corrections that destabilize the saddle point solution. Physically, the uncontrolled growth of the gluon distribution is an infrared instability. The density of soft gluons is too large. Shadowing and non-linear evolution equations are the buzzwords here. The double scaling limit (high $`Q^2`$, low $`x`$) is well supported by the data. See for instance for a recent analysis coming from measurements of $`F_2`$ at H1. Except in these two limiting cases, PDF’s have to be parametrized. The way one proceeds is by proposing a given parametrization at some reference value $`Q_0^2`$, then evolving to all desired values of $`Q^2`$ using the Altarelli-Parisi equations, then performing a global fit of the parameters describing the PDF and, at the same time, determining $`\alpha _s`$.

## 7 Confinement

$`\mathrm{\Lambda }_{QCD}`$ sets a natural scale in the theory. Well above $`\mathrm{\Lambda }_{QCD}`$ perturbation theory makes sense. Of course perturbative QCD at large enough energies describes a world of quasi-free quarks, interacting with Coulomb-like forces. We know very well that hadronic physics is a very different world where quarks are confined into colorless hadrons. As soon as $`q^2\sim \mathrm{\Lambda }_{QCD}^2`$ perturbation theory is unreliable. It simply cannot explain confinement. What confinement means is that there is a force between quarks that does not decrease with distance. There is indeed phenomenological evidence (which is supported by lattice analysis) that the interquark potential at large distances in QCD is of the form $$V(r)\simeq a\mathrm{\Lambda }_{QCD}^2r-\frac{b}{r}+\mathrm{\dots }$$ (71) The first term is a confining quark potential. The constant $`a`$ has to be $`𝒪(1)`$ because $`\mathrm{\Lambda }_{QCD}^2`$ is the only dimensional quantity at our disposal. The Coulombic part is called the Lüscher term and plays a crucial role in heavy quark spectroscopy. At short distances the behaviour of the interquark potential is totally different. It is Coulomb-like $$V(r)\simeq -\frac{\alpha _s}{r}.$$ (72) Different quarks probe the different regimes. Indeed, because $`\alpha _s(m_t)`$ is so small (say $`\simeq 0.1`$), the corresponding Bohr radius is $`r_0\simeq 10^{-2}`$ fm, much smaller than $`\mathrm{\Lambda }_{QCD}^{-1}`$. The Coulombic part of the interquark potential largely dominates. (At such short distances the linearly rising potential is not at work; the leading confinement effects grow as $`r^3`$, as discussed by Leutwyler some time ago, but they can be safely neglected in a first approximation.) Bottom and charm are in a somewhat intermediate position. $`\alpha _s(m_b)`$ is still relatively small. The Bohr radius is $`\simeq 10^{-1}`$ fm, smaller than but comparable to $`\mathrm{\Lambda }_{QCD}^{-1}`$. Spectroscopy is basically perturbative, at least for the lowest levels, but some non-perturbative effects are visible. Charm is really no-man’s land. Both perturbative and non-perturbative effects compete even for the ground state $`n=1`$. For light quarks the Bohr radius is several fm and the confining potential is fully at work.
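These order-of-magnitude statements are easy to reproduce. A rough sketch, assuming the LO Coulomb potential only, a reduced mass $`m/2`$ and illustrative quark masses and couplings:

```python
HBARC = 0.1973      # hbar*c in GeV*fm
C_F = 4.0 / 3.0

def bohr_radius_fm(m_quark, alpha_s):
    """Q-Qbar 'Bohr radius' r_B = 2/(C_F*alpha_s*m) for a Coulombic
    potential V = -C_F*alpha_s/r and reduced mass m/2; m in GeV, r in fm."""
    return 2.0 / (C_F * alpha_s * m_quark) * HBARC

print(bohr_radius_fm(175.0, 0.10))   # toponium:    ~0.02 fm, i.e. O(1e-2) fm
print(bohr_radius_fm(4.8, 0.22))     # bottomonium: ~0.3 fm,  i.e. O(1e-1) fm
```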
The existence of a confining potential leads to very large multiplicities and jets. One can imagine a quark-antiquark pair being formed at the primary vertex and then moving apart. Part of their kinetic energy is deposited in the interquark potential as they move away. Very quickly a separation $`r_m`$ is reached where the energy deposited is enough to form a new quark-antiquark pair, $$\mathrm{\Lambda }_{QCD}^2r_m\simeq 2m_q,$$ (73) at that moment the quark-antiquark ‘string’ breaks and the process is repeated until the average relative momentum is small enough and hadronization takes place. There is a lot of physics in the string picture. We can think of color forces being confined in some sort of tube or string joining the two moving quarks. The chromodynamic energy is thus stored in a relatively small region of space-time. If this picture is correct we should expect hadronization to take place in this region in preference to any other. This is indeed the case; in three jet events (which originate from $`\overline{q}qg`$, with a hard gluon) there is a clear enhancement of soft gluon and hadron production in the regions between color lines (representing the gluon by a double color line, or $`\overline{q}q`$ state), and a relative depletion in other regions. This phenomenon is called color coherence.

## 8 Dual Models

We now backtrack in history, to the pre-QCD days, and recall that in the 60’s the duality hypothesis was much in fashion. The hypothesis stated that in strong interactions a sum over intermediate states in the $`s`$-channel should reproduce the sum over resonances in the $`t`$-channel. Mathematically, $$A(s,t)=\sum _J\frac{g_J^2s^J}{t-M_J^2}=\sum _J\frac{g_J^2t^J}{s-M_J^2}.$$ (74) Of course for this to have even a chance of being true an infinite number of intermediate states is required. It should be stated right away that the evidence for this peculiar property was (and still is) rather weak. However, in 1968, Veneziano took the idea seriously and proposed the following amplitude $$A(s,t)=\frac{\mathrm{\Gamma }(-\alpha (s))\mathrm{\Gamma }(-\alpha (t))}{\mathrm{\Gamma }(-\alpha (s)-\alpha (t))},\qquad \alpha (s)=\alpha (0)+\alpha ^{\prime }s.$$ (75) This amplitude is manifestly dual. In 1969 and 1970 Y. Nambu and others unveiled the relation between the Veneziano amplitude and open string theory. It was later generalized to closed strings by Koba and Nielsen. This of course is the way to make contact with the long-distance properties of QCD that we have discussed in the previous section. If in some kinematical regime QCD can be described by some type of string theory, an amplitude of the Veneziano type should describe strong interactions in a regime where perturbation theory is not valid. Unfortunately life is not so easy. First of all, consistency of string theory requires $`\alpha (0)=1`$ and then the Veneziano amplitude exhibits poles in the $`s`$-channel whenever $`s=(n-1)/\alpha ^{\prime }`$, $`n=0,1,\mathrm{\dots }`$. There is a tachyonic scalar/pseudoscalar particle and a massless vector particle. In addition, the amplitude does not exhibit the proper chiral behaviour (Adler zero), i.e. $`A(0,0)=0`$, if we are to interpret the pseudoscalars as pions. It is a complete phenomenological failure. Lovelace and Shapiro finally proposed an amplitude with the correct behaviour or, at least, one that is not manifestly incorrect. It is inspired by the theory of supersymmetric strings and the equivalent of the Veneziano amplitude now reads $$A(s,t)=\frac{\mathrm{\Gamma }(1-\alpha (s))\mathrm{\Gamma }(1-\alpha (t))}{\mathrm{\Gamma }(1-\alpha (s)-\alpha (t))},\qquad \alpha (s)=\alpha (0)+\alpha ^{\prime }s,$$ (76) where $`\alpha (0)`$ in principle also equals 1 here.
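The amplitude (75) can be explored numerically with nothing more than the Gamma function; in the sketch below the trajectory parameters are arbitrary illustrative choices ($`\alpha ^{\prime }=1`$, $`\alpha (0)=1`$), and the growth near the first $`s`$-channel pole is visible directly:

```python
from math import gamma

def alpha(s, alpha0=1.0, alpha_p=1.0):
    """Linear Regge trajectory alpha(s) = alpha(0) + alpha'*s (alpha' = 1)."""
    return alpha0 + alpha_p * s

def veneziano(s, t):
    """Veneziano amplitude (75): Gamma(-a(s))Gamma(-a(t))/Gamma(-a(s)-a(t))."""
    return gamma(-alpha(s)) * gamma(-alpha(t)) / gamma(-alpha(s) - alpha(t))

# Away from the poles at alpha(s) = 0, 1, 2, ... the amplitude is finite:
print(veneziano(-2.3, -1.7))
# Approaching the first s-channel pole (alpha(s) -> 0, i.e. s -> -1/alpha'):
for s in (-1.1, -1.01, -1.001):
    print(s, veneziano(s, -2.5))
```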
Again this amplitude (which is tachyon-free, the main virtue of the supersymmetric string) is not physically acceptable, because it does not have the correct Adler zero. However, it does reproduce the right chiral behaviour if one replaces the intercept by hand by 1/2, and it is then appropriate to describe pion scattering. Unfortunately this replacement cannot be derived at present from any known string theory. The corresponding trajectory is called the Regge trajectory and corresponds to the exchange of open strings and, as discussed, has an intercept $`\alpha _R=1/2`$. Physically this is interpreted as the exchange of quark-antiquark pairs. In addition there is a Pomeron trajectory which has an intercept $`\alpha _P=\alpha _R+1/2`$, and is due to the exchange of closed strings (interpreted as glueballs in QCD). Let us now return to DIS and see what all this has to do with it. Let us consider the behaviour of the amplitude $`A(s,t)`$ for large $`s`$ and fixed $`t`$ $$A(s,t)\simeq s^{\alpha ^{\prime }t+\alpha _0}.$$ (77) Let us now assume that the elastic nucleon-parton amplitude (figure 9) is described by such an amplitude. Kinematically, $`s=(P+k)^2`$. If we decompose $$k=xP-\frac{k_T^2}{2x}n+k_T,$$ (78) where $`n`$ is a vector such that $`n^2=0`$, $`nP=1`$ and $`k_Tn=k_TP=0`$, then, with the usual approximations, $$s=2kP=-\frac{k_T^2}{x}$$ (79) We see then that, at a finite value of $`k_T`$, low $`x`$ corresponds to the large $`s`$ behaviour. On the other hand, for this subprocess $`t=0`$, and we of course realize that the amplitude is directly related to the cross-section for parton + proton $`\to `$ anything. If the parton is a quark only the Reggeon will contribute. If there is mixing with partonic gluons we will have a contribution from both the Regge and Pomeron trajectories. After a short calculation we shall conclude that $$F_2(x)\simeq A_Px^0+A_Rx^{\frac{1}{2}}.$$ (80) And, therefore, that at low values of $`x`$, $$g(x)\sim x^{-1},\qquad q(x)\sim x^{-\frac{1}{2}}.$$ (81) These are the implications of Regge theory for the parton distribution functions. Notice that there is no $`Q^2`$ dependence anywhere, so the question poses itself as to which is the appropriate value of $`Q^2`$ to compare with. The answer is not very well defined, but it should correspond to the typical range of energies where Regge phenomenology is known to be valid in other contexts, i.e. a few GeV. In fact, a fit to the gluon PDF shows that at $`Q^2=4`$ GeV<sup>2</sup>, $`g(x)\sim x^{-1.17}`$. Not bad.

## 9 Low $`x`$ region

The region below $`10^{-2}`$ had not been explored experimentally until very recently; a first look at these low-$`x`$ values has been provided by the commissioning of HERA. HERA is a machine ideally suited for an in-depth analysis of structure functions. It should be possible to arrive at very low values of $`x`$ (down to $`x\simeq 10^{-5}`$). Most parametrizations have traditionally performed very poorly when extrapolated to the low $`x`$ region. Typically they predict an increase as $`x\to 0`$ which is lower than what is actually seen. The behaviour $`F_2(x)\sim x^{-\lambda }`$, with $`\lambda \simeq 1/2`$ as $`x\to 0`$, which is predicted from the BFKL evolution equation, seemed at some point (see e.g. ) to stand the comparison with HERA results best. However, this behaviour is still incompatible with unitarity and cannot hold all the way to $`x=0`$ either. In fact we know now that the predictions from BFKL cannot be trusted. This has prompted a renewed interest in trying to extract the behaviour at low $`x`$ from conventional Altarelli-Parisi evolution.
The consensus now seems to be that even for the low values of $`x`$ analyzed at HERA there is no real evidence of any results beyond ordinary perturbative QCD. It is easy to understand why perturbative QCD must fail at some point. The expansion of the splitting function $`P(z)`$ in powers of $`\alpha _s`$ at the NLO actually resums all terms of the form $`(\alpha _s\mathrm{log}Q^2)^n`$ and $`\alpha _s^n\mathrm{log}^{n-1}Q^2`$. Looking at (10) we see that the propagator causing the mass singularity is ($`p=\xi P`$) $$\frac{1}{2pk}=\frac{2x}{\xi k_T^2}.$$ (82) Apart from the parametric integrals, we have $$\int \frac{d^2k_T}{k_T^2}.$$ (83) This is the origin of the $`\mathrm{log}\lambda ^2`$ and, eventually, of the $`\mathrm{log}Q^2`$. The leading $`\mathrm{log}^nQ^2`$ will thus be produced by one single region of integration $$\int \frac{d^2k_T^n}{(k_T^n)^2}\int \frac{d^2k_T^{n-1}}{(k_T^{n-1})^2}\mathrm{\dots }\int \frac{d^2k_T^1}{(k_T^1)^2},$$ (84) with $`|Q|\gg |k_T^n|\gg |k_T^{n-1}|\gg \mathrm{\dots }\gg |k_T^1|`$. At sufficiently small $`x`$ logarithms of $`1/x`$ necessarily appear. We have actually seen them in the double scaling limit. They must, at some point, spoil the predictivity of the perturbative expansion. One must then identify the regions of integration capable of giving rise to terms of the form $`(\alpha _s\mathrm{log}\frac{1}{x})^n`$, and eventually to $`\alpha _s^n\mathrm{log}^{n-1}\frac{1}{x}`$ and so on. Lipatov and coworkers (see also for alternative derivations) have identified such a contribution. It corresponds to the diagram depicted in figure 10, more specifically to the region $$k_i=\alpha _iP+\beta _in+k_{iT},$$ (85) $$\alpha _1\gg \alpha _2\gg \mathrm{\dots }\gg \alpha _{n-1},\qquad k_{iT}\sim k_{jT},\qquad \beta _1\ll \beta _2\ll \mathrm{\dots }\ll \beta _{n-1}.$$ (86) This leads to splitting kernels similar to those of the Altarelli-Parisi equations $$F_2(x,Q^2)=\int d^2k_T\int _x^1\frac{d\xi }{\xi }C\left(\frac{x}{\xi },Q^2,k_T\right)F_2(\xi ,k_T),$$ (87) where $`F_2(\xi ,k_T)`$ obeys the differential equation $$\xi \frac{\partial }{\partial \xi }F_2(\xi ,k_T^{\prime })=\int d^2k_T\,K(k_T^{\prime },k_T)\frac{k_T^{\prime 2}}{k_T^2}F_2(\xi ,k_T).$$ (88) The BFKL kernel is now known to leading and subleading order. The leading asymptotic solution is $$F(x,k_T)\sim x^{-\frac{4N\alpha _s}{\pi }\mathrm{log}2}.$$ (89) Unfortunately the corrections implied by the next-to-leading calculations are gigantic. There is no way of doing anything useful with BFKL scaling at present. As previously discussed this does not seem to be a problem for HERA data since a careful analysis shows that, perhaps surprisingly, the data is well accounted for by ordinary perturbative QCD (the matter is however somewhat controversial to this date), but the day will come when $`\mathrm{log}1/x`$ corrections will be essential. The subject is thus still open.

## Acknowledgements

It is a pleasure to thank S. Forte for discussions on some of the topics presented in this lecture. This work has been supported in part by CICYT grant AEN98-0431 and CIRIT project 1998SGR 00026. Being at the meeting has been, as always, a great pleasure. Thanks are due to B. Adeva and the rest of the organizing committee for the invitation extended to me. Thanks are also due to Mercedes Fatas, the efficient secretary of the XXVI International Meeting on Fundamental Physics, for her patience in dealing with this long overdue contributor to the proceedings.
# Highest-energy cosmic rays from Fermi-degenerate relic neutrinos consistent with Super-Kamiokande results

\[ ## Abstract Relic neutrinos with mass $`0.07_{-0.04}^{+0.02}`$ eV, in the range consistent with Super-Kamiokande data, can explain the cosmic rays with energies in excess of the Greisen-Zatsepin-Kuzmin cutoff. The spectrum of ultra-high energy cosmic rays produced in this fashion has some distinctive features that may help identify their origin. Our mechanism does not require but is consistent with a neutrino density high enough to be a new kind of hot dark matter. preprint: UCLA/99/TEP/6 \]

The observation of atmospheric neutrino oscillations at Super-Kamiokande has provided strong evidence that at least one of the neutrino species has a mass greater than $`m_{_{SK}}=\sqrt{\delta m^2}=0.07_{-0.04}^{+0.02}`$ eV. It seems plausible that at least one of the neutrino masses is actually in this range, as would be the case if the neutrino masses were hierarchical. Of course, an alternative possibility, that some neutrino masses are nearly degenerate and larger than $`\sqrt{\delta m^2}`$, is also consistent with the current data. In this paper we will concentrate on the former case. If one of the lepton asymmetries of the Universe $`L_i=(n_{\nu _i}-n_{\overline{\nu }_i})/s`$ is of order one, the neutrinos with masses $`m_{_{SK}}`$ can make a significant contribution to the energy content of the Universe. (Here and below, $`n_x`$ denotes the number density of the $`x`$-species, and $`s=1.80g_sn_\gamma `$ is the entropy density.) This possibility, frequently discounted in contemporary cosmology, can arise naturally if the Universe underwent an Affleck-Dine baryo- and leptogenesis at the end of a relatively low-scale inflation. Lepton asymmetries can also be generated in neutrino oscillations after the electroweak phase transition, but we do not know whether an asymmetry of order one can arise in this fashion. We will show that ultra-high energy cosmic rays with interesting new features can be produced in the presence of such background neutrinos. The cosmic rays with energies beyond the Greisen-Zatsepin-Kuzmin (GZK) cutoff present a challenging outstanding puzzle in astroparticle physics and cosmology. Protons with energies above $`5\times 10^{19}`$ eV could not reach Earth from a distance beyond 50 – 100 Mpc because they scatter off the cosmic microwave background photons with a resonant photoproduction of pions: $`p\gamma \to \mathrm{\Delta }^{*}\to N\pi `$. The mean free path for this reaction is only $`\sim 6`$ Mpc. The photons of comparable energies pair-produce electrons and positrons on the radio background and, likewise, cannot reach Earth from beyond 10 to 40 Mpc. This creates a problem because the closest astrophysical objects that could produce such energetic particles, active galactic nuclei (AGN), are at least hundreds of megaparsecs away. Several solutions have been proposed for the origin of these ultra-high energy cosmic rays (UHECR). For example, they could be produced in the decay of some ubiquitous hypothetical heavy particles, topological defects, or light neutral supersymmetric hadrons; or they could also hint at exotic interactions. A more conservative and economical scenario involves relic neutrinos. It has been suggested that distant AGN can produce high-energy neutrinos whose annihilation on the relic neutrinos in the galactic halo at $`\sqrt{s}\sim M_Z`$ can produce the protons with energies above the GZK cutoff.
In the absence of lepton asymmetry, the background neutrino density would not be sufficient to generate the necessary flux of UHECR if it were not for the clustering of neutrinos in the galactic halo. The latter helps. However, the required total energy carried by the high-energy neutrinos is uncomfortably close to the total luminosity of the Universe. If neutrinos with mass $`m_{_{SK}}`$ carry a large lepton asymmetry, the above scenario is aided in several ways. First, the density $`n_\nu `$ of the Fermi-degenerate light neutrinos should be much higher than that considered in Ref. . Therefore, the neutrino annihilations in the entire volume $`(50\mathrm{Mpc})^3`$ contribute to the observed UHECR. Second, the probability for the neutrinos emitted at distances of the order of the inverse Hubble constant, $`H^{-1}`$, to annihilate within $`50`$ Mpc of the observer is maximal when the mean free path $`\lambda =1/(\sigma _{\mathrm{ann}}n_\nu )`$ is comparable to $`H^{-1}`$. We will see that this condition, $`\lambda \sim H^{-1}`$, is satisfied automatically for $`m_\nu `$ measured by Super-Kamiokande if $`\mathrm{\Omega }_\nu h^2\sim 0.01`$. As a result of an increased background neutrino density and the increase in annihilation probability in our neighborhood, the total energy of the energetic neutrinos is much less than the total luminosity of the Universe, in contrast to Refs. . Finally, the predicted spectrum of the UHECR peaks at $`10^{20}`$–$`10^{21}`$ eV and has a cutoff at about $`10^{23}`$ eV, hence creating an observable feature that could help distinguish this mechanism from some others. As long as the chemical potential of the degenerate neutrinos $`\mu `$ is smaller than the neutrino mass, the neutrinos are non-relativistic. Therefore, the neutrinos with mass $`m_{_{SK}}`$ are non-relativistic at present if the degeneracy parameter $`\xi =\mu /T<100`$. We will not come close to this upper bound on $`\xi `$. Therefore, the energy density is $`\rho _i=m_{\nu _i}n_{\nu _i}`$ and $$\eta _\nu =\frac{n_\nu }{n_\gamma }=3.6\left(\frac{0.07eV}{m_\nu }\right)\left(\frac{\mathrm{\Omega }_\nu h^2}{0.01}\right),$$ (1) where $`\mathrm{\Omega }_\nu \equiv \rho _i/\rho _c`$. These neutrinos decouple while they are still relativistic because the decoupling temperature $`T_d`$ increases with $`\xi `$, so $`T_d>1`$ MeV. Therefore, the present value of $`\xi `$ is determined by the following relation, valid for relativistic species: $$\eta =\frac{1}{12\zeta (3)}\left(\frac{T_\nu }{T_\gamma }\right)^3[\pi ^2\xi +\xi ^3]=0.0252(9.87\xi +\xi ^3).$$ (2) Here we used $`\zeta (3)=1.202`$ and $`(T_\nu /T_\gamma )^3=4/11`$. This relation is valid as long as $`T_d`$ is lower than the muon mass, which translates into the upper bound $`\xi <12`$. From equation (2), $`\eta =3.6`$, as in equation (1), corresponds to $`\xi =4.6`$. The UHECR are dominated by the resonant neutrino annihilations with $`\sqrt{s}\simeq M_Z`$, which corresponds to the incoming neutrino energy $`E_\nu =M_Z^2/2m_\nu =0.57\times 10^{23}`$ eV $`(0.07\mathrm{eV}/m_\nu )`$. The annihilation cross section is $`\sigma _{\mathrm{ann}}=4\pi G_F/\sqrt{2}`$.
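Both numbers quoted here are easy to reproduce; a minimal sketch (plain Python, bisection for eq. (2)):

```python
def eta_of_xi(xi):
    """Relic neutrino-to-photon ratio as a function of the degeneracy
    parameter xi = mu/T, eq. (2)."""
    return 0.0252 * (9.87 * xi + xi**3)

# Invert eq. (2) by bisection: eta = 3.6 should give xi ~ 4.6.
lo, hi = 0.0, 12.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if eta_of_xi(mid) < 3.6:
        lo = mid
    else:
        hi = mid
print(f"xi = {lo:.2f}")                         # -> 4.60

# Resonant annihilation energy E_nu = M_Z^2/(2 m_nu) for m_nu = 0.07 eV:
M_Z = 91.19e9                                   # eV
print(f"E_res = {M_Z**2 / (2 * 0.07):.2e} eV")  # ~0.6e23 eV
```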
The mean free path for a neutrino is, therefore, $`\lambda ={\displaystyle \frac{1}{\sigma _{\mathrm{ann}}n_\nu }}`$ $`=`$ $`5.3\left({\displaystyle \frac{412\mathrm{cm}^3}{n_\nu }}\right)\times 10^{28}\mathrm{cm}`$ (3) $`=`$ $`{\displaystyle \frac{3.9}{\eta _\nu }}\left({\displaystyle \frac{0.65}{h}}\right)H^{-1}.`$ (4) The atmospheric neutrino oscillations observed at Super-Kamiokande imply that a muon neutrino has a large, order one, mixing with either $`\nu _\tau `$ or a sterile neutrino. The mass eigenstate that makes up the Fermi-degenerate relic background with mass $`m_{_{SK}}`$ must, therefore, have a muon neutrino component that is not small. The astrophysical sources are expected to produce high-energy muon neutrinos from the decays of pions. Since both the background neutrinos and the high-energy neutrinos from astrophysical sources must have a large muon component, there is no significant suppression of the annihilation cross section due to mixing angles. The probability of the neutrino annihilation is maximized when $`\eta _\nu \simeq 4`$, at which point a fraction $`r=(50\mathrm{Mpc}/2.7\lambda )\simeq 5.5\times 10^{-3}`$ of all neutrinos with energies near the $`Z`$ resonance annihilate within 50 Mpc of the observer. According to the data pertaining to the momentum distribution of $`Z`$ decay products, these neutrino annihilations yield hadrons with an average energy $$E_p\simeq 0.025E_\nu \simeq 1.4\left(\frac{0.07\mathrm{eV}}{m_\nu }\right)\times 10^{21}\mathrm{eV}$$ (5) and photons (from the decays of $`\pi ^0`$) with a lower average energy, $$E_\gamma \simeq 0.0035E_\nu \simeq 2.0\left(\frac{0.07\mathrm{eV}}{m_\nu }\right)\times 10^{20}\mathrm{eV}.$$ (6) $`Z`$ decays produce on average 2 nucleons and about 9.5 $`\pi ^0`$’s, which decay into 19 photons. Even though the photons that reach Earth originate in a smaller volume (10–40 Mpc)<sup>3</sup> than protons (50–100 Mpc)<sup>3</sup>, both components should contribute to the UHECR because the multiplicity of photons is about 10 times greater than that of protons. It may be possible, given enough statistics of UHECR, to resolve the two peaks in the distribution of cosmic rays: one at lower mean energy, due to photons, and another, at higher energies, due to protons. The total energy per unit volume in neutrinos with energies above $`10^{19}`$ eV in our case is lower than that in Refs. by a factor $`\sim 21(\eta _\nu /4)(m_\nu /0.07\mathrm{eV})`$. The difference with $`E(>10^{19}\mathrm{eV})`$ in equation (5) of Ref. is the increased value of the background neutrino density, $`m_\nu =m_{_{SK}}`$, and $`N_{_{CR}}\simeq 30`$. This means the total power generated in high-energy neutrinos $`L_\nu \simeq 0.5(4/\eta _\nu )(0.07\mathrm{eV}/m_\nu )10^{48}\mathrm{erg}\,\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}`$ is well below the luminosity of the Universe. Bounds on the neutrino degeneracy come from nucleosynthesis, as well as structure formation in the Universe. A combination of both yields $`-0.06\lesssim \xi _{\nu _e}\lesssim 1.1`$, $`|\xi _{\nu _{\mu ,\tau }}|\lesssim 6.9`$. In addition, in models with large neutrino degeneracy, the baryon density of the Universe can be higher than in conventional nucleosynthesis, so a larger fraction of the dark matter can be baryonic. These bounds are based on the requirement that neutrinos not interfere with galaxy formation. However, it has been shown recently that a relativistic degenerate neutrino may actually help the formation of structure in the Universe.
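A quick numerical cross-check of eq. (3) and of the annihilation fraction $`r`$; the helper name and the value $`1\,\mathrm{Mpc}=3.086\times 10^{24}`$ cm are the only external inputs:

```python
MPC_CM = 3.086e24      # cm per Mpc

def mean_free_path_cm(eta_nu, n_gamma=412.0):
    """Eq. (3): lambda = 1/(sigma_ann*n_nu), normalized so that
    lambda = 5.3e28 cm at n_nu = 412 cm^-3, with n_nu = eta_nu*n_gamma."""
    n_nu = eta_nu * n_gamma
    return 5.3e28 * (412.0 / n_nu)

lam = mean_free_path_cm(4.0)
r = 50.0 * MPC_CM / (2.7 * lam)    # fraction annihilating within 50 Mpc
print(f"lambda = {lam / MPC_CM:.0f} Mpc,  r = {r:.1e}")   # r ~ 5e-3
```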
This neutrino may be the addition that ‘standard’ cold dark matter (CDM) models need to account well for all present data on structure in the Universe. In fact, a CDM model with a relativistic relic neutrino background with $`\xi \simeq 3.4`$ provides a good fit to all the data on large-scale structure and anisotropy of the cosmic microwave background radiation. These analyses do not fully apply to our case because the neutrinos with mass $`m_{_{SK}}`$ are non-relativistic at present. Our scenario is consistent with large values of $`\mathrm{\Omega }_\nu h^2`$ that can make the relic degenerate neutrinos an important hot dark matter component. Its effects on structure formation and the anisotropy of CMBR need to be studied. Finally, we would like to address the issue of how the large lepton asymmetry could be generated in the early Universe, while the baryon asymmetry remains small. The coherent Affleck-Dine condensate could have evolved differently along the flat directions carrying the baryon ($`B`$) and the lepton ($`L`$) numbers. The corresponding supersymmetry-breaking terms and higher-dimension operators depend on the type of flat direction and need not be the same for the directions with $`B\ne 0`$ and $`L\ne 0`$. If the electroweak symmetry was never restored after inflation, either because the reheat temperature was low, or because the lepton asymmetry was high, the baryon and lepton asymmetries of the Universe may differ by many orders of magnitude. A cold dark matter component, called for by the need to form structure, could also arise naturally in the low-scale Affleck-Dine scenario. In this paper we do not assume any particular cosmological scenario for generating the relic neutrinos. However, it is reassuring that the economical self-consistent cosmological model outlined above can simultaneously produce cold and hot dark matter, the latter being the light, Fermi-degenerate neutrinos that carry a lepton asymmetry of order one. To summarize, we have shown that a cosmic background of Fermi-degenerate neutrinos with masses inferred from the Super-Kamiokande data can explain the ultra-high energy cosmic rays above the GZK cutoff. Our mechanism does not require but is consistent with a neutrino density high enough to be a new kind of hot dark matter. The mechanism predicts ultra-high energy cosmic rays from both protons, peaked at energies $`\sim 10^{22}`$ eV, and photons, peaked at $`\sim 10^{20}`$ eV. Another prediction is a new cutoff at $`M_Z^2/2m_\nu =0.57\times 10^{23}`$ eV $`(0.07\mathrm{eV}/m_\nu )`$. This work was supported in part by the US Department of Energy grant DE-FG03-91ER40662, Task C.
# Ground-state clusters of two-, three- and four-dimensional ±𝐽 Ising spin glasses

## 1 Introduction

Spin-glass models with discrete distributions of the interactions are believed to exhibit a rich ground-state ($`T=0`$) landscape. So far the complete landscape has been analyzed only for very small systems of $`N=4^3`$ spins. In this work a new “ballistic search” algorithm is presented, which allows the treatment of much larger systems. As an application, two-, three-, and four-dimensional Edwards-Anderson (EA) $`\pm J`$ spin glasses are investigated. They consist of $`N`$ spins $`\sigma _i=\pm 1`$, described by the Hamiltonian $$H\equiv -\sum _{\langle i,j\rangle }J_{ij}\sigma _i\sigma _j.$$ (1) The sum runs over all pairs of nearest neighbors. The spins are placed on $`d=2,3,4`$-dimensional simple (square/cubic/hypercubic) lattices of linear size $`L`$ with periodic boundary conditions in all directions. Systems with quenched disorder of the interactions (bonds) are considered. Their possible values are $`J_{ij}=\pm 1`$ with equal probability. To reduce the fluctuations, a constraint is imposed, such that $`\sum _{\langle i,j\rangle }J_{ij}=0`$. Since the Hamiltonian exhibits no external field, reversing all spins of a configuration (also called state) $`\{\sigma _i\}`$ results in a state with the same energy, called the inverse of $`\{\sigma _i\}`$. In the following, a spin configuration and its inverse are regarded as one single state. Since the ground-state problem belongs to the class of NP-hard tasks, only algorithms with exponentially increasing running time are available. Currently it is possible to obtain a finite number of true ground states per realization up to $`L=56`$ (2d), $`L=14`$ (3d) or $`L=8`$ (4d) using a special optimization algorithm. For the $`\pm J`$ model the number of existing ground states $`n_{GS}`$ per realization, called the ground-state degeneracy, grows exponentially with $`N`$. The reason is that there are usually free spins, i.e. spins which can be flipped without changing the energy of the system. A state with $`f`$ independent free spins allows for at least $`2^f`$ different configurations of the same energy. Currently, it seems to be impossible to obtain all ground states for system sizes larger than $`L=5`$. To overcome this problem, in this work all clusters of ground states are calculated. A cluster is defined in the following way: Two ground state configurations are called neighbors if they differ only by the orientation of one free spin. All ground states which are accessible through this neighbor relation are defined to be in the same cluster. This means one can travel through the ground states of one cluster by flipping only free spins. With the method presented here, the ballistic search (BS), it is not only possible to analyze large ground-state clusters; furthermore, it allows one to obtain the cluster landscape from only a small subset of all ground states. Additionally one can estimate the size of the clusters from this small number of sample states, as shown later on. The number of clusters as a function of system size is also of interest on its own: for the infinitely-ranged Sherrington-Kirkpatrick (SK) Ising spin glass a complex configuration-space structure was found using the replica-symmetry-breaking mean-field (MF) scheme by Parisi. If the MF scheme is valid for finite-dimensional spin glasses as well, then the number of ground-state clusters must diverge with increasing system size.
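To make the model concrete, here is a minimal sketch of the Hamiltonian (1) and of the free-spin criterion on a periodic lattice (Python; the $`\sum J_{ij}=0`$ constraint of the text is ignored for brevity, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bonds(L, d=3):
    """One +-1 bond array per lattice direction; bonds[axis][i] couples
    site i to its forward neighbor along that axis (periodic lattice)."""
    return [rng.choice([-1, 1], size=(L,) * d) for _ in range(d)]

def energy(spins, bonds):
    """H = -sum_<i,j> J_ij s_i s_j with periodic boundary conditions."""
    E = 0
    for axis, J in enumerate(bonds):
        E -= int(np.sum(J * spins * np.roll(spins, -1, axis=axis)))
    return E

def free_spin_mask(spins, bonds):
    """A spin is 'free' (flippable at zero energy cost) iff its local
    field h_i = sum_j J_ij s_j vanishes."""
    h = np.zeros_like(spins)
    for axis, J in enumerate(bonds):
        h += J * np.roll(spins, -1, axis=axis)       # forward neighbor
        h += np.roll(J * spins, 1, axis=axis)        # backward neighbor
    return h == 0

L = 4
bonds = make_bonds(L)
spins = rng.choice([-1, 1], size=(L, L, L))
print(energy(spins, bonds), int(free_spin_mask(spins, bonds).sum()))
```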
On the other hand the droplet-scaling picture predicts that basically one ground-state cluster (and its inverse) dominates the spin-glass behavior. To address this issue a cluster analysis was performed for small systems of one size, $`L=4`$, in three dimensions. But an analysis of the size dependence of the number of clusters, or even an investigation of two-/four-dimensional spin glasses, has not been carried out before. Incidentally, with the method presented here it is possible to calculate the entropy $`S_0\equiv \langle \mathrm{ln}n_{GS}\rangle k_B`$ even for systems exhibiting a huge $`T=0`$ degeneracy. The symbol $`\langle \mathrm{\cdots }\rangle `$ denotes the average over different realizations of the bonds. Since the number of free spins is extensive, $`s_0\equiv S_0/N>0`$ holds for the $`\pm J`$ spin glass. For three-dimensional spin glasses, the ground-state entropy was estimated by computing exact free energies for systems of size $`4\times 4\times M`$ ($`4\le M\le 10`$). A Monte-Carlo simulation and multicanonical simulations were also used to calculate $`s_0`$. Results for the ground-state entropy of two-dimensional systems were obtained with similar methods: numerically exact calculations of finite systems, Monte-Carlo simulations and analytical methods. For $`d=4`$ the author is not aware of results for the ground-state entropy. The paper is organized as follows: First the procedures used in this work are presented. Then the results for the number of clusters and the number of ground states as a function of $`N`$ in two, three and four dimensions are shown. The last section summarizes the results.

## 2 Algorithms

At first the optimization method applied here is stated. Then, for illustrating the problem, a simple method for constructing clusters of ground states is explained. In the main part the BS method for identifying clusters in systems exhibiting a huge degeneracy is presented and it is explained how this technique can be applied to estimate the size of these clusters. The basic method used here for the calculation of spin-glass ground states is the cluster-exact approximation (CEA) algorithm, which is a discrete optimization method designed especially for spin glasses. In combination with a genetic algorithm this method is able to calculate true ground states. Using this technique one does not encounter ergodicity problems or critical slowing down, as in algorithms which are based on Monte-Carlo methods. Genetic CEA was already utilized to examine the ground states of two-, three- and four-dimensional $`\pm J`$ spin glasses by calculating a small number of ground states per realization, while in this work the emphasis is on the study of the cluster landscape. Therefore, many ground states per random sample have to be obtained. Since the algorithm calculates only one independent ground state per run, a much larger computational effort was necessary. Once many ground states are calculated, the straightforward method to obtain the cluster landscape works in the following way: The construction starts with one arbitrary ground state. All its neighbors are added to the cluster. These neighbors are treated recursively in the same way: All their neighbors which are not yet included in the cluster are added. After the construction of one cluster is completed, the construction of the next one starts with a ground state that has not been visited so far.
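A compact sketch of this straightforward construction (states as tuples of $`\pm 1`$; the quadratic neighbor detection discussed next is explicit in the double loop; names are illustrative):

```python
from collections import deque

def cluster_ground_states(states):
    """Group ground states into clusters via BFS on the neighbor relation
    'differ by exactly one (necessarily free) spin'. O(n^2) comparisons."""
    n = len(states)
    neighbors = [[] for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            diff = sum(sa != sb for sa, sb in zip(states[a], states[b]))
            if diff == 1:                       # one flipped free spin
                neighbors[a].append(b)
                neighbors[b].append(a)
    label, n_clusters = [-1] * n, 0
    for start in range(n):
        if label[start] != -1:
            continue
        queue = deque([start])
        label[start] = n_clusters
        while queue:                            # breadth-first traversal
            cur = queue.popleft()
            for nb in neighbors[cur]:
                if label[nb] == -1:
                    label[nb] = n_clusters
                    queue.append(nb)
        n_clusters += 1
    return label, n_clusters
```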
The construction of the clusters needs only linear computer time as a function of $`n_{GS}`$ ($`O(n_{GS})`$), similar to the Hoshen-Kopelman technique, because each ground state is visited only once. Unfortunately the detection of all neighbors, which has to be performed at the beginning, is of $`O(n_{GS}^2)`$, since all pairs of states have to be compared. Even worse, all existing ground states must have been calculated before. As e.g. a $`5^3`$ system may already exhibit more than $`10^5`$ ground states, this algorithm is not suitable. The basic idea of the ballistic-search algorithm is to use a test that tells whether two ground states are in the same cluster. The test works as follows: Given two independent replicas $`\{\sigma _i^\alpha \}`$ and $`\{\sigma _i^\beta \}`$, let $`D`$ be the set of spins which differ in the two states: $`D\equiv \{i|\sigma _i^\alpha \ne \sigma _i^\beta \}`$. Now BS tries to build a path of successive flips of free spins, which leads from $`\{\sigma _i^\alpha \}`$ to $`\{\sigma _i^\beta \}`$ while using only spins from $`D`$. In the simplest version, iteratively a free spin is selected randomly from $`D`$, flipped and removed from $`D`$. This test does not guarantee to find a path between two ground states which belong to the same cluster, since whether a path is found may depend on the order in which the spins are selected. It only finds a path with a certain probability which depends on the size of $`D`$. It turns out that the probability decreases monotonically with $`|D|`$. For example, for $`N=8^3`$ the method finds a path in 90% of all cases if the two states differ by 34 spins. More analysis can be found in . The algorithm for the identification of clusters using BS works as follows: the basic idea is to let a ground state represent that part of a cluster which can be found using BS with a high probability by starting at this ground state. If a cluster is large it has to be represented by a collection of states, such that the whole cluster is “covered”. For example a typical cluster of an $`8^3`$ spin glass consisting of $`10^{16}`$ ground states is usually represented by only a few ground states (e.g. two or three). A detailed analysis of how many representing ground states are needed as a function of cluster and system size can be found in . The algorithm holds in memory a set of clusters, each consisting of a set of representing configurations. At the beginning the cluster set is empty. Iteratively all available ground states $`\{\sigma _i\}`$ are treated: For all representing configurations the BS algorithm tries to find a path to the current ground state or to its inverse. If no path is found, a new cluster is created, which is represented by the configuration currently treated. If $`\{\sigma _i\}`$ is found to be in exactly one cluster nothing special happens. If $`\{\sigma _i\}`$ is found to be in more than one cluster, all these clusters are merged into one single cluster, which is now represented by the union of the states which have represented all clusters affected by the merge.
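The path test itself can be sketched as follows. The adjacency structure `adj` (with `adj[i]` a list of `(j, J_ij)` pairs) and the retry count are illustrative choices; in the actual procedure the test is also applied to the inverse of the target state, and a failure is only probabilistic, as stressed above:

```python
import random

def local_field(s, adj, i):
    """h_i = sum over neighbors j of J_ij*s_j; spin i is free iff h_i == 0."""
    return sum(J * s[j] for j, J in adj[i])

def bs_path_exists(state_a, state_b, adj, tries=10):
    """Try to reach state_b from state_a by flipping only free spins from
    the difference set D, each spin at most once per attempt."""
    for _ in range(tries):
        s = list(state_a)
        D = {i for i in range(len(s)) if state_a[i] != state_b[i]}
        while D:
            free = [i for i in D if local_field(s, adj, i) == 0]
            if not free:
                break                   # stuck: this attempt failed
            i = random.choice(free)     # flip a random free spin from D
            s[i] = -s[i]
            D.remove(i)
        if not D:
            return True                 # all differences removed: same cluster
    return False
```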
Thus, the computer time needed for the calculation grows only slightly faster than $`O(n_{GS}n_C)`$, where $`n_C`$ is the number of clusters, which is much smaller than $`n_{GS}`$. Consequently, large sets of ground states, which already appear for small system sizes like $`L=5`$, can be treated. Furthermore, the ground-state cluster landscape of even larger systems can be analyzed, since it is sufficient to calculate a small number of ground states per cluster. One has to ensure that really all clusters are found, which is simply done by calculating enough states, but these are still only a tiny fraction of all ground states. One also has to be sure that all clusters are identified correctly. This is not guaranteed immediately, since for two ground states belonging to the same cluster there is only a certain probability that a path of flips of free spins connecting them is found. But this poses no problem, because once at least one state of a cluster has been found, many more states can easily be obtained by performing a $`T=0`$ Monte-Carlo simulation starting from the initial state. By increasing the number of available states more and more, the probability that all clusters have been identified correctly quickly approaches one. Detailed tests can be found in the literature. For all results presented here, the number of available ground states has been increased far enough that each cluster is identified correctly with a probability of more than 0.99. Once all ground states are grouped into clusters, the cluster sizes have to be obtained in order to calculate the total number of states and the entropy. If only some ground states per cluster are available, the size cannot be evaluated by simply counting the states. Then a variant of BS is used to perform this task. Given a state $`\{\sigma _i\}`$, free spins are flipped iteratively, but each spin at most once. During the iteration, additional free spins may be generated or destroyed. When there are no free spins left, the process stops. One counts the number of spins that have been flipped. By averaging over several tries and several ground states of a cluster, one obtains an average value, denoted by $`l_{\mathrm{max}}`$. It can be shown that this quantity represents the size $`V`$ of a cluster very well and is more accurate than simpler measures such as the average number of static free spins. By analyzing all ground states of small systems, a $`V=2^{\alpha l_{\mathrm{max}}}`$ behavior is found, with $`\alpha \in [0.85,0.93]`$ depending on the dimension of the system. These results will be presented in the next section. A similar method for estimating the cluster sizes has been proposed elsewhere; there, three heuristic fitting parameters are needed, but they are universal for all system dimensions.
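A sketch of this size-estimation variant is shown below; it takes the `is_free` test from the previous sketch as an argument, and the conversion to a cluster size follows the $`V=2^{\alpha l_{\mathrm{max}}}`$ relation just described (a hypothetical illustration, not the original code):

```python
import random

def count_flippable(state, bonds, neighbors, is_free, rng=random):
    """One try: flip free spins iteratively, each spin at most once,
    until no further free spin is available; return the number of flips."""
    state = list(state)
    flipped = set()
    while True:
        candidates = [i for i in range(len(state))
                      if i not in flipped and is_free(state, i, bonds, neighbors)]
        if not candidates:
            return len(flipped)
        site = rng.choice(candidates)
        state[site] = -state[site]
        flipped.add(site)

def estimate_cluster_size(states, bonds, neighbors, is_free, alpha, tries=10):
    """Average the flip count over several tries and several ground states
    of one cluster, then convert via V = 2**(alpha * l_max)."""
    counts = [count_flippable(s, bonds, neighbors, is_free)
              for s in states for _ in range(tries)]
    l_max = sum(counts) / len(counts)
    return 2.0 ** (alpha * l_max)
```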
## 3 Results First, the results for three-dimensional systems are given. In the second and third parts, two- and four-dimensional spin glasses are investigated. In 3d, for system sizes $`L=3,4,5,6,8`$ large numbers of independent ground states were calculated using genetic CEA. Usually 1000 different realizations of the disorder were considered. Tab. I shows the number of realizations $`n_R`$ and the number of independent runs $`r`$ per realization for the different system sizes $`L`$. For the small system sizes (and for 100 realizations of $`L=5`$) many runs plus an additional local search were performed in order to calculate all ground states. For the larger sizes $`L=5,6,8`$ the number of ground states is too large, so it is only possible to try to calculate at least one ground state per cluster. It is highly probable that all clusters were detected, except for $`L=8`$, where for about 25% of the realizations some small clusters may have been missed. This problem is not related to the design of the ballistic-search method. It is due to the enormous computational effort needed for generating the ground states of the largest systems, so only a restricted number of runs could be performed. Since the probability that a certain cluster is found in a run of the genetic CEA algorithm grows with the size of the cluster, ground states belonging to small clusters occur only rarely. Even by doubling the number of runs for $`L=8`$, this fraction is estimated to fall only to 20%. The ground states were grouped into clusters using the ballistic-search algorithm. The number of states per cluster was sufficiently large that only with a probability of less than $`10^{-3}`$ may some configurations from a large cluster be mistaken as belonging to different clusters. The average number $`n_C`$ of clusters is shown in the fourth column of Tab. I. In Fig. 1 the result is shown as a function of the number $`N`$ of spins. By visualizing the results in a double-logarithmic plot (see inset), one realizes that $`n_C`$ seems to grow faster than any power of $`N`$. The larger slope in the linear-logarithmic plot for small systems may be a finite-size effect. Additionally, for $`L=8`$ there is a large probability that some small clusters are missed, explaining the smaller slope there. Summarizing, the data favor an exponential increase of $`n_C(N)`$. To calculate the ground-state entropy, the sizes of the clusters have to be known. For the small systems, this can be done simply by counting. For larger system sizes it is not possible to obtain all states, so the method using the dynamical number $`l_{\mathrm{max}}`$ of free spins is applied, as explained before. In Fig. 2 the cluster size for small systems is shown as a function of $`l_{\mathrm{max}}`$ with a logarithmically scaled y-axis. A $`V=2^{\alpha l_{\mathrm{max}}}`$ dependence is clearly visible, yielding $`\alpha =0.90(5)`$. By summing up all cluster sizes for each realization, the ground-state degeneracy $`n_{GS}`$ is obtained. Its average is shown in the fifth column of the table. The quantity is plotted in Fig. 3 as a function of $`N`$. The exponential growth is obvious. The result for the average ground-state entropy per spin is shown in the last column of Tab. I. The number for $`L=4`$ is within two standard deviations of the value $`s_0=0.073(7)k_B`$ found in the earlier cluster analysis, where 200 realizations were treated. By fitting a function of the form $`s_0(L)=s_0(\infty )+aL^{-\beta }`$, a value of $`s_0(\infty )=0.0505(6)k_B`$ is obtained. A value of $`s_0=0.04(1)k_B`$ was estimated elsewhere for systems with periodic boundary conditions in only two directions, which may be the reason for that smaller result. The value $`s_0=0.062k_B`$ found by a Monte-Carlo simulation for systems of size $`20^3`$ is much larger. The deviation is presumably caused by the fact that it was not possible to obtain true ground states for systems of that size, i.e. too many states were counted. The results from multicanonical simulations, $`s_0=0.046(2)k_B`$ and $`s_0=0.0441(5)k_B`$, are slightly lower than the result obtained here. This may indicate that not all ground states are found using that simulation procedure.
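The finite-size extrapolation quoted above is a standard least-squares fit; a minimal sketch, with placeholder data arrays rather than the actual values of Tab. I:

```python
import numpy as np
from scipy.optimize import curve_fit

def fss(L, s_inf, a, beta):
    """Finite-size form s_0(L) = s_0(inf) + a * L**(-beta)."""
    return s_inf + a * L ** (-beta)

# Placeholder data: system sizes and entropies per spin (in units of k_B).
L = np.array([3.0, 4.0, 5.0, 6.0, 8.0])
s0 = np.array([0.080, 0.070, 0.063, 0.058, 0.054])   # illustrative only
popt, pcov = curve_fit(fss, L, s0, p0=[0.05, 0.1, 1.0])
s_inf, a, beta = popt
print(f"s_0(inf) = {s_inf:.4f} k_B   +/- {np.sqrt(pcov[0, 0]):.4f}")
```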
The result for the entropy does not suffer from the fact that some ground-state clusters may have been missed for $`L=8`$: the probability of finding a cluster using genetic CEA grows with the size of the cluster. This implies that the clusters which may have been missed are rather small, so their influence on the result is negligible. The largest source of uncertainty is the assumption that the size of a cluster grows like $`2^{\alpha l_{\mathrm{max}}}`$. The error of the constant $`\alpha `$ enters the entropy result linearly. To estimate the influence of this approximation, for the three smallest system sizes, where the entropy was obtained exactly, $`s_0`$ was calculated using estimated cluster sizes as well. In all three cases the result was equal to the exact value within error bars. The final result quoted here is $`s_0=0.051(1)k_B`$. Now we concentrate on two-dimensional systems. For system sizes $`L=5,7,10,14,20`$ large numbers of independent ground states were calculated using genetic CEA; up to $`10^4`$ runs per realization were performed. Usually 1000 different realizations of the disorder were considered, except for $`L=20`$, where only 96 realizations could be treated. For the small system sizes $`L=5,7`$, many runs plus an additional local search were performed to calculate all ground states. For the larger sizes $`L=10,14,20`$ the number of ground states is too large, so we restrict ourselves to calculating at least one ground state per cluster. The probability that some clusters were missed is higher in two dimensions than in the $`d=3`$ case, because the ground-state degeneracy grows faster with the system size: for small system sizes $`L\le 10`$ it is again highly probable that all clusters have been obtained. For $`L=14`$ some small clusters may have been missed for about 30% of all realizations, while for $`L=20`$ this fraction rises even to 60%. This is due to the enormous computational effort needed for the largest systems. For the $`L=20`$ realizations a total computing time of more than 2 CPU-years was consumed on a cluster of PowerPC processors running at 80 MHz. The results for $`d=2`$ are shown in Tab. II. The number of clusters $`n_C`$ as a function of system size is plotted in Fig. 4. Again, an exponential growth of $`n_C`$ is more likely than an algebraic one. As in the $`d=3`$ case, the cluster sizes $`V`$ can be obtained directly for small systems. For estimating $`V`$ in larger systems, the $`\alpha `$ parameter has again been obtained. The average size of a cluster as a function of $`l_{\mathrm{max}}`$ is shown in Fig. 5, resulting in $`\alpha =0.85(5)`$. With this parameter the ground-state degeneracy as a function of $`N`$ can be calculated, see Fig. 6. As in the $`d=3`$ case, the exponential growth is obvious. The resulting entropy is shown in the inset. By a finite-size extrapolation to the infinite system, a value of $`s_0=0.078(5)k_B`$ is obtained. A value of $`s_0\approx 0.075k_B`$ was estimated earlier by using a recursive method to obtain numerically exact free energies up to $`L=18`$. The result of $`s_0\approx 0.07k_B`$ found in another study is even slightly lower. The value $`s_0\approx 0.1k_B`$ found by a Monte-Carlo simulation for systems of size $`80^2`$ is much larger. The deviation is presumably caused by the fact that it was not possible to obtain true ground states for systems of that size, i.e. too many states were visited. Recent results are more accurate: by applying the replica Monte Carlo method, a value of $`s_0=0.071(7)k_B`$ was obtained.
A transfer-matrix calculation resulted in $`s_0=0.0701(5)k_B`$. Using a Pfaffian method, values of $`s_0=0.0704(2)k_B`$ and $`s_0=0.0709(4)k_B`$ were obtained. The most recent values are smaller than the entropy found in this work. The reason may be that larger systems could be treated there (up to $`L=256`$), while here the extrapolation has been performed with systems of size $`L\le 20`$. At any rate, the value $`s_0[L=22]=0.079(1)k_B`$ is comparable to the value $`s_0[L=32]=0.0780(8)k_B`$ found in one of those studies. Additionally, the fact that in the other works the number of antiferromagnetic bonds fluctuates from sample to sample, while it is kept fixed here, may have an influence as well. This was tested by calculating ground states for small systems ($`L\le 10`$) where each bond has a probability 0.5 of being (anti-)ferromagnetic. In this case the entropy turned out to lie 5-10% below the values found above. For large system sizes, which are out of range for the method presented here, this effect should decrease. In the last part we turn to four-dimensional $`\pm J`$ spin glasses. Because of the huge computational effort, $`N=6^4`$ is the largest size which could be considered, and reasonable statistics could only be obtained for $`L\le 5`$, since one $`L=6`$ run takes several CPU-weeks. For details, see Tab. III. The number of clusters as a function of $`N`$ is displayed in Fig. 7. Here, even more clusters seem to have been missed than in the two- and three-dimensional cases. But again, the body of data is large enough that an exponential increase of the number of clusters appears plausible. The dependence of the cluster size on the number of flips of free spins could be studied only for the smallest system size: even for $`L=4`$ the number of ground states can grow beyond $`10^6`$, preventing a reliable analysis. From the $`L=3`$ data (see Fig. 8), $`\alpha =0.93(3)`$ has been estimated. In the final figure (Fig. 9) the resulting degeneracy is shown. Here, the small number of ground states which could be calculated with reasonable effort already has an influence on the results. For the largest size, the exponential growth of the number of ground states with system size is not visible. Note that in general the average $`n_{GS}`$ is dominated by a few samples having a very large number of ground states. For $`L=6`$, because of the small number of realizations, such samples simply did not occur among the 10 realizations treated. This explains the deviation from the exponential growth. For the entropy (see inset of Fig. 9), rare samples have less influence, since the logarithm of the number of states is averaged. Consequently, the value of $`s_0=0.027(5)k_B`$, which again was obtained by a finite-size scaling fit, is much more reliable. As we have seen, the $`\alpha `$ parameter increases with growing dimension. This means that the spins contributing to the ground-state degeneracy become more and more independent; the limit $`\alpha =1`$ corresponds to the case where the free spins do not interact with each other at all. This can be understood from the decrease of the ground-state entropy: from $`d=2`$ to $`d=4`$, $`s_0`$ drops from $`0.078k_B`$ to $`0.027k_B`$. Thus, with growing dimension the number of spins contributing to the ground-state degeneracy decreases quickly, so it becomes less likely that these spins are neighbors. This effect is stronger than the increase of the number of neighbors per spin from 4 in $`d=2`$ to 8 in $`d=4`$.
## 4 Conclusion True ground states of two-, three- and four-dimensional $`\pm J`$ spin glasses have been calculated using the genetic cluster-exact approximation. For each realization many independent ground states have been obtained, leading to an enormous computational effort: several months of running 32 PowerPC processors on a parallel computer were necessary. Clusters of ground states have been investigated, defined as sets of ground-state configurations which can be reached from each other by flipping only free spins. The ballistic-search method has been presented, which allows the fast identification of very large clusters. It can easily be ensured that the ground-state clusters found in this way have been identified correctly. It should be pointed out that this method is not a tool for the calculation of ground states of large systems, but it allows for a detailed analysis of highly degenerate ground-state landscapes. Indeed, it is possible to calculate the clusters of a system when only a small fraction of its states is available. The method should be extendable to similar clustering problems. A variant of the technique is used to estimate the size of the clusters. Ground-state clusters for systems of size up to $`L=20`$ (2d), $`L=8`$ (3d) and $`L=6`$ (4d) have been calculated. This means that, in the three-dimensional case, the realizations treated here are ten times larger and have $`10^{12}`$ times more ground states than the systems studied previously. For the other dimensions, similar studies had not been performed before at all. The number of clusters and the degeneracy as a function of the number of spins $`N`$ were evaluated. It appears that both quantities grow exponentially with $`N`$ in all three cases $`d=2,3,4`$. Consequently, it seems unlikely that much larger systems can be treated in this way in the near future. The ground-state entropy per spin was found to be $`s_0=0.078(5)k_B`$ (2d), $`s_0=0.051(1)k_B`$ (3d) and $`s_0=0.027(5)k_B`$ (4d). It should be stressed that the result for the entropy does not depend on the way a cluster is defined. The specific definition given here is only a tool which allows the treatment of systems exhibiting a huge ground-state degeneracy. If ground states had colors, they could just as well be grouped according to their colors instead of being clustered according to their neighbor relationship. With the method presented here, it is only possible to study the top-level clustering of the ground states; it is not possible to find substructures within the clusters. This kind of refined analysis can be performed with other methods. Even when applying these other techniques, the ballistic-search method is still necessary, since the cluster landscape has to be obtained in advance. There, the ballistic-search clustering is applied to guarantee that the ground-state landscape is sampled in a thermodynamically correct way. ## 5 Acknowledgements The author thanks K. Battacharya, G. Hed, E. Domany and D. Stauffer for interesting discussions. He is thankful to M. Otto for critically reading the manuscript. He was supported by the Graduiertenkolleg “Modellierung und Wissenschaftliches Rechnen in Mathematik und Naturwissenschaften” at the Interdisziplinäres Zentrum für Wissenschaftliches Rechnen in Heidelberg and by the Paderborn Center for Parallel Computing through the allocation of computer time. The author obtained financial support from the DFG (Deutsche Forschungsgemeinschaft) under grant Zi209/6-1.
# Quanta of Geometry and Rotating Black Holes ## Abstract In the loop approach to quantum gravity the spectra of operators corresponding to such geometrical quantities as length, area and volume become quantized. However, the size of the arising quanta of geometry in Planck units is not fixed by the theory itself: a free parameter, sometimes referred to as the Immirzi parameter, is known to affect the spectrum of all geometrical operators. In this paper I propose an argument that fixes the value of this parameter. I consider rotating black holes, in particular the extremal ones. For such black holes the “no naked singularity” condition bounds the total angular momentum $`J`$ by $`A_H/8\pi G`$, where $`A_H`$ is the horizon area and $`G`$ Newton’s constant. A similar bound on $`J`$ comes from the quantum theory. The requirement that these two bounds are the same fixes the value of the Immirzi parameter to be unity. A byproduct of this argument is a picture of the quantum extremal rotating black hole in which all the spin entering the extremal hole is concentrated in a single puncture. preprint: CGPG-98/3-3 Even the most naive application of quantum ideas to gravity suggests that the picture of a continuous fabric of spacetime becomes meaningless on distances of the order of the Planck length. Current approaches to quantum gravity support this conclusion. Thus, for example, in string theory the usual description of geometry in terms of a spacetime metric whose evolution is governed by some action principle is valid only at low energies. An old set of results on high-energy string scattering amplitudes indicates that at very high energies the size of strings becomes important, which means that the usual metric description of spacetime geometry is not good at short distances. In non-perturbative quantum gravity, or loop quantum gravity, as it is sometimes called, the discrete structure of geometry on the Planck scale is manifested by the fact that the spectra of operators corresponding to such geometrical quantities as length, area and volume become quantized. The “size” of the arising quanta of geometry is proportional to the Planck length. However, the spectra of geometrical operators turn out to depend on an additional physical parameter, not fixed by the theory itself, which determines the size of the quanta of geometry in Planck units. The presence of this parameter in the theory was pointed out by Immirzi, and I shall refer to it as the Immirzi parameter. In this paper I argue that the value of the Immirzi parameter in loop quantum gravity must be set to a particular fixed value. This completely fixes the size of the quanta of geometry in the approach. I make the argument by comparing some results from loop quantum gravity with some facts about rotating black holes, in particular the extremal rotating black holes. As a byproduct of this argument we shall arrive at an interesting quantum description of the extremal rotating black holes. In loop quantum gravity, the elementary excitations of geometry are described in terms of one-dimensional objects: loops or, more generally, graphs in space. The edges of these graphs carry various quantum numbers, in particular spin, and can be thought of as flux lines. In particular, these lines “carry” area: surfaces acquire area through their intersections with the flux lines. The flux of area carried by each flux line is quantized, which implies the quantization of the area spectrum in the theory.
The spectrum is given by $$A_S=8\pi \gamma l_p^2\sum _p\sqrt{j_p(j_p+1)}.$$ (1) Here $`A_S`$ stands for an eigenvalue of the operator measuring the area of a surface $`S`$, the sum is taken over all points $`p`$ where flux lines intersect the surface, $`j_p`$ are the spins (half-integers) labelling the corresponding flux lines, $`\gamma `$ is a real, positive parameter (the Immirzi parameter) and $`l_p^2=G\hbar `$, $`G`$ being Newton’s constant. We set $`c=1`$ throughout. The spectrum is purely discrete, and $`\gamma `$ fixes the size of the quanta of area in Planck units. Let us emphasize that $`l_p`$ in (1) is the usual Planck length $`l_p=\sqrt{G\hbar }\approx 1.6\times 10^{-33}\mathrm{cm}`$, calculated using the physical, “macroscopical” Newton’s constant. Due to renormalization effects, the latter may, in principle, be different from the “microscopical” Newton’s constant that enters as a parameter of the quantum theory. If such a renormalization occurs, the corresponding renormalization factor is absorbed in $`\gamma `$, so that $`l_p`$ is the usual Planck length. To fix the value of the parameter $`\gamma `$, let us consider the Kerr family of black hole solutions. Let us recall some simple facts about these black holes. They are described by two parameters: mass $`M`$ and angular momentum $`J`$. The horizon area $`A_H`$ is given by the well-known function of $`M,J`$: $`A_H=8\pi G^2M^2\left(1+\sqrt{1-(J/GM^2)^2}\right)`$. The condition that there is no naked singularity reads $`J/M\le GM`$. The value of $`J`$ saturating the inequality corresponds to the extremal black hole. Instead of using $`M,J`$ as the independent parameters describing the black hole, one can work with $`A_H,J`$. A simple algebraic manipulation (at extremality $`J=GM^2`$, so that $`A_H=8\pi G^2M^2=8\pi GJ`$) shows that the “no naked singularity” condition expressed in terms of the parameters $`A_H,J`$ reads $$J\le \frac{A_H}{8\pi G}.$$ (2) The value of the angular momentum $`J`$ saturating the inequality corresponds to the extremal black hole. The argument I am about to present depends on two assumptions. The first one is that the area spectrum for the horizon surface of a rotating black hole is given by (1). This can almost be said to be a prediction of loop quantum gravity, because the spectrum (1) is derived from very general principles and should be valid for any surface, in particular the horizon. On the other hand, we do not have a quantum description of rotating black holes as yet, so the validity of (1) in this context is still an assumption. The second assumption is much stronger. I assume that the total angular momentum of the black hole is related to the spins labelling the flux lines intersecting the horizon. It is then most natural to assume that the black hole angular momentum is given by a vector sum of the spins entering the hole. The precise way the angular momentum is obtained from the spins may be very complicated. However, independently of these details, the total angular momentum will satisfy the inequality $$J\le \hbar \sum _pj_p.$$ (3) The argument below depends crucially on the assumption expressed by this inequality. Let us now consider a family of quantum black holes for which the algebraic sum of spins $`\sum _pj_p`$ is the same. According to (3), the angular momentum of all these black holes is bounded by $`\hbar \sum _pj_p`$. On the other hand, since $`\sqrt{j(j+1)}>j`$, the area spectrum (1) tells us that the horizon area satisfies $`A_H>8\pi \gamma l_p^2\sum _pj_p`$. Thus, what we get is a bound on the possible values of the angular momentum of a black hole of fixed horizon area: $$J<\frac{A_H}{8\pi \gamma G}.$$ (4) The above inequality is strict, that is, one can never saturate the bound.
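To get a feeling for how different puncture configurations compare with the bound (4), one can evaluate the sums in (1) and (3) numerically; the following sketch works in units $`G=\hbar =c=1`$ with $`\gamma =1`$, and the spin configurations chosen are purely illustrative:

```python
import math

def horizon_area(spins, gamma=1.0):
    """Area spectrum of eq. (1) in units where G = hbar = c = 1 (l_p = 1)."""
    return 8 * math.pi * gamma * sum(math.sqrt(j * (j + 1)) for j in spins)

def spin_bound(spins):
    """Upper bound (3) on the angular momentum: the algebraic sum of spins."""
    return sum(spins)

# Three ways of distributing a total spin of 100 over the horizon.
for spins in ([0.5] * 200,   # 200 punctures of spin 1/2
              [1.0] * 100,   # 100 punctures of spin 1
              [100.0]):      # a single puncture carrying all the spin
    ratio = spin_bound(spins) / (horizon_area(spins) / (8 * math.pi))
    print(f"{len(spins):4d} puncture(s):  J_max / (A_H / 8 pi G) = {ratio:.3f}")
```

The ratio approaches unity only when all the spin sits on a single large puncture, which is exactly the observation developed next.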
However, there is a way to arrange the punctures such that the bound is “almost” saturated. Indeed, by putting all the spin entering the black hole into a single puncture of spin $`j`$ we get $`A_H=8\pi \gamma l_p^2j+O(1)`$. Thus, for large $`j`$ the bound in (4) can be “almost” saturated by putting all the spin at a single puncture. The bound (4) is strikingly reminiscent of the inequality (2). The values of the parameters in (2) that saturate the bound correspond to the extremal black holes. Quite similarly, the quantum states that are closest to saturating the bound in (4) are the “extremal” ones, for which all the spin entering the hole is concentrated in a single puncture. This strongly suggests that the two bounds, the one coming from the properties of the classical black hole solution and the one coming from the quantum theory, express the same physical property and must coincide. This is only possible if one fixes the value of the Immirzi parameter to be unity: $$\gamma =1.$$ (5) This is the end of the argument. Let us conclude with several remarks. (i) The value (5) of the Immirzi parameter is not the same value as one gets from the quantum-mechanical calculation of black hole entropy. In fact, with the above value of the parameter, that approach gives for the entropy the value $`S\approx 0.2S_{BH}`$, where $`S_{BH}`$ is the Bekenstein-Hawking entropy. Thus, either the argument presented has a flaw, or the approach undercounts the states. (ii) The above argument depends crucially on the assumption that the angular momentum of a rotating black hole is related to the spin entering the hole. It is not at all obvious that this must be the case, because the angular momentum of the hole is related to spacetime rotations, while the spin labelling the flux lines has to do with rotations in the internal space. On the other hand, there are no other quantities to which the angular momentum can be related: in the absence of matter the spins (and the related labels for the intertwiners) are the only quantum numbers labelling the states. Thus, whether the above assumption is true or false remains to be seen. (iii) It is encouraging that our assumption leads to such a simple picture for extremal black holes. As we saw, the extremal rotating black hole is described by the “extremal” configuration in which all the spin entering the hole is concentrated at a single point on the horizon. At the same time, all the area is concentrated at the same point. Thus, the horizon geometry becomes highly non-classical at extremality. Acknowledgements: I am grateful to T. Jacobson for a discussion and to L. Freidel for reading the manuscript. This work was supported in part by the Braddock fellowship of Penn State, by the NSF grants PHY95-14240, PHY94-07194 and by the Eberly research funds of Penn State. The author is also grateful to the Institute for Theoretical Physics, Santa Barbara, where this work was completed.
# ISOCAM EXTRAGALACTIC MID-INFRARED DEEP SURVEYS UNVEILING DUST-ENSHROUDED STAR FORMATION IN THE UNIVERSE ## 1. INTRODUCTION Early-type galaxies and the central bulges of spirals are expected to have rapidly converted most of their primordial gas into stars, as indicated by their red colors, typical of old stellar populations, and by the lack of remaining fuel to sustain new star formation. The bulk of these stars was therefore expected to have formed during a major episode of star formation, and the distant galaxies which should have experienced this process were called “primeval” galaxies. The failure of optical searches for such primeval galaxies, expected to exhibit strong Ly-α emission lines (Djorgovski & Thompson 1992), was interpreted with the help of two scenarios which are not mutually exclusive. On the one hand, the predictions of bottom-up cosmological scenarios of galaxy formation (White & Frenk 1991) and the observational evidence for merging galaxies suggest that primeval galaxies simply do not exist, because galaxy formation is a continuous process. On the other hand, strong dust extinction could have masked these major episodes of star formation, so that primeval galaxies should be found in infrared surveys (Franceschini et al. 1991, 1994). Optical surveys found a strong evolution of the population of blue galaxies as a function of redshift for galaxies below z=1 (Canada-France Redshift Survey, CFRS; Lilly et al. 1995, Hammer et al. 1997). At larger redshifts, Steidel et al. (1996) used the U and B drop-out technique to select galaxies at redshifts 3 and 4 from their broad-band colors. Madau et al. (1996) translated these observations into a famous plot showing the average star formation history of the universe, which can also be seen as the universal history of an ideal galaxy. In this plot, the formation of stars per cubic megaparsec of the universe was expected to peak at a redshift close to 1 and then decrease at larger redshifts. However, when observed spectroscopically, the drop-out galaxies were found to exhibit a flat spectrum in the UV, indicative of a strong dust extinction which could lead to an underestimation of the associated star formation rates (SFR) of these galaxies by a factor of three and maybe even as much as ten (Pettini et al. 1997, Meurer et al. 1997). In the local universe, direct infrared observations from IRAS (Soifer & Neugebauer 1991) showed that the infrared luminosity of galaxies from 8 to 1000 μm is about 30 per cent of that from starlight. The population of LIGs (luminous infrared galaxies, $`L_{IR}>10^{11}L_{\odot }`$) produces only about 6 per cent of this integrated infrared emission. Hence, although the detection by IRAS of LIGs, which appeared to be the strongest starbursts ever detected, was legitimately considered a breakthrough, it was not supposed to change our understanding of the history of star formation, because of the marginal proportion of stars born in these galaxies. However, analyses of IRAS extragalactic source counts showed evidence for strong evolution at low flux levels for ULIGs (ultra-luminous infrared galaxies, $`L_{bol}\simeq L_{IR}>10^{12}L_{\odot }`$; Hacking et al. 1987, Lonsdale & Hacking 1989, Lonsdale et al. 1990). More recently, Kim & Sanders (1998) found a very strong density evolution for ULIGs with a 60 μm flux between 0.5 Jy and 1.5 Jy, of the form $`\mathrm{\Phi }(z)\propto (1+z)^{7.6\pm 3.2}`$.
This result was only tentative because of the small redshift range sampled by IRAS luminous galaxies (z < 0.27), but it is an indication that ULIGs, and maybe also LIGs, should have played a stronger role in the past. The second argument in favor of such an evolution was found by Puget et al. (1996), who detected a strong cosmic infrared background (CIRB) in the 300 μm to 1 mm range in the COBE-FIRAS data. This result was then confirmed by Guiderdoni et al. (1997) and by a positive detection at shorter wavelengths using another instrument on-board COBE, DIRBE. The exact value of the COBE-DIRBE CIRB at 140 μm is still a matter of debate (Hauser et al. 1998 find a value of 25±7 nW m⁻² sr⁻¹, while Lagache et al. 1999 find 15±9 nW m⁻² sr⁻¹), but most importantly its existence is now confirmed by independent teams, and it suggests that the rest-frame infrared emission of galaxies is much higher at high redshift than estimated locally (z < 0.2) by IRAS. Several cosmological surveys have been performed with ISOCAM, on-board ISO, in directions of low zodiacal and Galactic cirrus mid-infrared emission, ranging from large and shallow ones (several square degrees, complete down to about 2 mJy) to narrow and very deep ones (a few square arcminutes, complete down to 50 μJy). These surveys were performed in the two main broad-band filters of ISOCAM: the 6.75 μm LW2 filter and the 15 μm LW3 filter, respectively centered on the rest-frame parts of the SED which are dominated by aromatic features (7 μm band) and by the thermal emission of very small dust grains (VSGs). We will show in the following that, with a thousand times better sensitivity and sixty times better spatial resolution than IRAS, ISOCAM mid-infrared extragalactic surveys have unveiled most of the star formation in the universe below z ≈ 1 and identified most of the galaxies contributing to the COBE-DIRBE 140 μm cosmic background. The galaxies responsible for most of the mid-infrared light are located at z ≈ 0.7 and have a bolometric luminosity larger than $`10^{11}L_{\odot }`$. We will discuss this result considering the role of interactions and comparing the relative roles of star formation and Active Galactic Nuclei (AGN) activity. ## 2. ORIGIN OF THE MID-IR EMISSION AND K-CORRECTION The rest-frame mid-IR emission of galaxies can be divided into three components: * UIBs: the Unidentified Infrared Bands (UIBs), detected at 6.2, 7.7, 8.6, 11.3 and 12.7 μm, as well as their underlying continuum, dominate the mid-IR emission below 12 μm (see figure 1). The carriers of these UIBs are proposed to be aromatic carbon species such as PAHs (Polycyclic Aromatic Hydrocarbons; Léger & Puget 1984, Puget & Léger 1989, Allamandola et al. 1989) or coal grains (Papoular 1991). Below 12 μm, where the SED is dominated by the UIBs, the intensity of the interstellar radiation field produces only small changes in the shape of the spectrum. This is a strong indication that the carriers of the UIBs are transiently heated by the absorption of individual photons (Boulanger 1998). * Warm dust (T > 150 K): Very Small Grains (VSGs) of dust heated by optical-UV photons emitted by stars produce a continuum at λ > 10 μm (Désert et al. 1990). * Forbidden lines of ionized gas: NeII (12.8 μm), NeIII (15.6 μm), SIV (10.5 μm), ArII (7 μm).
These lines are good indicators of the star formation activity, particularly the NeIII/NeII ratio, which is correlated with the temperature of the stars ionizing the ISM. For local galaxies, the LW2 (5-8.5 μm) and LW3 (12-18 μm) filters measure respectively the MIR emission due to UIBs and to VSGs (emission lines are negligible in these broad-band filters). The LW3 band becomes more and more dominated by UIBs with increasing redshift, due to the k-correction (see figure 1). At z ≈ 1, the effective LW3 band typically corresponds to the rest-frame LW2 band, so it probes UIBs, not hot dust, and above z ≈ 1.5 the dust emission (UIBs + VSGs) becomes too faint to be detected by ISOCAM. The k-corrections for the SEDs of M82 and Arp 220 are plotted in figure 2. A galaxy with an SED like that of M82 or Arp 220 will become fainter in $`\nu F_\nu `$ with increasing redshift up to a redshift of typically z ≈ 0.4-0.5. Above this redshift, however, the k-correction becomes negative, i.e. the galaxy appears brighter, due to the entrance of the UIBs into the LW3 band, with a maximum around z ≈ 0.7. Indeed, in very deep ISOCAM surveys such as that of the HDF field, we will see that most of the galaxies are located above z=0.4, with a mean redshift of z=0.7. Above a redshift of z ≈ 1.4, the k-correction falls rapidly, which explains why there is a cut-off in the redshift distribution of ISOCAM galaxies. The MIR emission of a galaxy is correlated with its star formation activity, as illustrated by the case of the Antennae galaxies (Vigroux et al. 1996, Mirabel et al. 1998). Most of the star formation in this system, as probed by ISOCAM, takes place in a source lying in the overlapping region of the two interacting galaxies, NGC 4038/4039. This source is optically faint, and most of the optical emission arises from the two galactic nuclei. In this case, one would underestimate the SFR when using only the optical-UV part of the SED. It is not straightforward, however, to estimate a star formation rate in starburst regions with good precision using only the MIR flux of a galaxy. Indeed, the MIR results from a mixture of stochastic heating (UIBs) and thermal emission at high temperature (VSGs), while the FIR results from thermal emission of big grains at lower temperature. However, assuming that the physics of local galaxies is representative of that of more distant ones, one can still compare the properties of local and distant galaxies in order to quantify the evolution of galaxies.
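The band-shifting effect behind this k-correction is easy to reproduce schematically. The sketch below redshifts a toy template through a top-hat approximation of the LW3 band (12-18 μm); the template, a crude aromatic bump near 7.7 μm on a flat continuum, and all numbers are placeholders, so it only mimics the qualitative behavior of figure 2, not the real M82 or Arp 220 curves:

```python
import numpy as np

def band_signal(wave_rest_um, L_nu, z, band=(12.0, 18.0)):
    """Average template signal falling into an observed band (top-hat
    approximation of LW3) for a template redshifted to z.  Distance and
    (1+z) bandwidth factors are omitted on purpose, so the result isolates
    the band-shifting (k-correction) effect only."""
    wave_obs = wave_rest_um * (1.0 + z)    # where each rest wavelength lands
    in_band = (wave_obs >= band[0]) & (wave_obs <= band[1])
    return float(L_nu[in_band].mean()) if in_band.any() else 0.0

# Toy SED: flat continuum plus a Gaussian "UIB" bump near 7.7 microns.
wave = np.linspace(3.0, 20.0, 500)
sed = 1.0 + 5.0 * np.exp(-0.5 * ((wave - 7.7) / 0.8) ** 2)
for z in (0.0, 0.4, 0.7, 1.0, 1.4):
    print(f"z = {z:.1f}   LW3-band signal = {band_signal(wave, sed, z):.2f}")
```

The band signal rises once the bump enters LW3 and drops again when it leaves, qualitatively reproducing the negative k-correction at intermediate redshifts.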
## 3. DESCRIPTION OF THE ISOCAM EXTRAGALACTIC SURVEYS Before the launch of ISO, there was a great deal of discussion about the best strategy that would both optimize the instrument capabilities and the scientific outcome of the ISOCAM mid-infrared extragalactic surveys. Large surveys would have to be shallow because they are time consuming. They would probe the brightest objects at large distances and would otherwise give good statistics for nearby objects. Pencil beams could go much deeper, although limited by the confusion limit of a 60-cm telescope, and could probe a comparable volume of the universe, with a large extent in redshift or time. Because there was no obvious answer to this question, the Guaranteed Time deep surveys were divided into a shallow survey, a deep survey and an ultra-deep survey, and, in order to be less biased by large-scale structures, it was decided to perform these surveys in both the northern (Lockman Hole) and the southern (Marano Field) hemispheres. These fields were obviously chosen because of their low foreground Galactic emission (low zodiacal and cirrus emission), but also because they were already covered at other wavelengths, in particular in the X-ray by ROSAT and now by other satellites. We will see in the next section that this choice was beneficial. It allowed us to achieve good statistics over a large flux range, and in particular around 1 mJy, where we found a rapid change in the slope of the number counts. The use of the lensing magnification of galaxies in the line of sight of galaxy clusters, like A2390 (Altieri et al. 1998; see also Metcalfe et al., these proceedings), allowed us to complete the number counts below 100 μJy. These surveys were complemented by other surveys from the ISO Open Time: the European Large Area Infrared Survey (ELAIS, a European consortium of 19 institutes led by M. Rowan-Robinson, see these proceedings) for sources above about 2 mJy, and the ISOCAM-HDF (P.I. M. Rowan-Robinson; Rowan-Robinson et al. 1997, Aussel et al. 1999, Désert et al. 1999), which improved the log N-log S and allowed us to identify the galaxies responsible for the infrared excess (Aussel et al., these proceedings). Finally, the combination of all surveys is statistically significant from 50 μJy up to 50 mJy, and even 300 mJy including IRAS (see Table 1). Hence they cover four orders of magnitude in flux and therefore give a very strong constraint on the evolution of galaxies in the universe below typically z ≈ 1.4 (upper limit due to the k-correction). We have only mentioned in Table 1 those surveys used to derive the log N-log S presented in this paper. Column 3 gives the total area covered by each survey. The depth of the surveys is not homogeneous over this total area, but the variation of the signal-to-noise ratio as a function of the position in the mosaic was taken into account in the source detection and in the determination of the number counts. Column 4, [F_min, F_max], gives the flux range over which the number of detections is statistically significant. Column 5, t_exp, gives the exposure time per sky position, i.e. after co-addition of all pixels which observed the same position on the sky. This exposure time depends on the number of redundancies for a given position and is summarized by two numbers when a large fraction of the image is seen at very different depths. The last column gives the integrated contribution of the sources detected in each survey to the 15 μm cosmic background, i.e. 2.35±0.8 nW m⁻² sr⁻¹ above the 0.05 mJy level, or the less statistically significant value of 3.3±1.3 nW m⁻² sr⁻¹ above 0.03 mJy. Other ISOCAM extragalactic surveys were performed during the ISO lifetime: Taniguchi et al. (1997) made an ultra-deep survey of the Lockman Hole in the LW2 7 μm band, a new ISOCAM-HDF observation was performed on the southern HDF field (P.I. M. Rowan-Robinson, Oliver et al. 1999), and both a deep and a second ultra-deep survey were performed on the Marano Field (P.I. C. Cesarsky). Finally, one of the CFRS fields, at 1415+52, was also covered by ISOCAM at 7 μm (Flores et al. 1998a) and 15 μm (Flores et al. 1998b).
With a sensitivity close to that of the Lockman Hole Deep Survey over a smaller area, it allowed us to identify some of the ISOCAM detections (see Section 5). ## 4. DATA ANALYSIS The data analysis of ISOCAM images turned out to be more complex than expected. The main culprits for this difficulty are the cosmic ray impacts inducing memory effects in the detectors. It was therefore decided to develop two different techniques which were used independently on the same datasets. The resulting source lists, with positions and photometry, were compared in order to check the robustness of the two tools. We also decided to perform Monte-Carlo simulations in order to estimate the level of incompleteness and the photometric accuracy as a function of the source flux. The two techniques were applied to the ISOCAM-HDF ultra-deep survey and published in the same issue of A&A (see Désert et al. 1999 for the “three-beam technique”, and Aussel et al. 1999 for the “PRETI technique”). ISOCAM data are subject to standard Gaussian noise (photon and readout noise) and to errors associated with the flat-fielding and dark-current subtraction. But the main limitation of ISOCAM deep surveys comes from its thick and cold pixel detectors: * because they are thick, ISOCAM pixel detectors are very sensitive to cosmic ray impacts (4.5 pixels receive one glitch per second). The behaviour of these glitches can be divided into three families: + “normal glitches”: the most common ones, which correspond to electrons and last only one or two readouts. They are easily removed with median filtering (combining several scales for the median gives the best correction; see the sketch after this list). + “faders”: these glitches, as well as the following ones, are probably associated with protons and alpha particles. They induce positive peaks in the detector response which can last several readouts. Since ISOCAM is best used in the raster mode, a real source will look like these glitches, i.e. a positive response over the number of readouts spent on a given position of the sky. + “dippers”: some glitches are followed by a trough extending over more than one hundred readouts. * ISOCAM pixels are cold, so that electrons move very slowly within them and therefore induce a transient behaviour: a pixel takes several hundred readouts to stabilize when moving from the background to the position of a source on the sky, and vice versa. Because of time limitations, one is therefore restricted to non-stabilized signals, which results in an uncertainty in the photometry. This uncertainty is strongly reduced by the partial correction of the transient behaviour and by the use of simulations to define a statistical distribution of measured fluxes for any given input flux. The final uncertainty on ISOCAM deep-survey fluxes is of the order of 20 per cent. In order to facilitate the separation of sources from cosmic ray impacts, ISOCAM surveys were performed using the raster mode with a redundancy (number of different pixels falling successively on a given sky position) ranging from 2 for the shallowest survey (ELAIS) to 88 for the deepest surveys (Marano Field Ultra-Deep Survey; 64 for the HDF field).
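As an illustration of the simplest case, the removal of short “normal” glitches from a single pixel's readout history can be sketched as follows; this toy version uses one running-median scale and a robust noise estimate, whereas faders and dippers require the multi-scale wavelet treatment of PRETI described in the next subsection (all names and thresholds are illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def deglitch_timeline(signal, width=5, n_sigma=4.0):
    """Flag readouts deviating strongly from a running median of a pixel
    timeline (a 1-D numpy array) and replace them by that median.
    Short spikes are removed; slower transients and real raster sources,
    which last many readouts, are preserved by the median baseline."""
    baseline = median_filter(signal, size=width, mode='nearest')
    residual = signal - baseline
    sigma = 1.4826 * np.median(np.abs(residual))   # robust (MAD) noise estimate
    glitched = np.abs(residual) > n_sigma * sigma
    cleaned = signal.copy()
    cleaned[glitched] = baseline[glitched]
    return cleaned, glitched
```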
### 4.1. Data reduction techniques Four different techniques applied to the ISOCAM-HDF (North) image have been compared: the “Saclay technique”, called PRETI (Starck et al. 1998), the “Orsay technique”, called the three-beam technique (see Désert et al. 1999), the “Imperial College technique” (see Serjeant et al. 1997) and the “Carlo Lari technique” (Lari 1999). After comparison, all four techniques reach the same result at 15 μm and find a small difference at 7 μm. This will be discussed in a joint paper to be submitted soon. The use of simulated ISOCAM images was of great help in this work. In the following, we give a brief description of the tool that we have developed at Saclay, PRETI. This tool is based on a multi-resolution wavelet transform which separates the temporal history of each individual pixel into several timelines, each associated with a given frequency in the variation of the signal. This technique, developed by J.L. Starck, consists in searching the wavelet (or frequency) space for the patterns which correspond to the “bad” glitches, namely the faders and the dippers. Indeed, being extended, a bad glitch will be detected over several successive scales and appear as a pattern in the wavelet, or frequency, space. After having isolated this pattern, one can then subtract it from the original signal, which in fact means subtracting a smooth function without the high-frequency contribution corresponding to the Gaussian noise of the detector, in which faint sources are hidden. Once the data cubes have been cleaned of their glitches, they can be co-added to produce a mosaic image to which standard source detection techniques can be applied. This final step was again done using a multi-resolution wavelet transform, this time applied spatially instead of temporally. The first two techniques (PRETI and the three-beam method) were applied to all surveys: they agree on the photometry at the 20 per cent level, and the differences in astrometry are less than the pixel size. ### 4.2. Simulations We performed simulations in order to quantify the sensitivity limit (minimum detected flux, below the completeness limit), the completeness limit (flux above which all sources are detected) and the photometric accuracy. These simulations were performed with real datasets (in order to include realistic glitches) in which we introduced fake sources, including the PSF and their modeled transient. For this, we used a long staring observation of more than 500 readouts and analyzed it as if it were a mosaic. In a raster observation, a real source will only be detected when the camera points in its direction, while the sources present in a staring observation remain present over the whole observation and hence are removed as part of the low-frequency component of the signal. We added to this simulated raster, containing real ISOCAM noise, some simulated sources with their PSF and transient behavior, and then analyzed the data with the different techniques described above. The simulations were used to set the parameters of the data reduction techniques and then to estimate the detection rate of sources per flux bin as well as the error bar on the photometry. We used them to calculate the 90 per cent confidence level error bars shown in the log N-log S (figure 4).
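Stripped of the ISOCAM-specific transient modelling, such a completeness estimate reduces to injecting fake sources of known flux into noisy images and counting how many are recovered. The toy sketch below, with a Gaussian PSF, white noise and a simple peak-threshold detector standing in for the full wavelet detection chain, is only meant to illustrate the logic (all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_source(image, x, y, flux, sigma_psf=1.5):
    """Add a Gaussian-PSF source of given total flux at pixel (x, y)."""
    yy, xx = np.indices(image.shape)
    psf = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma_psf ** 2))
    image += flux * psf / psf.sum()

def detected(image, x, y, noise_rms, box=3, threshold=5.0):
    """Toy detector: peak signal near the injected position above n-sigma."""
    sub = image[max(0, y - box):y + box + 1, max(0, x - box):x + box + 1]
    return sub.max() > threshold * noise_rms

def completeness(flux, noise_rms=1.0, n_trials=200, size=64):
    """Fraction of injected sources of a given flux that are recovered."""
    n_found = 0
    for _ in range(n_trials):
        image = rng.normal(0.0, noise_rms, (size, size))
        x, y = rng.integers(10, size - 10, 2)
        add_source(image, x, y, flux)
        n_found += detected(image, x, y, noise_rms)
    return n_found / n_trials

for flux in (20, 50, 100, 200):       # arbitrary flux units
    print(f"flux = {flux:4d}   completeness = {completeness(flux):.2f}")
```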
## 5. NATURE OF THE GALAXIES DETECTED IN ISOCAM EXTRAGALACTIC SURVEYS The detailed study of the properties of ISOCAM galaxies is only beginning but, with the help of the large number of multi-wavelength observations of the HDF (Williams et al. 1996) and of the CFRS field at 1415+52 (Lilly et al. 1995), the second most observed field at all wavelengths after the HDF, one can already start this work. In the CFRS field, Flores et al. (1998b) found that two thirds of the galaxies detected by ISOCAM at 15 μm, down to a flux density of 250 μJy, were starburst galaxies, as probed by their radio emission at 6 and 21 cm. They corrected for dust extinction the history of the metal production, or star formation, per unit volume of the universe plotted by Madau (1996) and Lilly (1995) from optical and UV information on CFRS galaxies alone, and found a correction factor of about three. Hence, optical-UV light would only probe 25 per cent of the star formation in the universe below z=1. The strong uncertainty on this correction factor needs to be reduced with better statistics, not only on infrared galaxies but also on the template SEDs of local galaxies, since the database of Schmitt et al. (1998) that they used contains only 59 galaxies. However, the excess that we find in the number counts (see Section 6) and the strong 15 μm cosmic background (see Section 7) confirm that this plot should indeed be corrected for dust extinction by a non-negligible factor. In the ISOCAM-HDF, we identified 36 sources, among the 44 detections at 15 μm, with an optical counterpart from the catalog of Barger et al. (1999) limited to I < 22.5. The mean redshift of these galaxies is $`\langle z\rangle \approx 0.7`$, and most of them are located between z=0.4 and 1.4 (as expected from the k-correction discussed in Section 2). The mean ratio of their effective fluxes at 15 μm over the K band is close to that found for M82 if it were redshifted to z=0.7. Hence, we used the SED of M82 to determine the rest-frame 15 μm flux of these galaxies, although we are aware of the strong uncertainty associated with this estimate. The mean 15 μm luminosity of ISOCAM-HDF galaxies is about ten times larger than that of M82, so this assumption should be quite conservative, since ISOCAM-HDF galaxies should be even more active than M82 and closer to LIGs and ULIGs. Assuming that ISOCAM-HDF galaxies have SEDs similar to that of M82, their mean infrared luminosity would be of the order of $`3\times 10^{11}L_{\odot }`$ and they would be classified as LIGs (Sanders & Mirabel 1996). For a typical ratio $`M/L_K`$ of about 1.5 (Charlot, private communication), they would have a mean mass of $`1.5\times 10^{11}M_{\odot }`$. ### 5.1. Origin of the infrared emission: star formation versus AGN The major source of the huge energy radiated by these galaxies should be star formation activity. Indeed, only a few per cent of the ISOCAM galaxies in our sample have been identified as AGNs, optically, in the radio or in the X-ray (see also Aussel et al., these proceedings). Genzel et al. (1998) found that only 20-30 per cent of the energy radiated by local ULIGs is powered by AGN activity, and this fraction should be even lower for our galaxy sample, since these are mainly LIGs.
However, the fraction of galaxies harboring an AGN (but not dominated by its radiation) should be much larger: Genzel et al. (1998) detected the presence of an AGN in at least 50 per cent of the ULIGs that they studied. It was recently suggested by Fabian & Iwasawa (1999) that if the hard X-ray background detected around 30 keV by HEAO1 were produced by dusty AGNs (Seyfert 2 galaxies), as indicated by its flat slope, then these dusty AGNs should contribute about 3 nW m⁻² sr⁻¹ to the CIRB, i.e. between 10 and 20 per cent of the CIRB measured by COBE-DIRBE. Such a contribution of AGNs to the CIRB would be reasonable if most of it were produced by LIGs and ULIGs, and ISOCAM surveys seem to favor this option. ### 5.2. Optical properties of the faint 15 μm galaxies Using the available data on the HDF, especially in the optical and NIR from Barger et al. (1999), we find that the optical colors of the galaxies detected above 100 μJy (the ISOCAM-HDF completeness limit, Aussel et al. 1998) do not strongly differ from those of field galaxies (see Aussel et al., these proceedings). Hence it would not have been possible to single out the ISOCAM galaxies from their optical colors alone. The study of the spectroscopic properties of these galaxies is still under way (spectra made available to the community by the group of the University of Hawaii, Barger et al. 1999). However, we have already found that a large fraction of ISOCAM-HDF galaxies exhibit weak emission lines and strong Balmer $`H_\delta `$ absorption lines, characteristic of the presence of a large number of A stars with an age of typically one Gyr. On the other hand, galaxies with strong emission lines detected in the field are not detected at 15 μm. We found the same result in the list of spectra obtained from a multi-object spectroscopic follow-up of 15 μm ISOCAM detections in the field of a nearby galaxy cluster, A1689 (P.I. P.A. Duc, see Fadda et al., these proceedings). Flores et al. (1998b) had already found the same result in the CFRS field at 1415+52 and interpreted it as evidence that the galaxies with dust-enshrouded star formation detected by ISOCAM have been forming stars over the last few $`10^8`$ years. The lack of ISOCAM detections of emission-line galaxies was suggested to be due to the low metallicity of these galaxies. A completely independent spectroscopic survey of post-starburst galaxies in clusters of galaxies located at redshifts z=0.4-0.5 already seems to confirm this result (Dressler et al. 1999, Poggianti et al. 1999). They propose that galaxies with weak or no emission lines but strong Balmer $`H_\delta `$ absorption lines, which they call e(a), are most probably dust-enshrouded starbursts, using only optical arguments based, for example, on the ratio of the OII to $`H_\alpha `$ equivalent widths. A possible scenario for the origin of these dusty starburst galaxies, found both in ISOCAM extragalactic surveys and in optical spectroscopic surveys of distant clusters, could be linked to the effects of galaxy interactions. Indeed, Poggianti & Wu (1999) find an exceptionally high fraction of e(a) spectra among local LIGs, where the fraction of interacting galaxies is very high.
ISOCAM detections of CFRS galaxies also present morphological signatures of interactions, and in the ISOCAM-HDF we find a morphological segregation, with more interacting galaxies at increasing redshift (see figure 3). ## 6. NUMBER COUNTS The most impressive result after three years of data analysis and calibration is the consistency of all the different surveys, as shown by the log N-log S plot, which covers four orders of magnitude in flux density when the IRAS point is included (figure 4). The number counts are perfectly fitted by a no-evolution model normalized to IRAS (Franceschini 1998) above typically the 1 mJy level, while they strongly diverge from this law below 1 mJy, with an increasing difference which reaches a factor of 10 at the faintest fluxes (around 50 μJy). Although the k-correction can be negative, as shown in figure 2, and could therefore explain part of the excess at low fluxes, it certainly cannot reproduce an excess of a factor of 10 such as we find here. The detailed modeling of this evolution is quite complex and a paper will be devoted to this subject (Franceschini et al. 1999). However, the increasing fraction of interacting galaxies as a function of redshift detected in the ISOCAM-HDF (figure 3) may explain the behavior of the log N-log S, if LIGs begin to dominate the number counts above z ≈ 0.4 and completely dominate them at z ≈ 0.7. There was already a hint of such a strong evolution of LIGs and ULIGs in the analysis of local ULIGs by Kim & Sanders (1998; see Section 1). We learned from the previous section, on the nature of the galaxies detected in ISOCAM surveys, that this excess is produced by a few LIGs, two orders of magnitude less numerous than the optical galaxies detected in the HDF (see Aussel et al. 1998). Hence, the luminosity function of galaxies at z ≈ 0.7, the mean redshift of ISOCAM-HDF galaxies, should differ from the local one, favoring luminous galaxies. The flattening of the log N-log S at low fluxes implies that we are not far from having identified most of the galaxies producing the mid-infrared background, and indicates that we should be able to place a good constraint on the cosmic background produced by these sources. ## 7. COSMIC INFRARED BACKGROUND Integrating the 15 μm number counts over the whole flux range, one finds a conservative value of about 2.35±0.8 nW m⁻² sr⁻¹ above 50 μJy, and 3.3±1.3 nW m⁻² sr⁻¹ above 30 μJy with less confidence. This corresponds to 30-45 per cent of the cosmic background seen with the HST in the I-band (about 7 nW m⁻² sr⁻¹, Lagache et al. 1999). This fraction is very high since, locally, IRAS showed that it is the total infrared luminosity of galaxies, i.e. from 8 to 1000 μm, which amounts to about 30 per cent of that from starlight (Soifer & Neugebauer 1991). Assuming a conservative SED, one would then expect these galaxies to emit at least as much energy in the far infrared as in the optical. In particular, one can estimate the effective ratio of their 140 μm to 15 μm energy densities at the mean redshift of 0.7 and compare it to the COBE-DIRBE value. A conservative ratio would be of the order of three, as for M82, although a more realistic one would be higher than that.
In this conservative case (a 140 $`\mu `$m over 15 $`\mu `$m ratio of three), they would produce more than 50 per cent of the COBE-DIRBE CIRB found by Lagache et al. (1999, $`15.3\pm 9.5\,\mathrm{nW\,m^{-2}\,sr^{-1}}`$) and about 30 per cent of the value found by Hauser et al. (1998, $`25.1\pm 7\,\mathrm{nW\,m^{-2}\,sr^{-1}}`$). This is only a lower limit, since a ratio of 3 is probably too conservative and since the number counts have not completely flattened yet. For ULIGs, the minimum ratio of the 140 $`\mu `$m over 15 $`\mu `$m energy densities is about 10; hence if the faint 15 $`\mu `$m galaxies were moderate ULIGs, they would produce the whole COBE-DIRBE background.

## 8. CONCLUSION

ISOCAM extragalactic surveys are nearly four orders of magnitude more sensitive than IRAS in the 15 $`\mu `$m range. They teach us that a few luminous and massive infrared galaxies located at a mean redshift of 0.7 produce most of the 15 $`\mu `$m, and probably also of the DIRBE 140 $`\mu `$m, cosmic background. Assuming a conservative spectral energy distribution for these galaxies, one finds that they emit as much energy in the infrared as the two orders of magnitude more numerous optical galaxies detected in the HDF. The galaxies producing this infrared excess present morphological signs of interaction. Hence, both dust extinction and galaxy interactions have played a major role in galaxy evolution since z ∼ 1. Additional conclusions will be reached only after having collected a statistically larger sample of optical counterparts to ISOCAM galaxies, in particular in the ultra-deep surveys performed in the Marano Field, where we have detected hundreds of very faint 15 $`\mu `$m galaxies. The large amount of energy released by these galaxies will be one of the key questions for the next space telescopes, such as SIRTF and FIRST, and at larger redshifts for the ground-based sub-millimeter interferometer LSA-MMA. The Next Generation Space Telescope (NGST) would give a unique insight into the origin of their activity, and in particular into the proportion of the energy due to AGN activity, if it were to observe up to 25-30 $`\mu `$m.

## ACKNOWLEDGMENTS

We would like to thank Suzanne Madden for very fruitful scientific discussions and for her advice during the writing of this paper.
# Re-entrant Melting in Polydisperse Hard Spheres

## Abstract

The effect of polydispersity on the freezing transition of hard spheres is examined within a moment description. At low polydispersities a single fluid-to-crystal transition is recovered. With increasing polydispersity we find a density above which the crystal melts back into an amorphous phase. The range of densities over which the crystalline phase is stable shrinks with increasing polydispersity until, at a certain level of polydispersity, the crystal disappears completely from the equilibrium phase diagram. The two transitions converge to a single point which we identify as the polydisperse analogue of a point of equal concentration. At this point, the freezing transition is continuous in a thermodynamic sense.

Freezing and melting are probably the most common and striking physical changes observed in everyday life. All experiments to date demonstrate that, in three dimensions, the crystallisation of a simple liquid is a first-order transition. Thus, for instance, the sharp Bragg peaks of the crystal, which reflect the long-range spatial modulation of the density $`\rho (𝒓)`$ and which distinguish a crystal from a liquid, disappear abruptly as a crystal melts. This sharp microstructural change is also mirrored by discontinuities in the first derivative of the free energy, so that experimentally melting is accompanied by a finite density and entropy change. Although the experimental situation is clear, in an early analysis Landau argued that, under certain conditions, a crystal can transform continuously into a liquid. In a simple Landau-Alexander-McTague theory the excess free energy of the crystal (relative to the isotropic liquid) has the following form:

$$f_{sl}=r(T,P)\sum _𝑮|n_𝑮|^2-u_3(T,P)\sum _{𝑮_1,𝑮_2,𝑮_3}n_{𝑮_1}n_{𝑮_2}n_{𝑮_3}\delta _{𝑮_1+𝑮_2+𝑮_3,\mathrm{𝟎}}+\mathrm{}$$ (2)

where the order parameters $`n_𝑮`$ are the Fourier components of the crystal density, $`\rho _s(𝒓)=\rho _s+\delta \rho (𝒓)`$, at the reciprocal lattice vector $`𝑮`$ ($`\rho _s`$ is the uniform crystal density), and the coefficients of the expansion are analytic functions of the temperature $`T`$ and pressure $`P`$. Eq. 2 contains cubic terms because the order-parameter sets $`\{n_𝑮\}`$ and $`\{-n_𝑮\}`$ describe physically distinct crystals with different energies. As a consequence the freezing transition is generally first-order. However, since both $`T`$ and $`P`$ can be independently varied, the possibility exists that $`r`$ and $`u_3`$ can be made to vanish at a single point in the $`T`$-$`P`$ plane. At the resulting Landau point the liquid-solid transition is continuous in a mean-field description. Landau theory makes two further distinctive predictions. First, the Landau point must lie at the intersection of at least three first-order lines of transitions which separate the liquid from two conjugate crystalline phases, C<sub>+</sub> and C<sub>-</sub>, with identical symmetry but which differ in the sign of $`\delta \rho (𝒓)`$. Second, in three dimensions, symmetry considerations should uniquely favour a bcc structure. In spite of these interesting predictions it is not clear whether, in a liquid-solid system, the point at which the cubic coefficient $`u_3`$ vanishes is experimentally accessible.
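To see explicitly why the cubic invariant makes the transition first-order, it is worth recalling the standard single-amplitude toy model (a textbook sketch, not taken from this paper; $`u_4>0`$ is an assumed stabilising quartic coefficient):

$$f(n)=rn^2-u_3n^3+u_4n^4,\qquad u_4>0.$$

Solving $`f(n_{\ast })=f^{\prime }(n_{\ast })=0`$ gives coexistence at $`r_{\ast }=u_3^2/(4u_4)`$, where the order parameter jumps discontinuously from $`0`$ to $`n_{\ast }=u_3/(2u_4)`$. The jump is proportional to $`u_3`$ and vanishes only when the cubic coefficient does, which is precisely the Landau-point scenario invoked above.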
On the face of it, one of the most promising candidates is a system of polydisperse hard spheres, where the constituent particles have different sizes. The freezing of polydisperse hard spheres has been studied extensively in recent years, motivated in part by the fact that it is a realistic model of a colloidal suspension. These studies have focused mainly on the effect of size polydispersity $`\sigma `$, defined as the ratio of the standard deviation to the mean of the diameter distribution, upon the fluid-solid transition. Calculations have been made using a variety of theoretical and computational techniques, for various size distributions, and in both two and three dimensions. Yet the picture that has emerged is remarkably similar. On increasing $`\sigma `$ from zero, the density discontinuity at the transition $`\mathrm{\Delta }\rho =\rho _s-\rho _l`$ decreases, vanishing altogether at a “terminal” polydispersity, $`\sigma =\sigma _t`$, above which no liquid-solid transition is found.

A number of key questions have, however, been left unanswered. First, why do the densities of the coexisting phases converge as $`\sigma \to \sigma _t`$? If the liquid-solid transition is continuous then the singularity at $`\sigma _t`$ must correspond to a Landau point. The phase diagram should therefore contain two crystal phases, in contradiction with the theoretical work to date. Furthermore, while the C<sub>+</sub> crystal has the normal bcc structure with spheres at the cube corners and centre, the C<sub>-</sub> crystal has particles at interstitial sites. The unfavourably low packing of the C<sub>-</sub> crystal ($`\varphi _m\simeq 0.20`$) makes it unlikely that this phase could be important in a dense system. If the vanishing density discontinuity at $`\sigma _t`$ is not critical in origin, then what is its true nature? And finally, why is the polydisperse phase behavior apparently universal?

In this letter we reexamine the freezing of polydisperse hard spheres using simple mean-field models for the polydisperse crystal and liquid phases. Our results suggest that the polydisperse solid-liquid transition at $`\sigma _t`$ is not critical. We show that the vanishing of the density discontinuity at the terminal polydispersity is a consequence of a re-entrant solid-liquid transition in a polydisperse system.

Our model consists of $`N`$ hard-sphere particles in a volume $`V`$, at an overall density of $`\rho =N/V`$. Each particle has a diameter $`R`$ drawn from a distribution $`\rho (R)`$, so that $`\rho =\int dR\,\rho (R)`$. The distribution $`\rho (R)`$ is conveniently characterised by the set of generalised moments $`m_i=\int dR\,\rho (R)w_i(R)`$, where the weight function is $`w_i(R)=(R/\overline{R}-1)^i`$. The zeroth moment is simply the total number density $`\rho `$. The “shape” of the diameter distribution, $`\stackrel{~}{\rho }(R)=\rho (R)/\rho `$, is taken here, for simplicity, as the Schultz distribution, $`\stackrel{~}{\rho }(R)=\gamma ^\alpha R^{\alpha -1}\mathrm{exp}(-\gamma R)/\mathrm{\Gamma }(\alpha )`$ with $`\alpha =1/\sigma ^2`$ and $`\gamma =\alpha /\overline{R}`$ (a short numerical check of these definitions is given just below). The (excess) chemical potential $`\mu ^{ex}(R)`$ in a polydisperse system is in general a complex and unknown function of the particle size. But with the assumption that there is no critical point at $`\sigma _t`$, the excess chemical potential must, first of all, be an analytic function of $`R`$. Formally, $`\mu ^{ex}(R)`$ may be calculated from the probability, $`W(R)`$, of inserting a test sphere of diameter $`R`$.
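Before developing $`\mu ^{ex}(R)`$, here is the promised numerical check of the moment definitions. This is a sketch only: $`\sigma =0.05`$ and $`\overline{R}=1`$ are assumed illustrative values, and the Schultz distribution is implemented as the Gamma distribution it is equivalent to.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

sigma, Rbar = 0.05, 1.0                        # assumed illustrative values
alpha = 1.0 / sigma**2                         # Schultz shape parameter
# Schultz(alpha, gamma = alpha/Rbar) is a Gamma distribution in disguise.
dist = stats.gamma(a=alpha, scale=Rbar / alpha)

# Dense grid; the distribution is negligible beyond a few mean diameters.
R = np.linspace(1e-6, 5.0 * Rbar, 200001)
pdf = dist.pdf(R)

def moment(i, rho0=1.0):
    """Generalised moment m_i = rho0 * int dR rho~(R) (R/Rbar - 1)^i."""
    return rho0 * trapezoid(pdf * (R / Rbar - 1.0)**i, R)

print([round(moment(i), 6) for i in range(4)])
# expected: m_0 = rho (here 1), m_1 = 0 by construction, m_2 = sigma^2
```

That $`m_1`$ vanishes and $`m_2/m_0=\sigma ^2`$ confirms that the weight functions $`w_i(R)`$ measure fluctuations about the mean diameter.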
At large $`R`$, the leading term in $`\mu ^{ex}(R)`$ is the $`PV`$ work required to generate a cavity sufficiently large to accommodate the test sphere. This contribution varies as $`R^3`$. Motivated by this, we assume that in a hard-sphere crystal or fluid $`\mu ^{ex}(R)`$ has the simple analytic form

$$\mu ^{ex}(R)=-k_BT\mathrm{ln}W(R)\simeq \lambda _0+\lambda _1R+\lambda _2R^2+\lambda _3R^3,$$ (4)

where consistency demands that the coefficients $`\lambda _i`$ depend only on the four moments $`m_0,\mathrm{}m_3`$ of the polydisperse distribution. Two of the four unknown coefficients may be determined from the known small- and large-$`R`$ limits of $`W(R)`$. This fixes $`\beta \lambda _0=-\mathrm{ln}(1-\varphi )`$ and $`\lambda _3=(\pi /6)P`$, with $`\varphi `$ the volume fraction and $`\beta =1/k_BT`$.

Having specified the general form expected for $`\mu ^{ex}(R)`$, we now outline the calculation of the size-dependent chemical potential in the crystal. From Eq. 4, the probability to insert an arbitrary-sized test particle into any two hard-sphere systems will be equal if the two distributions have the same first four moments. In this sense the two systems may be termed “equivalent”. Since a binary mixture can always be chosen so as to match any four moments, we look at the “equivalent” binary substitutionally-disordered crystal, for which simulation data is available. By looking at test particles with sizes equal to those of the two species in the binary mixture, for which the chemical potentials are known, the remaining two unknown coefficients ($`\lambda _1`$ and $`\lambda _2`$) in the general expression for $`\mu ^{ex}(R)`$ are determined. The resulting predictions for the polydisperse crystal have been compared with simulation data previously. Agreement is good. For the polydisperse fluid, accurate expressions for $`\mu ^{ex}(R)`$ are available. We use the approximate BMCSL equation of state, which for a Schultz distribution has the closed form

$$\frac{\pi }{6}\beta P_f\overline{R}^3=\frac{\xi }{1+\sigma ^2}+\frac{3\xi ^2}{1+\sigma ^2}+(3-\varphi )\xi ^3$$ (5)

where $`\xi =\left(\frac{1}{1+\sigma ^2}\right)\frac{\varphi }{1-\varphi }`$. The excess free energy per particle is found by integrating Eq. 5. Differentiation then yields an expression for the particle potential $`\mu ^{ex}(R)`$ which is of the form of Eq. 4.

The total polydisperse free energy $`f`$ (with $`f=F/V`$) consists of ideal and excess terms, $`f=f^{id}+f^{ex}`$, which depend in very different ways on the distribution $`\rho (R)`$. The excess free energy, $`f^{ex}=\int dR\,\rho (R)\mu ^{ex}(R)`$, is a function only of the four moment variables $`m_0,\mathrm{}m_3`$. The ideal term, $`\beta f^{id}=\int dR\,\rho (R)\mathrm{ln}(\rho (R))`$, by contrast, depends upon the detailed shape of the function $`\rho (R)`$, so formally, at least, the total free energy $`f`$ resides in an infinite-dimensional space. Sollich, Cates and Warren have shown that the full polydisperse phase diagram can be approximated by replacing the ideal free energy by a projected term $`\widehat{f^{id}}(\{m_i\})`$ which includes only those contributions that depend on a finite set of moment variables. The remaining contributions to the ideal free energy, from those degrees of freedom of $`\rho (R)`$ which can be varied without affecting the selected moments, are chosen to minimise the free energy. The power of this approach is that, by including more moment variables, the calculated phase diagram approaches the actual phase diagram with increasing precision.
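For concreteness, the two printed formulas transcribe directly into code. This is a sketch only: the minus signs follow the reconstruction of Eqs. 4 and 5 above, and the $`\lambda _i`$ are left as inputs, since $`\lambda _1`$ and $`\lambda _2`$ come from the matching to binary-crystal simulation data described in the text.

```python
def bmcsl_reduced_pressure(phi, sigma):
    """Reduced BMCSL fluid pressure of Eq. (5):
    (pi/6) * beta * P_f * Rbar^3 for a Schultz size distribution."""
    xi = (1.0 / (1.0 + sigma**2)) * phi / (1.0 - phi)
    return (xi / (1.0 + sigma**2)
            + 3.0 * xi**2 / (1.0 + sigma**2)
            + (3.0 - phi) * xi**3)

def mu_excess(R, lam):
    """Cubic size-dependence of the excess chemical potential, Eq. (4).
    lam = (l0, l1, l2, l3); the limits of the insertion probability W(R)
    fix beta*l0 = -ln(1 - phi) and l3 = (pi/6) * P."""
    l0, l1, l2, l3 = lam
    return l0 + l1 * R + l2 * R**2 + l3 * R**3

print(bmcsl_reduced_pressure(phi=0.45, sigma=0.05))  # reduced fluid pressure
```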
The position of equilibrium is fixed by the equality of the ‘moment’ chemical potentials, $`\mu _i=\partial \widehat{f}/\partial m_i`$, and of the pressure $`P`$ among all phases, with $`\widehat{f}`$ the projected free energy. For polydisperse hard spheres the excess moment chemical potentials are simply combinations of the (known) coefficients $`\{\lambda _i\}`$ in $`\mu ^{ex}(R)`$ (Eq. 4), since $`\mu (R)=\delta \widehat{f}/\delta \rho (R)=\sum _i(\partial \widehat{f}/\partial m_i)w_i(R)=\sum _i\mu _iw_i(R)`$. The first two ideal moment potentials are, ignoring unimportant factors, $`\mu _0^{id}=\mathrm{ln}\rho -\alpha \mathrm{ln}\overline{R}`$ and $`\mu _1^{id}=\alpha \overline{R}`$.

In order to understand the qualitative features of the polydisperse transition, we consider first the simplest description, in which only the lowest moment ($`m_0`$) is retained in the projected free energy. In this limit there is no size fractionation, so the normalised diameter distribution, $`\stackrel{~}{\rho }(R)`$, is fixed and equal in all phases. The location of the fluid-solid transition is determined by equating $`P`$ and $`\mu _0`$, the chemical potential of the mean-sized particle, in each of the crystal and fluid phases. In this way we obtain the phase diagram of Fig. 1. At low densities we find, in qualitative agreement with previous work, that the density discontinuity at freezing $`\mathrm{\Delta }\rho `$ decreases with increasing polydispersity and eventually vanishes at the point $`\sigma _t=0.0833`$, $`\rho _t=1.111`$. However, at high polydispersity the calculated diagram contains a novel feature. For $`0.07\lesssim \sigma \lesssim 0.083`$ we find a further transition from the crystal back to a disordered phase. The location of this polydispersity-induced melting transition varies sharply with polydispersity. The range of densities for which a crystal is found shrinks with increasing polydispersity until, at $`\sigma _t`$, the crystal of density $`\rho _t`$ disappears completely from the phase diagram. At the point ($`\rho _t`$,$`\sigma _t`$) the line of fluid-to-crystal transitions intersects an upper line of crystal-to-amorphous transitions. At all points in the ($`\rho `$,$`\sigma `$) plane the freezing transition remains first-order, so the singularity at ($`\rho _t`$,$`\sigma _t`$) is equivalent to the point of equal concentration seen in molecular mixtures and is not a critical point – so providing an answer to the second of our questions.

We now turn to the vanishing density discontinuity in the vicinity of the point of equal concentration. The Gibbs free energy difference $`\mathrm{\Delta }g=g_s-g_l`$ (with $`g=G/N`$) between the solid and liquid phases, as a function of pressure for three fixed values of $`\sigma `$, is shown in Fig. 2. The re-entrant nature of the freezing transition is very evident, with a stable crystal appearing only in an intermediate range of pressures bounded by the two transitions where $`\mathrm{\Delta }g=0`$. The density change $`\mathrm{\Delta }\rho `$ at the liquid-solid transition is given by the slope of the free energy curve at the point $`\mathrm{\Delta }g=0`$, since $`\partial \mathrm{\Delta }g/\partial P=(1/\rho _s)-(1/\rho _l)`$. Increasing the polydispersity raises the free energy of the solid relative to the fluid, displacing the $`\mathrm{\Delta }g`$ curve vertically and, as is evident from Fig. 2, reducing the density jump at the transition. At the terminal polydispersity the solid curve just touches the fluid curve, so the tangent is horizontal and $`\mathrm{\Delta }\rho =0`$.
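Two short steps of standard thermodynamics, not specific to this paper, make this explicit. At fixed temperature $`dg=v\,dP`$ with $`v=1/\rho `$, so

$$\frac{\partial \mathrm{\Delta }g}{\partial P}\bigg|_T=\frac{1}{\rho _s}-\frac{1}{\rho _l},$$

and for hard spheres, whose internal energy is purely kinetic ($`\mathrm{\Delta }u=0`$ at fixed $`T`$), one has $`\mathrm{\Delta }g=\mathrm{\Delta }u-T\mathrm{\Delta }s+P\mathrm{\Delta }v=-T\mathrm{\Delta }s+P\mathrm{\Delta }v`$. Hence $`\mathrm{\Delta }g=0`$ together with $`\mathrm{\Delta }v=0`$ forces $`\mathrm{\Delta }s=0`$, which is the entropy statement made next.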
In a system of hard spheres (where the internal energy is constant) the condition $`\mathrm{\Delta }\rho =0`$ necessarily requires the entropy change at this point to vanish as well. Clearly, while the underlying microscopic transition remains first-order, the first derivatives of the thermodynamic potential are continuous at $`\sigma _t`$. A conventional classification of this transition, following the ideas of Ehrenfest, is clearly inappropriate.

Retaining more moments in the projected free energy allows for the possibility that different-sized particles are partitioned between phases. To establish the effect of fractionation we have recalculated the phase equilibria with two moment variables. The phase diagram, now given by equating $`P`$ and the moment potentials $`\mu _0`$ and $`\mu _1`$ in all phases, is unchanged in topology from Fig. 1. The point of equal concentration is retained, although shifted slightly to $`(\rho _t,\sigma _t)=(1.115,0.0831)`$. Hence our prediction of a re-entrant freezing transition seems to be robust. The extent of fractionation is generally small, although it increases as $`\sigma \to \sigma _t`$, with the larger particles preferentially found in the crystal phase. Details of our calculations are given elsewhere.

The appearance of an equilibrium amorphous phase may be understood simply from maximum-packing arguments. For uniform-sized spheres the maximum density of a randomly packed Bernal glass ($`\rho _{rcp}\simeq 1.22`$) is significantly smaller than the geometric limit of a close-packed hexagonal or fcc crystal ($`\rho _{cp}=\sqrt{2}`$). The greater packing efficiency of the crystal ensures that, at high densities, its particles have more freedom and so a higher entropy than those in the fluid phase. The stable high-density phase of uniform hard spheres is therefore crystalline. Polydispersity affects crystalline and disordered phases in different ways. In an amorphous phase, small particles pack in the cavities between large particles and $`\rho _{rcp}`$ increases with $`\sigma `$, while the constrained environment of a fixed repeating unit cell causes the maximum density of a crystal, $`\rho _{cp}`$, to decrease with $`\sigma `$. Computer simulations indicate that the limiting densities of amorphous and crystalline structures become equal at $`\sigma \simeq 0.05`$. For higher polydispersities disordered structures fill space more efficiently than ordered ones. Consequently the appearance of an equilibrium amorphous phase, and the ensuing re-entrant freezing transition, should be a universal feature of all polydisperse systems – so answering the last of our questions.

In conclusion, we have presented a simple mean-field model of polydisperse hard spheres which suggests that the equilibrium state at high polydispersities and densities is amorphous. An equilibrium crystal is found only at intermediate densities. The growing stability of the fluid phase with polydispersity causes a singularity in the density-polydispersity phase diagram which we identify as a point of equal concentration. Finally, although we use mean-field theory, our results should be robust with respect to fluctuation effects, since the transition we find is not critical and the thermodynamic functions are not singular at this point.

It is a pleasure to thank Mike Cates, Peter Sollich and Richard Sear for helpful discussions. The work was made possible by financial assistance from the Engineering and Physical Sciences Research Council.
# MATTERS OF GRAVITY

The newsletter of the Topical Group on Gravitation of the American Physical Society

Number 13, Spring 1999

## Editor

Jorge Pullin
Center for Gravitational Physics and Geometry
The Pennsylvania State University
University Park, PA 16802-6300
Fax: (814) 863-9608
Phone: (814) 863-9597
Internet: pullin@phys.psu.edu
WWW: http://www.phys.psu.edu/~pullin

## Editorial

I just wanted to renew the invitation to everyone to suggest articles for the newsletter. The only way to keep the newsletter vibrant and balanced is if I hear from you; don't hesitate to email me with suggestions. The next newsletter is due February 1st.

If everything goes well this newsletter should be available in the gr-qc Los Alamos archives under number gr-qc/9902011. To retrieve it send email to gr-qc@xxx.lanl.gov (or gr-qc@babbage.sissa.it in Europe) with Subject: get 9902011 (numbers 2-10 are also available in gr-qc). All issues are available on the WWW: http://vishnu.nirvana.phys.psu.edu/mog.html

A hardcopy of the newsletter is distributed free of charge to some members of the APS Topical Group on Gravitation. It is considered a lack of etiquette to ask me to mail you hard copies of the newsletter unless you have exhausted all your resources to get your copy otherwise. If you have comments/questions/complaints about the newsletter, email me. Have fun.

Jorge Pullin

## Editorial policy

The newsletter publishes articles in three broad categories:

1. News about the topical group, normally contributed by officers of the group.
2. Research briefs, comments about new developments in research, typically by an impartial observer. These articles are normally by invitation, but suggestions for potential topics and authors are welcome by the correspondents and the editor.
3. Conference reports; organizers are welcome to contact the editor or correspondents, and the reports are sometimes written by participants in the conference in consultation with organizers.

Articles are expected to be less than two pages in length in all categories. Matters of Gravity is not a peer-reviewed journal for the publication of original research. We also do not publish full conference or meeting announcements, although we might consider publishing a brief notice with indication of a web page or other contact information.

## Correspondents

* John Friedman and Kip Thorne: Relativistic Astrophysics
* Raymond Laflamme: Quantum Cosmology and Related Topics
* Gary Horowitz: Interface with Mathematical High Energy Physics and String Theory
* Richard Isaacson: News from NSF
* Richard Matzner: Numerical Relativity
* Abhay Ashtekar and Ted Newman: Mathematical Relativity
* Bernie Schutz: News from Europe
* Lee Smolin: Quantum Gravity
* Cliff Will: Confrontation of Theory with Experiment
* Peter Bender: Space Experiments
* Riley Newman: Laboratory Experiments
* Warren Johnson: Resonant Mass Gravitational Wave Detectors
* Stan Whitcomb: LIGO Project
* Peter Saulson: former editor, correspondent at large

## Topical group news

<br>

Jim Isenberg, TGG secretary, University of Oregon
jim@newton.uoregon.edu

* Election News

The ballot for this year's election is complete, and it will very soon be going out to all members. The list of candidates is as follows:

Vice Chair: Richard Matzner, Bob Wald.
Secretary/Treasurer: David Garfinkle, Ted Jacobson.
Delegate (Slot 1): John Friedman, Jennie Traschen.
Delegate (Slot 2): Matt Choptuik, Ed Seidel.

PLEASE VOTE.
* Annual Meeting

This year our annual meeting will be earlier than usual. It will be in conjunction with the APS Centennial Meeting, 20-26 March, 1999, in Atlanta, Georgia. This meeting is going to be very big. There will be a lot of centennial stuff (exhibits, lectures, historical talks), and it should be quite nice. Please check the web site: http://www.aps.org/centennial

The activities tied in with GR and gravitation include the following:

Centennial Symposium: Einstein's Legacy: Nature's Experiments in Gravitational Physics (Tuesday, 24 March)

* Clifford Will: Einstein's Relativity Put to Nature's Test: A Centennial Perspective
* Robert Kirshner: Was the Cosmological Constant Einstein's Greatest Mistake?
* David Spergel: The Cosmic Microwave Background: A Bridge to the Early Universe
* Kip Thorne: A New Window on the Universe: The Search for Gravitational Waves

Invited Session: Progress in the Theory of Gravitation (Thursday, 26 March)

* Robert Wald: Classical general relativity
* Saul Teukolsky: Numerical methods
* Gary Horowitz: Quantum gravity

Invited Session: Instrumentation for Gravitational Radiation Detection (Monday, 23 March)

* David Shoemaker: Interferometric detectors - overview
* Peter Fritschel: Gravitational wave interferometer configurations
* Jordan Camp: Requirements and performance of optics for gravitational wave interferometry
* Eric Gustafson: Lasers for gravitational wave interferometry
* Peter Saulson: Thermal noise in gravitational wave interferometers

There will also be a number of focus sessions with invited and contributed talks (see the web site), plus the Annual TGG Business Meeting, which will be on Wednesday at 5:30. PLEASE COME.

* Travel Support for the Annual Meeting

Last year, we decided that a useful way to spend our accumulating treasure is to support young people coming to the annual meeting. So this year, we will begin a program of small grants to help pay travel and/or lodging for the meeting in Atlanta. The people eligible for these grants are students and postdocs. If you wish to apply, please send the following to Jim Isenberg (jim@newton.uoregon.edu):

1) A brief CV
2) A list of your publications
3) A supporting letter from a faculty member or research group director
4) A tentative travel plan (including rough cost) for going to the meeting

The applications should be in by the 20th of February.
## We hear that…

<br>

Jorge Pullin, Editor
pullin@psu.edu

Stephen Hawking has received the Lilienthal prize from the APS “For boldness and creativity in gravitational physics, best illustrated by the prediction that black holes should emit black body radiation and evaporate, and for the special gift of making abstract ideas accessible and exciting to experts, generalists, and the public alike.”

Luis Lehner has received the Metropolis award from the APS “For developing a method that significantly advances the capability for modeling gravitational radiation by making possible the stable numerical solution of Einstein's equation near moving black holes.”

Beverly Berger was elected fellow of the APS “For her pioneering contributions to global issues in classical general relativity, particularly the analysis of the nature of cosmological singularities, and for founding the Topical Group on Gravitation of the APS.”

Joan Centrella was elected fellow of the APS “For her original contributions to numerical relativity, cosmology, and astrophysics, in particular for her studies of large-scale structure in the universe and sources of gravitational radiation.”

Ron Drever was elected fellow of the APS “For his fundamental experiment to test the isotropy of space, and for his pioneering contributions to laser interferometry as a tool for gravitational-wave detection.”

Bernie Schutz was elected fellow of the APS “For his pioneering work in the theory of gravitational radiation, for the discovery of new instabilities in rotating, relativistic stars, and for elucidating how gravitational-wave observations can reveal astrophysical and cosmological information.”

Stu Shapiro was elected fellow of the APS “For his broad contributions to theoretical astrophysics and general relativity, including the physics of black holes, neutron stars, and large N-body dynamical systems, and his pioneering use of supercomputers to explore these areas.”

Our warmest congratulations to them all!

## The Chandra Satellite

<br>

Beverly Berger, Oakland University
berger@oakland.edu

Those of you who knew Chandra and those who only knew of him will be pleased to learn that NASA has named its soon-to-be-launched Advanced X-ray Astrophysics Facility (formerly AXAF) the Chandra X-ray Observatory. The Chandra Observatory will join the Hubble Space Telescope and the Compton Gamma-ray Observatory in NASA's program of major space-based astronomical facilities. The final observatory in the series, for infrared astronomy (SIRTF), is under development.

The X-ray telescope in the Observatory has a larger collecting area (400 cm<sup>2</sup> at 1 keV) and significantly better angular resolution ($`0.5^{\prime \prime }`$) than previous X-ray telescopes such as those on the Einstein and Rosat observatories. Instruments include a CCD Imaging Spectrometer, developed by Penn State and MIT, and a High Resolution Camera, built by the Smithsonian Astrophysical Observatory. After Space Shuttle deployment, rockets will boost the Chandra Observatory into an unusual elliptical orbit with apogee more than 1/3 of the distance to the moon. This will allow it to spend most of its time above the earth's radiation belts. The Chandra X-ray Observatory Center (CXC), operated by SAO, will control science and flight operations of the Observatory.

Excerpts from the NASA press release are given below. For more information see the Chandra Observatory Web Site at http://xrtpub.harvard.edu/pub.html.
NASA's Advanced X-ray Astrophysics Facility has been renamed the Chandra X-ray Observatory in honor of the late Indian-American Nobel laureate, Subrahmanyan Chandrasekhar. The telescope is scheduled to be launched no earlier than April 8, 1999 aboard the Space Shuttle Columbia mission STS-93, commanded by astronaut Eileen Collins.

Chandrasekhar, known to the world as Chandra, which means “moon” or “luminous” in Sanskrit, was a popular entry in a recent NASA contest to name the spacecraft. The contest drew more than six thousand entries from fifty states and sixty-one countries. The co-winners were a tenth grade student in Laclede, Idaho, and a high school teacher in Camarillo, CA.

“Chandra is a highly appropriate name,” said Harvey Tananbaum, Director of the CXC. “Throughout his life Chandra worked tirelessly and with great precision to further our understanding of the universe. These same qualities characterize the many individuals who have devoted much of their careers to building this premier x-ray observatory.”

“Chandra probably thought longer and deeper about our universe than anyone since Einstein,” said Martin Rees, Great Britain's Astronomer Royal.

“Chandrasekhar made fundamental contributions to the theory of black holes and other phenomena that the Chandra X-ray Observatory will study. His life and work exemplify the excellence that we can hope to achieve with this great observatory,” said NASA Administrator Dan Goldin.

Widely regarded as one of the foremost astrophysicists of the 20th century, Chandrasekhar won the Nobel Prize in 1983 for his theoretical studies of physical processes important to the structure and evolution of stars. He and his wife immigrated from India to the U.S. in 1935. Chandrasekhar served on the faculty of the University of Chicago until his death in 1995.

The Chandra X-ray Observatory will help astronomers worldwide better understand the structure and evolution of the universe by studying powerful sources of X rays such as exploding stars, matter falling into black holes, and other exotic celestial objects. X-radiation is an invisible form of light produced by multi-million degree gas. Chandra will provide x-ray images that are fifty times more detailed than those of previous missions. At more than 45 feet in length and weighing more than five tons, it will be one of the largest objects ever placed in Earth orbit by the Space Shuttle.

## Analytical event horizons of merging black holes

<br>

Simonetta Frittelli, Duquesne University
simo@mayu.physics.duq.edu

Probably all of us are familiar with the celebrated picture of the pair of pants that made it to the cover of the November 1995 issue of Science. This is the article where the geometry of the collision of two black holes is explained in rather lay terms (which has a definite appeal), and where an embedded picture of the event horizon of such a collision was generated by numerically integrating the light rays that generate the horizon. But since I wish to make a point, it wouldn't be wise for me to leave it up to the reader to retrieve his own copy of the journal because, if your desk is like mine, your copy must be buried beneath one of those stacks of paper. Or, unfortunately enough, it could even have been filed so ingeniously that you are now sure not to be able to find it. Let me instead save you the trip to the library; to start with, let me just borrow from the article the main features of event horizons:

(i) The horizon is generated by null rays that continue indefinitely into the future.
(ii) The null generators may either continue indefinitely into the past, or meet other generators at points that are thereby considered as their starting points.

(iii) The cross-sectional area of the horizon increases monotonically to a constant at late times.

We might as well say that the event horizon is a null hypersurface that does not self-intersect, formed by following each null geodesic in a bundle of finite area into the past just up to the point where it meets another geodesic of the bundle. In fact, if we have an expanding null hypersurface of finite area at late times, which generically does self-intersect in the past, we might as well regard as an event horizon the piece of the null hypersurface that lies to the future of the crossovers, and regard the crossovers as the boundary of the horizon. In this context the “seam” along the inside of the trouser legs is a crossover line where the generators are terminated. The computer simulation of the horizon provided deep insight into the nature of this boundary of the event horizon, distinguishing the caustic points (where neighboring rays meet) from the simple crossover points (where distant rays intersect without focusing).

The news is that another pair of pants has recently been released. It looks pretty much like the original, up to smooth deformations. My point is, however, that this newest pair of pants is not the product of numerical integration, but is the embedded picture of an analytical event horizon. There is now an analytical expression for the intrinsic metric of the event horizon of merging eternal black holes.

The new pair of pants was constructed by Luis Lehner, Nigel Bishop, Roberto Gómez, Béla Szilágyi and Jeff Winicour, of the University of Pittsburgh relativity group, which has traditionally sustained an interest in null hypersurfaces (tell me about it). The recipe for making this event horizon calls for all sorts of ingredients available in the pantry of the characteristic formulation of the Einstein equations. Surprisingly, perhaps, it does not call for a spacetime metric. Surprisingly, because one might think that, since the metric is needed in order to find geodesics, the horizon could only be known a posteriori of finding the spacetime metric. The key to this remarkable work is to understand that the event horizon can be used as partial data for constructing the spacetime metric. From this point of view, the metric will be known a posteriori of finding the horizon! And the horizon is found by solving only constraint equations, namely, equations interior to the horizon itself.

More precisely, the horizon is regarded as one of two intersecting null hypersurfaces that jointly act as the initial surface for evolution in double null coordinates. In this case, the conformal metric of the null slice constitutes free data. The authors choose the conformal structure so that the 3-metric of the horizon is $`\gamma _{ij}=\mathrm{\Omega }^2h_{ij}`$, where $`h_{ij}`$ is the pullback of the Minkowski metric to a self-intersecting hypersurface which is null with respect to the Minkowski metric. (An example of such a self-intersecting hypersurface is the hypersurface traced in four dimensions by the imploding wavefront of an ellipsoid in 3-space.) The conformal factor is then determined by the projection $`n^an^bR_{ab}=0`$ of the vacuum Einstein equations along the null generators $`n^a`$ of the hypersurface.
This is an ordinary second-order differential equation for $`\mathrm{\Omega }`$ that determines the dependence of $`\mathrm{\Omega }`$ on the affine parameter $`u`$ along one null geodesic. Apparently, finding the solution is quite simple. The freedom is huge, but the authors point out that $`\mathrm{\Omega }^2`$ relates to the cross-sectional area of the light beam, and thus its asymptotic behavior is fixed by the condition that the area must be finite at late times, $`u\to \mathrm{}`$. Furthermore, the behavior of the area element at the boundary of the horizon is determined by the property of the boundary of containing either caustic points or plain crossovers, which is also used in restricting the behavior of $`\mathrm{\Omega }^2`$. The requirement that the Weyl curvature must be regular provides further tips for the integration. The intrinsic geometry $`\gamma _{ij}`$ of the horizon is thus found explicitly in terms of two angular coordinates $`(\theta ,\varphi )`$ labeling the light rays and the affine parameter $`u`$, acting as a time.

It is rather instructive to see how the figure arises. The pair of pants is constructed by stacking up 3-dimensional Euclidean embeddings of 2-dimensional surfaces obtained by slicing the horizon with constant-$`u`$ hypersurfaces. Actually, the figure corresponds to a case of symmetry of revolution, so that one dimension can be ignored, but this is exactly as in the case of the “computational” pair of trousers of the Science article. Also, strictly speaking, the calculation represents the fission of two white holes, but time reversal allows for its interpretation in terms of the merger of two black holes. At no time does the conformal geometry used as data exhibit more than one hole. However, the horizon obtained by integrating the single Einstein equation does have two holes at early affine times, and just one hole at late affine times. The authors attribute these interesting features to the richness of the Einstein equations; still, a good deal of foresight on their part must have helped bring them to light.

References:

R. A. Matzner, H. E. Seidel, S. L. Shapiro, L. Smarr, W.-M. Suen, S. A. Teukolsky and J. Winicour, Geometry of a Black Hole Collision, Science 270, pp. 941-947 (1995). Please check the Science article for references to the several authors that contributed computational results collected in the article.

L. Lehner, N. T. Bishop, R. Gómez, B. Szilágyi and J. Winicour, Exact Solutions for the Intrinsic Geometry of Black Hole Collisions, gr-qc/9809034.

## LIGO project update

<br>

David Shoemaker, MIT
dhs@ligo.mit.edu

LIGO installation is once again the focus of our efforts in the LIGO Lab. Because the activities are so hands-on, this Update is mostly in graphical form, with the text serving as captions for the photographs, which can be found at ftp://ligo.ligo.caltech.edu/pub/mog/figures29jan98.pdf

We start (Fig. 1) with a view from space of the Livingston Observatory. The 4km arms are visible (as the clearing of the forest), and in the detailed view the 'X'-shaped main building can be made out. Descending from angel to airplane height (Fig. 2), we now see clearly the high-bay space containing the interferometer components on the right, the covered beam tubes in the background and to the right, and the entrance and offices in the right foreground. (The 'overpass' and the black water tank are fire precautions.)
A view inside the high bay (Fig. 3), this time from Hanford, shows a number of the test-mass vacuum equipment chambers (the taller vertical cylinders), some 'HAM' multipurpose chambers (the two to the right), the main beam tubes (the large horizontal tubes to the far left and the right background), and a few of the many electronics racks. Navigating around the equipment involves lots of walking and climbing of stairs!

Several 2km sections of the beam tube (Fig. 4) have been successfully 'baked out', heated to drive off excess water and other contaminants. We see here a section of beam tube wrapped in insulation, with a heavy cable snaking across the floor to deliver current for heating. The concrete cover is arch-shaped.

A very significant effort is now underway in both Hanford and Livingston to install the seismic isolation system. Vacuum cleanliness requires 'bunny suits' (Fig. 5); all of the equipment placed in the vacuum must be cleaned and baked as well, to guard against contamination of the mirrors. Fig. 6 is a view after installation, with the bottom table, cylindrical masses and somewhat hidden springs between them, the top 'optics' table, and counterweights (emulating the final load) all visible.

The test masses, fused silica 25cm in diameter, are carefully characterized in a metrology interferometer (Fig. 7), and mounted in a cage with a simple wire loop. A detail of the point of departure of the wire is seen at the bottom left. The optical losses are determined by the polish and coating (and its cleanliness), and the mechanical losses are the point of connection with thermal noise, and so excruciating attention must be given to every detail.

The optics are installed (Fig. 8) in the vacuum system and given an initial alignment sufficiently precise that the reflected beam will be correctly aligned to within one beam tube radius (0.5 m) over the length of the beam tube (4 km). Fancy surveying!

Our last image (Fig. 9) shows a part of the optical table carrying the Pre-Stabilized Laser and some of the Input Optics (the University of Florida's contribution). The cylindrical vacuum system contains the frequency reference cavity for the laser, and the rectangular block of fused silica (developed at Stanford) is an optical cavity used in transmission to 'clean' the optical beam.

Our schedule calls for first tests of an interferometer using just the optics in the main building this summer, with the full 4km paths included in the fall of '99. Please visit one of the sites if you are in the vicinity; contact and other information can be found at http://www.ligo.caltech.edu.

## “Bicentenary of the Cavendish Experiment” Conference

<br>

Riley Newman, University of California Irvine
rdnewman@uci.edu

The measurement of G has become a popular occupation following the announcement a few years ago of widely discrepant new G values — most notably a value reported by the German PTB (NIST-equivalent) lab which is more than half a percent away from the accepted "CODATA" value (see volume 4 of this Newsletter). Thus the celebration of the 200th anniversary of Cavendish's G measurement with a conference in London last November, sponsored by IOP and organized by Terry Quinn of the BIPM and Clive Speake, drew a lively crowd of G measurers to Cavendish's turf.
Twelve groups actively pursuing G measurements were represented; six of these announced new or updated G values:

| Lab | $`G\times 10^{11}`$ ($`\mathrm{m^3\,kg^{-1}\,s^{-2}}`$) | uncertainty (ppm) | $`G-G_{CODATA}`$ ($`\sigma `$) | $`G-G_{PTB}`$ ($`\sigma `$) |
| --- | --- | --- | --- | --- |
| New Zealand MSL | 6.6742(6) | 90 | 1.5 | -51 |
| Zurich | 6.6749(14) | 210 | 1.4 | -27 |
| Wuppertal | 6.6735(9)(13) | 240 | 0.5 | -25 |
| JILA | 6.6873(94) | 1400 | 1.6 | -3 |
| BIPM | 6.683(11) | 1650 | 0.9 | -3 |
| Karagioz (Russia) | 6.6729(5) | 75 | 0.3 | -57 |
| Luther/Towler 1982 | 6.6726(5) | 64 | | -58 |
| PTB 1995 | 6.71540(56) | 83 | 42 | |

Here the comparisons with CODATA and PTB in standard deviations reflect uncertainties in both numbers compared. The last two lines give the 1982 result of Luther and Towler (on which the CODATA value is based, after doubling the assigned uncertainty), and the puzzling 1995 PTB result. All six new results are higher than but roughly consistent with the CODATA value. The PTB value remains a mystery, not to be lightly dismissed — the CODATA committee will have difficult decisions to make in its next round of assessments! No lab yet feels it has surpassed the 64 ppm accuracy that Luther and Towler assigned to their 1982 measurement, although a number of groups target accuracy of 10 ppm or better.

The twelve approaches to G measurement are remarkable in their variety – no two are very similar in technique. Heedful of Kuroda's caution about the perils of anelasticity, all but two of the experiments either avoid the use of a torsion fiber or use a fiber in a mode such that its internal strains are negligible.

The New Zealand MSL lab compares the torque on a torsion pendulum with that due to an electrostatic force which is in turn calibrated in terms of the angular acceleration it produces on the pendulum in a separate experiment. The Zurich group uses a beam balance to weigh kilogram masses in the presence of mercury-filled steel tanks – this group anticipates greatly increased accuracy when some systematics issues are resolved. Wuppertal measures the effect of source masses on the spacing of a pair of suspended masses which form a microwave Fabry-Perot cavity, and aims for accuracy better than 100 ppm. JILA uses its free-fall gravimeter to measure the change in g produced by a movable tungsten ring mass. BIPM uses a torsion balance suspended by a thin flat metal strip; its dominant torsional restoring force is gravitational, thus minimizing anelastic dangers. BIPM plans to use its instrument in two ways to measure G, in both a static displacement mode and a “time of swing” dynamic mode, aiming for a solidly reliable measurement at a 100 ppm level. The Russian lab uses a torsion pendulum in the classic dynamic mode used by Luther and Towler; it has been troubled in the past by poorly understood drifts.

Work in progress was reported by additional groups. Luther at LANL is developing an instrument which will use a bifilar suspension which, like the BIPM strip suspension, has a restoring torque which is dominantly gravitational in origin, thus circumventing anelasticity issues. The University of Washington and Irvine labs are both building instruments which use thin plate pendulums suspended in nearly pure quadrupole gravitational field gradients produced by special source mass configurations. The Washington approach elegantly avoids fiber-related problems by servoing its pendulum to a continuously rotating platform whose measured periodic angular acceleration reflects that of the pendulum, while the pendulum fiber never twists significantly.
The Irvine instrument uses the classic “time of swing” dynamic method, operating at 2K with a high-Q fiber whose anelastic effects should be sufficiently small and well understood not to limit the measurement's accuracy. Both the JILA group and the Taiwan group of W.T. Ni discussed plans for G measurements using a scheme like Wuppertal's but using an optical rather than microwave interferometer as the distance gauge. The SEE project was presented, which hopes to determine G and test other aspects of Newtonian gravity using measurements of “horseshoe” trajectories of test masses projected toward a field mass within a long cylindrical space capsule. Development at the Politecnico di Torino of a G measurement using a pendulum swinging between two mass spheres was discussed.

The conference also featured fascinating talks by G. Gillies and I. Falconer on the history of G measurements and the painfully shy but highly skilled man Cavendish. Thibault Damour lectured on the theoretical importance of measurements of G and its possible dependencies on mass composition, distance, and time. Thibault reminded us that, contrary to popular belief, G is NOT the least well known fundamental constant – that distinction belongs to the strong coupling constant!

G measurement lab contacts:

* New Zealand: t.armstrong@irl.cri.nz
* Zurich: schlammi@physik.unizh.ch
* Wuppertal: meyer@wpos7.physik.uni-wuppertal.de
* JILA: Fallerj@jila.colorado.edu
* BIPM: tquinn@si.bipm.fr
* Karagioz (Russia): irtrib@cityline.ru
* Ni (Taiwan): wtni@phys.nthu.edu.tw
* SEE: ASanders@utk.edu
* Politecnico di Torino: demarchi@polito.it
* Irvine: rdnewman@uci.edu
* Washington: gundlach@dan.npl.washington.edu

## Eighth Midwest gravity meeting

<br>

Richard Hammond, North Dakota
rhammond@plains.NoDak.edu

The Eighth Midwest Relativity Meeting was held September 24-25 at North Dakota State University in Fargo. The transparencies are electronically published on the MWRM8 website http://www.phys.ndsu.nodak.edu/mrm8.html

The meeting was kicked off with a bang by Beverly Berger's characterization of singularities for generic matter. In her work she assumed the existence of only one Killing vector, and discussed the outlook for the case of no symmetries. Bob Wald presented an intriguing discussion of the path integral in quantum gravity and, after emphasizing our incomplete understanding of both the wave function and the path integral, compared it with parameterized Schrödinger quantum mechanics in the context of tunneling, and showed there was a critical parameter for tunneling. Rich Hammond demonstrated how a cosmological term gradient breaks the principle of equivalence and gave an upper bound for its gradient based on laboratory results. Jean Krisch discussed cosmology in D+1 dimensions, and showed that as D increased the Planck temperature decreased. Jim Wheeler explained his conformal theory of gravity in 4+4 dimensions, while Terry Bradfield used a “compensating” field to obtain conformal invariance. Mike Martin examined gauge fields in the context of a classical unified theory of gravity and electromagnetism. Gabor Kunstatter made his debut at MWRMs with a discussion on the entropy of black holes (why is it so large?) in gravitation with a scalar field. Brett Taylor showed how a scalar field can make the black hole temperature zero, and Leopoldo Zayas discussed procedures for obtaining black hole entropy using string theory methods.
Bill Hiscock gave an overview of the OMEGA project, which, if chosen, will be one of NASA's new smaller missions and will orbit an interferometer around the Earth-Moon system. He emphasized that there are six known white dwarf binaries that OMEGA should detect. Shane Larson calculated the noise in the interferometer without resorting to the usual long wavelength approximation. Ted Quinn calculated the force on a scalar particle in curved space including the radiation reaction force. Ken Olum explained fast travel in terms of negative mass, or Casimir-type energy.

We all know the picture with Einstein standing in front of the blackboard with “$`R_{ik}=0`$?”. Dwight Vincent gave a fascinating account of this picture, from its photo-shoot in 1931 at the Mt. Wilson Observatory to its modern commercialization. Discussions focused on the meaning of the question mark, and what physics was actually being questioned.

Charro Gruver derived the material action for gravitation with torsion, and showed how to obtain the correct conservation law for angular momentum plus spin. Bill Pezzaglia derived equations of motion for particles with spin in the presence of torsion by generalizing the Lagrangian to include an area term. Robert Mann presented an interesting exact solution for the N-body problem in two-dimensional gravity with a scalar field, and showed, for example, that when the Hamiltonian becomes large with respect to $`3mc^2`$ the relativistic effects are fully evident. Thomas Baumgarte defined a new conformal 3-metric to modify the ADM formalism, and discussed geodesic slicing vs. harmonic slicing with regard to numerical solutions. Ed Glass gave an illuminating discussion of the Vaidya metric, and showed that a generalization leads to both a null fluid and a string fluid. Ivan Booth discussed boundary terms, and Steve Harris discussed the future causal boundary and multiply warped spacetimes. Mike Ashley discussed the properties of the a-boundary as a topological object. Homer Ellis explained space-time-time, and 'dark hole' solutions with singularities.

Ian Redmount examined quantum field theory states in Robertson-Walker cosmology and claimed that particles can only be well defined in an open universe. This created a lively discussion, with Wald pointing out that his results are probably not valid for massless particles (such as photons). The discussion continued when Berger, Mann, and John Friedmann joined the fray. Marc Paleth reported on the Wigner function and the relation between its peaks and regions of high correlation. Another lively discussion followed when Zayas questioned the $`\mathrm{}\to 0`$ limit. Andreas Zoupas explained how environmental decoherence arises from the master equation and the reduced equation. James Geddes gave a detailed account of the measure in the path integral from the point of view of a collection of subsets. Excitement grew as the meeting neared its end with John Friedmann's fascinating discussion of how, in a window between $`10^9`$ and $`10^{10}`$, perturbations in a rotating neutron star can grow. Finally, Abraham Ungar used the law of Einstein velocity addition to generalize the laws of motion, and suggested tests for his theory.
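A small aside for orientation (standard special relativity, not a summary of Ungar's talk): for collinear velocities, the Einstein addition law that Ungar starts from reads

$$u\oplus v=\frac{u+v}{1+uv/c^2},$$

while the general three-dimensional composition is neither commutative nor associative; it is exactly this 'gyro' structure that his generalized laws of motion are built around.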
## GWDAW '98

<br>

Lee Samuel Finn, Penn State
lsf@gravity.phys.psu.edu

On 19–21 November 1998, nearly 100 researchers from Australia, Asia, Europe and North America gathered in Central Pennsylvania to attend the Third Gravitational Wave Data Analysis Workshop, held under the auspices of Penn State's Center for Gravitational Physics and Geometry. Like the first GWDAW, hosted by MIT in December 1996, GWDAW'98 was organized as a workshop. There were only a small number of short (typically 15 minutes) invited talks, each of which introduced or commented on an outstanding challenge in gravitational wave data analysis. Each talk was followed by extended, moderated discussion (typically 45 minutes) among all the participants. Each invited speaker was admonished by the organizing committee to speak to the future: to look, not backward to the achievements of yesterday, but forward to the challenges ahead and how they might be addressed. To provide focus for the meeting, the organizing committee identified four different areas of study that would benefit from intense, focused discussion in a workshop setting. Speakers were chosen and the workshop was organized around the subjects of data diagnostics, upper limits and confidence intervals, LISA data analysis challenges, and collaborative data analysis.

The workshop's first morning began with an orientation: a series of status reports given by representatives of the four different interferometer projects currently under construction — GEO 600 (B. S. Sathyaprakash), LIGO (A. Lazzarini), TAMA 300 (N. Kanda and T. Tanaka), and VIRGO (A. Vicere') — with an emphasis on their developing plans for data handling and analysis.

Following lunch, the participants turned to the discussion of data diagnostics: the use of the data channel itself as a diagnostic monitor of both the instrument's health and the usefulness of the signal channel for analysis. Dr. S. Vitale (University of Trento) described a new, automated data quality monitoring system that has been developed and installed on the AURIGA cryogenic acoustic detector. Two components of this system were of particular interest to the workshop participants: the first was the requirements it placed on the distribution of the signal channel output over hour-long intervals; the second was the requirement of consistency in the excitation amplitude of the antenna's two resonant modes. Following this discussion, Dr. S. Mukherjee (Penn State) described the development of a Kalman filter that can extract from the signal channel of an interferometric detector the amplitude and phase of the mirror suspension violin modes. Passing gravitational waves do not excite these modes, while technical mechanical noise sources that move the interferometer test masses do; thus, the suspension modes are a sensitive monitor of mechanical disturbances masquerading as gravitational waves. The wide-ranging discussion that followed included the use of the Kalman predictor as a means of reducing the data dynamic range and, possibly, the removal of the lines before analysis for gravitational wave signals. (A schematic toy version of such a line tracker is sketched just after this session's summary.) The afternoon was rounded out by a presentation by Drs. N. Kanda (Miyagi University of Education) and D. Tatsumi (Institute for Cosmic Ray Research, University of Tokyo), who outlined the TAMA detector calibration scheme and discussed some of the effects of calibration error on the ability to reliably detect and identify the parameters of gravitational wave sources.
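The article does not include the actual Penn State implementation, but the idea of the violin-mode tracker mentioned above can be sketched in a few lines: a two-component state holds the slowly varying quadrature amplitudes of a line at known frequency, and a scalar-measurement Kalman filter updates them sample by sample. Everything below (frequencies, noise levels, variable names) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a single 'violin mode' line buried in white noise.
fs, f0, n = 2048.0, 345.0, 8192      # sample rate (Hz), line frequency (Hz), samples
t = np.arange(n) / fs
a_true, b_true = 1.0, -0.5           # quadrature amplitudes (slowly varying in reality)
y = (a_true * np.cos(2 * np.pi * f0 * t)
     + b_true * np.sin(2 * np.pi * f0 * t)
     + rng.normal(scale=5.0, size=n))

# Scalar-output Kalman filter for the state x = [a, b].
x = np.zeros(2)                      # state estimate
P = np.eye(2) * 10.0                 # state covariance
Q = np.eye(2) * 1e-8                 # random-walk drift of the quadratures
Rn = 25.0                            # measurement noise variance

for k in range(n):
    H = np.array([np.cos(2 * np.pi * f0 * t[k]),
                  np.sin(2 * np.pi * f0 * t[k])])
    P = P + Q                        # predict (identity dynamics)
    S = H @ P @ H + Rn               # innovation variance (scalar)
    K = P @ H / S                    # Kalman gain
    x = x + K * (y[k] - H @ x)       # update with the new sample
    P = P - np.outer(K, H @ P)       # posterior covariance

print("estimated quadratures:", x)   # converges towards (1.0, -0.5)
print("line amplitude:", np.hypot(*x))
```

Because passing gravitational waves do not excite the suspension modes, a tracker of this kind gives a continuous estimate of their amplitude and phase that can be monitored for mechanical disturbances, and the lines can then be subtracted before further analysis, one of the options discussed at the workshop.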
Despite their unprecedented sensitivity and bandwidth, there is no guarantee that the first generation of interferometric detectors will directly detect any sources. Nevertheless these instruments can — at the very least — set interesting and provocative upper limits on source strengths and rates. The second morning of the workshop was given over to a discussion of the use of data from gravitational wave detectors to set upper limits. Four talks punctuated a very animated discussion. The morning began with a tutorial, including several worked examples, on the construction of confidence intervals and credible sets, given by Dr. S. Finn (Penn State University). Two lessons were evident in this presentation: the first, that the analysis involved in the construction of upper limits and confidence intervals is as involved as that which goes into deciding upon detection or measuring parameter values; and the second, that the procedures for constructing confidence intervals and credible sets involve choices, and that these affect quantitatively the upper limits or confidence intervals derived. Following this tutorial, Dr. S. Mohanty (Penn State University) described how, even in the absence of a known waveform, one can set upper limits on the gravitational wave strength from gamma-ray bursts or other potential gravitational wave sources associated with an astrophysical trigger. After a lengthy discussion period, Dr. B. Allen (University of Wisconsin, Milwaukee) discussed an analysis of the LIGO 40M prototype November 1994 data set for binary inspiral signals. Focusing on the largest “detected” signal in the data set, this collaboration placed what might be termed an upper limit on the possible sets of upper limits that could arise from a more detailed analysis. Finally, Dr. M. Papa (Albert Einstein Institute) inaugurated a discussion of several different methods for searching for weak periodic gravitational wave signals in long stretches of data.

The afternoon of the second day focused on data analysis challenges associated with LISA, the proposed Laser Interferometer Space Antenna. LISA is currently being considered by NASA as a joint ESA/NASA mission, with a potential launch date as early as 2007. LISA has a much greater immediate potential for directly detecting gravitational waves from known astrophysical sources; however, the challenges of LISA data analysis are different from those for ground-based interferometers, and it is important to show now that LISA's promise can be realized through the analysis of the data it would collect. Four talks, spread over an afternoon of discussion, focused the workshop's attention on LISA's current status (Dr. S. Vitale, University of Trento), data analysis challenges (Dr. R. Stebbins, JILA and University of Colorado), its ability to direct the attention of astronomers to imminent activity in different parts of the sky (Dr. A. Vecchio, Albert Einstein Institute), and what turns out to be the unimportance of gravitational lensing of cosmological sources for LISA (Dr. C. Cutler, Albert Einstein Institute).

The final session of the conference was devoted to an animated discussion on collaborative data analysis. There is general agreement that the joint analysis of data from all simultaneously operating detectors will lead to more constraining upper limits, greater confidence in reported detections, greater measurement precision, and more information about observed sources. There are, however, practical impediments to collaborative data analysis.
These include the different “personalities” of the experimental apparatus, deriving from their great complexity. When different teams design, build, commission and characterize their instruments, where resides the common knowledge required to carry out an analysis that makes sophisticated use of the multiple data streams? This problem has been tackled by the acoustic detector community, and some of the lessons they have learned were discussed by Dr. G. Pizzella (University of Rome “Tor Vergata”). Other communities face the identical problem: Dr. B. Barish (Caltech) described the mechanism that the neutrino detection community has developed for data sharing. Finally, Dr. R. Weiss (MIT) closed out the workshop with a final presentation on the organization of data analysis in the COBE project and his own perspective on the importance of cooperative data analysis. Weiss identified what may be one of the more difficult problems we face: the cultural transition from the model of the scientist as an individual entrepreneur, who keeps hold of an idea and the credit for it, to the collaborative model, where ideas are shared, the community accepts the credit for the accomplishments, and the participants take their reward from being part of the community.

## Bad Honnef seminar

Alan Rendall, Albert Einstein Institute, rendall@aei-potsdam.mpg.de

A seminar with the title ‘Mathematical Problems in General Relativity’ took place in Bad Honnef, Germany, from 7th to 11th September 1998. This seminar, with about sixty participants from eighteen countries, was organized by Herbert Pfister (University of Tübingen) and Bernd Schmidt (AEI Potsdam) and financially supported by the Heraeus Foundation. It was primarily aimed at graduate students, and so strong emphasis was put on the pedagogical nature of the talks. At the same time it was intended to give a point of entry to recent results in the area of mathematical relativity.

A mathematical understanding of general relativity requires knowledge of the solution theory of the Einstein constraints and evolution equations and the corresponding mathematical background. Robert Beig gave an introduction to the constraints, together with a discussion of identifying spacetime Killing vectors in terms of initial data and of giving a four-dimensional characterization of special solutions of the constraints, such as the multiple black hole solutions of interest in numerical relativity. Oscar Reula treated the basic theory of the evolution equations. He also presented a more general account of the nature of hyperbolicity of systems of partial differential equations and new results on writing the Einstein equations in Ashtekar variables in symmetric hyperbolic form.

A subject which was given particular attention was that of asymptotically flat vacuum spacetimes, with Helmut Friedrich talking for four hours on his program to investigate the consistency of the classical conformal picture with the field equations and Alan Rendall talking for four hours on the theorem of Christodoulou and Klainerman on the nonlinear stability of Minkowski space. Friedrich presented his results on the stability of de Sitter and anti-de Sitter spacetimes as well as new developments concerning restrictions on asymptotically flat initial data related to smoothness of null infinity.
As explained in talks by Gabriel Nagy, insights from the anti-de Sitter case were important in the recent existence theorem for the initial boundary value problem for the vacuum Einstein equations by him and Friedrich. Rendall explained some of the analytical techniques used in the Christodoulou-Klainerman proof, such as energy estimates (also prominent in Reula’s talk), the Bel-Robinson tensor, bootstrap arguments and the null condition. Lars Andersson showed how some of these techniques, in particular the Bel-Robinson tensor, have been applied to a class of cosmological spacetimes in his work with Vincent Moncrief on the stability of the Milne model. This opens up the possibility that the Christodoulou-Klainerman result may not stand in splendid isolation much longer. Yvonne Choquet-Bruhat, in a talk on geometrical optics expansions for the Einstein equations, told how this reveals an ‘almost linear’ property of these equations, exceptional among hyperbolic systems, which is related to the null condition.

Matter was not neglected at the conference either. Herbert Pfister and Urs Schaudt described their progress towards constructing solutions of the Einstein-Euler equations with given equation of state representing rotating stars. Lee Lindblom talked on the inverse problem of reconstructing the equation of state given data on masses and radii of corresponding fluid bodies. In constructing fluid bodies it is always wise to keep an eye on the corresponding Newtonian problem. Jürgen Ehlers gave an introduction to his mathematical formulation of the Newtonian limit, which can be used to give a conceptually clear approach to this. Gerhard Rein summarized our present knowledge on the gravitational collapse of collisionless matter, including his recent numerical work with Rendall and Jack Schaeffer on the boundary between dispersion and black hole formation for this matter model. From the point of view of exact solutions, Gernot Neugebauer spoke on the inverse scattering method and Dietrich Kramer described an approach to producing null dust solutions.

The fact that the whole seminar took place in the Physics Centre of the German Physical Society in Bad Honnef and that all participants were accommodated in that building provided ample opportunity for formal and (particularly on the evening with free beverages) less formal discussions.
# Quantum-Classical Phase Transition of Escape Rate in Biaxial Spin Particles

## I Introduction

The decay rate of metastable states or transition rate between degenerate vacua is dominated at high temperatures by thermal activation, whereas at temperatures close to zero, quantum tunneling is relevant. At some critical temperature the transition from the classical to the quantum-dominated regime occurs. The transition can be first-order, with a discontinuous first derivative of the escape rate, or smooth with only a jump of the second derivative, in which case it is known as of second-order. Based upon the functional-integral approach, Affleck and Larkin and Ovchinnikov demonstrated with certain assumptions for the shape of the potential barrier that a second-order phase transition from the thermal to the quantum regime takes place at a critical temperature $`T_0=1/\beta _0`$, where $`\beta _0`$ is the period of small oscillations near the bottom of the inverted potential well. Chudnovsky, however, showed that the situation is not generic and that the crossover from the thermal to the quantum regime can quite generally be a first-order transition that takes place at $`T_c>T_0`$ for the case in which the period versus energy curve possesses a minimum. Shortly after the observation of Chudnovsky, sharp first-order transitions were found theoretically in spin tunneling for two systems. One of these is a ferromagnetic bistable large-spin particle described by the Hamiltonian $`\widehat{H}=-D\widehat{S}_z^2-B_x\widehat{S}_x`$, which is believed to be a good approximation for the molecular magnet $`Mn_{12}Ac`$ of spin $`S=10`$, and the other is a biaxial anisotropic model, whose effective mass was shown to be position-dependent. It is the external field $`B_x`$ (in the first model) and the anisotropy constant ratio $`\lambda `$ (in the second model) that control the order of the phase transition at the crossover. The same models with the magnetic field applied along alternative axes have also been studied, and the corresponding phase diagrams have been given. A sufficient criterion for the first-order transition in the context of tunneling can be obtained by studying the Euclidean time period in the neighbourhood of the sphaleron configuration at the peak of the potential barrier. In the present paper we incorporate the two parameters in a single spin tunneling model in order to investigate the dependence of the phase transition on both.

The phenomenon of spin tunneling has attracted considerable attention not only in view of the possible experimental test of the tunneling effect for mesoscopic single domain particles - in which case it is known as macroscopic quantum tunneling - but also because the spin system with an applied field provides various potential shapes and therefore serves as a testing ground for theories of instanton induced transitions. The key procedure in dealing with spin tunneling is to convert the discrete spin system into a continuous one by a spin-coordinate correspondence. There are various spin variable techniques which result in effective Hamiltonians. It is a long-standing question whether the different effective Hamiltonians for a given quantum spin system lead to the same result. Following Ulyanov and Zaslavskii we have obtained a new effective Hamiltonian for the spin particle with biaxial anisotropy, in addition to the one with a sine-Gordon potential and position-dependent mass already known in the literature.

The paper is organized as follows. In Sec.
II we give a brief review of the general theory of phase transitions of escape rates. Using the effective method of Ref. we then derive in Sec. III an alternative effective Hamiltonian for the ferromagnetic particle with a biaxial anisotropy without an applied magnetic field. It is shown that the sharp first-order transition from the classical to the quantum regime indeed exists, in agreement with the observation in our previous paper. In Sec. IV we then investigate the spin tunneling and phase transition with an external field applied along the easy axis, which breaks the symmetry and makes one of the degenerate vacua metastable. Our conclusions and discussions are given in Sec. V.

## II The Criterion for the Sharp First-Order Phase Transition of the Escape Rate

At temperature $`T`$ the escape rate of a particle through a potential barrier can be obtained by taking the ensemble average of the tunneling probability $`\mathrm{\Gamma }_t(E)`$, i.e.

$$\mathrm{\Gamma }(T)=\int \mathrm{\Gamma }_t(E)\,e^{-E/T}\,dE$$ (1)

where the tunneling probability at a given energy $`E`$ is defined by

$$\mathrm{\Gamma }_t(E)=A\,e^{-W(E)}$$ (2)

and

$$W(E)=2\int _{\varphi _i(E)}^{\varphi _f(E)}d\varphi \,\sqrt{2m(\varphi )[V(\varphi )-E]}$$ (3)

is evaluated from the periodic pseudoparticle (instanton or bounce) trajectories $`\varphi _c`$ between turning points $`\varphi _i`$ and $`\varphi _f`$. The pseudoparticle trajectory $`\varphi _c`$ minimizes the Euclidean action at the given energy $`E`$ above the metastable minimum such that $`\delta S(\varphi _c)=0`$ with periodic boundary condition $`\varphi _c(0)=\varphi _c(\beta )`$. The Euclidean action $`S_E`$ and Lagrangian $`\mathcal{L}_E`$ are

$$S_E=\int d\tau \,\mathcal{L}_E=W+\beta E,\qquad \mathcal{L}_E=\frac{1}{2}m(\varphi )\dot{\varphi }^2+V(\varphi )$$ (4)

respectively. Here $`\dot{\varphi }\equiv d\varphi /d\tau `$ and $`\tau =it`$ denotes Euclidean time. In general the mass $`m(\varphi )`$ could be position dependent in the context of spin tunneling. The time period $`\beta (E)`$ is related to temperature $`T`$ by $`\beta (E)=\frac{1}{T}`$, as usual. The prefactor $`A`$ in Eq. (2) results from a Gaussian functional integration over small fluctuations around the pseudoparticle trajectory $`\varphi _c`$. In the semiclassical approximation the escape rate at temperature $`T`$ is dominated by

$$\mathrm{\Gamma }(T)\sim e^{-S_{min}(T)},$$ (5)

where $`S_{min}(T)`$ is the minimum effective Euclidean action, which is chosen as the smaller of $`S_0`$ and $`S(T)\equiv S_E`$. Here, $`S_0`$ is the thermodynamic action defined by

$$S_0=\beta E_0$$ (6)

with $`E_0`$ being the barrier height for the pseudoparticle to tunnel through. Generally speaking, at $`E=0`$ (the bottom of the initial well) the Euler-Lagrange equation leads to the vacuum instanton or bounce solution. When $`0<E<E_0`$ (between the bottom and the top of the barrier) the trajectory $`\varphi (\tau )`$ shows periodic motion in the barrier region of the potential $`V(\varphi )`$ which is forbidden for the classical particle. The period of oscillation as a function of energy $`E`$ is given by the following integral

$$\beta (E)=\int _{\varphi _i(E)}^{\varphi _f(E)}d\varphi \,\frac{\sqrt{2m(\varphi )}}{\sqrt{V(\varphi )-E}}.$$ (7)

There are two independent criteria for the existence of first-order transitions between the classical and quantum regimes. The non-monotonic behavior of the oscillation period as a function of energy, i.e. the existence of a minimum in the $`\beta -E`$ curve, was once proposed as a condition for first-order phase transitions in quantum mechanical tunneling.
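The period integral (7) is straightforward to evaluate numerically for any barrier shape. The sketch below is our illustration only: the quartic barrier and constant mass are assumptions chosen for definiteness, not a potential from this paper. It finds the turning points and removes the inverse-square-root endpoint singularities with the substitution $`\varphi =\varphi _i+(\varphi _f-\varphi _i)\mathrm{sin}^2u`$; a minimum in the resulting $`\beta -E`$ curve signals a first-order crossover.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

m = 0.5                                # assumed constant mass
V = lambda x: (1.0 - x**2) ** 2        # assumed barrier; top at x = 0 with V(0) = 1

def period(E):
    """Oscillation period beta(E) of Eq. (7) for 0 < E < V(0)."""
    xf = brentq(lambda x: V(x) - E, 0.0, 1.0)   # right turning point, V(xf) = E
    xi = -xf                                    # symmetric barrier
    def integrand(u):
        # x = xi + (xf - xi) sin^2(u) regularizes both endpoint singularities
        x = xi + (xf - xi) * np.sin(u) ** 2
        dxdu = (xf - xi) * np.sin(2.0 * u)
        return np.sqrt(2.0 * m) * dxdu / np.sqrt(V(x) - E)
    val, _ = quad(integrand, 1e-9, np.pi / 2 - 1e-9)
    return val

for E in np.linspace(0.1, 0.9, 9):
    print(f"E = {E:.2f}   beta(E) = {period(E):.4f}")
```

Replacing `V` by the effective potentials derived below should reproduce the qualitative behavior of the period curves shown in Figs. 3 and 6.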
The corresponding dependence of the action on temperature leads to an abrupt change at some critical temperature, at which the first derivative of $`S_{min}(\beta )`$ is discontinuous, indicating that the crossover from the thermal to the quantum regime is a first-order transition in temperature. More generally, for a massive particle with position coordinate $`q`$, it has been shown that the existence of a first-order transition leads to the condition

$$\left[V^{\prime \prime \prime }(q_s)\left(g_1+\frac{g_2}{2}\right)+\frac{1}{8}V^{\prime \prime \prime \prime }(q_s)+M^{\prime }(q_s)\omega ^2g_2+M^{\prime }(q_s)\omega ^2\left(g_1+\frac{g_2}{2}\right)+\frac{1}{4}M^{\prime \prime }(q_s)\omega ^2\right]_{\omega _0}<0$$ (8)

where

$`g_1(\omega )={\displaystyle \frac{\omega ^2M^{\prime }(q_s)+V^{\prime \prime \prime }(q_s)}{4V^{\prime \prime }(q_s)}},`$ (9)

$`g_2(\omega )={\displaystyle \frac{2\omega ^2M^{\prime }(q_s)+V^{\prime \prime \prime }(q_s)}{4[4M(q_s)\omega _0^2+V^{\prime \prime }(q_s)]}}.`$ (10)

and

$$\omega _0^2=\omega _s^2=-\frac{V^{\prime \prime }(q_s)}{M(q_s)}$$ (11)

and $`M`$ is the position-dependent mass. The subscript $`s`$ stands for the coordinate of the bottom of the well of the inverted potential, i.e., the coordinate of the sphaleron. This criterion has been applied to various models studied earlier and the results coincide with previous ones.

## III Effective Hamiltonian of the Ferromagnetic Particle

The model we consider here is that of a nanospin particle which is assumed to have a biaxial anisotropy with the XOY easy plane and the easy X-axis in the XY-plane, and is described by the Hamiltonian

$$\widehat{H}=K_1\widehat{S}_z^2+K_2\widehat{S}_y^2,\qquad K_1>K_2>0$$ (12)

which has been extensively studied in the context of tunneling from various aspects such as ground state tunneling, tunneling at finite energy, namely, with the periodic instanton, and topological quenching of tunneling. Most recently it was shown that this model possesses a first-order phase transition from the thermal to the quantum regime. In all these investigations the quantum spin system of Eq. (12) is converted into a potential problem by using the conventional spin coherent state technique with approximate spin-coordinate correspondence (see Appendix). The effective potential is of the sine-Gordon type. In the present investigation we reexamine the quantum spin system in terms of a new method developed by Ulyanov and Zaslavskii. The spin operator representation in differential form on the basis of spin coherent states is given by the relations

$$\widehat{S}_z=-i\frac{d}{d\phi },\qquad \widehat{S}_x=S\mathrm{cos}\phi -\mathrm{sin}\phi \frac{d}{d\phi },\qquad \widehat{S}_y=S\mathrm{sin}\phi +\mathrm{cos}\phi \frac{d}{d\phi }$$ (13)

The eigenvalue equation

$$\widehat{H}|\mathrm{\Phi }\rangle =E|\mathrm{\Phi }\rangle $$ (14)

then becomes a second-order differential equation, i.e.

$`\left(K_1-K_2\mathrm{sin}^2\varphi \right){\displaystyle \frac{d^2\mathrm{\Phi }}{d\varphi ^2}}+K_2\left(S-{\displaystyle \frac{1}{2}}\right)\mathrm{sin}2\varphi {\displaystyle \frac{d\mathrm{\Phi }}{d\varphi }}+\left(E-K_2S^2\mathrm{cos}^2\varphi -K_2S\mathrm{sin}^2\varphi \right)\mathrm{\Phi }=0`$

where we have shifted the azimuthal angle by $`\frac{\pi }{2}`$ for convenience, $`\phi =\varphi +\frac{\pi }{2}`$. Following Ref.
, we use the transformation

$`\mathrm{\Psi }`$ $`=`$ $`\mathrm{\Phi }(\varphi )(K_1-K_2\mathrm{sin}^2\varphi )^{\frac{S}{2}},`$ (15)

$`x`$ $`=`$ $`{\displaystyle \int _0^\varphi }{\displaystyle \frac{d\varphi ^{\prime }}{\sqrt{1-\lambda \mathrm{sin}^2\varphi ^{\prime }}}}=F(\varphi ,\stackrel{~}{k}),\qquad \stackrel{~}{k}^2=\lambda ={\displaystyle \frac{K_2}{K_1}},\qquad \mathrm{sin}\varphi =\mathrm{sn}x$ (16)

The eigenvalue equation is then transformed to the following effective potential form

$$-K_1\frac{d^2\mathrm{\Psi }}{dx^2}+K_2S(S+1)\frac{\mathrm{cn}^2x}{\mathrm{dn}^2x}\mathrm{\Psi }=E\mathrm{\Psi }$$ (17)

where $`\mathrm{sn}x`$, $`\mathrm{cn}x`$ and $`\mathrm{dn}x`$ denote Jacobian elliptic functions. The effective Hamiltonian is seen to be

$$\widehat{H}=\frac{p^2}{2m}+U(x),\qquad m=\frac{1}{2K_1},\qquad U(x)=K_2S(S+1)\mathrm{cd}^2x.$$ (18)

with $`\mathrm{cd}x=\mathrm{cn}x/\mathrm{dn}x`$. We remark here that this derivation, unlike that in previous investigations, is exact and without a large $`s`$ limiting procedure. We also emphasize that this Jacobian elliptic potential is of interest on its own and has not been investigated before in the context of instanton considerations. The periodic instanton solution leads to an integral with finite energy and is obtained as

$$x_p=\mathrm{sn}^{-1}[k\,\mathrm{sn}(\omega \tau ),\stackrel{~}{k}]$$ (19)

The trajectory of one instanton, as half of the periodic bounce, is shown in Fig. 1 together with an added instanton–anti-instanton pair. We choose $`s=\sqrt{1000}`$ and $`K_1=1`$ (thus $`K_2=\lambda `$) in the diagrams. The period of this periodic configuration is seen to be

$$\beta (E)=\frac{4𝒦(k)}{\omega (E)}=\frac{2}{\sqrt{K_1}}\frac{1}{\sqrt{K_2s^2-E\lambda }}𝒦(k)$$ (20)

where $`𝒦(k)`$ is the complete elliptic integral of the first kind, and

$`k`$ $`=`$ $`\sqrt{{\displaystyle \frac{n^2-1}{n^2-\lambda }}},\qquad n^2={\displaystyle \frac{K_2s^2}{E}},`$ (21)

$`\omega `$ $`=`$ $`\omega _0\sqrt{1-{\displaystyle \frac{\lambda }{n^2}}},\qquad \omega _0^2=4K_1K_2s^2.`$ (22)

The action calculated along the above trajectory is $`S=W+\beta E`$ with

$$W=\frac{\omega }{\lambda K_1}\left[𝒦(k)-(1-\lambda k^2)\mathrm{\Pi }(\lambda k^2,k)\right]$$ (23)

Here we are particularly interested in the phase transition. To this end we show in Fig. 2 the shape of the potential for various values of $`\lambda `$. The peak of the barrier becomes flatter and flatter as $`\lambda `$ increases. The curve of $`\beta `$ versus $`E`$ is given in Fig. 3 and demonstrates the obvious first-order phase transition for $`\lambda >1/2`$. Fig. 4 shows the action as a function of temperature. Next we apply the criterion for the first-order phase transition to the model above. The sphaleron is located at $`x_s=0`$. Computing the corresponding quantities at the sphaleron position, i.e.

$`V[x_s]=K_2s^2,\qquad V^{\prime }[x_s]=0,\qquad V^{\prime \prime }[x_s]=-2K_2s^2(1-\lambda ),`$ (24)

$`V^{\prime \prime \prime }[x_s]=0,\qquad V^{\prime \prime \prime \prime }[x_s]=8K_2s^2(1-\lambda )(1-2\lambda )`$ (25)

Eq. (8) becomes

$$\frac{1}{8}V^{\prime \prime \prime \prime }(x_s)=K_2s^2(1-\lambda )(1-2\lambda )<0$$ (26)

and we regain the critical value $`\lambda _c=\frac{1}{2}`$ at which the first-order transition sets in. We see the new effective Hamiltonian with exact spin-coordinate correspondence leads to the same results as those of Ref. However, the physical interpretation of the sharp first-order phase transition is now different. In the present case the effective mass is constant and the sharp transition from quantum to classical behavior results from a flattening of the peak of the barrier. In Ref.
the first-order transition resulted from the position dependence of the mass, which makes the latter heavier at the top of the barrier.

## IV The phase transition with an applied field along the easy axis

The Hamiltonian with an applied magnetic field $`h`$ along the easy $`X`$-axis is given by

$$\widehat{H}=K_1\widehat{S}_z^2+K_2\widehat{S}_y^2-g\mu _Bh\widehat{S}_x,$$ (27)

where $`\mu _B`$ is the Bohr magneton, and $`g`$ is the spin $`g`$-factor which is taken to be $`2`$ here. The anisotropy energy associated with this Hamiltonian has two minima: the one on the $`+X`$-axis which is a metastable state and the other on the $`-X`$-axis. Between these two energy minima there exists an energy barrier, and the spin escapes from the metastable state either by crossing over or by tunneling through the barrier. Following the same procedure as in the previous section, the Hamiltonian of Eq. (27) can be mapped onto a point particle problem with effective Hamiltonian

$$\widehat{H}=\frac{p^2}{2m}+V(x)$$ (28)

where

$$m=\frac{1}{2K_1},\qquad V(x)=K_2s^2\left(1+\frac{\alpha ^2\lambda }{4}\right)\mathrm{sn}^2(x,\stackrel{~}{k})+K_2s^2\alpha \left(\mathrm{cn}(x,\stackrel{~}{k})\mathrm{dn}(x,\stackrel{~}{k})-1\right)$$ (29)

and where the metastable minimum of the potential has been shifted to $`x=0`$ for convenience. Now the effective mass is a constant. The potential is more like that of an inverted double-well potential with the sphaleron position $`x_s=\mathrm{cd}^{-1}(\frac{\alpha }{2},\stackrel{~}{k})`$ and barrier height

$$E_0=K_2s^2\left(1+\frac{\alpha ^2\lambda }{4}\right)\frac{1-\frac{\alpha ^2}{4}}{1-\frac{\lambda \alpha ^2}{4}}+K_2s^2\alpha \frac{(1+\frac{\alpha \lambda }{2})(\frac{\alpha }{2}-1)}{1-\frac{\lambda \alpha ^2}{4}}$$ (30)

Under the barrier a bounce configuration exists as shown in Fig. 5. We redraw the period diagram as a function of energy for the same parameters ($`\lambda =0.9`$, $`\alpha =1`$) in Fig. 6 and obtain the first-order transition from thermal activation to quantum tunneling as shown in Fig. 7. Applying the phase transition criterion to this model, we obtain

$`V^{\prime }(x_s)`$ $`=`$ $`0,\qquad V^{\prime \prime }(x_s)=-2K_2s^2(1-\frac{\alpha ^2}{4})(1-\lambda ),`$ (31)

$`V^{\prime \prime \prime }(x_s)`$ $`=`$ $`3K_2s^2\alpha (1-\lambda )^2\sqrt{{\displaystyle \frac{1-\frac{\alpha ^2}{4}}{1-\frac{\alpha ^2\lambda }{4}}}},`$ (32)

$`V^{\prime \prime \prime \prime }(x_s)`$ $`=`$ $`{\displaystyle \frac{2K_2s^2(1-\lambda )}{4-\alpha ^2\lambda }}\left((\lambda ^2-2\lambda )\alpha ^4-(7\lambda ^2-22\lambda +7)\alpha ^2-32\lambda +16\right)`$ (33)

and the frequency $`\omega _0^2`$ is

$$\omega _0^2=\omega _s^2=-\frac{V^{\prime \prime }(x_s)}{m}=-2K_1V^{\prime \prime }(x_s)$$ (34)

The expressions for $`g_1`$ and $`g_2`$ are now found to be

$`g_1`$ $`=`$ $`{\displaystyle \frac{\frac{3}{8}\alpha (1-\lambda )}{\sqrt{(1-\frac{\alpha ^2}{4})(1-\frac{\alpha ^2\lambda }{4})}}}`$ (35)

$`g_2`$ $`=`$ $`{\displaystyle \frac{\frac{1}{8}\alpha (1-\lambda )}{\sqrt{(1-\frac{\alpha ^2}{4})(1-\frac{\alpha ^2\lambda }{4})}}}`$ (36)

The critical line of the two parameters for the first-order transition requires

$$\lambda >\frac{2(2+\alpha ^2)}{8+\alpha ^2}$$ (37)

The corresponding phase diagram is shown in Fig. 8. From the diagram we observe several interesting features. First, the classical-quantum phase transition shows both first-order (region I) and second-order (region II) transition domains. We see that there is only a second-order transition for $`\lambda <0.5`$.
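As a quick numerical companion to Eq. (37), a minimal sketch (the function name and the parameter grid are our own choices) tabulates the critical line separating the two domains of Fig. 8:

```python
import numpy as np

def lambda_c(alpha):
    """Critical anisotropy ratio from Eq. (37): first order for lambda > lambda_c(alpha)."""
    return 2.0 * (2.0 + alpha**2) / (8.0 + alpha**2)

for alpha in np.linspace(0.0, 2.0, 5):
    print(f"alpha = {alpha:.1f}  ->  lambda_c = {lambda_c(alpha):.3f}")

# alpha = 0 recovers the zero-field critical value lambda_c = 1/2,
# and lambda_c reaches 1 at alpha = 2, where the barrier height E_0 of Eq. (30) vanishes.
```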
For materials with $`\lambda `$ larger than $`0.5`$ we can see that the order of the phase transition changes from first to second as $`\alpha `$ increases, and the phase boundary changes with $`\lambda `$ up to $`1`$. An alternative effective Hamiltonian obtained with the conventional application of the spin coherent state technique, as for the biaxial anisotropy spin particle without the applied magnetic field, has also been investigated. However, the effective Hamiltonian with approximate spin-coordinate correspondence gives rise to a result for the phase transition which differs from that of the Hamiltonian of Eq. (28) with exact spin-coordinate correspondence. The center of the position dependent mass in the Hamiltonian with approximate spin-coordinate correspondence in Refs. does not coincide with the position of the pseudoparticle, namely, the bounce here. We, therefore, believe that the instanton concept fails in this case. A detailed analysis will be reported elsewhere.

## V Conclusion

In summary, we investigated the biaxial spin model with applied magnetic field along the easy $`X`$-axis with the effective potential methods developed in Ref. The resulting periodic instanton solutions, with a potential in terms of Jacobian elliptic functions, and the corresponding phase transitions have been studied for the first time. For quantum spin particles with anisotropy constant $`\lambda >\frac{1}{2}`$ and above the critical value of the magnetic field $`\alpha `$, the energy dependence of the oscillation period shows the obvious first-order transition from thermal activation to thermally assisted quantum tunneling. Applying the criterion for the first-order transition, we obtained the phase diagram, which exhibits two domains separated by a critical line, indicating the first-order and second-order transitions respectively.

Appendix

In the conventional application of the spin coherent state technique, two canonical variables, $`\varphi `$ and $`p=s\mathrm{cos}\theta `$, are adopted with the usual quantization

$$[\varphi ,p]=i$$ (38)

We show in the following that the spin-coordinate correspondence is only approximate, with corrections of order $`O(s^{-1})`$. From the relation between the spin operators and the polar coordinate angles

$$S_x=s\mathrm{sin}\theta \mathrm{cos}\varphi ,\qquad S_y=s\mathrm{sin}\theta \mathrm{sin}\varphi ,\qquad S_z=s\mathrm{cos}\theta $$ (39)

the usual commutation relation of spin operators reads

$$[S_x,S_y]=s^2[\mathrm{sin}\theta \mathrm{cos}\varphi ,\mathrm{sin}\theta \mathrm{sin}\varphi ]=s^2\mathrm{sin}\theta [\mathrm{cos}\varphi ,\mathrm{sin}\theta ]\mathrm{sin}\varphi +s^2\mathrm{sin}\theta [\mathrm{sin}\theta ,\mathrm{sin}\varphi ]\mathrm{cos}\varphi $$ (40)

Using Eq. (39), one can prove the following relations

$$[\mathrm{sin}\theta ,\mathrm{cos}\varphi ]=A_+\mathrm{cos}\varphi +iA_{-}\mathrm{sin}\varphi ,\qquad [\mathrm{sin}\theta ,\mathrm{sin}\varphi ]=A_+\mathrm{sin}\varphi -iA_{-}\mathrm{cos}\varphi $$ (41)

with

$`A_+={\displaystyle \frac{1}{2}}\left(\sqrt{1-(\mathrm{cos}\theta +\alpha )^2}+\sqrt{1-(\mathrm{cos}\theta -\alpha )^2}\right),`$

$`A_{-}={\displaystyle \frac{1}{2}}\left(\sqrt{1-(\mathrm{cos}\theta +\alpha )^2}-\sqrt{1-(\mathrm{cos}\theta -\alpha )^2}\right)`$

where $`\alpha =1/s`$. Substituting (41) into Eq. (40), one has

$$[S_x,S_y]=s^2(-i)\mathrm{sin}\theta A_{-}=i\mathrm{cos}\theta \,s^2\alpha +O(s^2\alpha ^3)$$ (42)

i.e.

$$[S_x,S_y]=iS_z+O(s^{-1})$$ (43)

which implies that the usual commutation relation holds only in the large spin limit.
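The leading term of $`A_{-}`$ used in the last step follows from a short Taylor expansion (our worked step, using only the definition of $`A_{-}`$ and $`\alpha =1/s`$):

$$A_{-}=\frac{1}{2}\left[\sqrt{1-(\mathrm{cos}\theta +\alpha )^2}-\sqrt{1-(\mathrm{cos}\theta -\alpha )^2}\right]=-\frac{\alpha \mathrm{cos}\theta }{\sqrt{1-\mathrm{cos}^2\theta }}+O(\alpha ^3),$$

so that $`s^2\mathrm{sin}\theta \,A_{-}=-s^2\alpha \mathrm{cos}\theta +O(s^2\alpha ^3)=-s\mathrm{cos}\theta +O(s^{-1})`$, which is precisely the combination appearing in Eq. (42).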
Acknowledgment: This work was supported by the National Natural Science Foundation of China under Grant Nos. 19677101 and 19775033. J.-Q. L. also acknowledges support by a DAAD-K.C.Wong Fellowship.

Figure Captions:

Fig. 1: The effective potential and the corresponding periodic instanton configurations.

Fig. 2: The potentials for different values of $`\lambda `$.

Fig. 3: The oscillation period as a function of energy at $`\lambda =0.9`$.

Fig. 4: The action as a function of temperature indicating the first-order transition at $`\lambda =0.9`$.

Fig. 5: The effective potential in the case with external magnetic field.

Fig. 6: The oscillation period as a function of energy at $`\lambda =0.9`$ and $`\alpha =1`$.

Fig. 7: The action as a function of temperature showing the first-order transition in the case with external field.

Fig. 8: The phase diagram for the orders of phase transitions in the ($`\lambda ,\alpha `$) plane. Region I: the first-order domain; region II: the second-order domain.
## 1 Introduction

Gravitational lenses are powerful astrophysical tools to investigate cosmology, the Hubble constant, galactic structure and evolution, dust extinction, and AGN host galaxies. There are now 47 known galaxy lens systems, and their broad astrophysical utility relies on accurate photometry, astrometry and redshifts for as many systems as possible. These lenses consist of 2-4 source images (AGNs, quasars, hosts), superposed on a foreground lens galaxy within a diameter of $`1\mathrm{"}`$. Thus, precise and quantitative studies in the optical and IR require Hubble Space Telescope (HST) observations.

## 2 CASTLES Goals and Preliminary Results

CASTLES is an ongoing HST survey of all the known lensed systems and lens candidates. We are obtaining images with NICMOS and WFPC2 in the H, I and V bands. Some of its goals are: to create a complete, uniform high-resolution photometric sample of the known galaxy-mass lenses; to obtain redshift estimates for all lens galaxies which lack spectroscopic redshifts; to find any source or lens components that have escaped detection and determine their photometric properties; to obtain precise astrometric data for all source components to improve lens models and estimates of $`H_0`$; and to investigate the wide field environments of the lens galaxies and their role in lensing. In this contribution we present examples of CASTLES images (see Fig. 1) and a brief list of the preliminary findings:

Missing Lenses: As expected, the deeper CASTLES optical and IR observations are detecting most “missing” lens galaxies, even in cases where bright quasar images and small separations make this difficult. Among the 10 doubles in Lehár et al. (1999), we detected 9 lens galaxies (4 of which are discoveries). We failed only in the case of Q1208+1011, due to contrast problems. We never detect lenses in the wider quasar pairs (UM425, Q1429–008, Q1634+267, MGC2214+3550, and Q2345+007), whose extreme limits on the lens $`M/L`$ make lensing improbable and point to the binary quasar interpretation (see Kochanek, Falco & Muñoz, 1999).

Host Galaxies: CASTLES has detected the lensed AGN host galaxies for 7 quasars (MG0414+0534, Q0957+561, PG1115+080, H1413+117, CTQ414, BRI0952–0115, HE1104–1805), and 4 radio galaxies (MG0751+2716, MG1131+0456, B1600+434, B1608+656). Lensing provides a unique opportunity to study $`z>1`$ host galaxies, because the hosts are magnified to detectable angular sizes. A preliminary analysis (i.e. demagnification) finds that $`L_{host}<L_{*}`$ and indicates that even the most luminous quasars (e.g. PG1115+080) need not reside in particularly luminous hosts. For 2 of the QSO pairs (MGC2214+3550 and Q2345+007), the host morphology proves the systems are quasar binaries and not “dark” lenses.

Photometric Lens Redshifts: We have found that we can accurately estimate photometric lens redshifts by mapping the lenses onto the local “fundamental plane”, using passive evolution models for E- and K-corrections. These Bayesian estimates simultaneously use the available color, luminosity, mass, and structural information. Tests on ten double-image systems (Lehár et al. 1999) demonstrate agreement within $`\mathrm{\Delta }z_L<0.1`$ with spectroscopic redshifts.

Galaxy Structure: The CASTLES observations provide detailed constraints on lens galaxy structure, through shape and profile fitting.
We continue to find that most lens galaxies have de Vaucouleurs profiles, with shapes and colors consistent with passively evolving early-type galaxies; e.g. Q0142–100, BRI0952–0115, Q1017–207, B1030+071, HE1104–1805 (Lehár et al. 1999); MG1131+0456 (Kochanek et al. 1999) and PG1115+080 (Impey et al. 1998). B0218+357 and PKS1830–211 (Lehár et al. 1999) are spirals, but present a more complex photometric picture due to the high molecular gas column densities and implied dust extinction.

Dark Matter: We constrain and study the dark matter distribution of lenses by comparing their luminosity distribution to our improved lens mass models (Lehár et al. 1999). While most of the 2-image lenses can be fitted by a single ellipsoidal lens, some require substantial external shear, and most require such shear to explain misalignments between the lens mass and its light. Our observations reveal neighboring galaxies that produce the required shear (e.g. MG1131+0456, Kochanek et al. 1999). In some cases we can study the mass profile. We can use the mass determinations to estimate $`M/L`$ and explore its value and evolution with redshift.

Lens Galaxy Extinction: We use lensed image flux ratios as a function of wavelength to measure the differential extinction in the lens galaxies. Of the 10 doubles in Lehár et al. (1999), 2 are spirals (B0218+357 and PKS1830–211) and are heavily attenuated. The rest are ellipticals; 4 contain modest but measurable amounts of dust (SBS0909+523, Q1009–0252, HE1104–1805, and Q1208+101), and 4 show little or no differential extinction. PG1115+080 (Impey et al. 1998) and MG1131+0456 (Kochanek et al. 1999) are virtually transparent. The transparency of MG1131+0456 rules out the “dusty lens hypothesis” for the very red lenses.

$`H_0`$: The CASTLES astrometric precision of better than 3 mas is providing stringent new constraints for lensed systems with time delays. Our PG1115+080 data (Impey et al. 1998) show a round, $`R^{1/4}`$-law lens galaxy, which is part of a small group. With the measured time delay, our models robustly limit $`H_0<65\pm 5`$ $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. In B0218+357 (Lehár et al. 1999), we found that the lens was best fit by an exponential profile, consistent with its blue colors and small image separation. B0218+357 is one of the most isolated lenses, with an estimated tidal shear of less than 1%. With the observed time delay of $`12`$ days, the value of $`H_0`$ depends very strongly on the precise lens position, which we hope to improve with new observations. Our new image of Q0957+561 shows 2 images of the quasar host galaxy; their geometry rules out the existing lens models for the system.
# Generic Transmission Zeros and In-Phase Resonances in Time-Reversal Symmetric Single Channel Transport

## Abstract

We study phase coherent transport in a single channel system using the scattering matrix approach. It is shown that identical vanishing of the transmission amplitude occurs generically in quasi-1D systems if the time reversal is a good symmetry. The transmission zeros naturally lead to abrupt phase changes (without any intrinsic energy scale) and in-phase resonances, providing insights into recent experiments on phase coherent transport through a quantum dot.

In 1995, it was first demonstrated in an experiment using the Aharonov-Bohm (AB) interference effect that the electron transport through a quantum dot contains a phase coherent component. This experiment, however, was found to have some problem due to the so-called phase locking effect. Two years later, the experiment was refined using the four-probe measurement scheme so that the phase of the transmission amplitude through the dot can be measured in a reliable way. It was found that the phase increases by $`\pi `$ whenever the gate voltage to the dot sweeps through a resonance and that the profile of the phase increase is well described by the Breit-Wigner resonance formula. Unexpected properties were also discovered. The behavior of the phase evolution is identical (up to $`2\pi `$) for a large number of resonances, and between each pair of adjacent in-phase resonances there is an abrupt phase change by $`\pi `$, whose characteristic energy scale is much smaller than all other energy scales available in the experiment. On the other hand, the 1D Friedel sum rule,

$$\mathrm{\Delta }Q/e=\mathrm{\Delta }\text{arg}(t)/\pi ,$$ (1)

predicts that all neighboring resonances are off phase by $`\pi `$, which differs from the experimental findings. Thus two central questions arise: First, how can in-phase resonances occur? Does it imply that the Friedel sum rule is not valid for the quantum dot? Second, why do abrupt phase changes occur and why are they so sharp?

Many theoretical investigations addressed these questions. It was suggested that the Friedel sum rule is still valid and the abrupt phase changes are due to “hidden” electron charging events that do not cause conductance peaks. It was also speculated that the in-phase resonances are due to the strong Coulomb interaction, the finite temperature, or the Fano resonance. There was also a claim that they are due to peculiar properties of the AB ring instead of their being a true manifestation of the phase of the transmission amplitude. Regarding the characteristic energy scale, it was claimed that the width of the abrupt phase change is the true measure of the intrinsic resonance width $`\mathrm{\Gamma }`$ while the measured resonance peaks are thermally broadened.

In this paper, we present a new theory based on the Friedel sum rule and the time-reversal symmetry. (In Ref., the magnetic flux threading the dot is only a small fraction of a flux quantum.) One of the key observations is that the 1D Friedel sum rule (1) is not strictly valid for quasi-1D systems due to the appearance of the transmission zeros. To demonstrate this, we first discuss mirror reflection symmetric systems without magnetic fields.
Since the parity is a good quantum number, the scattering states can be decomposed into even and odd scattering states: for $`|x|>R`$, $`\psi _\mathrm{e}(x)=e^{-ik|x|}+e^{2i\theta _\mathrm{e}}e^{ik|x|}`$ and $`\psi _\mathrm{o}(x)=\text{sgn}(x)[e^{-ik|x|}+e^{2i\theta _\mathrm{o}}e^{ik|x|}]`$. (For $`|x|<R`$, there are scattering potentials which may have higher dimensional nature as in Ref. .) The outgoing waves are phase-shifted, and the Friedel sum rule ($`\mathrm{\Delta }Q_\mathrm{e}/e=\mathrm{\Delta }\theta _\mathrm{e}/\pi `$, $`\mathrm{\Delta }Q_\mathrm{o}/e=\mathrm{\Delta }\theta _\mathrm{o}/\pi `$) shows that whenever an even (odd) parity quasibound state is occupied, $`\theta _\mathrm{e}`$ ($`\theta _\mathrm{o}`$) shifts by $`\pi `$ (Fig. 1). Alternatively, left and right scattering states can be used, which are superpositions of the even and odd scattering states: $`\psi _\mathrm{l}(x)=[\psi _\mathrm{e}(x)-\psi _\mathrm{o}(x)]/2`$, $`\psi _\mathrm{r}(x)=[\psi _\mathrm{e}(x)+\psi _\mathrm{o}(x)]/2`$. From these relations, one finds that the transmission amplitudes $`t`$ and $`t^{\prime }`$ for the left and right scattering states are

$$t=t^{\prime }=ie^{i\theta }\mathrm{sin}\varphi ,$$ (2)

where $`\theta \equiv \theta _\mathrm{e}+\theta _\mathrm{o}`$ and $`\varphi \equiv \theta _\mathrm{e}-\theta _\mathrm{o}`$. In terms of the new angles, the Friedel sum rule becomes

$$\mathrm{\Delta }Q/e=\mathrm{\Delta }\theta /\pi .$$ (3)

In true 1D systems, even and odd resonant states alternate in energy and the angle $`\varphi `$ can be limited to the range $`0<\varphi <\pi `$ [Fig. 1(a)]. Then $`\mathrm{\Delta }\theta =\mathrm{\Delta }\text{arg}(t)`$ and the 1D Friedel sum rule (1) is recovered. In quasi-1D systems, on the other hand, even and odd levels do not necessarily alternate in energy. One concrete example is a dot with an anisotropic harmonic confining potential. The energy levels of the dot are given by $`E(n_x,n_y)=\hbar \omega _x(n_x+1/2)+\hbar \omega _y(n_y+1/2)`$ where $`\omega _x\ne \omega _y`$. Here $`n_x`$ determines the parity of a level while $`n_y`$ is a free parameter as far as the parity is concerned. Because of the presence of this free parameter, situations like Fig. 1(b) occur generically, where some adjacent levels share the same parity. Notice that the difference between $`\theta _\mathrm{e}`$ and $`\theta _\mathrm{o}`$ increases from almost zero to almost $`2\pi `$ and then decreases to almost zero. Since the change is continuous, points should exist where the difference $`\varphi `$ is exactly $`\pi `$. At these points, $`\mathrm{sin}\varphi `$ vanishes identically, and as these points are scanned, $`\mathrm{sin}\varphi `$ reverses its sign, causing the abrupt phase change of $`t`$. It is straightforward to verify that the transmission zeros occur whenever neighboring states share the same parity. As a result of the transmission zeros, one finds

$$\mathrm{\Delta }Q/e=\mathrm{\Delta }\theta /\pi \ne \mathrm{\Delta }\text{arg}(t)/\pi .$$ (4)

Thus the 1D Friedel sum rule (1) is not strictly valid for quasi-1D systems. One immediate consequence is that there are two possibilities for adjacent resonances. They can be either off phase by $`\pi `$ or in phase, and in the latter case, a transmission zero occurs in between. Another important implication is that there is no intrinsic energy scale for the abrupt phase change, since the transmission zero corresponds to a singular point as far as the phase is concerned.
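This mechanism is easy to see numerically. In the minimal sketch below (our illustration, with Breit-Wigner phase-shift profiles assumed for two even-parity levels and one odd-parity level, none of which come from the experiment), $`\varphi =\theta _\mathrm{e}-\theta _\mathrm{o}`$ sweeps through $`\pi `$ between the two same-parity resonances, so $`t=ie^{i\theta }\mathrm{sin}\varphi `$ of Eq. (2) passes through zero and $`\text{arg}(t)`$ flips by $`\pi `$ with no intrinsic width.

```python
import numpy as np

def bw_phase(E, E0, gamma):
    """Breit-Wigner phase shift rising by pi across a resonance at E0."""
    return np.pi / 2 + np.arctan(2.0 * (E - E0) / gamma)

E = np.linspace(0.0, 6.0, 6001)
theta_e = bw_phase(E, 1.0, 0.2) + bw_phase(E, 3.0, 0.2)  # two even-parity levels
theta_o = bw_phase(E, 5.0, 0.2)                          # one odd-parity level
theta, phi = theta_e + theta_o, theta_e - theta_o
t = 1j * np.exp(1j * theta) * np.sin(phi)                # Eq. (2)

# Between the two same-parity resonances phi sweeps through pi,
# so |t| vanishes and arg(t) jumps abruptly by pi at that point.
mask = (E > 1.0) & (E < 3.0)
E_zero = E[mask][np.argmin(np.abs(t[mask]))]
print(f"transmission zero near E = {E_zero:.3f}")
```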
The singular nature of the transmission zero also explains naturally the experimental observation that the abrupt phase changes occur when the amplitude of the AB oscillation almost vanishes.

We next generalize the discussion to systems without the mirror reflection symmetry. The electron transport in single channel systems can be described by the $`2\times 2`$ scattering matrix $`𝐒`$,

$$𝐒=\left(\begin{array}{cc}r& t^{\prime }\\ t& r^{\prime }\end{array}\right)=e^{i\theta }\left(\begin{array}{cc}e^{i\phi _1}\mathrm{cos}\varphi & ie^{i\phi _2}\mathrm{sin}\varphi \\ ie^{-i\phi _2}\mathrm{sin}\varphi & e^{-i\phi _1}\mathrm{cos}\varphi \end{array}\right),$$ (5)

where the matrix elements are parameterized in a most general way compatible with $`𝐒^{\dagger }𝐒=𝐒𝐒^{\dagger }=𝐈`$. When the time reversal is a good symmetry, $`t=t^{\prime }`$ and the angle $`\phi _2`$ can be set to zero. Then Eq. (2) is recovered. Also the general Friedel sum rule,

$$\mathrm{\Delta }Q/e=[\mathrm{\Delta }\mathrm{ln}\text{Det}(𝐒)]/(2\pi i),$$ (6)

reduces to Eq. (3). Thus one again finds that both possibilities of the off-phase resonances and the in-phase resonances are compatible with the Friedel sum rule and the time reversal symmetry.

To examine whether the in-phase resonances can appear generically, one has to investigate whether the transmission zeros are generic. The following gedanken experiment is useful for discussion. Imagine that one changes the confining potential $`V(x,y;\lambda )=V_\mathrm{s}(x,y)+\lambda V_\mathrm{n}(x,y)`$ of a dot adiabatically by turning on the parameter $`\lambda `$, where $`V_\mathrm{s}(-x,y)=V_\mathrm{s}(x,y)`$ and $`V_\mathrm{n}(-x,y)\ne V_\mathrm{n}(x,y)`$. For $`\lambda =0`$, the potential is mirror symmetric and for $`\lambda \ne 0`$, the mirror symmetry is broken. Let us assume that transmission zeros in the mirror symmetric potential disappear after $`\lambda `$ is turned on. Then, Figures 2(a) and 2(b) represent the typical behaviors of the transmission amplitude in the complex $`t`$ plane for $`\lambda =0`$ and $`\lambda =\delta \lambda \ll 1`$, respectively. Notice that there is no transmission zero in 2(b) since the trajectory of $`t`$ is shifted off the origin. As the energy is scanned from $`A`$ to $`B`$, $`\mathrm{\Delta }\theta =0`$ in Fig. 2(a). In Fig. 2(b), on the other hand, $`\mathrm{\Delta }\theta =\pi `$ and thus $`\mathrm{\Delta }Q=e`$. The corresponding energy levels of the dot are depicted in the insets. While there is no energy level between $`A`$ and $`B`$ for $`\lambda =0`$, a new level is present between $`A`$ and $`B`$ in the level diagram for $`\lambda =\delta \lambda `$ since $`\mathrm{\Delta }Q=e`$. Upon the infinitesimal change of the confining potential, however, new energy levels cannot appear suddenly although they can drift up and down. Thus this sudden appearance of a new energy level is unphysical and to avoid this, the trajectory for $`\lambda =\delta \lambda `$ should pass through the origin. This argument applies all along the turning on process and it shows that the transmission zeros should still appear generically even if the system is not mirror symmetric.

One can also argue for the in-phase resonances directly, which then establishes the appearance of the transmission zeros since these two features are linked to each other. With the time-reversal symmetry, the wave functions can be taken as real. In true 1D systems, the number of wave function nodes increases by 1 when a new level appears (oscillation theorem), and each node increases the phase of the transmission amplitude by $`\pi `$.
In quasi-1D systems, on the other hand, there are two classes of nodes: “spanning” nodes [Fig. 3(a)] that connect two opposite boundaries of the dot, and “nonspanning” nodes [Fig. 3(b)] that touch either only one particular boundary or no boundary at all. Such nonspanning nodes can be created, for example, by excitations in the transverse direction or by negative impurity potentials in the interior of the dot. The two classes of nodes affect the phase of the transmission amplitude in different ways. While each spanning node shifts the phase by $`\pi `$, nonspanning nodes do not affect the phase at all. In the experiment, the transverse size of the quantum dot is estimated to be much larger than the Fermi length. In such a case, nonspanning nodes are equally plausible as spanning nodes, and accordingly in-phase resonances can occur as generically as off-phase resonances.

Until now, we have demonstrated the generic appearance of the transmission zeros and in-phase resonances based on the Friedel sum rule and the time-reversal symmetry. Next we demonstrate that multiple resonances also lead to the transmission zeros naturally if the time reversal is a good symmetry. (This demonstration, in fact, constitutes an alternative derivation of the same conclusion without using the Friedel sum rule.) Near a resonance, the scattering matrix becomes $`𝐒(E)=𝐒_{\mathrm{bg}}-i𝐁_0/(E-E_0+i\mathrm{\Gamma }_0/2)`$ where the $`2\times 2`$ matrix $`𝐒_{\mathrm{bg}}`$ ($`𝐒_{\mathrm{bg}}^{\dagger }𝐒_{\mathrm{bg}}=𝐈`$) is the energy independent background contribution. If the off-diagonal matrix elements of $`𝐒_{\mathrm{bg}}`$ are sufficiently small, $`𝐒(E)`$ describes the Breit-Wigner resonance. For multiple resonances, the scattering matrix becomes

$$𝐒(E)=𝐒_{\mathrm{bg}}-\underset{k=1}{\overset{N}{\sum }}\frac{i𝐁_k}{E-E_k+i\mathrm{\Gamma }_k/2}.$$ (7)

Here we emphasize that the matrix residues $`-i𝐁_k`$ are not independent. Instead they should be highly correlated so that $`𝐒^{\dagger }(E)𝐒(E)=𝐈`$ for arbitrary real $`E`$. (This is the origin of the limited phase relations between resonances.) From the unitarity relation and the time-reversal symmetry, one finds five constraints: $`|t(E)|^2=|t^{\prime }(E)|^2`$, $`|r(E)|^2=|r^{\prime }(E)|^2`$; $`|t(E)|^2+|r(E)|^2=1`$; $`t(E)/t^{\prime }(E)^{*}+r(E)/r^{\prime }(E)^{*}=0`$; $`t(E)=t^{\prime }(E)`$. It turns out that to examine the implications of the constraints, it is more convenient to express the matrix elements of $`𝐒(E)`$ in the product representation by summing up all $`N+1`$ terms in Eq. (7).

$`t(E)=t^{\prime }(E)`$ $`=`$ $`t_{bg}{\displaystyle \underset{k=1}{\overset{N}{\prod }}}{\displaystyle \frac{E-\epsilon _k+i\mu _k/2}{E-E_k+i\mathrm{\Gamma }_k/2}},`$ (8)

$`r(E)`$ $`=`$ $`r_{bg}{\displaystyle \underset{k=1}{\overset{N}{\prod }}}{\displaystyle \frac{E-ϵ_k+i\nu _k/2}{E-E_k+i\mathrm{\Gamma }_k/2}},`$ (9)

$`r^{\prime }(E)`$ $`=`$ $`r_{bg}^{\prime }{\displaystyle \underset{k=1}{\overset{N}{\prod }}}{\displaystyle \frac{E-ϵ_k+i\nu _k^{\prime }/2}{E-E_k+i\mathrm{\Gamma }_k/2}},`$ (10)

where $`(\nu _k)^2=(\nu _k^{\prime })^2`$. Then by imposing the constraints and the condition of no degenerate resonance levels, one finds

$$\mu _k=0\text{ for all }k,$$ (11)

which implies that all zeros of $`t(E)`$ are located on the real energy axis. (We mention that impurities and irregular boundaries of the dot generate the level repulsion that lifts the degeneracy. The degeneracy is lifted further by the Coulomb blockade effect.) Evolution of the transmission amplitude is determined from the locations of poles and zeros. Thus this analysis produces the following prediction (Fig. 4):
If there is a transmission zero ($`B`$ and $`D`$) in between, two neighboring resonances ($`A`$-$`C`$ and $`C`$-$`E`$) are in phase, and otherwise, they are off phase ($`E`$-$`F`$). Notice that these predictions are identical to those of the Friedel sum rule.

It is instructive to compare the transmission zeros from the Breit-Wigner resonances and the Fano resonances. Within an energy window that contains two Breit-Wigner resonances, $`t(E)=-i(𝐁_k)_{21}/(E-E_k+i\mathrm{\Gamma }_k/2)-i(𝐁_{k+1})_{21}/(E-E_{k+1}+i\mathrm{\Gamma }_{k+1}/2)`$. By summing up the two contributions, one finds $`t(E)=\alpha (E-\beta )/[(E-E_k+i\mathrm{\Gamma }_k/2)(E-E_{k+1}+i\mathrm{\Gamma }_{k+1}/2)]`$ where $`\beta =\beta ^{*}`$ due to the time-reversal symmetry. Thus the transmission zero at $`E=\beta `$ is due to the completely “destructive interference” of the two resonance levels. The transmission zeros also arise from the Fano resonance, to which the same expression (7) applies. Unlike the Breit-Wigner resonances, however, the off-diagonal matrix elements of $`𝐒_{\mathrm{bg}}`$ are not small. Thus near a Fano resonance, one finds $`t(E)=(𝐒_{\mathrm{bg}})_{21}-i(𝐁_k)_{21}/(E-E_k+i\mathrm{\Gamma }_k/2)=\alpha (E-\beta )/(E-E_k+i\mathrm{\Gamma }_k/2)`$ where $`\beta =\beta ^{*}\approx E_k`$. One finds again the transmission zero. It should be noted however that the transmission zero is now due to the destructive interference of the background contribution (continuous state of the energy channel) and the pole contribution (localized state for Ref. and $`t`$-stub for Ref. ). Notice also that the Fano resonance peak is highly asymmetric since $`\beta \approx E_k`$, which disagrees with the experiment.

Below we discuss briefly the effects of the electron-electron interaction and magnetic fields on the transmission zeros. Langer and Ambegaokar have shown that even in the presence of the interaction, the general Friedel sum rule (6) is valid at $`E=E_F`$ provided that quasiparticle excitations at the Fermi energy remain well defined. Then all analyses for noninteracting systems apply equally to interacting systems if one fixes the probing energy $`E`$ at $`E_F`$ and instead varies the depth of the potential well, which amounts to replacing $`E`$ by $`E_F-\eta eV_g`$. (Fig. 2 can be used to argue against the disappearance of the transmission zeros upon the adiabatic interaction turning on. In this case, $`\mathrm{\Delta }Q=e`$ in Fig. 2(b) can be interpreted as the sudden charge density jump.) Magnetic fields, on the other hand, affect the transmission zeros in a fundamental way since they break the time-reversal symmetry. In this case, $`t\ne t^{\prime }`$ and the angle $`\phi _2`$ in Eq. (5) can have nonzero values. Then the transmission zeros are generically replaced by the rapid but continuous change of $`\phi _2`$ by $`\pi `$, and thus a finite energy scale appears for the abrupt phase changes. The precise energy scale depends on the detailed electron dynamics inside the dot, which goes beyond the scope of this paper.

Lastly, we discuss the large dominance of the in-phase resonances over the off-phase resonances in Ref. . Hackenbroich et al. proposed that avoided crossings of single particle levels may result in a long sequence of resonances carrying the same internal wave function. In view of the present analysis, this mechanism is overrestrictive since it restricts not only the number of spanning nodes but also the number of nonspanning nodes as well.
We speculate that a less restrictive and possibly more widely applicable mechanism may exist which exploits the “degree of freedom” given by the nonspanning nodes. Further investigation in this direction is necessary. In summary, we demonstrated that the transmission zeros and the in-phase resonances are generic features in time-reversal symmetric single channel transport if the transverse size of a scatterer (dot) is sufficiently larger than the Fermi wavelength. The author acknowledges M. Y. Choi for helpful suggestions at the initial stage of this work, G. S. Jeon for helpful discussions, and M. Büttiker for critical comments on the manuscript. He also thanks C.-M. Ryu for providing his paper before publication. This work is supported by the Korea Science and Engineering Foundation through the SRC program at SNU-CTP and through the KOSEF fellowship program.
# On quantizing nilpotent and solvable basic algebras

## 1 Introduction

We continue our study of Groenewold-Van Hove obstructions to quantization. Let $`M`$ be a symplectic manifold, and suppose that $`𝔟`$ is a finite-dimensional “basic algebra” of observables on $`M`$. Given a Lie subalgebra $`𝒪`$ of the Poisson algebra $`C^{\infty }(M)`$ containing $`𝔟`$, we are interested in determining whether the pair $`(𝒪,𝔟)`$ can be “quantized.” (See §2 for the precise definitions.) Already we know that such obstructions exist in many circumstances: In [GGG] we showed that there are no nontrivial quantizations of the pair $`(P(𝔟),𝔟)`$ on a compact symplectic manifold, where $`P(𝔟)`$ is the Poisson algebra of polynomials on $`M`$ generated by $`𝔟`$. Furthermore, in [GG2] we proved that there are no nontrivial finite-dimensional quantizations of $`(𝒪,𝔟)`$ on a noncompact symplectic manifold, for any such subalgebra $`𝒪`$. It remains to understand the case when $`M`$ is noncompact and the quantizations are infinite-dimensional, which is naturally the most interesting and difficult one. Here one has little control over either the types of basic algebras that can appear (in examples they range from nilpotent to semisimple), their representations, or the structure of the polynomial algebras they generate [Go2]. In this paper we consider the problem of quantizing $`(P(𝔟),𝔟)`$ when the basic algebra is nilpotent. Our main result is (§5):

###### Theorem 1

Let $`𝔟`$ be a nilpotent basic algebra on a connected symplectic manifold. Then there is no quantization of $`(P(𝔟),𝔟)`$.

This in turn is a consequence of an algebraic “no-go theorem” to the effect that a nontrivial Poisson algebra cannot be realized as an associative algebra with the commutator bracket. The latter result, which is of independent general interest, is presented in §3. When $`M=𝐑^{2n}`$ and $`𝔟`$ is the Heisenberg algebra h($`2n`$), Theorem 1 provides an entirely new proof of the classical theorem of Groenewold [Gro, Go3]:

###### Corollary 2

There is no quantization of the pair $`(P(\mathrm{h}(2n)),\mathrm{h}(2n))`$.

We remark that this version of the no-go theorem for $`𝐑^{2n}`$ does not use the Stone-Von Neumann theorem. A natural question is whether this obstruction to quantization when $`𝔟`$ is nilpotent extends to the case when $`𝔟`$ is solvable. We show that it does *not*; in §6 we explicitly construct a polynomial quantization of $`T^{*}𝐑_+`$ with the “affine” basic algebra a(1).

## 2 Background

Let $`M`$ be a connected symplectic manifold. A key ingredient in the quantization process is the choice of a *basic algebra of observables* in the Poisson algebra $`C^{\infty }(M)`$. This is a Lie subalgebra $`𝔟`$ of $`C^{\infty }(M)`$ such that: 1. $`𝔟`$ is finitely generated, 2. the Hamiltonian vector fields $`X_b`$, $`b\in 𝔟`$, are complete, 3. $`𝔟`$ is transitive and separating, and 4. $`𝔟`$ is a minimal Lie algebra satisfying these requirements. A subset $`𝔟\subset C^{\infty }(M)`$ is “transitive” if $`\{X_b(m)|b\in 𝔟\}`$ spans $`T_mM`$ at every point. It is “separating” provided its elements globally separate points of $`M`$. Now fix a basic algebra $`𝔟`$, and let $`𝒪`$ be any Lie subalgebra of $`C^{\infty }(M)`$ containing $`1`$ and $`𝔟`$. Then by a *quantization* of the pair $`(𝒪,𝔟)`$ we mean a linear map $`𝒬`$ from $`𝒪`$ to the linear space Op($`D`$) of symmetric operators which preserve a fixed dense domain $`D`$ in some separable Hilbert space $`\mathcal{H}`$, such that for all $`f,g\in 𝒪`$, 1. $`𝒬(\{f,g\})=-\frac{i}{\hbar }[𝒬(f),𝒬(g)]`$, 2.
$`𝒬(1)=I`$, 3. if the Hamiltonian vector field $`X_f`$ of $`f`$ is complete, then $`𝒬(f)`$ is essentially self-adjoint on $`D`$, 4. $`𝒬(𝔟)`$ is irreducible, 5. $`D`$ contains a dense set of separately analytic vectors for some set of Lie generators of $`𝒬(𝔟)`$, and 6. $`𝒬`$ represents $`𝔟`$ faithfully. Here $`\{,\}`$ is the Poisson bracket and $`\hbar `$ is Planck’s reduced constant. In this paper we are interested in “polynomial quantizations,” i.e. quantizations of $`(P(𝔟),𝔟)`$. We refer the reader to [Go2] for an extensive discussion of these definitions. However, we wish to elaborate on (Q4). There we mean irreducible in the analytic sense, viz. the only bounded operators which strongly commute with all $`𝒬(b)\in 𝒬(𝔟)`$ are scalar multiples of the identity. There is another notion of irreducibility which is useful for our purposes: We say that $`𝒬(𝔟)`$ is *algebraically irreducible* provided the only operators in Op$`(D)`$ which (weakly) commute with all $`𝒬(b)\in 𝒬(𝔟)`$ are scalar multiples of the identity. It turns out that a quantization is automatically algebraically irreducible.

###### Proposition 3

Let $`𝒬`$ be a representation of a finite-dimensional Lie algebra $`𝔟`$ by symmetric operators on an invariant dense domain $`D`$ in a separable Hilbert space $`\mathcal{H}`$. If $`𝒬`$ satisfies (Q4) and (Q5), then $`𝒬(𝔟)`$ is algebraically irreducible.

*Proof*. We need the following two technical results, which are proven in [Go3]. Denote the closure of an operator $`R`$ by $`\overline{R}`$.

###### Lemma 1

Let $`R`$ be an essentially self-adjoint operator and $`S`$ a closable operator which have a common dense invariant domain $`D`$. Suppose that $`D`$ consists of analytic vectors for $`R`$, and that $`R`$ (weakly) commutes with $`S`$. Then $`\mathrm{exp}(i\overline{R})`$ (weakly) commutes with $`\overline{S}`$ on $`D`$.

###### Lemma 2

Let $`S`$ be a closable operator. If a bounded operator (weakly) commutes with $`\overline{S}`$ on $`D(S)`$, then they also commute on $`D(\overline{S})`$.

By virtue of (Q5) and Corollary 1 and Theorem 3 of [FS], we may assume that there is a dense space $`D_\omega \subset D`$ of separately analytic vectors for some basis $`\mathcal{B}=\{B_1,\dots ,B_K\}`$ of $`𝒬(𝔟)`$. Suppose $`T\in \mathrm{Op}(D)`$ (weakly) commutes with every $`B_k`$. According to [FS, Prop. 1], $`T`$ leaves $`D_\omega `$ invariant. Now by [RS, §X.6, Cor. 2] each $`B_k|_{D_\omega }`$ is essentially self-adjoint; moreover, $`T_\omega :=T|_{D_\omega }`$ is symmetric and hence closable. Upon taking $`R=B_k|_{D_\omega }`$ and $`S=T_\omega `$ in Lemma 1, it follows that $`\mathrm{exp}(i\overline{B_k|_{D_\omega }})=\mathrm{exp}(i\overline{B_k})`$ and $`\overline{T_\omega }`$ commute on $`D_\omega `$. Lemma 2 then shows that $`\mathrm{exp}(i\overline{B_k})`$ and $`\overline{T_\omega }`$ commute on $`D(\overline{T_\omega })`$ for all $`B_k`$. By (Q5) the representation $`𝒬`$ of $`𝔟`$ can be integrated to a unitary representation $`𝔔`$ of the corresponding connected, simply connected group $`G`$ on $`\mathcal{H}`$ [FS, Cor. 1] which, according to (Q4), is irreducible. From the construction of coordinates of the second kind on $`𝔔(G)`$, the map $`𝐑^K\to 𝔔(G)`$ given by

$$(t_1,\dots ,t_K)\mapsto \mathrm{exp}(it_1\overline{B_1})\cdots \mathrm{exp}(it_K\overline{B_K})$$

is a diffeomorphism of an open neighborhood of $`0\in 𝐑^K`$ onto an open neighborhood of $`I\in 𝔔(G)`$. Since $`𝔔(G)`$ is connected, the subgroup generated by such a neighborhood is all of $`𝔔(G)`$.
It follows that as $`\overline{T_\omega }`$ commutes with each $`\mathrm{exp}(it_k\overline{B_k})`$, it commutes with every element of $`𝔔(G)`$. The unbounded version of Schur’s lemma \[Ro, (15.12)\] then implies that $`\overline{T_\omega }=\lambda I`$ for some constant $`\lambda `$ on $`D(\overline{T_\omega })=`$. Since $`\overline{T_\omega }`$ is the smallest closed extension of $`T_\omega `$ and $`T_\omega \subset T\subset \overline{T}`$, we see that $`\overline{T}=\lambda I`$, whence $`T`$ itself is a constant multiple of the identity. $`\mathrm{}`$ ## 3 An Algebraic No-Go Theorem We first derive an algebraic obstruction to quantization. The idea is to compare the algebraic structures of Poisson algebras on the one hand with associative algebras of operators with the commutator bracket on the other. ###### Theorem 4 Let $`𝒫`$ be a unital Poisson subalgebra of $`C^{\infty }(M)`$ or $`C^{\infty }(M,𝐂)`$. If as a Lie algebra $`𝒫`$ is not commutative, it cannot be realized as an associative algebra with the commutator bracket. Proof. Suppose, to the contrary, that there is a Lie algebra isomorphism $`𝒬:𝒫\to 𝒜`$ onto an associative algebra $`𝒜`$ with the commutator bracket. Let us take $`m\in M`$ and $`f,g\in 𝒫`$ such that $`\{f,g\}(m)\ne 0`$. In particular, then, $`X_g(m)\ne 0`$. Replacing $`g`$ by $`g-g(m)1`$, we can assume that $`g(m)=0`$. The Lie subalgebra $`𝒫_m=\{h\in 𝒫|X_h(m)=0\}`$ is clearly of finite codimension in $`𝒫`$. Let us put $`L=ad^{-1}(𝒫_m)=\{h\in 𝒫|\{𝒫,h\}\subset 𝒫_m\}`$. Since $`𝒬(𝒫_m)`$ is a finite-codimensional Lie subalgebra of $`𝒜`$, there is a finite-codimensional two-sided associative ideal $`J`$ contained in $`ad^{-1}(𝒬(𝒫_m))=𝒬(L)`$ \[Gra1, Prop. 2.1\]. But associative ideals are Lie ideals with respect to the commutator bracket! Hence $`𝒬^{-1}(J)`$ is a finite-codimensional (say $`(l-2)`$-codimensional) Lie ideal of $`𝒫`$ contained in $`L`$. In particular, some linear combination of $`g^2,g^3,\mathrm{},g^l`$, say $`\widehat{g}=g^k+\sum _{i=k+1}^la_ig^i,`$ $`k\ge 2,`$ belongs to $`𝒬^{-1}(J)`$. Then $`ad_f^{k-2}\widehat{g}\in 𝒬^{-1}(J)\subset L`$, where $`ad_f\widehat{g}:=\{f,\widehat{g}\}`$, and thus $`ad_f^{k-1}\widehat{g}=ad_f(ad_f^{k-2}\widehat{g})\in 𝒫_m`$. But, as $`g(m)=0`$, an easy calculation gives $$X_{ad_f^{k-1}\widehat{g}}(m)=k!\{f,g\}^{k-1}(m)X_g(m)\ne 0,$$ a contradiction. $`\mathrm{}`$ See \[Jo\] for complementary results regarding $`P(\mathrm{h}(2n))`$ vis-à-vis the Weyl algebra. In §5 we will use this result to prove the nonexistence of polynomial quantizations of $`(P(𝔟),𝔟)`$ when $`𝔟`$ is nilpotent. ## 4 Nilpotent Basic Algebras Let $`𝔟`$ be a nilpotent basic algebra on a $`2n`$-dimensional connected symplectic manifold $`M`$. Since by (B1) $`𝔟`$ is finitely generated, and as every finitely generated nilpotent Lie algebra is finite-dimensional, \[Go2, Prop. 2\] shows that $`M`$ must be a coadjoint orbit in $`𝔟^{\ast }`$. Now we have the “bundlization” results of Arnal *et al.* \[ACMP\], Pedersen \[Pe\], Vergne \[Ve\], and Wildberger \[Wi\], which assert: ###### Theorem 5 Let $`𝔟`$ be a finite-dimensional nilpotent Lie algebra. For each $`2n`$-dimensional coadjoint orbit $`O\subset 𝔟^{\ast }`$, there exists a symplectomorphism (“bundlization”) $`\phi _O:T^{\ast }𝐑^n\to O`$.
We may consider $`b𝔟`$ as a (linear) function on $`𝔟^{}`$, and form $`\mathrm{\Phi }_O(b)=b|O\phi _O.`$ Then cotangent coordinates $`(q_1,\mathrm{},q_n,p_1,\mathrm{},p_n)`$ on $`T^{}𝐑^n`$ may be chosen in such a way that $`\mathrm{\Phi }_O(b)`$ has the form $$\varphi _0p_1+\varphi _1(q_1)p_2+\mathrm{}+\varphi _{n1}(q_1,\mathrm{},q_{n1})p_n+\varphi _n(q_1,\mathrm{},q_n),$$ (1) where the $`\varphi _\alpha `$ are polynomials. Thus we may assume that $`M=T^{}𝐑^n`$ and that $`𝔟`$ consists of elements of the form (1). See \[Gra2\] for an analogous characterization of transitive nilpotent Lie algebras of vector fields. The canonical example of a nilpotent basic algebra on $`T^{}𝐑^n`$ is the Heisenberg algebra h$`(2n)=\mathrm{span}_𝐑\{1,q_\alpha ,p_\alpha |\alpha =1,\mathrm{},n\}.`$ It is not difficult to see from (1) that, up to isomorphism, h(2) is the only nilpotent *basic* algebra on $`T^{}𝐑`$. This is not true in higher dimensions, however: $$𝔟=\mathrm{span}_𝐑\{1,q_1,p_2,q_1p_2+q_2,p_1\}$$ is a nilpotent basic algebra on $`T^{}𝐑^2`$ which is not isomorphic to h(4). Regardless, all nilpotent basic algebras on $`T^{}𝐑^n`$ enjoy the following property. We write $`𝐪=(q_1,\mathrm{},q_n)`$, etc. ###### Proposition 6 If $`𝔟`$ is a nilpotent basic algebra on $`T^{}𝐑^n`$, then as Poisson algebras $`P(𝔟)=𝐑[𝐪,𝐩].`$ *Proof.* That $`P(𝔟)𝐑[𝐪,𝐩]`$ is evident from Theorem 5. The opposite inclusion follows from an algorithm, developed in \[Pe, §5.4\], which constructs the $`\{q_\alpha ,p_\alpha |\alpha =1,\mathrm{},n\}`$ as polynomial functions of elements of a basis of $`𝔟`$. That $`P(𝔟)`$ and $`𝐑[𝐪,𝐩]`$ coincide as Lie algebras is due to the fact that the bundlization $`\phi _O`$ is a symplectomorphism or, equivalently, that the coordinates $`q_\alpha ,p_\alpha `$ are canonical. $`\mathrm{}`$ We will establish a quantum analogue of this result in the next section. Recall that the central ascending series for $`𝔟`$ is $$\{0\}=𝔟^0𝔟^1\mathrm{}𝔟^{\mathrm{}}=𝔟$$ for some positive integer $`\mathrm{}`$, where $`𝔟^{s+1}=ad^1(𝔟^s)`$. Then $`\{𝔟,𝔟^s\}𝔟^{s1}.`$ Also note that $`𝔟^1`$ is the center of $`𝔟`$ which, according to the transitivity condition in (B3), consists of constants. Choose a Jordan-Hölder basis $`\{b_1,\mathrm{},b_K\}`$ of $`𝔟`$. Then $`\{b_i,b_j\}=_{k=1}^Kc_{ij}^kb_k`$, where the structure constants $`c_{ij}^k=0`$ whenever $`k\mathrm{min}\{i,j\}`$. We take $`b_1=1.`$ We call the smallest integer $`N`$ such that $`b𝔟^{N+1}`$ the “nildegree” of $`b𝔟`$. Then nildeg$`(b_i)`$ nildeg$`(b_j)`$ whenever $`i<j.`$ The nildegree of a polynomial $`fP(𝔟)`$ is then the smallest integer $`N`$ such that $$\left(ad(b_{i_1})\mathrm{}ad(b_{i_{N+1}})\right)f=0$$ for all $`i_1,\mathrm{},i_{N+1}\{1,\mathrm{},K\}.`$ ## 5 Proof of Theorem 1 and Related Results Before proving Theorem 1 we establish several results which are useful in their own right. Let the basic algebra $`𝔟`$ be nilpotent. Fix a Lie subalgebra $`𝒪`$ of $`P(𝔟)`$ containing $`𝔟.`$ Suppose that $`𝒬:𝒪\mathrm{Op}(D)`$ is a quantization of $`(𝒪,𝔟)`$ on some invariant dense domain $`D`$ in a Hilbert space. ###### Proposition 7 $`𝒬`$ is injective. *Proof.* Let $`L=\mathrm{ker}𝒬`$; then given $`gL`$, there is a $`k`$ such that $`g𝒪^k`$, where $`𝒪^k`$ is the subspace of $`𝒪`$ consisting of polynomials of nildegree at most $`k`$ in the elements of $`𝔟`$. Consider the adjoint representation of $`𝔟`$ on $`𝒪^kL.`$ (This makes sense as $`L`$ is a Lie ideal.) 
This is a nilrepresentation, so by Engel’s theorem \[NS, §X.2\] there exists a nonzero element $`f𝒪^kL`$ such that $`\{f,b\}=0`$ for all $`b𝔟.`$ But then transitivity implies that $`f`$ is a constant, which contradicts (Q2). Thus $`L=\{0\}`$. $`\mathrm{}`$ Thus condition (Q6) is actually redundant in the case of nilpotent basic algebras. Let $`𝒜`$ be the associative algebra generated over $`𝐂`$ by $`\{𝒬(b)|b𝔟\}`$. The next result generalizes Proposition 6 to the quantum context. ###### Proposition 8 $`𝒜`$ is isomorphic to a Weyl algebra.<sup>1</sup><sup>1</sup>1 Recall that the Weyl algebra $`W(2k)`$ is the associative algebra over $`𝐂`$ generated by $`\{z_\alpha ,w_\beta |\alpha ,\beta =1,\mathrm{},k\}`$ and the relations $`[z_\alpha ,w_\beta ]=i\delta _{\alpha \beta }`$, $`[z_\alpha ,z_\beta ]=0=[w_\alpha ,w_\beta ]`$. *Proof.* First we claim that the center of $`𝒜`$ is just $`𝐂I`$. Indeed, suppose $`[A,𝒬(b)]=0`$ for all $`b𝔟`$. Since by construction every $`A𝒜`$ has an adjoint, we may decompose $`A`$ into its symmetric $`A_s`$ and skew-symmetric $`A_a`$ components. Algebraic irreducibility then implies that the symmetric operators $`A_s`$ and $`iA_a`$ are both scalar multiples of the identity. Next let $`\psi `$ be the homomorphism of the universal enveloping algebra $`U(𝒬(𝔟_𝐂))`$ into $`𝒜`$ determined by the inclusion $`𝒬(𝔟_𝐂)𝒜`$. Then $`J=\mathrm{ker}\psi `$ is a two-sided ideal in $`U(𝒬(𝔟_𝐂))`$. Clearly, $`\psi `$ is an epimorphism and thus $`U(𝒬(𝔟_𝐂))/J𝒜`$. Since furthermore $`𝒬(𝔟_𝐂)`$ is nilpotent, the desired result now follows from \[Di, Thm. 4.7.9\]. $`\mathrm{}`$ By requiring $`𝒬`$ to be complex linear, we may view it as a quantization of the complexification $`𝒪_𝐂`$. We next prove that $`𝒬`$ maps $`𝒪_𝐂`$ into $`𝒜`$. That “polynomials quantize to polynomials” can be regarded as a generalized “Von Neumann rule,” cf. \[Go2\]. ###### Proposition 9 $`𝒬(𝒪_𝐂)𝒜`$. *Proof.* We argue inductively on the nildegree of $`f𝒪`$ that $`𝒬(f)𝒜`$. In nildegree 0 this follows immediately from transitivity and (Q2). Now suppose it is also true for polynomials in $`𝒪`$ of nildegree $`JN`$, and let $`f𝒪`$ have nildegree $`N+1`$. Then for each $`b𝔟,`$ $$[𝒬(f),𝒬(b)]=i\mathrm{}𝒬\left(\{f,b\}\right)𝒜$$ by (Q1) and the inductive hypothesis, since $`\text{nildeg}\left(\{f,b\}\right)<\text{nildeg}(f).`$ Thus the map $$W[𝒬(f),W]$$ defines a derivation of the associative algebra $`𝒜`$. As it is well known that every derivation of a Weyl algebra is inner \[Di, §4.6.8\], by Proposition 1 there is thus an $`A𝒜`$ such that $`[𝒬(f),W]=[A,W]`$ for all $`W𝒜`$. Algebraic irreducibility then implies that the symmetric operator $`𝒬(f)`$ and the symmetric component $`A_s`$ of $`A`$ differ by a constant multiple of $`I`$. Thus the inductive step is proved and so $`𝒬(𝒪)`$, and hence $`𝒬(𝒪_𝐂)`$, are contained in $`𝒜`$. $`\mathrm{}`$ We are finally ready to show that there is no quantization of $`(P(𝔟),𝔟)`$. Set $`B_i=𝒬(b_i)`$. As $`𝒬(𝔟_𝐂)`$ is nilpotent, we may likewise define the nildegree of the $`B_i`$ etc.<sup>2</sup><sup>2</sup>2 This is so even though $`𝒬`$ need not be a nilrepresentation. Since $`𝒬`$ is faithful we have that nildeg$`(B_i)=`$ nildeg$`(b_i)`$. *Proof of Theorem 1*. Suppose that $`𝒬:P(𝔟)\mathrm{Op}(D)`$ were a quantization of $`(P(𝔟),𝔟)`$. Let $`𝒫=P(𝔟)_𝐂`$. From Proposition 9 we know that $`𝒬(𝒫)𝒜`$, and from Proposition 7 we have that $`𝒬`$ is injective. Thus if we can show that $`𝒬`$ is surjective, then $`𝒬`$ will be a Lie algebra isomorphism of $`𝒫`$ onto $`𝒜`$, thereby contradicting Theorem 4. 
To this end, we shall prove inductively that ($`\star _N`$): if the monomial $`b_1^{r_1}\mathrm{}b_K^{r_K}\in P(𝔟)`$ is of nildegree $`J`$, $`J\le N`$, then $$𝒬(b_1^{r_1}\mathrm{}b_K^{r_K})=𝒮(B_1^{r_1}\mathrm{}B_K^{r_K})+\text{polynomials of nildegree}<J,$$ where $`𝒮`$ denotes symmetrization over all factors. We have already seen that condition ($`\star _0`$) holds. Now assume that $`b_1^{r_1}\mathrm{}b_K^{r_K}`$ has nildegree $`N+1.`$ By (Q1), $$[𝒬(b_1^{r_1}\mathrm{}b_K^{r_K}),B_j]=i\mathrm{}𝒬\left(\{b_1^{r_1}\mathrm{}b_K^{r_K},b_j\}\right)=i\mathrm{}𝒬\left(\sum _{l=1}^{K}r_lb_1^{r_1}\mathrm{}b_l^{r_l-1}\{b_l,b_j\}\mathrm{}b_K^{r_K}\right)=i\mathrm{}\sum _{l,m=1}^{K}r_lc_{lj}^m𝒬\left(b_1^{r_1}\mathrm{}b_m^{r_m+1}\mathrm{}b_l^{r_l-1}\mathrm{}b_K^{r_K}\right)=i\mathrm{}\sum _{l,m=1}^{K}r_lc_{lj}^m𝒮(B_1^{r_1}\mathrm{}B_m^{r_m+1}\mathrm{}B_l^{r_l-1}\mathrm{}B_K^{r_K})+\text{polynomials of nildegree}<N$$ where the last equality follows from ($`\star _N`$), since $$\text{nildeg}\left(c_{lj}^mb_1^{r_1}\mathrm{}b_m^{r_m+1}\mathrm{}b_l^{r_l-1}\mathrm{}b_K^{r_K}\right)\le \text{nildeg}\left(\{b_1^{r_1}\mathrm{}b_K^{r_K},b_j\}\right)<\text{nildeg}\left(b_1^{r_1}\mathrm{}b_K^{r_K}\right).$$ Furthermore, direct computation yields $$[𝒮(B_1^{r_1}\mathrm{}B_K^{r_K}),B_j]=i\mathrm{}\sum _{l,m=1}^{K}r_lc_{lj}^m𝒮(B_1^{r_1}\mathrm{}B_m^{r_m+1}\mathrm{}B_l^{r_l-1}\mathrm{}B_K^{r_K}).$$ Consequently for each $`j=1,\mathrm{},K`$, $$[𝒬(b_1^{r_1}\mathrm{}b_K^{r_K})-𝒮(B_1^{r_1}\mathrm{}B_K^{r_K}),B_j]=\text{polynomials of nildegree}<N.$$ This implies that the polynomial $`𝒬(b_1^{r_1}\mathrm{}b_K^{r_K})-𝒮(B_1^{r_1}\mathrm{}B_K^{r_K})\in 𝒜`$ has nildegree at most $`N`$, and ($`\star _{N+1}`$) follows. Applying ($`\star _N`$) recursively, we see that as the $`𝒮(B_1^{r_1}\mathrm{}B_K^{r_K})`$ form a basis for $`𝒜`$, $`𝒬`$ maps onto $`𝒜`$. $`\mathrm{}`$ Even though one cannot quantize *all* of $`P(𝔟)`$, it is possible to quantize ‘sufficiently small’ Lie subalgebras thereof (see, e.g. \[Go3\]). We emphasize that Propositions 7–9 are valid in this context. It is an open problem to determine the maximal quantizable Lie subalgebras of $`P(𝔟).`$ ## 6 Solvable Basic Algebras We have shown that there is an obstruction to quantizing symplectic manifolds with nilpotent basic algebras. It is also known that there is an obstruction to quantizing $`T^{\ast }S^1`$ with the Euclidean basic algebra e(2), which is solvable \[GG1\]. Thus it is natural to wonder if the nilpotent no-go theorem extends to the solvable case. It turns out that it does *not*: We now show that there is a polynomial quantization of $`T^{\ast }𝐑_+=\{(q,p)\in 𝐑^2|q>0\}`$ with the “affine” basic algebra $$\mathrm{a}(1)=\mathrm{span}_𝐑\{pq,q^2\}.$$ Upon writing $`x=pq,`$ $`y=q^2`$, the bracket relation becomes $`\{x,y\}=2y.`$ Thus a(1) is the simplest example of a solvable algebra which is not nilpotent. The corresponding polynomial algebra $`P=𝐑[x,y]`$ is free, and has the crucial feature that for each $`k\ge 0`$, the subspaces $`P_k`$ are *ad*-invariant, i.e., $$\{P_1,P_k\}\subset P_k.$$ (2) (Here $`P_k`$ denotes the subspace of homogeneous polynomials of degree $`k`$ in $`x`$ and $`y`$, and $`P^k=\oplus _{l=0}^kP_l`$. Note that $`P_1=\mathrm{a}(1)`$). Because of this $`\{P_k,P_l\}\subset P_{k+l-1},`$ whence each $`P_{(k)}=\oplus _{l\ge k}P_l`$ is a Lie ideal. We thus have the semidirect sum decomposition $$P=P^1\oplus P_{(2)}.$$ (3) Now on to quantization.
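Before proceeding, the bracket relation $`\{x,y\}=2y`$ and the *ad*-invariance (2) are easy to verify symbolically. The following sketch (Python with sympy — the choice of language is ours and is not part of the original argument) uses the sign convention $`\{p,q\}=1`$ for the Poisson bracket, chosen precisely so that $`\{x,y\}=2y`$ comes out as above.

```python
import sympy as sp

q, p = sp.symbols('q p', positive=True)

def pb(f, g):
    # Poisson bracket on T*R_+ with the sign convention {p,q} = 1,
    # chosen to reproduce {x,y} = 2y as in the text
    return sp.diff(f, p)*sp.diff(g, q) - sp.diff(f, q)*sp.diff(g, p)

x, y = p*q, q**2                     # generators of a(1)
assert sp.simplify(pb(x, y) - 2*y) == 0

def xy_degree(term):
    # a monomial p^m q^n with n >= m and n - m even equals x^m y^((n-m)/2),
    # so its degree in (x, y) is m + (n - m)/2
    m, n = sp.degree(term, p), sp.degree(term, q)
    assert n >= m and (n - m) % 2 == 0
    return m + (n - m)//2

# check that bracketing with P_1 stays inside P_k, i.e. {P_1, P_k} in P_k
for a in range(4):
    for b in range(4):
        f = x**a * y**b              # an element of P_{a+b}
        for g in (x, y):
            h = sp.expand(pb(g, f))
            if h == 0:
                continue
            for term in sp.Add.make_args(h):
                assert xy_degree(term) == a + b
```

Every bracket with an element of $`P_1`$ lands back in the same homogeneous component — exactly the degree preservation that the nilpotent case lacks.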
In view of (3), we can obtain a quantization $`𝒬`$ of $`P`$ simply by finding an appropriate representation of $`P^1=𝐑P_1`$ and setting $`𝒬(P_{(2)})=\{0\}`$! The connected, simply connected covering group of a(1) is $`\mathrm{A}(1)_+=𝐑𝐑_+`$ with the composition law $$(\nu ,\lambda )(\beta ,\delta )=(\nu +\lambda ^2\beta ,\lambda \delta ).$$ (A(1)<sub>+</sub> is isomorphic to the group of orientation-preserving affine transformations of the line, whence the terminology.) Since A(1)<sub>+</sub> is a semidirect product we can generate its unitary representations by induction. Following the recipe in \[BR, §17.1\] we obtain two one-parameter families of unitary representations $`U_\pm `$ of A(1)<sub>+</sub> on $`L^2(𝐑_+,dq/q)`$ given by $$\left(U_\pm (\nu ,\lambda )\psi \right)(q)=e^{\pm i\mu \nu q^2}\psi (\lambda q)$$ with $`\mu >0.`$ We identify the parameter $`\mu `$ with $`\mathrm{}^1`$. According to Theorems 4 and 5 in \[BR, §17.1\] the remaining two representations (one for each choice of sign) are irreducible and inequivalent; moreover, up to equivalence these are the only nontrivial irreducible ones. Let $`DL^2(𝐑_+,dq/q)`$ be the linear span of the functions $`\sqrt{q}h_k(q)`$, where the $`h_k`$ are the Hermite functions. Writing $`\pi _\pm =i\mathrm{}dU_\pm `$ we get the representation(s) of a(1) on the dense subspace $`D`$: $$\pi _\pm (pq)=i\mathrm{}q\frac{d}{dq},\pi _\pm (q^2)=\pm q^2.$$ Extend these to $`P^1`$ by taking $`\pi _\pm (1)=I`$, and set $`𝒬_\pm =\pi _\pm 0`$ (cf. (3)). Clearly (Q1)–(Q3) hold, by construction (Q4) is satisfied, and $`𝒬_\pm \mathrm{a}(1)=\pi _\pm `$ is faithful. Finally, it is straightforward to verify that $`D`$ consists of analytic vectors for both $`\pi _\pm (pq)`$ and $`\pi _\pm (q^2)`$. Thus $`𝒬_\pm `$ are the required quantization(s) of $`(P,P_1)`$. *Remarks.* 1. The $`+`$ quantization of a(1) is exactly what one obtains by geometrically quantizing $`T^{}𝐑_+`$ in the vertical polarization. Carrying this out, we get $`=L^2(𝐑_+,dq)`$ and $$pqi\mathrm{}\left(q\frac{d}{dq}+\frac{1}{2}\right),q^2q^2.$$ The $`+`$ quantization is unitarily equivalent to this via the transformation $`L^2(𝐑_+,dq/q)L^2(𝐑_+,dq)`$ which takes $`f(q)f(q)/\sqrt{q}.`$ 2. Note that $`\mathrm{a}(1)\text{sp}(2,𝐑)`$. In fact, the $`+`$ quantization is equivalent to the restrictions to a(1) of the metaplectic representations of sp$`(2,𝐑)`$ on both $`L_{\text{even}}^2(𝐑,dq)`$ and $`L_{\text{odd}}^2(𝐑,dq)`$ \[Go2, §5.1\]. 3. Since $`𝒬(P_{(2)})=0`$, the quantization is somewhat ‘trivial.’ However, there are quantizations which are nonzero on $`P_{(2)}`$: for instance, set $`𝒬(x^k)=k𝒬(x)`$ for $`k>0`$, $`𝒬(x^ly)=𝒬(y)`$, and $`𝒬(x^ly^m)=0`$ for $`m>1.`$ 4. Our quantization of $`T^{}𝐑_+`$ should be contrasted with that given in \[Is, §4.5\]. Also, we observe that this example is symplectomorphic to $`𝐑^2`$ with the basic algebra $`\mathrm{span}\{p,e^{2q}\}`$. 5. This is not the first example of a polynomial quantization; in \[Go1\] a quantization of the entire Poisson algebra of the torus was constructed. However, the basic algebra in that example was *infinite*-dimensional. What makes this example work? After comparing it with other examples, it is evident that this polynomial quantization exists because we cannot decrease degree *in* $`P`$ by taking Poisson brackets. 
(That is, we have (2) as opposed to merely $`\{P_1,P_k\}\subset P^k.`$) Based on this observation, it seems reasonable to suspect that there is an obstruction to quantizing $`(P(𝔟),𝔟)`$ iff it is possible to lower degree in $`P(𝔟)`$ by taking Poisson brackets. We shall pursue this line of investigation elsewhere (cf. also \[Go2\]). We thank M. Gerstenhaber, B. Kaneshige, and N. Wildberger for providing us with helpful comments and references.
# Cohomology of subregular tilting modules for small quantum groups ## 1 Introduction Let $`R`$ be an irreducible root system with the Coxeter number $`h`$. Let $`l>h`$ be an odd integer (we assume that $`l`$ is not divisible by 3 if $`R`$ is of type $`G_2`$). Let $`U`$ be the quantum group of type 1 with divided powers associated to these data (type 1 means that the elements $`K_i^l`$ are equal to 1). Let $`u\subset U`$ be the Frobenius kernel. Let $`\mathrm{𝟏}`$ be the trivial $`U`$-module. The cohomology $`H^{\bullet }(u,\mathrm{𝟏})`$ was computed by V.Ginzburg and S.Kumar. They proved that the odd cohomology $`H^{odd}(u,\mathrm{𝟏})`$ vanishes and the algebra of even cohomology $`H^{2\bullet }(u,\mathrm{𝟏})`$ is isomorphic to the algebra $`𝐂[𝒩]`$ of functions on the nilpotent cone $`𝒩\subset 𝔤`$, where $`𝔤`$ is the semisimple Lie algebra associated to $`R`$. Moreover, this is an isomorphism of graded algebras, with the grading on $`𝐂[𝒩]`$ corresponding to the natural $`𝐂^{\ast }`$-action on $`𝒩`$ by dilatations. This isomorphism is compatible with the natural $`G`$-structures of both algebras, where $`G`$ is the simply connected group associated to $`R`$. Now let $`s_a`$ be the simple affine reflection lying in the affine Weyl group associated to $`R,l`$. Let $`\mathrm{\Theta }_{s_a}`$ be the corresponding wall-crossing functor. Let $`T=\mathrm{\Theta }_{s_a}\mathrm{𝟏}`$. It is easy to see that the cohomology $`H^{\bullet }(u,T)`$ has a natural algebra structure; namely, for any simple $`U`$-module $`L`$ with highest weight lying on the affine wall of the fundamental alcove we have $`H^{\bullet }(u,T)=Ext_u^{\bullet }(L,L)`$. Since $`T`$ is a $`U`$-module, the cohomology $`H^{\bullet }(u,T)`$ has a natural structure of a $`G`$-module. Let $`𝒪\subset 𝒩`$ be the subregular nilpotent orbit. The main result of this note is the following Main Theorem. The odd cohomology $`H^{odd}(u,T)`$ vanishes. The algebra $`H^{2\bullet }(u,T)`$ is isomorphic to the algebra $`𝐂[\overline{𝒪}]`$ of functions on the closure of $`𝒪`$. This is an isomorphism of graded algebras, with the grading on $`𝐂[\overline{𝒪}]`$ corresponding to the action of $`𝐂^{\ast }`$ by dilatations. This isomorphism is compatible with the natural $`G`$-structures of both algebras. Remark. One can prove the analogous theorem for the Frobenius kernel $`G_1`$ of an almost simple algebraic group $`G`$ over an algebraically closed field of characteristic $`p>h`$. We remark that $`𝐂[\overline{𝒪}]=𝐂[𝒪]`$ because of the normality of $`\overline{𝒪}`$. W.H.Hesselink computed the structure of $`𝐂[𝒩]`$ as a graded $`G`$-module. It is easy to deduce the Hesselink Theorem from the Ginzburg-Kumar Theorem (or rather from the Andersen-Jantzen vanishing Theorem). In the same way we are able to compute the structure of $`𝐂[\overline{𝒪}]`$ as a graded $`G`$-module; see Corollary 3 below. For any dominant weight $`\lambda `$ one defines the indecomposable tilting module $`T(\lambda )`$ with highest weight $`\lambda `$. For some time I believed that the cohomology of any $`T(\lambda )`$ has the parity vanishing property; in fact, this belief was the main motivation for this work. At the end of this note we give an example in which the cohomology of an indecomposable tilting module lives in both even and odd degrees. ## 2 Proof of the Main Theorem Recall that $`T`$ has a unique trivial submodule $`\mathrm{𝟏}`$ and $`T/\mathrm{𝟏}=H^0(s_a\cdot 0)`$. Let $`\varphi :T\to H^0(s_a\cdot 0)`$ be the quotient map. Lemma 1. The map $`\varphi _{\ast }:H^{\bullet }(u,T)\to H^{\bullet }(u,H^0(s_a\cdot 0))`$ is zero. Proof.
The map $`\varphi _{\ast }`$ is a map of $`H^{2\bullet }(u,\mathrm{𝟏})=𝐂[𝒩]`$-modules. It is known that the support of $`H^{\bullet }(u,T)`$ in $`𝒩`$ is equal to $`\overline{𝒪}`$. The cohomology $`H^{\bullet }(u,H^0(s_a\cdot 0))`$ was computed by H.H.Andersen and J.C.Jantzen. We reformulate their result as follows: (a) Let $`\pi :T^{\ast }(G/B)\to G/B`$ be the cotangent bundle of the flag variety of the group $`G`$. Let $`s:T^{\ast }(G/B)\to 𝒩`$ be the Springer resolution. Let $`L_\theta `$ be the line bundle on $`G/B`$ corresponding to the root $`\theta `$ dual to the highest coroot of $`𝔤`$ (more directly, $`\theta `$ is the unique dominant short root). Then the even cohomology $`H^{ev}(u,H^0(s_a\cdot 0))`$ vanishes; the odd cohomology is equal, up to shift, to $`s_{\ast }\pi ^{\ast }L_\theta `$ (if we consider the cohomology as a coherent sheaf on $`𝒩`$). In particular, if $`\varphi _{\ast }`$ is nontrivial we obtain a section of the line bundle $`\pi ^{\ast }L_\theta `$ supported on $`s^{-1}(\overline{𝒪})`$. Contradiction. Remark. In fact H.H.Andersen and J.C.Jantzen computed the cohomology of induced modules over an algebraic group over a field of characteristic $`p>0`$. But their proof works in the quantum situation as well if we know some vanishing result. This vanishing Theorem was proved in types $`A,B,C,D,G`$ or for strongly dominant weights. In our case the weight $`\theta `$ is not strongly dominant. A.Broer proved the desired vanishing in the case of characteristic 0. In recent work all restrictions in the Andersen-Jantzen vanishing Theorem were removed. This should be used in the above-mentioned generalization of our Main Theorem to characteristic $`p`$. Corollary 1. The odd cohomology $`H^{odd}(u,T)`$ vanishes. For any $`i\ge 0`$ we have an exact sequence $$0\to H^{2i-1}(u,H^0(s_a\cdot 0))\to H^{2i}(u,\mathrm{𝟏})\to H^{2i}(u,T)\to 0.$$ In particular, the natural map $`H^{\bullet }(u,\mathrm{𝟏})\to H^{\bullet }(u,T)`$ is surjective. The proof follows easily from consideration of the cohomology long exact sequence associated with the short exact sequence $$0\to \mathrm{𝟏}\to T\to H^0(s_a\cdot 0)\to 0.$$ Proof of the Main Theorem. The surjectivity of the map $`𝐂[𝒩]=H^{2\bullet }(u,\mathrm{𝟏})\to H^{2\bullet }(u,T)`$ implies that there exists a surjection $`\psi :H^{2\bullet }(u,T)\to 𝐂[\overline{𝒪}]`$. Let $`L(\lambda )`$ be the simple $`G`$-module with highest weight $`\lambda `$. For any weight $`\mu `$ let $`m_\lambda (\mu )`$ be the multiplicity of the weight $`\mu `$ in $`L(\lambda ).`$ It is known that the multiplicity of $`L(\lambda )`$ in $`𝐂[\overline{𝒪}]`$ is equal to $`m_\lambda (0)-m_\lambda (\theta )`$. It is easy to deduce from Corollary 1 and (a) that the multiplicity of $`L(\lambda )`$ in $`H^{\bullet }(u,T)`$ also equals $`m_\lambda (0)-m_\lambda (\theta )`$ (we omit the proof since it is the same as the proof of Corollary 3 below). Hence $`\psi `$ is an isomorphism. The Theorem is proved. Let $`V=V(s_a\cdot 0)`$ be the Weyl module with highest weight $`s_a\cdot 0`$. Corollary 2. The cohomology $`H^{\bullet }(u,V)`$ is given by $$H^{2i}(u,V)=H^{2i}(u,T),$$ $$H^{2i+1}(u,V)=H^{2i}(u,\mathrm{𝟏}).$$ Proof. It is enough to consider the cohomology long exact sequence associated with the short exact sequence $$0\to V\to T\to \mathrm{𝟏}\to 0$$ and note that the map $`H^{\bullet }(u,T)\to H^{\bullet }(u,\mathrm{𝟏})`$ is zero (this can be proved in the same way as Lemma 1). Remark.
One can easily compute the cohomology of the simple module $`𝐋=𝐋(s_a\cdot 0)`$ with highest weight $`s_a\cdot 0`$ using the short exact sequence $$0\to 𝐋\to H^0(s_a\cdot 0)\to \mathrm{𝟏}\to 0.$$ The answer is the following: $`H^{2\bullet }(u,𝐋)=0`$ and for any $`i\ge 0`$ we have a short exact sequence $$0\to H^{2i}(u,\mathrm{𝟏})\to H^{2i+1}(u,𝐋)\to H^{2i+1}(u,H^0(s_a\cdot 0))\to 0.$$ Let $`R_+`$ be the set of positive roots and let $`W`$ be the Weyl group. For any $`w\in W`$ let $`(-1)^w=det(w)`$. Let $`\rho `$ be the half-sum of the positive roots. Let $`w\cdot \lambda =w(\lambda +\rho )-\rho `$. For any dominant weight $`\lambda `$ let $`d_n(\lambda )`$ (resp. $`t_n(\lambda )`$) be the multiplicity of the simple module $`L(\lambda )`$ in the component of degree $`n`$ of $`𝐂[𝒩]`$ (resp. $`𝐂[\overline{𝒪}]`$). Let $`p_n`$ be the function on the set $`X`$ of weights given by $$\sum _{x\in X}\sum _np_n(x)t^ne^x=\prod _{\alpha \in R_+}\frac{1}{1-e^\alpha t}.$$ This function is essentially the Kostant-Lusztig partition function. Recall that Hesselink’s Theorem states that $`d_n(\lambda )=\sum _{w\in W}(-1)^wp_n(w\cdot \lambda )`$. Let $`2k-1`$ be the length of the reflection in $`\theta `$. Corollary 3. We have $$t_n(\lambda )=\sum _{w\in W}(-1)^w(p_n(w\cdot \lambda )-p_{n-k}(w\cdot \lambda -\theta )).$$ Remark. For types $`A_l,B_l,C_l(l\ge 2),D_l(l\ge 3),G_2,F_4,E_6,E_7,E_8`$ the number $`k`$ equals, respectively, $`l,l,2(l-1),2l-3,3,8,11,17,29`$. Proof. Let $`B`$ be the Borel subgroup of $`G`$. Let $`n`$ be the nilpotent radical of the Borel subalgebra in $`𝔤`$. Let $`S^{\bullet }(n^{\ast })`$ be the algebra of functions on $`n`$. We have $$H^{2i}(u,\mathrm{𝟏})=Ind_B^G(S^i(n^{\ast })),R^{>0}Ind_B^G(S^i(n^{\ast }))=0,$$ $$H^{2i-1}(u,H^0(s_a\cdot 0))=Ind_B^G(S^{i-k}(n^{\ast })\otimes \theta ),R^{>0}Ind_B^G(S^{i-k}(n^{\ast })\otimes \theta )=0.$$ Now the Euler characteristic of $`R^{\bullet }Ind_B^G(\mathrm{?})`$ is given by the Weyl character formula. The result follows. Example. Here we present an example when the cohomology (over the Frobenius kernel) of an indecomposable tilting module lives in both odd and even degrees. Let $`R`$ be of type $`A_2`$. Let $`s_1,s_2`$ be the simple reflections in the Weyl group, and let $`s_3`$ be the affine reflection. Consider the indecomposable tilting module $`T=T(s_3s_1s_2s_3\cdot 0)`$. It has a filtration with subquotients $`H^0(s_3s_1s_2s_3\cdot 0),H^0(s_3s_1s_2\cdot 0),H^0(s_3\cdot 0)`$ and $`H^0(0)`$. Let $`\omega _1`$ and $`\omega _2`$ be the fundamental weights. We have $`s_3s_1s_2s_3\cdot 0=(3l-3)\omega _2`$. By the Andersen-Jantzen Theorem, the cohomology of $`H^0(s_3s_1s_2s_3\cdot 0)`$ equals $`Ind_B^G(3\omega _2\otimes S^{\bullet }(n^{\ast }))`$, living in even degrees; the cohomology of $`H^0(s_3s_1s_2\cdot 0)`$ or $`H^0(s_3\cdot 0)`$ equals $`Ind_B^G((\omega _1+\omega _2)\otimes S^{\bullet }(n^{\ast }))`$, living in odd degrees; finally, the cohomology of $`H^0(0)`$ equals $`Ind_B^G(S^{\bullet }(n^{\ast }))`$, living in even degrees. By the Kostant multiplicity formula we obtain that the multiplicity of $`L(\lambda )`$ in the Euler characteristic of the cohomology of $`T`$ equals $`m_\lambda (3\omega _2)+m_\lambda (0)-2m_\lambda (\omega _1+\omega _2)`$. In particular, the multiplicity of $`L(0)`$ equals 1 and the multiplicity of $`L(3\omega _1)`$ equals -1. This contradicts parity vanishing. Acknowledgements. I am grateful to M.Finkelberg for useful conversations. Thanks are also due to H.H.Andersen and J.Humphreys for valuable suggestions. I would like to thank Aarhus University for its hospitality while this note was being written.
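The weight-multiplicity arithmetic in the Example above can be verified by brute force. The following short script (plain Python; the standard weight conventions for $`sl_3`$ — $`L_1=\omega _1`$, $`L_2=\omega _2-\omega _1`$, $`L_3=-\omega _2`$ — are an assumed convention, not part of the note) enumerates the weights of $`L(3\omega _1)`$, the symmetric cube of the standard representation, and reproduces the multiplicity $`-1`$:

```python
from itertools import combinations_with_replacement
from collections import Counter

# weights of the standard representation of sl_3, written in the basis
# of fundamental weights (omega_1, omega_2) -- an assumed convention
L = [(1, 0), (-1, 1), (0, -1)]

# L(3*omega_1) = Sym^3(standard): its weights are sums of three of the above
weights = Counter((a[0] + b[0] + c[0], a[1] + b[1] + c[1])
                  for a, b, c in combinations_with_replacement(L, 3))

m = lambda mu: weights.get(mu, 0)
# m(3*omega_2) + m(0) - 2*m(omega_1 + omega_2) for lambda = 3*omega_1
print(m((0, 3)) + m((0, 0)) - 2*m((1, 1)))   # prints -1
```

For $`\lambda =0`$ the same combination trivially gives 1, confirming the two multiplicities quoted in the Example.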
# SCATTERING OF ULTRAHIGH ENERGY (UHE) EXTRAGALACTIC NEUTRINOS ONTO LIGHT RELIC NEUTRINOS IN GALACTIC HDM HALO OVERCOMING THE GZK CUT OFF ## 1 Introduction The most energetic cosmic rays, UHE $`(E_{CR}>10^{19}eV)`$, are confined to short $`(\stackrel{~}{<}10Mpc)`$ distances by the 2.73 K BBR opacity (the GZK cut off) and by the diffuse radio intergalactic noise. The main electromagnetic “viscosities” stopping UHE cosmic ray (nuclei, nucleons, photons, electrons) propagation above $`10Mpc`$ are: (1) the Inverse Compton scattering of any charged lepton (mainly electrons) on the BBR $`(e_{CR}^\pm \gamma _{BBR}\to e^\pm \gamma )`$, (2) nucleon photopair production at higher energies $`(p_{CR}+\gamma _{BBR}\to pe^+e^{})`$, (3) UHE photon electron-pair production on the BBR or on radio photons $`(\gamma _{CR}+\gamma _{BBR}\to e^+e^{})`$, (4) nuclei fragmentation by photopion interactions, and (5) the dominant nucleon photoproduction of pions $`(p_{CR}+\gamma _{BBR}\to p+N\pi ;n+\gamma _{BBR}\to n+N\pi )`$. The above GZK constraints apply to all known particles (protons, neutrons, photons, nuclei) except neutrinos. Nevertheless, UHE cosmic rays, either charged or neutral, fly straight, keeping memory of the primordial source direction, because of their extreme magnetic rigidity. However, the latest well localized, most energetic cosmic ray events (such as the Fly’s Eye 320 EeV event of October 1991) do not exhibit any nearby $`(<60Mpc)`$ cosmic source candidate in the corresponding arrival direction error box. This is the origin of the UHE cosmic ray paradox. The few known solutions are difficult to accept: (a) An exceptional $`(B\stackrel{~}{>}10^7Gauss)`$ coherent magnetic field over huge extragalactic distances, able to bend (by a large angle) the UHECR trajectory so that it arrives not from a distant source but from a nearer off-axis one, like M 82 or Virgo A. This solution was not found plausible. (b) Exotic topological defect annihilations in a diffuse galactic halo: an ad hoc, a posteriori solution. Moreover, it is in contradiction with the recent evidence from the AGASA detector of data inhomogeneities, i.e. of doublet or triplet UHECR events arriving from the same directions. (c) A galactic halo population of UHECR sources (such as the fast running pulsars associated with SGRs). These small size (neutron star - black hole) jet sources must be extremely efficient in cosmic ray acceleration, at energies well above the expected common maximum energy $`E<BR\stackrel{~}{<}10^{17}eV(\frac{B}{3.10^6G})(\frac{R}{50pc})`$ required by a supernova accelerating blast wave. Moreover, their extended halo distribution must be reflected in a dipole and/or quadrupole UHECR anisotropy, a signature not yet identified. Therefore it seems at least premature to call for a solution based on a (local galactic - local group) population of microquasar jets. (d) A direct nucleus or nucleon at these energies would be severely suppressed $`(10^{-5})`$ by the GZK cut off at 100 Mpc distances and, more dramatically, would induce a strong signal of secondaries at energies just below the GZK bound, a totally unobserved signal. Our present solution is based on the key role of light $`(m_\nu \stackrel{~}{>}eV)`$ cosmic neutrinos clustered in an extended galactic halo: these relic neutrinos act as a target calorimeter able to absorb the UHE $`\nu `$’s arriving from cosmic distances and to produce hadronic showers in our galaxy. The primary UHECR sources are the usual AGNs or Blazars, able to produce huge powers and energies.
Their photopion production and decays near the source, into muonic and electronic neutrinos, generate the main $`\nu `$ messengers toward cosmic distances up to our galactic halo. Their final interactions with clustered relic $`\nu _r`$ (and $`\overline{\nu }_r`$) of all flavours (but preferentially with the heaviest and best clustered one, $`(\nu _\tau ,\overline{\nu }_\tau )`$) may proceed through different reaction channels: (A) $`\nu \overline{\nu }_\tau `$ scattering via a Z exchanged in the s-channel, leading to nucleons and photons; (B) $`\nu \overline{\nu }_r`$ scattering via t-channel exchange of a virtual W between different flavours, able to produce copious UHE photons (mainly by $`\nu _\mu \overline{\nu }_\tau \to \mu ^{}\tau ^+`$ and $`\tau `$ pion decay); (C) $`\nu \overline{\nu }_r`$ production of $`W^{}W^+`$ or ZZ pairs. The latter channels are, in our opinion, the best ones to produce final nucleons $`(p,\overline{p},n,\overline{n})`$ which fit the observational data. *The UHE $`\nu `$ cross section on relic $`\nu _\tau ,\overline{\nu }_\tau `$.* The general framework we proposed to solve the GZK puzzle is a story beginning at a far AGN source whose UHE protons $`(E_p\stackrel{~}{>}10^{23}eV)`$ are themselves a source of pions and secondary muons and UHE neutrinos. The latter may actually escape the GZK cut off, traveling unbounded over all the needed cosmic distances $`(100Mpc)`$. Once near our galactic halo, the denser gravitationally-clustered light relic neutrinos, forming a hot dark halo, might be able to convert the UHE $`\nu `$’s energies, by scattering and subsequent decays, into an observable nucleon (or antinucleon), the final observed UHECR remnant. The $`\nu \nu `$ interaction cross-sections are the key filter which makes the whole process possible and efficient. In fig. 1 we show the three main process cross-sections as a function of the center of mass energy. The (s-channel) $`\nu _\mu \overline{\nu }_{\mu R}\to Z`$ exhibits a resonance at $`E_\nu =10^{21}eV(\frac{m_\nu }{4eV})^{-1}`$; the (t-channel) $`\nu _\mu \nu _{\tau _R}\to \mu ^{}\tau ^+`$ proceeds via a virtual W. These reactions are the most probable ones, but UHE photons seem to be excluded by the geomagnetic high-altitude cut off. The $`\nu \overline{\nu }_\tau \to W^+W^{}`$ cross section is also shown. There is an additional Z pair production channel $`(\nu \overline{\nu }_\tau \to ZZ)`$, almost coincident in its general behaviour with the $`W^+W^{}`$ production; it is not shown on the figure. Its global contribution (to be discussed elsewhere) is to double the $`(\nu \overline{\nu }_\tau \to W^{}W^+)`$ chain products, making their detection easier. Table 5 (regarding the $`W^+W^{}`$ channel) is also shown. It describes the complex reaction branching (left column) of the reaction considered, the corresponding probability (left-center column), the consequent multiplicity of the by-products (center-right column), and the final energy of their secondaries. For example, the first reaction shows the nucleon $`+\gamma _{BBR}`$ photopion production at the primordial proton/neutron energy (see the final box, $`E_p^{WW}\simeq 7\times 10^{23}eV`$), the corresponding multiplicity, $`8\pi `$, which is reduced to 5 for the charged pions, and the corresponding relativistic secondary pion energy $`E_\pi \simeq \frac{E_p}{9}`$. The subsequent pion and muon decays into UHE neutrinos are straightforward.
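The resonance energy quoted above is just the condition $`s=2E_\nu m_\nu =m_Z^2`$ for a UHE neutrino hitting a relic neutrino essentially at rest. A minimal numerical check (a Python sketch; the neutrino masses below are illustrative values, not measurements):

```python
# Z-resonance condition s = 2*E_nu*m_nu = m_Z^2 for scattering on a
# relic neutrino at rest; all energies in eV
m_Z = 91.19e9                      # Z boson mass

for m_nu in (1.0, 4.0, 10.0):      # assumed relic neutrino masses (eV)
    E_res = m_Z**2 / (2.0 * m_nu)  # resonant UHE neutrino energy
    print(f"m_nu = {m_nu:4.1f} eV  ->  E_res = {E_res:.2e} eV")
```

For $`m_\nu =4`$ eV this gives $`E_{res}\simeq 1.0\times 10^{21}`$ eV, reproducing the scaling $`E_\nu =10^{21}eV(\frac{m_\nu }{4eV})^{-1}`$ used above.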
The $`\nu _\mu \overline{\nu }_{\mu \tau }\to W^+W^{}`$ reaction and probability must be considered at this energy range assuming a neutrino density clustering in the galactic halo: $`P\simeq \sigma n_{\nu _\tau }l_g`$. The clustered neutrino density contrast is comparable to the baryonic one, $`\frac{n_{\nu _\tau }}{n_{\nu _{BBR}}}\simeq \frac{\rho _G}{\rho _W}\simeq 10^{5÷7}`$. In conclusion, the total probability of the processes and the corresponding needed primordial proton energy $`E_p^{WW}`$ (calibrated by the final observed nucleon cosmic ray at 320 EeV $`\simeq 4.3\times 10^{-4}E_p^{WW}`$) are summarized in the square boxes. The probability (taking into account global multiplicity and probability) is at least $`P^{WW}\stackrel{~}{>}10^{-3}(\frac{m_\nu }{10eV})^{-1}`$, corresponding, for the candidate source MCG 8-11-11, to a needed average power $`E^{WW}\simeq 2.5\times 10^{48}ergs^{-1}`$. The additional $`\nu \overline{\nu }\to ZZ`$ channel will reduce the power above by half, making the value comparable with the observed low-gamma MeV luminosity of MCG 8-11-11 $`(L_\gamma \simeq 7\times 10^{46}ergs^{-1})`$. We predicted parasitic photon signals at $`10^{16}eV`$ energies as well as a peculiar imprint on larger sample data, due to the central overlapping of the prompt neutron-antineutron arrivals along the source line of sight. An additional twin mirror (deviated) signal, due to the proton and antiproton random walk, will arrive late at nearly opposite (few degrees) sides. These characteristic signatures might already be recorded by AGASA in the last few doublet and triplet UHECR events. The UHE neutrinos above the GZK cut-off are observable from almost all the Universe, while the corresponding UHE nucleons (or gammas) above the GZK energies are born in a smaller, constrained “GZK” volume. Therefore the expected flux ratio of UHE $`\nu `$’s over nuclei or photons at GZK energies is roughly (in the euclidean approximation) independent of the source spectra: $$\frac{\varphi _\nu }{\varphi _{GZK}}\simeq \left[\frac{z_\nu }{z_{GZK}}\right]^{3/2}\simeq 3\times 10^4\eta \left(\frac{z_\nu }{2}\right)^{3/2}\left(\frac{z_{GZK}}{2\times 10^{-3}}\right)^{-3/2}$$ where $`z_\nu `$ is a characteristic UHE $`\nu `$ source redshift $`\simeq 2`$ and $`z_{GZK}\simeq 2\times 10^{-3}`$. Assuming an efficiency ratio $`\eta `$ of a few percent for the conversion from UHE protons to UHE $`\nu `$’s, the ratio $`\varphi _\nu /\varphi _{GZK}\stackrel{~}{>}10^3`$ is naturally consistent with the inverse probability $`(P^{WW}\stackrel{~}{>}10^{-3})^{-1}`$ found above. Therefore a $`10^3`$-fold larger flux of UHE $`\nu `$’s (than the corresponding nucleon cosmic ray flux) above the GZK bound (mainly of tau nature) should be observed easily in a $`Km^3`$ neutrino detector in the very near future.
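As a closing consistency check, the flux ratio estimate above can be evaluated directly (a sketch in Python; the redshift values and efficiencies are the illustrative assumptions stated in the text):

```python
# Euclidean estimate phi_nu/phi_GZK ~ (z_nu/z_GZK)^(3/2) * eta
z_nu, z_GZK = 2.0, 2.0e-3          # assumed source and GZK redshifts
geometric = (z_nu / z_GZK)**1.5    # ~ 3.2e4, the volume enhancement

for eta in (0.01, 0.03, 0.10):     # assumed UHE p -> nu conversion efficiency
    print(f"eta = {eta:4.2f}  ->  phi_nu/phi_GZK ~ {eta*geometric:.1e}")
```

An efficiency of a few percent indeed gives $`\varphi _\nu /\varphi _{GZK}\stackrel{~}{>}10^3`$, matching the inverse interaction probability $`(P^{WW})^{-1}`$.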
# Delocalization transitions of semi-flexible manifolds ## Abstract Semi-flexible manifolds such as fluid membranes or semi-flexible polymers undergo delocalization transitions if they are subject to attractive interactions. We study manifolds with short-ranged interactions by field-theoretic methods based on the operator product expansion of local interaction fields. We apply this approach to manifolds in a random potential. Randomness is always relevant for fluid membranes, while for semi-flexible polymers there is a first order transition to the strong coupling regime at a finite temperature. Low dimensional manifolds play an important role in a variety of different contexts, e.g., as soft matter objects or as domain boundaries in condensed matter systems. They can perform large shape fluctuations driven by entropy. According to their fluctuations they can be divided into two classes. Flexible manifolds, such as interfaces, polymerized membranes, and long polymers, fluctuate under a tension controlling their area or length. The other class is governed by bending energy; i.e., regions of high curvature are penalized. Examples are polymers not much longer than their persistence length, like actin or DNA, and fluid membranes. These objects are stiffer, and we call them semi-flexible manifolds. Whenever a fluctuating manifold is attracted towards some other “defect” manifold, there is a competition between freely fluctuating configurations favored by entropy and configurations bound to the defect, which are preferred by energy. This competition can lead to a phase transition, the so called delocalization or unbinding transition. It is often of second order, that is, the amplitude of the fluctuations diverges continuously as the transition point is approached from within the bound phase. This leads to a scaling regime close to the transition whose universal characteristics can be described by a continuum field theory. Well-known examples of delocalization are wetting phenomena. For interfaces and polymers, these transitions have been widely studied. In the case of polymers, even the generalized problem of $`N`$ mutually attracting objects can be treated. Analytically continued to $`N=0`$, this describes a directed polymer in a random medium, which in turn is related to theories of stochastic surface growth. The delocalization transition then corresponds to a roughening transition between a smooth and a rough growth mode. An important class of low-dimensional manifolds are stretched objects with mainly transversal fluctuations. These are described by a $`d`$-dimensional displacement field $`𝐫(t)`$ which depends on a $`D`$-dimensional internal variable $`t`$. The continuum Hamiltonian takes the form $$=\int \left[\frac{1}{2}(\partial ^k𝐫)^2+V(𝐫,\nabla 𝐫)\right]\mathrm{d}^Dt,$$ (1) where $`(\partial ^k𝐫)^2\equiv \sum _{\alpha =1}^D\sum _{i=1}^d(\partial ^kr_i(t)/\partial t_\alpha ^k)^2`$ is the leading tension ($`k=1`$) or curvature ($`k=2`$) energy in a small-gradient expansion. The potential $`V`$ describes the interaction of the manifold with an external object or boundary at $`𝐫=0`$, or the mutual interaction between two manifolds with relative displacement $`𝐫(t)`$. Physical realizations of flexible manifolds ($`k=1`$) are directed polymers and flux lines in a type II superconductor ($`D=1,d=2`$), steps on a tilted crystal surface ($`D=1,d=1`$), and domain walls in a ferromagnet ($`D=2,d=1`$).
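As an aside, the free ($`V=0`$) roughness implied by (1) is easy to check numerically. The sketch below (Python/numpy; the lattice size, sampling, and periodic geometry are our illustrative choices) generates Gaussian $`D=1`$ configurations with spectrum $`\langle |r_q|^2\rangle \propto q^{-2k}`$ and measures the growth $`\langle [r(t)-r(0)]^2\rangle \sim |t|^{2\chi }`$, recovering $`\chi =1/2`$ for the flexible case and $`\chi =3/2`$ for the semi-flexible case — the roughness exponents used below.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**15
q = 2*np.pi*np.fft.rfftfreq(N)            # q[0] = 0 is the zero mode
seps = np.array([8, 16, 32, 64, 128])

def roughness(k, samples=50):
    """Measured chi for a free D=1 manifold with (d^k r)^2 elasticity."""
    C = np.zeros(len(seps))
    for _ in range(samples):
        amp = np.zeros_like(q)
        amp[1:] = q[1:]**(-float(k))      # sqrt(<|r_q|^2>) ~ q^(-k)
        rq = amp*(rng.normal(size=q.size) + 1j*rng.normal(size=q.size))
        r = np.fft.irfft(rq, n=N)
        for i, t in enumerate(seps):
            C[i] += np.mean((np.roll(r, t) - r)**2)
    # <[r(t) - r(0)]^2> ~ t^(2*chi): read off chi from a log-log fit
    return 0.5*np.polyfit(np.log(seps), np.log(C/samples), 1)[0]

for k in (1, 2):
    print(f"k = {k}: chi measured = {roughness(k):.2f}, "
          f"expected = {(2*k - 1)/2:.2f}")
```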
Short-ranged interactions of definite sign can be represented as $`V(𝐫(t))=g\mathrm{\Phi }(t)`$ in terms of the local contact field $$\mathrm{\Phi }(t)\equiv \delta (𝐫(t)).$$ (2) The scaling dimension of this field, $`x_\mathrm{\Phi }=d\chi `$, is given in terms of the roughness exponent $`\chi `$, with $`\chi =(2-D)/2`$ for $`k=1`$. There exists a well known perturbative framework to treat such interactions, which has been applied extensively to the problem of self-avoiding manifolds. More complicated short-ranged and long-ranged interactions have been studied as well. In this Letter, we develop the field theory of semi-flexible manifolds with local interactions, described by the Hamiltonian (1) with $`k=2`$. Physically interesting cases are again polymers ($`D=1,d=1,2`$) and, in particular, fluid membranes ($`D=2,d=1`$). Since a semi-flexible manifold has a locally well-defined orientation, we have to consider interactions $`V(𝐫(t),\nabla 𝐫(t))`$ that depend both on the displacement and on the orientation. We find there are now two important scaling fields: Local contacts at arbitrary orientation are still represented by the field $`\mathrm{\Phi }(t)`$ given by (2), which has dimension $`x_\mathrm{\Phi }=d\chi `$ with $`\chi =(4-D)/2`$. Due to the stiffness, however, nearby contacts always have a preferred orientation parallel to the defect. Such contacts are described by the field $$\mathrm{\Omega }(t)\equiv \delta (𝐫(t))\delta (\nabla 𝐫(t))$$ (3) of dimension $`x_\mathrm{\Omega }=d\chi +dD(\chi -1)`$. The scaling fields $`\mathrm{\Phi }`$ and $`\mathrm{\Omega }`$ are found to obey an operator product expansion. This is a well-known concept in field theory, which has been applied extensively to flexible manifolds. It allows us to write down renormalization group equations for generic local interactions. This leads to results for the delocalization of semi-flexible polymers and of fluid membranes. Most importantly, the bound state of a semi-flexible manifold turns out to be maintained by contact interactions (3) at fixed orientation, while the bound state of a flexible manifold involves interactions of the form (2). Typical bound state configurations are compared in Fig. 1 for the case of polymers ($`D=1`$). Our one-loop results are in agreement with previous results obtained by approximate renormalization methods and by approaches specific to polymers. They are exact for $`D=1`$ and can be improved systematically for higher values of $`D`$. Furthermore, they can be applied to a semi-flexible manifold in a quenched random potential (taken to be Gaussian distributed with mean $`\overline{V(𝐫,t)}=0`$ and variance $`\overline{V(𝐫,t)V(𝐫^{},t^{})}=\sigma ^2\delta (t-t^{})\delta (𝐫-𝐫^{})`$). In the replica formalism, this is equivalent to $`N`$ interacting semi-flexible manifolds in the limit of vanishing $`N`$. For fluid membranes, any amount of disorder is relevant and leads to a strong coupling phase. For semi-flexible polymers, however, small amounts of disorder are irrelevant (unlike for their flexible counterparts). There is now a first-order transition to the strong coupling phase at a finite amount of disorder. A quantitative description of the disordered strong coupling phase is, however, beyond the means of perturbation theory. We study the manifold displacement field in a hypercube of longitudinal extension $`0\le t_\alpha \le T`$ ($`\alpha =1,\mathrm{},D`$) and of transversal extension $`0\le r_i\le R`$ ($`i=1,\mathrm{},d`$).
For $`T\gg R^{1/\chi }`$, the free energy becomes extensive, $`F\sim T^DR^{-D/\chi }`$, and the perturbation series for the free energy density $`f\equiv F/T^D`$ becomes invariant under translations of $`t`$. This series takes the easiest form if the manifold is subject to wall constraints forcing the probability density $`\rho (𝐫^{})\equiv \langle \delta (𝐫(t)-𝐫^{})\rangle `$ to vanish on the boundary of the hypercube, in particular along the “edge” $`𝐫=0`$. The density then takes the asymptotic scaling form $`\rho (𝐫)\sim (r_1\mathrm{}r_d)^\theta R^{-d(1+\theta )}`$ for $`|𝐫|\ll R`$, with an exponent $`\theta >0`$ expressing long-ranged suppression of the configurations close to the boundary. The wall constraint is natural in $`d=1`$ for a fluid membrane at a planar system boundary, or for a pair of membranes without mutual intersections. The generalization to arbitrary $`d`$ has been chosen such that the Hamiltonian (1) remains factorizable. Short-ranged interactions with the manifold $`𝐫=0`$ are now described by the local field $$\mathrm{\Omega }(t)\equiv \underset{r\to 0}{lim}r^{-d\theta }\prod _{i=1}^d\delta (r_i(t)-r),$$ (4) whose expectation values are finite. Due to the constraint, these interactions are always at fixed orientation $`\nabla 𝐫=0`$. Hence, we have used the same symbol $`\mathrm{\Omega }`$ as for the field (3) of the unconstrained system. The correlation functions $$\langle \mathrm{\Omega }(t)\rangle \sim R^{-x_\mathrm{\Omega }/\chi },$$ (5) $$\langle \mathrm{\Omega }(t)\mathrm{\Omega }(t^{})\rangle \sim |t-t^{}|^{-x_\mathrm{\Omega }}\langle \mathrm{\Omega }(t)\rangle +\mathrm{}(|t-t^{}|^\chi \ll R)$$ (6) define the scaling dimension $`x_\mathrm{\Omega }`$. In the constrained system, the density (5) is linked to the pressure of the system by a wall theorem, $$\langle \mathrm{\Omega }\rangle \sim \left(-\partial f/\partial R\right)^d\sim R^{-d(1+D/\chi )}.$$ (7) This determines the exponent value $$x_\mathrm{\Omega }=d(\chi +D),$$ (8) which differs from that of the unconstrained system. It is in agreement with a conjecture from functional renormalization for general $`D`$ and a direct calculation for $`D=1`$. The interaction part $`\delta f(h,R)\equiv f(h,R)-f(0,R)`$ of the free energy of the system with $`V(𝐫(t),\nabla 𝐫(t))=h\mathrm{\Omega }(t)`$ can be expanded as a power series $$\delta f=h\langle \mathrm{\Omega }\rangle +\frac{h^2}{2}\int \mathrm{d}^Dt\langle \mathrm{\Omega }(0)\mathrm{\Omega }(t)\rangle _c+O(h^3)$$ (9) containing the connected correlation functions $$\langle \mathrm{\Omega }(0)\mathrm{\Omega }(t)\rangle _c\equiv \langle \mathrm{\Omega }(0)\mathrm{\Omega }(t)\rangle -\langle \mathrm{\Omega }\rangle ^2$$ (10) etc., taken at $`h=0`$. By (8), there is a whole line in the ($`D`$,$`d`$) plane where the interaction $`\mathrm{\Omega }`$ is marginal, i.e., where $`x_\mathrm{\Omega }=D`$. This line is given by $$d^{}(D)=\frac{2D}{4+D}.$$ (11) The perturbation series (9) has poles in $$ϵ\equiv D-x_\mathrm{\Omega }=D-d(2+D/2),$$ (12) which can be regularized around any point on the line of marginality (11), as in the case of flexible manifolds. The singularity of the two-point function (10) is given by the operator product expansion $$\mathrm{\Omega }(t)\mathrm{\Omega }(t^{})\simeq |t-t^{}|^{-x_\mathrm{\Omega }}\mathrm{\Omega }(t)+\mathrm{}.$$ (13) This singularity determines in a standard way the one-loop renormalization group equation of the dimensionless coupling constant $`v\equiv hR^{ϵ/\chi }`$. In an appropriate scheme, this takes the form $$\dot{v}=ϵv-v^2+O(v^3).$$ (14) The unstable fixed point $`v^{}=ϵ+O(ϵ^2)`$ represents the transition.
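It is convenient to keep the exponent bookkeeping explicit at this point. The following sketch (plain Python; purely a tabulation of formulas (8), (11), and (12) above, with our own choice of sample cases) collects the numbers used in the discussion that follows:

```python
# exponents of the wall-constrained k = 2 theory:
#   chi = (4 - D)/2, x_Omega = d*(chi + D)  (eq. 8),
#   eps = D - x_Omega = D - d*(2 + D/2)     (eq. 12),
#   marginality line d*(D) = 2D/(4 + D)     (eq. 11)
def exponents(D, d):
    chi = (4 - D)/2
    x_omega = d*(chi + D)
    eps = D - x_omega
    return chi, x_omega, eps

cases = [(2, 1, "fluid membrane at a wall"),
         (1, 1, "semi-flexible polymer, d = 1"),
         (1, 2, "semi-flexible polymer, d = 2")]
for D, d, label in cases:
    chi, x_omega, eps = exponents(D, d)
    assert abs(eps - (D - d*(2 + D/2))) < 1e-12   # consistency with (12)
    print(f"{label}: chi = {chi}, x_Omega = {x_omega}, eps = {eps}")

for D in (1, 2):
    print(f"D = {D}: marginal dimension d* = {2*D/(4 + D):.3f}")
```

For the fluid membrane at a wall this gives $`\chi =1`$, $`x_\mathrm{\Omega }=3`$, and $`ϵ=-1`$.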
The linearized form $`\dot{v}=ϵ^{}(v-v^{})+\mathrm{}`$ with $`ϵ^{}=-ϵ+O(ϵ^2)`$ then determines the scaling of the transversal localization length $`\xi \equiv \langle 𝐫^2\rangle ^{1/2}`$, $$\xi \sim (v^{}-v)^{-\chi /ϵ^{}}(v<v^{}),$$ (15) and the scaling dimension $$x_\mathrm{\Omega }^{}=D-ϵ^{}=2D-x_\mathrm{\Omega }+O(ϵ^2),$$ (16) which takes the place of $`x_\mathrm{\Omega }`$ in the correlations (5) and (6) at the transition point. These relations describe the scaling of a bound state maintained by contact forces at fixed orientation. Typical configurations look similar to those of Fig. 1(b) but are confined to the region $`r_i>0`$. The most interesting application of (15) and (16) is the delocalization transition of a fluid membrane from a hard wall ($`D=2,d=1`$), where $`\xi \sim (T_c-T)^{-1}`$ (since the effective coupling is temperature-dependent) and $`x_\mathrm{\Omega }^{}=1`$. Not surprisingly, these one-loop results are in agreement with those from functional renormalization. They also fit the numerical values very well, which implies that higher order corrections must be small. The system with wall constraint at $`h=0`$ can be regarded as an unconstrained system in the limit of large repulsive interaction. Conversely, the scaling at the transition point of the constrained system may be related to that of the free unconstrained system. Indeed, the one-loop value $`x_\mathrm{\Omega }^{}`$ from (16) equals the dimension $`x_\mathrm{\Omega }=1`$ of the free field (3), indicating that the sum of the higher order corrections in (14) and (16) may vanish altogether at the specific point $`(D=2,d=1)`$. For the case of polymers ($`D=1`$) it is easy to show that the multi-point correlations entering (9) factorize after “time-ordering” the interaction points, $$\langle \mathrm{\Omega }(t_1)\mathrm{}\mathrm{\Omega }(t_n)\rangle =\langle \mathrm{\Omega }(t_1)\rangle 𝒫(t_2-t_1)\mathrm{}𝒫(t_n-t_{n-1})$$ (17) for $`t_1<\mathrm{}<t_n`$. The factors $`𝒫(t)\equiv \langle \mathrm{\Omega }(0)\mathrm{\Omega }(t)\rangle /\langle \mathrm{\Omega }\rangle `$ can be interpreted as “return” probabilities. Using (17), one can show that the polymer perturbation series (9) is one-loop renormalizable; i.e., the connected two-point function $`\langle \mathrm{\Omega }(0)\mathrm{\Omega }(t)\rangle _c`$ generates the only primitive singularity, and there are no higher-order terms in (14). The arguments are completely analogous to those for the case $`k=1`$. The implications of (15) and (16) for general semi-flexible polymers have been discussed and verified numerically elsewhere. Here we have given a unified derivation of these relations, stressing the theoretical analogies with their known counterparts for $`k=1`$. We now turn to systems without the wall constraint and restrict ourselves to $`D=1`$, namely mutually interacting semiflexible polymers and, in particular, a single such polymer in a random medium. The latter system has a possible biostatistical application in the theory of sequence alignment. In the absence of a wall constraint, we have to study generic contact interactions $`V(t)=g\mathrm{\Phi }(t)+h\mathrm{\Omega }(t)`$ involving the fields (2) and (3). The perturbation series then contains connected correlations $`\langle \mathrm{\Phi }(t_1)\mathrm{}\mathrm{\Phi }(t_n)\mathrm{\Omega }(t_1^{})\mathrm{}\mathrm{\Omega }(t_m^{})\rangle `$ in the free theory ($`g=h=0`$), which can be computed exactly.
They can be shown to obey the operator product expansion $$\mathrm{\Phi }(t)\mathrm{\Phi }(t^{})\simeq |t-t^{}|^{-d}\mathrm{\Omega }(t)+\mathrm{}$$ (18) $$\mathrm{\Phi }(t)\mathrm{\Omega }(t^{})\simeq |t-t^{}|^{-\frac{3}{2}d}\mathrm{\Omega }(t)+\mathrm{}$$ (19) $$\mathrm{\Omega }(t)\mathrm{\Omega }(t^{})\simeq |t-t^{}|^{-2d}\mathrm{\Omega }(t)+\mathrm{}.$$ (20) These relations just say that any pair of nearby contacts amounts to a single contact at fixed orientation, multiplied by a singular prefactor. These singularities determine the one-loop renormalization group equations $$\dot{u}=(1-3d/2)u,$$ (21) $$\dot{v}=ϵv-v^2-u^2-cuv$$ (22) for the dimensionless couplings $`u\equiv gR^{(1-3d/2)/\chi }`$ and $`v\equiv hR^{ϵ/\chi }`$ with $$ϵ\equiv 1-x_\mathrm{\Omega }=1-2d.$$ (23) The corresponding flow diagram for $`d=1`$ is shown in Fig. 2. The unique delocalization fixed point $`(u^{}=0,v^{}=ϵ)`$ is on the line $`u=0`$. This property ensures that the constant $`c`$ in (22) drops out of the critical exponents. It will be preserved for any $`d>2/3`$ also at higher orders, since any operator product $`\prod _{i,j}\mathrm{\Phi }(t_i)\mathrm{\Omega }(t_j^{})`$ couples only to $`\mathrm{\Omega }(t_1)`$. The perturbation series at $`u=0`$, however, is factorizable according to (17) and one-loop renormalizable in exactly the same way as with the wall constraint. Hence, the (in $`D=1`$) exact relations (15) and (16) still hold (with $`ϵ`$ given by (23) and $`x_\mathrm{\Omega }=2d`$), resulting in $`\xi \sim (T_\mathrm{c}-T)^{3/(2-4d)}`$ for $`2/3<d<1`$ and $`x_\mathrm{\Omega }^{}=2-2d`$. This scaling dimension turning negative for $`d>1`$ indicates that the transition becomes of first order; see the discussion and extensive numerics elsewhere. An analogous first-order regime is known for flexible polymers. The above arguments can be generalized to the replica theory of $`N`$ semiflexible polymers coupled by pair contact forces. If these are of type $`\mathrm{\Omega }`$, the time-ordered perturbation series can be mapped term by term onto the perturbation series of flexible polymers with $`\mathrm{\Phi }`$ interactions. Its leading divergences are due to “ladder” diagrams with the same pair of polymers interacting at subsequent points $`t_i`$. The presence of $`\mathrm{\Phi }`$ interactions for semiflexible polymers does not change these singularities, by the same argument as for $`N=2`$: due to their stiffness, any two semi-flexible polymers interacting twice in a short interval have to be parallel to each other, or, in other words, the leading divergent diagrams behave like diagrams involving only $`\mathrm{\Omega }`$ operators. For the $`\mathrm{\Omega }`$ system, however, the known results immediately carry over and imply that the critical behavior at the delocalization transition does not depend on $`N`$. In particular, the random limit of vanishing $`N`$ becomes trivial. We conclude that a $`1+d`$ dimensional semi-flexible polymer in a random potential has a phase transition between a weak and a strong coupling phase at a critical strength of the randomness for $`d>2/3`$. This phase transition corresponds to the roughening transition of the Kardar-Parisi-Zhang equation in $`4d`$ dimensions. For fluid membranes, on the other hand, an arbitrarily small amount of disorder is relevant and leads to a strong coupling phase. We gratefully acknowledge useful discussions with G. Gompper, C. Hiergeist, T. Hwa, and, in particular, with R. Lipowsky and K. Wiese.
## 1 Introduction The practical applicability of perturbative QCD to exclusive processes such as hadronic electromagnetic form factors cannot yet be considered settled. It has been argued that even at the highest momenta explored so far in the laboratory, the dominant contribution to form factors comes from the end point regions of the wave function, where the perturbative treatment fails. In the case of hadron-hadron scattering there exist further difficulties, such as the common failure of the helicity conservation selection rules to agree with experimental data. However, in nearly all experiments one finds that the naive quark counting scaling laws tend to agree very well with data. In view of the problems listed, this is quite mysterious, since so far there does not exist any alternate mechanism which can explain these scaling laws. An interesting prediction of perturbative QCD is color transparency. At large momentum transfers only the short distance components of the hadron wave function can contribute to exclusive processes. Since the total cross section of hadrons $`\sigma `$ is proportional to their transverse area $`b^2`$, the strong interactions of these hadrons are expected to be reduced. If we consider quasi-exclusive electron-nucleus scattering, $`eA\to e^{}p(A-1)`$, where $`A`$ is the nuclear number, then the nucleus is predicted to be transparent to all protons participating in this process. This is an asymptotic argument applicable for fixed $`A`$ as $`Q^2\to \infty `$. Experimentally, however, we can only take the limit of large $`A`$ and moderately large $`Q^2`$, in which such processes appear to be more complicated and much more interesting. One picture that is emerging is that exclusive processes in free space get significant contributions from perturbatively calculable hard amplitudes but also have non-negligible soft contamination. The corresponding nuclear processes, however, may be much cleaner because the large quark separations will be strongly attenuated in the nuclear medium. This phenomenon, called nuclear filtering, has some experimental support. Experimentally one finds that the fixed-angle free space process $`pp^{}\to p^{\prime \prime }p^{\prime \prime \prime }`$ shows significant oscillations at 90 degrees as a function of energy. These oscillations are not a small effect, but roughly 50% of the $`1/s^{10}`$ behavior, and are interpreted as coming from interference of long and short distance amplitudes. The corresponding process in a nuclear environment, $`pA\to p^{}p^{\prime \prime }(A-1)`$, shows no oscillations, and obeys the pQCD scaling power law better than the free-space data. The $`A`$ dependence, when analyzed at fixed $`Q^2`$, shows statistically significant evidence of reduced attenuation. ## 2 Formalism We briefly review the framework for the calculation of hadronic form factors following Li and Sterman. It has long been known that the transverse separation of quarks in free space reactions is controlled by effects known as the Sudakov form factor. The pion form factor is the simplest example. Li and Sterman included Sudakov effects here, arguing that a perturbative treatment becomes fairly reliable at momenta of the order of 5 GeV. Even as low as 2 GeV, it was found that less than 50% of the contribution comes from the soft region. Let $`b_{ij}`$ be the transverse separation between quarks $`i`$ and $`j`$, or $`b`$ the corresponding quantity for a single pair of quarks.
An essential feature is the inclusion of $\exp(-S)$, a Sudakov form factor which suppresses the large $b$ region. Including the $b$ dependence, the pion electromagnetic form factor can be written as

$$F_\pi(Q^2) = \int dx_1\, dx_2\, \frac{d\vec{b}}{(2\pi)^2}\, \mathcal{P}(x_2, \vec{b}, P', \mu)\, T_H(x_1, x_2, \vec{b}, Q, \mu)\, \mathcal{P}(x_1, \vec{b}, P, \mu), \qquad (1)$$

where

$$\mathcal{P}(x, b, P, \mu) = \exp(-S) \times \varphi(x, 1/b) + O(\alpha_s(1/b))$$

plays the role of the hadron wave function, $\varphi(x, 1/b)$ is the meson distribution amplitude, $P$ and $P'$ are the incident and outgoing pion momenta respectively, and $S$ is the Sudakov exponent. The improved factorization used in retains the intrinsic transverse momentum $k_T$ dependence in the gluon propagator, since $k_T$ need not be small compared to $\sqrt{x_1 x_2}\,Q$ if one of the $x_i$ gets close to zero. The variable $b$ in Eq. 1 is conjugate to $k_{T1} - k_{T2}$, where $k_{T1}$ and $k_{T2}$ are the transverse momenta of the incident and outgoing pions. As long as $x_1$ and $x_2$ are not close to their endpoints, the dominant scale in the scattering is $\sqrt{x_1 x_2}\,Q$ and the small $b$ region dominates the amplitude. Close to the endpoints of $x_1$ or $x_2$, $\sqrt{x_1 x_2}\,Q$ may become very small. However, the dominant scale in this region is then $1/b$, which is again not too small, since the large $b$ region is strongly damped by the Sudakov form factor. The results for the free space pion form factor obtained with this procedure are given in . The authors show that at $Q^2 = 5$ GeV$^2$, something like 90% of the contribution comes from a region where $\alpha_s/\pi$ is less than 0.7 and hence can be regarded as perturbative.

The nuclear medium modifies the quark wave function such that

$$\mathcal{P}_A(x, b, P, \mu) = f_A(Q^2, b)\, \mathcal{P}(x, b, P, \mu), \qquad (2)$$

where $\mathcal{P}_A$ is the wave function inside the medium and $f_A$ is the nuclear filtering amplitude. We use a simple model for $f_A$,

$$f_A = \exp\left(-\int dz\, \sigma \rho\right).$$

The effective inelastic cross section $\sigma$ is known to scale like $b^2$ in QCD, where $b$ is the size of the hadron. We parametrize it as $kb^2$ and adjust the value of $k$ to find a reasonable fit to the experimental data.

The situation for the proton form factor is somewhat more complicated than that of the pion; we do not have the space for all the details here, which are given in Ref. . There has been some controversy regarding the proper choice of the infrared cutoff in the Sudakov exponent. In the case of the pion this was simply the quark-antiquark separation $b$. The choice proposed in uses the largest distance between the three quarks as the cutoff. It was found that this gave results about 50% smaller than experiment. Perhaps this is the right direction, if indeed other wave functions (and in particular, non-zero quark angular momentum) contribute heavily in free space. On the other hand, in it was observed that the largest distance does not correspond to a physical size of the three quark system. A more appropriate choice might be obtained by considering the triplet of valence quarks as a quark-diquark system. This choice takes the maximum value of the distance between quark and diquark as the effective cutoff in the Sudakov exponent. This essentially amounts to using a scale $cw$ for the infrared cutoff, with $c \approx 1.14$, where $w$ is the inverse of the largest distance between any two valence quarks in the proton.
Remarkably, this small modification leads to results in good agreement with experiment . From investigations of the proton form factor in free space, it seems that Sudakov effects eliminate about 50% of the contribution from the soft region. The Sudakov filtering in free space does something useful, but does not seem sufficient to make present free-space calculations fully reliable. The same diagrams for Sudakov effects of course occur in a nuclear environment. In addition, there are much stronger interactions with the nuclear target when one goes from pure "vacuum filtering" by Sudakov effects to nuclear filtering. We find that the nuclear medium eliminates much of the remaining 50% soft-region contribution.

These are the first full calculations of these ideas within perturbative QCD. We find that the main uncertainty in the nuclear calculation arises from uncertainties in the nuclear medium itself, in particular from uncertainties in the nuclear spectral functions and correlations. With standard assumptions one can proceed with the calculation essentially using zero parameters and no model dependence. However, we find that numerical differences between models of nuclear matter are large enough to cause significant uncertainties. Indeed, comparison with data shows that the uncertainties in the nuclear spectral functions and the nuclear correlations now dominate the theoretical uncertainties, and are larger effects than, for example, the dependence on the hadron distribution amplitude.

## 3 Results and Discussions

The results for the free space proton form factor are shown in Fig. 1. An important feature of this result, which is independent of the details of the wave function, is that it shows scaling for $Q^2$ larger than about 10 GeV$^2$. This is a nontrivial confirmation that $Q^2$ indeed dominates over the intrinsic momentum $k_T^2$.

In Fig. 2, we show results for color transparency in the electroproduction of pions for different nuclei using the CZ wave function. Here we adjust the value of $k$, corresponding to a pion attenuation cross section of 25-30 mb for a pion size of about 0.8 fm. The predicted results are shown for $k=4$. The precise value of $k$ might best be obtained by fitting to the data for color transparency once it becomes available, or perhaps by detailed comparison with diffractive calculations. Compared to the asymptotic wave function, the results for $T$ change by less than 3% for $Q^2$ larger than 10 GeV$^2$.

The results for the proton transparency ratio are given in Fig. 3. The parameter $k$ in the attenuation cross section $\sigma = kb^2$ was chosen so as to provide a reasonable fit to the experimental data ; we find that $k=6$ works well. Taking the attenuation cross section of normal protons to be 36 mb, this corresponds to a typical $b$ of about 0.77 fm, which is a reasonable estimate of the proton size. Since the data for $T$ are available only in the region where the calculated free space form factor disagrees with the experimental result, the value of $k$ obtained by this procedure cannot be taken too seriously. In fact, the parameter $k$ would be best obtained by fitting to the experimental value of $T$ after it is measured at higher energies. A reasonable range of $k$ values, which we take to be $k=5$ and $k=6$, corresponds to $b$ values of 0.85 fm and 0.77 fm respectively, and has been used in the figure.
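To make the attenuation model concrete, here is a minimal numerical sketch of the semiclassical average of $f_A = \exp(-\int dz\,\sigma\rho)$ with $\sigma = kb^2$, assuming a uniform-density spherical nucleus and a fixed transverse size $b$. The actual calculation instead keeps $f_A$ inside the $b$-space integral of Eq. 1 and uses realistic nuclear densities and spectral functions, so the numbers below are illustrative only.

```python
import numpy as np

# Toy average of the nuclear filtering factor f_A = exp(-sigma*rho*path)
# with sigma = k*b^2, for a uniform-density sphere (assumption; the paper
# convolutes f_A with the b-dependent hard amplitude instead).

RHO0 = 0.17   # nuclear matter density, fm^-3
R0 = 1.2      # radius parameter, fm: R_A = R0 * A**(1/3)

def mean_transparency(A, k, b, n=200_000, seed=1):
    """Average attenuation over uniform production points, for a struck
    hadron exiting along +z (classical eikonal picture)."""
    rng = np.random.default_rng(seed)
    R = R0 * A ** (1.0 / 3.0)
    sigma = k * b ** 2                   # effective cross section, fm^2
    # sample production points uniformly inside the sphere
    p = rng.normal(size=(n, 3))
    p *= (R * rng.random(n) ** (1.0 / 3.0) / np.linalg.norm(p, axis=1))[:, None]
    # outgoing path length along +z to the sphere surface
    path = np.sqrt(R ** 2 - p[:, 0] ** 2 - p[:, 1] ** 2) - p[:, 2]
    return float(np.exp(-sigma * RHO0 * path).mean())

k, b = 6.0, 0.77                         # proton values quoted in the text
print(f"sigma = {10 * k * b**2:.1f} mb")  # 1 fm^2 = 10 mb  ->  ~36 mb
for A in (12, 56, 197):
    print(f"A = {A:3d}:  <T> ~ {mean_transparency(A, k, b):.2f}")
```

The growth of the average path length with $A^{1/3}$ is what drives the $A$ dependence of the transparency ratio discussed above.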
We have also checked the dependence of our result on the infrared cutoff parameter $c$ and on the choice of the wave function. We find in Fig. 4 that the results for the transparency ratio change very little if we use the CZ wave function instead of the KS one. This is a surprising result, and one of the bases of our claim that the dominant uncertainty in the transparency ratio may be due to the nuclear model itself.

## 4 Conclusion

We have reviewed the calculation of hadronic electromagnetic form factors and color transparency using perturbative QCD. We find a slow rise of the transparency ratio with energy that can be probed in the future at CEBAF and ELFE. As discussed elsewhere , precision experiments can discover color transparency even with a slow rise in $Q^2$ by measuring the $A$ dependence at fixed, moderately large $Q^2$. Due to the filtering of long distance components in the medium, the nuclear calculation is considerably cleaner than the free space calculation. We also find a rather remarkable insensitivity of the transparency ratio to present theoretical uncertainties in the perturbative QCD treatment, such as the choice of the distribution amplitude. To further improve the accuracy of predictions for the color transparency ratio, it is necessary to improve the modelling of the nuclear medium, which now appears to be the dominant source of error.

## Acknowledgements

We thank Hsiang-nan Li for many useful discussions. Financial support for this work was provided by the Board of Research in Nuclear Sciences (BRNS), the Crafoord Foundation and the DOE grant 85ER401214.
no-problem/9902/hep-ex9902026.html
ar5iv
text
# B and D Spectroscopy at LEP

## Introduction

Detailed understanding of the spectroscopy of orbitally excited heavy mesons containing a $b$ or a $c$ quark provides important information about the underlying theory. A flavor-spin symmetry arises from the fact that the mass of a heavy quark $Q$ is large relative to $\Lambda_{\mathrm{QCD}}$. In this approximation, the spin $\vec{s}_Q$ of the heavy quark $Q$ is conserved in the interactions, independently of the total angular momentum $\vec{j}_q = \vec{s}_q + \vec{L}$ of the light quark $q$. Corrections to this symmetry are given by a series expansion in $1/m_Q, 1/m_Q^2$, calculable in Heavy Quark Effective Theory (HQET) HQS .

The $L=0$ mesons, for which $j_q = 1/2$, have two possible spin states: a pseudoscalar $P$ ($J^P = 0^-$) and a vector $V$ ($J^P = 1^-$). If the spin of the heavy quark is conserved independently, the relative production rate of these states is expected to be $V/(V+P) = 0.75$. Corrections due to the decay of higher excited states are predicted to be small. Recent measurements of this rate for the B system BstarL3 ; BstarLEP agree well with this ratio.

In the case of $L=1$ orbitally excited B mesons two doublets are expected: the $\mathrm{B}_0^*$ and $\mathrm{B}_1^*$ ($j_q = 1/2$), and the $\mathrm{B}_1$ and $\mathrm{B}_2^*$ ($j_q = 3/2$) mesons (see Table 1). Their relative production rates follow from spin state counting ($2J+1$ states) SpinParity ; a small numerical illustration is given at the end of this section. For the dominant two-body decays, the $j_q = 1/2$ states can decay via an S-wave transition, and their decay widths are expected to be broad in comparison to those of the $j_q = 3/2$ states, which must decay via a D-wave transition.

Many measurements exist for $L=1$ orbitally excited charm mesons. All six narrow states, a doublet ($\mathrm{D}_2^*$ and $\mathrm{D}_1$) for each quark content ($c\overline{u}$, $c\overline{d}$ and $c\overline{s}$), are well established Ddstar . The wide $L=1$ states are hard to measure and have not been clearly identified. Several models based on HQET and on the charmed $L=1$ meson data have made predictions for the masses and widths of orbitally excited $\mathrm{B}^{**}$ mesons Gronau ; Eichten ; Falk ; Isgur ; Ebert (see Table 1). Some of these models place the average mass of the $j_q = 3/2$ states above that of the $j_q = 1/2$ states, while others predict the opposite ("spin-orbit inversion"). The mass splitting within each doublet is predicted to be 12 MeV.

## $\mathrm{B}^{**}$ Spectroscopy

At LEP excited states of B mesons are produced. Each of the four experiments has collected about $4\times 10^6$ hadronic events, out of which $0.9\times 10^6$ events contain $B\overline{B}$ pairs. In all $\mathrm{B}^{**}$ analyses, the $b$-quark purity of the data sample is first increased by applying a lifetime-based event tag. B mesons are reconstructed inclusively; typically the two most energetic jets of the event are considered B candidates. The decay products of the B meson are separated from the background of fragmentation particles using secondary vertex tagging (for charged decay particles) or the rapidity of the decay products with respect to the B-jet axis. An alternative method is to fully reconstruct the B-meson decay chain, which improves the B-mass resolution but suffers from low statistics.
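As a quick check of the spin-counting rates quoted in the introduction, the naive $2J+1$ weights can be tabulated directly. This is a toy illustration only; it assumes pure statistical population and ignores feed-down from higher excitations:

```python
# Naive heavy-quark spin-counting production weights (2J+1 per state).
# Toy illustration: pure statistical population, no feed-down corrections.

# L=0 doublet: pseudoscalar P (J=0) and vector V (J=1)
w_P, w_V = 2 * 0 + 1, 2 * 1 + 1
print(f"V/(V+P) = {w_V / (w_P + w_V):.2f}")          # -> 0.75

# L=1 states: B0* (J=0), B1* (J=1), B1 (J=1), B2* (J=2)
weights = {"B0*": 1, "B1*": 3, "B1": 3, "B2*": 5}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: {w}/{total} = {w / total:.3f}")
```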
The decay of a $\mathrm{B}^{**}$ meson ($\mathrm{B}^{**} \to \mathrm{B}^{(*)}\pi$) proceeds via the strong interaction, and thus the transition pion originates at the primary event vertex. In addition, the predicted masses for the $L=1$ states correspond to relatively small $Q$ values, so that the pion direction is forward with respect to the B-meson direction. The track with the largest component of momentum in the direction of the B jet is selected.

A first measurement of $L=1$ orbitally excited B mesons has been presented by OPAL BdstarOPAL . Using secondary vertex charge tagging to inclusively reconstruct B mesons, the invariant mass distributions of $\mathrm{B}^{(*)+}\pi^-$ and $\mathrm{B}^{(*)+}\mathrm{K}^-$ combinations show enhancements consistent with the decay of $\mathrm{B}^{**}$ resonances, as shown in Fig. 1. An excess of $1738 \pm 121$ $\mathrm{B}^{(*)+}\pi^-$ and $149 \pm 30$ $\mathrm{B}^{(*)+}\mathrm{K}^-$ candidates is observed in the mass ranges 5.60-5.85 GeV and 5.80-6.00 GeV, respectively. The background is estimated with the wrong (like)-sign combinations. Fitting the excess with a single Breit-Wigner function yields an average mass $M(\mathrm{B}^{**}_{u,d})$ and a production rate $f_{\mathrm{B}^{**}} = \mathcal{B}(b \to \mathrm{B}^{**}_{u,d})/\mathcal{B}(b \to \mathrm{B}_{u,d})$. Throughout this paper, isospin symmetry is always employed to account for $\mathrm{B}^{**}$ decays via neutral pions. DELPHI and ALEPH have made similar measurements using rapidity to inclusively reconstruct B mesons BdstarDELPHI ; BdstarALEPHi . The results are summarized in Table 2.

A new measurement using an exclusive method is presented by ALEPH BdstarALEPH . Using many decay modes ($\mathrm{B} \to \mathrm{D}^{(*)}X$, where $X \in \{\pi, \rho, a_1\}$, and $\mathrm{B} \to J/\psi(\psi')K^{(*)}$), 238 charged and 166 neutral B meson candidates have been fully reconstructed. The sample has a B meson purity of 85%. Each B candidate is then combined with a charged pion from the primary vertex. An excess of $45 \pm 13$ events is seen in the right-sign sample compared to the wrong-sign sample. Fig. 2 shows a fit to the right-sign mass spectrum, where the signal shape consists of five Breit-Wigner peaks. The relative masses, the widths, and the relative production rates of the individual $\mathrm{B}^{**}$ mesons have been fixed to the predictions from HQET. The mass of the $\mathrm{B}_2^*$ meson and the overall production rate are measured to be:

$$M_{\mathrm{B}_2^*} = 5739\,^{+8}_{-11}\,^{+6}_{-4}\ \mathrm{MeV}$$
$$f_{\mathrm{B}^{**}} = 0.31 \pm 0.09\,^{+0.06}_{-0.05}.$$

A new analysis using an inclusive method is presented here by the L3 experiment BdstarL3 . Several techniques are used both to improve the resolution of the $\mathrm{B}\pi$ mass spectrum and to unfold this resolution from the signal components. As a result, L3 is able to extract measurements of the masses and widths of both the D-wave $\mathrm{B}_2^*$ decays and the S-wave $\mathrm{B}_1^*$ decays. B meson candidates are reconstructed inclusively from all charged and neutral particles with rapidity $y > 1.6$ relative to the original jet axis. The direction of the B meson is determined by an error-weighted average of the direction of the measured secondary decay vertex and the direction of the B candidate. The angular resolution obtained is $\sigma = 12\ \mathrm{mrad}$ for $\varphi$ and $\sigma = 18\ \mathrm{mrad}$ for $\theta$.
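The error-weighted average used here is a standard inverse-variance combination of the two direction estimates. A minimal sketch follows; the per-candidate input uncertainties are placeholders, since they are not quoted above:

```python
import numpy as np

# Inverse-variance (error-weighted) average of two estimates of the same
# angle, as used to combine the secondary-vertex direction with the
# inclusive B-candidate direction. Input uncertainties are placeholders.

def weighted_average(values, sigmas):
    """Return the inverse-variance weighted mean and its uncertainty."""
    w = 1.0 / np.asarray(sigmas) ** 2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

phi_vtx, sig_vtx = 0.512, 0.020     # rad: secondary-vertex direction
phi_cand, sig_cand = 0.518, 0.025   # rad: inclusive B-candidate direction

phi, sig = weighted_average([phi_vtx, phi_cand], [sig_vtx, sig_cand])
print(f"combined phi = {phi:.4f} +- {sig:.4f} rad")
```

The combined uncertainty is always smaller than either input, which is why combining the two estimates improves the angular resolution.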
The energy of the B meson candidate, $E_\mathrm{B}$, is estimated by taking advantage of the known center-of-mass energy at LEP, $E_{\mathrm{cm}}$:

$$E_\mathrm{B} = \frac{E_{\mathrm{cm}}^2 + M_\mathrm{B}^2 - M_{\mathrm{recoil}}^2}{2 E_{\mathrm{cm}}}, \qquad (1)$$

where $M_{\mathrm{recoil}}$ is the mass of all particles in the event recoiling against the B candidate. The difference between the reconstructed and generated values of the B-meson energy can be described by an asymmetric Gaussian with widths of $1.9\ \mathrm{GeV}$ and $2.8\ \mathrm{GeV}$.

Fig. 3a) shows the resulting $\mathrm{B}\pi$ invariant mass spectrum together with the expected background from Monte Carlo. A clear signal due to $\mathrm{B}^{**} \to \mathrm{B}^{(*)}\pi$ decays is seen above the background, which is well described by the simulation. The background is therefore parameterized by a threshold function, whose shape is determined from the Monte Carlo. To resolve the underlying structure of the signal, it is necessary to unfold effects due to the detector resolution. The dominant sources of uncertainty for the mass measurement are, with about equal magnitude, the angular and energy resolution of the B meson. The dependence of the $\mathrm{B}\pi$ mass resolution on the $\mathrm{B}^{**}$ mass is studied by simulating signal events at four different values of the $\mathrm{B}^{**}$ mass and Breit-Wigner width. Each signal $\mathrm{B}\pi$ mass distribution is then fit to a Breit-Wigner function convoluted with a Gaussian resolution. The Breit-Wigner width is fixed to its generated value, and the Gaussian resolution extracted from the fit is shown in Fig. 3b) as a function of the $\mathrm{B}\pi$ mass, together with a linear parameterization. The mass resolution increases with increasing $\mathrm{B}\pi$ mass.

The fit function for the signal consists of five Breit-Wigner mass peaks, one for each of the five decay modes allowed by spin-parity rules: $\mathrm{B}_2^* \to \mathrm{B}\pi, \mathrm{B}^*\pi$; $\mathrm{B}_1 \to \mathrm{B}^*\pi$; $\mathrm{B}_1^* \to \mathrm{B}^*\pi$; and $\mathrm{B}_0^* \to \mathrm{B}\pi$. Each Breit-Wigner is convoluted with the mass-dependent Gaussian resolution. No attempt is made to tag subsequent $\mathrm{B}^* \to \mathrm{B}\gamma$ decays, as the efficiency for selecting the soft photon is low. The relative production rates, and the mass splittings and relative widths within each doublet, are constrained to the predictions from HQET (see Table 1). The $\mathrm{B}\pi$ invariant mass distribution, fit with the signal and background functions described above, is shown in Fig. 4.
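To illustrate the signal model, the sketch below builds one such component: a (non-relativistic) Breit-Wigner convoluted with a mass-dependent Gaussian resolution, i.e. a Voigt profile. The peak parameters and the linear resolution model are placeholder values, not the L3 parameterization; the full fit sums five such peaks with the HQET constraints, plus the threshold background.

```python
import numpy as np
from scipy.special import voigt_profile

# One signal component: Breit-Wigner (FWHM = gamma) smeared with a
# mass-dependent Gaussian resolution. All numbers below are placeholders.

def resolution(m):
    """Illustrative linear parameterization of the B-pi mass resolution (GeV)."""
    return 0.010 + 0.08 * (m - 5.4)        # grows with the B-pi mass

def signal_peak(m, m0, gamma):
    """Voigt profile: BW of FWHM gamma convoluted with resolution(m)."""
    return voigt_profile(m - m0, resolution(m), gamma / 2.0)

m = np.linspace(5.5, 6.0, 501)             # B-pi mass grid, GeV
shape = signal_peak(m, m0=5.77, gamma=0.021)
print(f"peak position ~ {m[np.argmax(shape)]:.3f} GeV")
```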
The results of the fit provide the first measurements of the masses and decay widths of the $\mathrm{B}_2^*$ ($j_q = 3/2$) and $\mathrm{B}_1^*$ ($j_q = 1/2$) mesons:

$$M_{\mathrm{B}_2^*} = (5770 \pm 6\ \text{(stat.)} \pm 4\ \text{(syst.)})\ \mathrm{MeV}$$
$$\Gamma_{\mathrm{B}_2^*} = (21 \pm 24\ \text{(stat.)} \pm 15\ \text{(syst.)})\ \mathrm{MeV}$$
$$M_{\mathrm{B}_1^*} = (5675 \pm 12\ \text{(stat.)} \pm 4\ \text{(syst.)})\ \mathrm{MeV}$$
$$\Gamma_{\mathrm{B}_1^*} = (75 \pm 28\ \text{(stat.)} \pm 15\ \text{(syst.)})\ \mathrm{MeV}.$$

A total of $2652 \pm 232$ events occupy the signal region, corresponding to a relative production rate $f_{\mathrm{B}^{**}}$ for all $L=1$ spin states of

$$f_{\mathrm{B}^{**}} = 0.39 \pm 0.06\ \text{(stat.)} \pm 0.06\ \text{(syst.)}.$$

Systematic errors are mainly due to the modelling of the background, the limited knowledge of the signal function, and the mass constraints within the doublets. These results disfavor recent theoretical models proposing spin-orbit inversion Isgur ; Ebert , but agree well with several earlier models Gronau ; Eichten ; Falk and provide strong support for HQET.

## $\mathrm{B}_c^+$ Studies

DELPHI, ALEPH, and OPAL have published searches for $\mathrm{B}_c^+$ mesons in $Z$ decays BsubcADO . No signals have been found. Table 3 shows the numbers of candidate events and the upper limits obtained on the production rates $\mathcal{B}(Z \to \mathrm{B}_c^+ X) \times \mathcal{B}(\mathrm{B}_c^+ \to J/\psi\pi^+,\ J/\psi\ell^+\nu,\ J/\psi\pi^+\pi^-\pi^+)$. The 3 $\mathrm{B}_c^+ \to J/\psi\pi^+$ candidates are consistent with a background estimate of 2.3 expected events. A fit to the mass yields the following values: $M_{J/\psi\pi^+} = 6.342 \pm 0.027$ GeV (DELPHI) and $M_{J/\psi\pi^+} = 6.32 \pm 0.06$ GeV (OPAL average). Predictions for the $\mathrm{B}_c^+$ mass are in the range 6.24 to 6.31 GeV. The CDF experiment at the Tevatron has recently reported the observation of the $\mathrm{B}_c^+$ meson in the decay channel $\mathrm{B}_c^+ \to J/\psi\ell^+\nu$ BsubcCDF . They find $20.4^{+6.2}_{-5.8}$ events and obtain a mass value of $M(\mathrm{B}_c^+) = 6.40 \pm 0.39 \pm 0.13$ GeV.

## $\mathrm{D}^{**}$ Spectroscopy

During the last year several new results on $\mathrm{D}^{**}$ production have been presented by the LEP collaborations. $\mathrm{D}^{**0}$ mesons are fully reconstructed in the decay chain $\mathrm{D}^{**0} \to \mathrm{D}^{*+}\pi^-$, $\mathrm{D}^{*+} \to \mathrm{D}^0\pi^+$, $\mathrm{D}^0 \to \mathrm{K}^-\pi^+$. High momentum $\mathrm{D}^0$ candidates together with short decay lengths are selected to obtain $c\overline{c}$-enriched samples, whereas B and D meson vertexing is used to select $b\overline{b}$-enriched samples. Table 4 shows the results for the $\mathrm{D}^{**}$ production fractions measured by OPAL, DELPHI, and ALEPH DdstarOPAL ; DdstarDELPHI ; DdstarALEPH .

DELPHI has also presented a new $\mathrm{B} \to \mathrm{D}^{**}\ell\nu$ analysis Ddstarlnu . In semileptonic events, the decay chain $\mathrm{D}^{*+} \to \mathrm{D}^0\pi^+$, $\mathrm{D}^0 \to \mathrm{K}^-\pi^+,\ \mathrm{K}^-\pi^+\pi^-\pi^+,\ \mathrm{K}^-\pi^+(\pi^0)$ is fully reconstructed. The $\mathrm{D}^{*+}$ candidates are then combined with opposite-sign $\pi^-$, and the $\mathrm{D}^{*+}\pi^-$ mass distribution, shown in Fig. 5a),
is fit to the narrow $\mathrm{D}^{**}$ states, resulting in the following branching fraction:

$$\mathcal{B}(B^- \to \mathrm{D}_1^0\,\ell^-\overline{\nu}) = (0.72 \pm 0.22 \pm 0.13)\%.$$

A fit to the impact parameter distribution of the bachelor pion $\pi^{**}$ stemming from the $\mathrm{D}^{**} \to \mathrm{D}^*\pi$ transition, for right (unlike)-sign and wrong (like)-sign combinations as shown in Fig. 5b), allows one to extract the branching fraction

$$\mathcal{B}(B^- \to \mathrm{D}^{*+}\pi^-\ell^-\overline{\nu}) = (1.15 \pm 0.17 \pm 0.14)\%,$$

where the signal comprises narrow and wide $\mathrm{D}^{**}$ resonances plus non-resonant $\mathrm{D}^{*+}\pi^-$ combinations. These results are in agreement with previous LEP and CLEO measurements.

## $\mathrm{D}^{*\prime}$ Studies

DELPHI has recently reported an excess of events in the $\mathrm{D}^{*+}\pi^-\pi^+$ mass spectrum, as shown in Fig. 6a) DstarprDELPHI . The fit yields $N = 66 \pm 14$ events, corresponding to a production rate $f_{\mathrm{D}^{*\prime}}/f_{\mathrm{D}^*} = 0.49 \pm 0.18 \pm 0.10$, a mass $M = 2637 \pm 2 \pm 6$ MeV, and a width consistent with the experimental resolution. This mass value is consistent with predictions for a radially excited $\mathrm{D}^{*\prime}$ meson. OPAL has performed a similar analysis DstarprOPAL . The resulting $\mathrm{D}^{*+}\pi^-\pi^+$ mass spectrum is shown in Fig. 6b) for data and Monte Carlo events, where a DELPHI-like signal has been added in the simulation. No excess is seen in the data ($N < 32.8$ at 95% CL), corresponding to a limit on the production rate of $f_{\mathrm{D}^{*\prime}}/f_{\mathrm{D}^*} < 0.21$ at 95% CL, thus not confirming the DELPHI result. CLEO has also examined its $\mathrm{D}^{*+}\pi^-\pi^+$ mass spectrum and does not confirm the DELPHI evidence DstarprCLEO .

## Acknowledgements

I wish to thank my colleagues from the other LEP collaborations for providing me with the results and the figure files. I also thank S. Goldfarb for his help in preparing this presentation.