Defending champion Reilly Opelka and 2018 champ Kevin Anderson headline a stacked field of singles players committed to compete in this year’s New York Open men’s tennis tournament, which takes place Feb. 9-16 on the unique black courts at NYCB LIVE’s Nassau Veterans Memorial Coliseum.
The Kumon Method was founded in Japan in 1954. Toru Kumon, a high school math teacher, was trying to help his own child. Convinced that his second-grade son could be taught the necessary skills to understand advanced mathematics, he created . . .
After a number of residents urged the city to install mats on the barrier island’s beaches so that those with mobility issues can easily access Long Beach’s iconic shoreline, the city last week released a plan to do just that.
I admit I never thought I’d do any analytical work on Resident Evil 4. Don’t get me wrong; it’s one of my favorite games. Being extremely fun to play and having a wonderfully creepy aesthetic are just a few of the game’s strengths. But it’s a silly game in a lot of ways. There aren’t larger messages and meanings to glean from it. And at some moments, the game mechanics and the overall story don’t match up very well. I’d like to discuss one of those moments: Luis’s famous death in the castle. I’ll use this example to discuss the elements of motivation and capacity in video games.
You may recall from my previous article a discussion of a player’s motivation and capacity while playing a video game. Although I mentioned these terms, I never adequately defined them, so I’ll do so now. A player’s motivation is the reason he or she cares about playing the game and effecting some kind of difference in the world and story in which he or she is immersed. This element is present in other mediums—it’s the reason we continue to read/watch the story. A player’s capacity, unlike motivation, is intrinsically tied to game mechanics; it consists of the things that the player is able to do in the game world. The analogue of capacity in other media is simply the reader’s ability to continue to watch or read the story. There are further complexities in games, however, including that a player’s degree of capacity can affect his or her motivation, but that is a discussion for another time (probably focusing on Deus Ex). For now, recall my claim that in order for a game to be truly impactful, both of these elements must exist (or noticeably not exist, as I’ll discuss in my next article). Note also that the capacity element is harder and more important to make present, since it’s the element that video games have introduced to storytelling.
So let’s see whether both motivation and capacity exist in the case of Luis’s death. For those who haven’t played the game, you can watch a video of the moment below.
[youtube https://www.youtube.com/watch?v=NsfebZKsp94&w=420&h=315]
I’ll summarize the scene: Luis was attempting to help you get a “sample” of great value from the game’s primary villain, Saddler. But Saddler kills Luis, taking back the sample (classic). As Luis lies dying before you, he gives you a temporary antidote to the parasite that you are infected with, and tells you to stop Saddler. When he finally passes away, Leon yells out his name in anguish.
So this moment most certainly provides the player with motivation, and there are multiple reasons for this. In his final moments, Luis literally gives the player the primary plot incentive for the remainder of the game: stop Saddler and get the sample back. On top of that, finding out that Leon isn’t doomed to die at the hands of his parasite energizes the player with a new sense of hope. But, most importantly, the player’s motivation is an extension of Leon’s grief. If the player empathizes with Leon (and hopefully they do if they’ve gotten this far in the game), then when Leon cries out in agony over the death of his friend, the player, too, is motivated to seek justice for a lost friend. This is the opposite of Link’s silence that I discussed in my last article. Unlike Link, who leaves you to form your experience for yourself, Leon hands you his grief, and in this way motivates you to action. This is good storytelling. Clearly Resident Evil 4 has nailed the motivational aspect of making a powerful narrative in a game. But at this point there’s no particular reason story-wise to play the game instead of just watching it.
So let’s think about capacity. Capacity is the additional element that makes a video game unique as opposed to movies or books. And the extent of what video games can do with capacity hasn’t been fully explored. So, even though it’s not necessary for a game to use dynamics of capacity in order to tell a good story, ignoring this element means missing an opportunity for better storytelling. Since capacity is unique to games, if there’s no interesting storytelling or fun mechanics, there’s no reason to play the game instead of just watching it.
Now, in order to discuss capacity in the case of death, the concept takes on a slight nuance. When a character dies in a game, the key aspect of capacity is actually the change in capacity. It doesn’t matter so much what the capacity level is in general; it just matters whether there is a difference before and after a character’s death. This provides meaning for the death within the structure of the game mechanics. For example, if a character dies in a game but is immediately replaced by an almost identical analogue, their death has little visceral impact for the player, who can’t feel the loss in the gameplay. This actually does happen: in The Legend of Dragoon, when one of your party members, Lavitz, dies, Albert, who literally takes on the stats of Lavitz, almost immediately replaces him. Even more notably, Cait Sith dies in Final Fantasy VII, but is quite literally replaced by an entirely identical avatar called “Cait Sith 2”. In these situations the player’s capacity has not changed, and thus the deaths have lost some of their potential narrative impact.
That’s kind of what it feels like when Luis dies in Resident Evil 4. Although he helps you out as a non-player character (NPC) at various points in the game, it’s not as though he’s the only character who does this. And there’s nothing to distinguish him mechanics-wise from the other characters you meet and team up with. What’s more, you continue to have access to other NPCs you can get help from after he dies. So, in terms of how it feels to play the game, you don’t even notice he’s gone. The player’s capacity to team up with others in order to stay alive changes very little when Luis dies, and though the player grieves with Leon, it’s difficult to feel a real sense of loss.
To sum up, although the story of Luis’s death in Resident Evil 4 is well told by the standards of movies and books, the developers missed the opportunity to tell this story through the game mechanics in terms of a change in player capacity. Luis is not mechanically unique, and other characters that Leon meets effectively replace him. The death is meaningful to watch, but far less meaningful to play.
Nathan Randall - Video Game Analyst
Nathan Randall is a Master’s candidate in game design at FIEA. He analyzes how gameplay mechanics and design impact the storytelling of video games.
Learn more here.
With a Terrible Fate is dedicated to developing the best video game analysis anywhere, without any ads or sponsored content. If you liked what you just read, please consider supporting us by leaving a one-time tip or becoming a contributor on Patreon.
---
abstract: 'We study supersymmetric black holes in $AdS_4$ in the framework of four dimensional gauged $\N=2$ supergravity coupled to hypermultiplets. We derive the flow equations for a general electrically gauged theory where the gauge group is Abelian and, restricting them to the fixed points, we derive the gauged supergravity analogue of the attractor equations for theories coupled to hypermultiplets. The particular models we analyze are consistent truncations of M-theory on certain Sasaki-Einstein seven-manifolds. We study the space of horizon solutions of the form $AdS_2\times \Sig_g$ with both electric and magnetic charges and find a four-dimensional solution space when the theory arises from a reduction on $Q^{111}$. For other $SE_7$ reductions, the solution space is a subspace of this. We construct explicit examples of spherically symmetric black holes numerically.'
---
Nick Halmagyi$^*$, Michela Petrini$^*$, Alberto Zaffaroni$^{\dagger}$\
$^{*}$ Laboratoire de Physique Théorique et Hautes Energies,\
Université Pierre et Marie Curie, CNRS UMR 7589,\
F-75252 Paris Cedex 05, France\
$^\dagger$ Dipartimento di Fisica, Università di Milano–Bicocca,\
I-20126 Milano, Italy\
and\
INFN, sezione di Milano–Bicocca,\
I-20126 Milano, Italy\
halmagyi@lpthe.jussieu.fr\
petrini@lpthe.jussieu.fr\
alberto.zaffaroni@mib.infn.it
Introduction
============
Supersymmetric, asymptotically $AdS_4$ black holes[^1] with regular spherical horizons have recently been discovered in $\N=2$ gauged supergravities with vector multiplets [@Cacciatori:2009iz]. These solutions have been further studied in [@DallAgata2011; @Hristov:2010ri]. The analytic solution for the entire black hole was constructed and shown to be one quarter-BPS. For particular choices of prepotential and for particular values of the gauge couplings, these black holes can be embedded into M-theory and are asymptotic to $AdS_4\times S^7$.
The goal of this work is to study supersymmetric, asymptotically $AdS_4$ black holes in more general gauged supergravities, with both vector and hypermultiplets. The specific theories we focus on are consistent truncations of string or M-theory. Supersymmetric black holes in these theories involve running hypermultiplet scalars and are substantially different from the examples in [@Cacciatori:2009iz]. The presence of hypers prevents us from finding analytic solutions of the BPS conditions, nevertheless we study analytically the space of supersymmetric horizon solutions $AdS_2\times \Sigma_g$ and show that there is a large variety of them. We will then find explicit spherically symmetric black hole solutions interpolating between $AdS_4$ and $AdS_2\times S^2$ by numerical methods. The black holes we construct have both electric and magnetic charges.
Our demand that the supergravity theory is a consistent truncation of M-theory and that the asymptotic $AdS_4$ preserves $\N=2$ supersymmetry limits our search quite severely. Some of the gauged supergravity theories studied in [@Cacciatori:2009iz] correspond to the $\N=2$ truncations [@Cvetic1999b; @Duff:1999gh] of the de-Wit/Nicolai $\N=8$ theory [@deWit:1981eq] where only massless vector multiplets are kept. In this paper we will focus on more general theories obtained as consistent truncations of M-theory on seven-dimensional Sasaki-Einstein manifolds. A consistent truncation of eleven-dimensional supergravity on a Sasaki-Einstein manifold to a universal sector was obtained in [@Gauntlett:2007ma; @Gauntlett:2009zw]. More recently the general reduction of eleven-dimensional supergravity to four dimensions on left-invariant coset manifolds with $SU(3)$-structure has been performed in [@Cassani:2012pj][^2]. Exploiting the coset structure of the internal manifold it is possible to truncate the theory in such a way as to also keep massive Kaluza-Klein multiplets. These reductions can, by their very construction, be lifted directly to the higher dimensional theory and are guaranteed to solve the higher dimensional equations of motion.
The black holes we construct represent the gravitational backreaction of bound states of M2 and M5-branes wrapped on curved manifolds in much the same manner as was detailed by Maldacena and Nunez [@Maldacena:2000mw] for D3-branes in $AdS_5 \times S^5$ and M5-branes in $AdS_7 \times S^4$. To preserve supersymmetry, a certain combination of the gauge connections in the bulk is set equal to the spin connection, having the effect of twisting the worldvolume gauge theory in the manner of [@Witten:1988xj]. For D3-branes, for particular charges, the bulk system will flow to $AdS_3 \times \Sigma_g$ in the IR and the entire solution represents an asymptotically $AdS_5$ black string. The general regular flow preserves just 2 real supercharges and thus in IIB string theory it is $\frac{1}{16}$-BPS. Similarly, for the asymptotically $AdS_7$, black M5-brane solutions, depending on the charges, the IR geometry is $AdS_5\times \Sig_g$ and the dual $CFT_4$ may have $\N=2$ or $\N=1$ supersymmetry. These $\N=2$ SCFT’s and their generalizations have been of much recent interest [@Gaiotto2012h; @Gaiotto2009] and the $\N=1$ case has also been studied [@Benini:2009mz; @Bah:2012dg].
By embedding the $AdS_4$ black holes in M-theory we can see them as M2-branes wrapping a Riemann surface. For particular charges, the bulk system will flow to $AdS_2 \times \Sigma_g$ in the IR and represents a black hole with regular horizon. The original example found in [@Caldarelli1999] can be reinterpreted in this way; it has four equal magnetic charges and can be embedded in $AdS_4 \times S^7$. The explicit analytic solution is known and it involves constant scalars and a hyperbolic horizon. A generalization of [@Maldacena:2000mw] to M2-branes wrapping $\Sig_g$ was performed in [@Gauntlett2002] where certain very symmetric twists were considered. Fully regular solutions for M2-branes wrapping a two-sphere with running scalars were finally found in [@Cacciatori:2009iz] in the form of $AdS_4$ black holes. It is noteworthy that of all these scenarios of branes wrapping Riemann surfaces, the complete analytic solution for general charges is known only for M2-branes on $\Sig_g$ with magnetic charges [@Cacciatori:2009iz].
One way to generalize these constructions of branes wrapped on $\Sig_g$ is to have more general transverse spaces. This is the focus of this article. For M5-branes one can orbifold $S^4$, while for D3-branes one can replace $S^5$ by an arbitrary $SE_5$ manifold, and indeed a suitable consistent truncation on $T^{11}$ has been constructed [@Bena:2010pr; @Cassani:2010na]. For M2-branes one can replace $S^7$ by a seven-dimensional Sasaki-Einstein manifold $SE_7$ and, as discussed above, the work of [@Cassani:2012pj] provides us with a rich set of consistent truncations to explore. Interestingly, in our analysis we find that there are no solutions for pure M2-brane backgrounds; there must be additional electric and magnetic charges corresponding to wrapped M2 and M5-branes on internal cycles. Asymptotically $AdS_4$ black holes with more general transverse space can be found in [@Donos:2008ug] and [@Donos2012d] where the solutions were studied directly in M-theory. These include the M-theory lift of the solutions we give in Sections \[sec:Q111Simp\] and \[numericalQ111\].
The BPS black holes we construct in this paper are asymptotically $AdS_4$ and as such they are states in particular (deformed) three-dimensional superconformal field theories on $S^2\times \mathbb{R}$. The solution in [@Cacciatori:2009iz] can be considered as a state in the twisted ABJM theory [@Aharony:2008ug]. The solutions we have found in this paper can be seen as states in (twisted and deformed) three dimensional Chern-Simons matter theory dual to the M-theory compactifications of homogeneous Sasaki-Einstein manifolds[^3]. One feature of these theories compared to ABJM is the presence of many baryonic symmetries that couple to the vector multiplets arising from non trivial two-cycles in the Sasaki-Einstein manifold. In terms of the worldvolume theory, the black holes considered in this paper are then electrically charged states of a Chern-Simons matter theory in a monopole background for $U(1)_R$ symmetry and other global symmetries, including the baryonic ones[^4].
Gauged $\N=2$ supergravity with hypermultiplets is the generic low-energy theory arising from a Kaluza-Klein reduction of string/M-theory on a flux background. The hypermultiplet scalars interact with the vector-multiplet scalars through the scalar potential: around a generic $AdS_4$ vacuum the eigenmodes mix the hypers and vectors. In the models we study, we employ a particular simplification on the hypermultiplet scalar manifold and find solutions where only one real hypermultiplet scalar has a non-trivial profile. Given that the simplification is so severe it is quite a triumph that solutions exist within this ansatz. It would be interesting to understand if this represents a general feature of black holes in gauged supergravity.\
The paper is organized as follows. In Section 2 we summarize the ansatz we use and the resulting BPS equations for an arbitrary electrically gauged $\N=2$ supergravity theory. The restriction of the flow equations to the horizon produces gauged supergravity analogues of the attractor equations.
In Section 3 we describe the explicit supergravity models we consider. A key step is that we use a symplectic rotation to a frame where the gauging parameters are purely electric so that we can use the supersymmetry variations at our disposal.
In Section 4 we study horizon geometries of the form $AdS_2\times \Sig_g$ where $g\neq 1$. We find a four parameter solution space for $Q^{111}$ and the solutions spaces for all the other models are truncations of this space.
In Section 5 we construct numerically black hole solutions for $Q^{111}$ and for $M^{111}$. The former solution is a gauged supergravity reproduction of the solution found in [@Donos2012d] and is distinguished in the space of all solutions by certain simplifications. For this solution, the phase of the four dimensional spinor is constant and in addition the massive vector field vanishes. The solution which we construct in $M^{111}$ turns out to be considerably more involved to compute numerically and has all fields of the theory running. In this sense we believe it to be representative of the full solution space in $Q^{111}$.
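The numerical strategy behind such interpolating solutions can be sketched schematically: impose near-horizon data at small radius and integrate the first-order radial equations outward, checking that the fields relax to their $AdS_4$ values. The toy flow below is an invented stand-in for the actual BPS system of this paper (the functions `U`, `phi` and their right-hand sides are illustrative assumptions, not the $Q^{111}$/$M^{111}$ equations); only the integration strategy mirrors the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy radial flow: U(r) mimics a warp factor approaching AdS4 behavior
# e^U ~ r/R, while phi(r) mimics a scalar relaxing to its vacuum value
# phi_inf. Both right-hand sides are invented for illustration.
R, phi_inf = 1.0, 1.0

def rhs(r, y):
    U, phi = y
    return [1.0 / r,                     # U' = 1/r  =>  e^U ~ r/R
            -2.0 * (phi - phi_inf) / r]  # deviation decays like r^(-2)

# "Near-horizon" starting data at small r: the scalar is displaced
# from its vacuum value; integrate out toward the AdS4 boundary.
r0, rf = 0.1, 100.0
y0 = [np.log(r0 / R), phi_inf + 0.5]
sol = solve_ivp(rhs, (r0, rf), y0, rtol=1e-10, atol=1e-12)

U_uv, phi_uv = sol.y[:, -1]  # boundary values of U and phi
```

In the actual problem the system is stiff and coupled, and one must tune the horizon data (a shooting problem) so that the non-normalizable modes match the desired magnetic $AdS_4$ asymptotics; the toy example only illustrates the outward integration step.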
The Black Hole Ansatz
=====================
We want to study static supersymmetric asymptotically $AdS_4$ black holes in four-dimensional $\mathcal{N}=2$ gauged supergravity. The standard conventions and notations for $\mathcal{N}=2$ gauged supergravity [@Andrianopoli:1996vr; @Andrianopoli:1996cm] are briefly reviewed in Appendix \[gsugra\].
Being supersymmetric, these black holes can be found by solving the supersymmetry variations plus the Maxwell equations. In this section we give the ansatz for the metric and the gauge fields, and a simplified form of the SUSY variations that we will study in the rest of this paper. The complete SUSY variations are derived and discussed in Appendix \[sec:BPSEqs\].
The Ansatz {#sec:bhansatz}
----------
We will focus on asymptotically $AdS_4$ black holes with spherical ($AdS_2\times S^2$) or hyperbolic ($AdS_2\times \HH^2$) horizons. The modifications required to study $AdS_2\times \Sig_g$ horizons, where $\Sigma_g$ is a Riemann surface of genus $g$, are discussed at the end of Section \[sec:BPSflow\]. The ansatz for the metric and gauge fields is
$$\begin{aligned}
ds^2 &=& e^{2U}\, dt^2 - e^{-2U}\, dr^2 - e^{2(V-U)} \left( d\theta^2 + F(\theta)^2\, d\phi^2 \right) \ \[metAnsatz\] \\
A^\Lam &=& \tq^\Lam(r)\, dt - p^\Lam(r)\, F'(\theta)\, d\phi\ ,\end{aligned}$$
with
$$F(\theta) = \begin{cases} \sin\theta\ : & S^2\ \ (\kappa=1) \\ \sinh\theta\ : & \HH^2\ \ (\kappa=-1)\ . \end{cases}$$
The electric and magnetic charges are
$$\begin{aligned}
p^\Lam &=& \int_{S^2} F^\Lam \ \[elinv\] ,\\
q_\Lam &\equiv& \int_{S^2} G_\Lam = -e^{2(V-U)}\, \cI_{\Lam\Sig}\, \tq'^{\,\Sig} + \cR_{\Lam\Sig}\, p^\Sig\ , \ \[maginv\]\end{aligned}$$
where $G_\Lam$ is the symplectic-dual gauge field strength
$$G_\Lam = \cR_{\Lam\Sig}\, F^\Sig - \cI_{\Lam\Sig}\, {*F^\Sig}\ ,$$
and $\cR_{\Lam\Sig}$, $\cI_{\Lam\Sig}$ denote the real and imaginary parts of the period matrix. In addition, we assume that all scalars in the theory, the fields $z^i$ from the $n_v$-vector multiplets and $q^u$ from the $n_h$-hypermultiplets, are functions of the radial coordinate $r$ only. Moreover, we will restrict our analysis to abelian gaugings of the hypermultiplet moduli space and assume that the gauging is purely electric. As discussed in [@deWit:2005ub], for Abelian gauge groups one can always find a symplectic frame where this is true.
The BPS Flow Equations {#sec:BPSflow}
----------------------
In Appendix \[sec:BPSEqs\], we derive the general form that the SUSY conditions take with our ansatz for the metric and gauge fields and the hypotheses discussed above for the gaugings. We will only consider spherical and hyperbolic horizons.
Throughout the text, when looking for explicit black hole solutions we make one simplifying assumption, namely that the Killing prepotentials $P^x_\Lam$ of the hypermultiplet scalar manifold $\cM_h$ satisfy[^5]
$$P^1_\Lam = P^2_\Lam = 0\ . \ \[P120\]$$
The flow equations given in this section reduce to the equations in [@DallAgata2011; @Hristov:2010ri] when the hypermultiplets are truncated away and thus $P^3_\Lam$ are constant.

The preserved supersymmetry is
$$\eps_A = e^{U/2}\, e^{i\psi/2}\, \eps_{0A}$$
where $\eps_{0A}$ is an $SU(2)$-doublet of constant spinors which satisfy the following projections
$$\begin{aligned}
\eps_{0A} &=& i\, \delta_{AB}\, \gamma^{0}\, \eps_0^{B}\ ,\\
\eps_{0A} &=& (\sigma^3)_A^{\ B}\, \gamma^{01}\, \eps_{0B}\ .\end{aligned}$$
As a result only $2$ of the 8 supersymmetries are preserved along any given flow. Imposing these two projections, the remaining content of the supersymmetry equations reduces to a set of bosonic BPS equations. Some are algebraic
$$\begin{aligned}
p^\Lam P_\Lam^3 &=& 1 \ \[pP1\] ,\\
p^\Lam k_\Lam^u &=& 0 \ \[pk1\] ,\\
\cL_r^\Lam P^3_\Lam &=& e^{2(U-V)}\, \Im\left( e^{-i\psi}\, \cZ \right) \ \[Alg1\] ,\\
\tq^\Lam P^3_\Lam &=& 2\, e^{U}\, \cL_r^\Lam P^3_\Lam \ \[qP\] ,\\
\tq^\Lam k^u_\Lam &=& 2\, e^{U}\, \cL_r^\Lam k^u_\Lam \ \[qk\] ,\end{aligned}$$
and some differential
$$\begin{aligned}
(e^{U})' &=& \cL_i^\Lam P^3_\Lam - e^{2(U-V)}\, \Re\left( e^{-i\psi}\, \cZ \right) \ \[UEq\] ,\\
V' &=& 2\, e^{-U}\, \cL_i^\Lam P^3_\Lam \ \[VEq\] ,\\
z'^{\,i} &=& e^{i\psi}\, e^{U-2V}\, g^{i\jbar}\, D_{\jbar} \bar\cZ \mp i\, e^{i\psi}\, e^{-U}\, g^{i\jbar}\, {\bar f}_{\jbar}^{\ \Lam}\, P_\Lam^3 \ \[tauEq\] ,\\
q'^{\,u} &=& 2\, e^{-U}\, h^{uv}\, \partial_v \left( \cL_i^\Lam P^3_\Lam \right) \ \[qEq\] ,\\
\psi' &=& -A_r \mp e^{-2U}\, \tq^\Lam P_\Lam^3 \ \[psiEq\] ,\\
p'^{\,\Lam} &=& 0\ ,\end{aligned}$$
where we have absorbed a phase in the definition of the symplectic sections
$$\cL^\Lam = \cL_r^\Lam + i\, \cL_i^\Lam = e^{-i\psi}\, L^\Lam\ .$$
$\cZ$ denotes the central charge
$$\begin{aligned}
\cZ &=& p^\Lam M_\Lam - q_\Lam L^\Lam \\
&=& L^\Lam\, \cI_{\Lam\Sig} \left( e^{2(V-U)}\, \tq'^{\,\Sig} + i\, p^\Sig \right) ,\\
D_{\jbar} \bar\cZ &=& {\bar f}_{\jbar}^{\ \Lam}\, \cI_{\Lam\Sig} \left( e^{2(V-U)}\, \tq'^{\,\Sig} + i\, p^\Sig \right) .\end{aligned}$$
Once $P^3_\Lam$ are fixed, the $\pm$-sign in the equations above can be absorbed by a redefinition $(p^\Lam,q_\Lam,e^U)\ra -(p^\Lam,q_\Lam,e^U)$.
Since the gravitino and hypermultiplets are charged, there are standard Dirac quantization conditions on $p^\Lam P^3_\Lam$ and $p^\Lam k^u_\Lam$ which must hold in the vacua of the theory. We see from (\[pP1\]) and (\[pk1\]) that the BPS conditions select a particular integer quantization.

Maxwell’s equation becomes
$$q'_\Lam = 2\, e^{-2U}\, e^{2(V-U)}\, h_{uv}\, k^u_\Lam\, k^v_\Sig\, \tq^\Sig\ . \ \[Max1\]$$
Notice that for the truncations of M-theory studied in this work, the non-trivial RHS will play a crucial role since massive vector fields do not carry conserved charges.
Using standard special geometry relations, one can show that the variations for the vector multiplet scalars and the warp factor $U$, (\[tauEq\]) and (\[UEq\]), are equivalent to a pair of constraints for the sections $\cL^\Lam$
$$\begin{aligned}
\partial_r \left( e^{U}\, \cL_r^\Lam \right) &=& \frac{1}{2}\, \tq'^{\,\Lam} \ \[delLr2\] ,\\
\partial_r \left( e^{-U}\, \cL_i^\Lam \right) &=& \mp \frac{1}{2}\, \cI^{\Lam\Sig}\, P_\Sig^3 \mp 2\, e^{-3U}\, \tq^\Sig P^3_\Sig\, \cL_r^\Lam \ . \ \[delLi2\]\end{aligned}$$
Importantly we can integrate (\[delLr2\]) to get
$$\tq^\Lam = 2\, e^{U}\, \cL_r^\Lam + c^\Lam \ \[qLr\]$$
for some constant $c^\Lam$. From (\[qP\]) and (\[qk\]) we see that this gauge invariance is constrained to satisfy
$$c^\Lam P^3_\Lam = 0\ , \qquad c^\Lam k^u_\Lam = 0\ .$$
We note that due to the constraint on the sections
$$\cI_{\Lam\Sig}\, \cL^\Lam \bar\cL^\Sig = -\frac{1}{2}\ ,$$
(\[delLr2\]) and (\[delLi2\]) give $(2n_v+1)$ equations.
One can show that the algebraic relation (\[Alg1\]) is an integral of motion for the rest of the system. Specifically, differentiating (\[Alg1\]) one finds a combination of the BPS equations plus Maxwell equations contracted with $\cL_i^\Lam$. One can solve for $\psi$ and find that it is the phase of a modified “central charge" $\hcZ$:
$$\hcZ = e^{i\psi}\, |\hcZ|\ , \qquad \hcZ = e^{2(U-V)}\, \cZ \mp i\, L^\Lam P^3_\Lam\ .$$
Our analysis also applies to black holes with $AdS_2\times \Sig_g$ horizons, where $\Sig_g$ is a Riemann surface of genus $g\ge 0$. The case $g>1$ is trivially obtained by taking a quotient of $\HH^2$ by a discrete group, since all Riemann surfaces with $g > 1$ can be obtained in this way. Our system of BPS equations (\[pP1\])-(\[psiEq\]) also applies to the case of flat or toroidal horizons ($g=1$)
$$\begin{aligned}
ds^2 &=& e^{2U}\, dt^2 - e^{-2U}\, dr^2 - e^{2(V-U)} \left( dx^2 + dy^2 \right) \\
A^\Lam &=& \tq^\Lam(r)\, dt - p^\Lam(r)\, x\, dy\ ,\end{aligned}$$
with
$$\begin{aligned}
q_\Lam &\equiv& -e^{2(V-U)}\, \cI_{\Lam\Sig}\, \tq'^{\,\Sig} - \cR_{\Lam\Sig}\, p^\Sig\ ,\\
\cZ &=& L^\Lam\, \cI_{\Lam\Sig} \left( e^{2(V-U)}\, \tq'^{\,\Sig} - i\, p^\Sig \right) ,\end{aligned}$$
provided we substitute the constraint (\[pP1\]) with
$$p^\Lam P_\Lam^3 = 0\ .$$
We will not consider explicitly the case of flat horizons in this paper although they have attracted some recent interest [@Donos2012d].
$AdS_2\times S^2$ and $AdS_2\times \HH^2$ Fixed Point Equations {#sec:horizonEqs}
---------------------------------------------------------------
At the horizon the scalars $(z^i,q^u)$ are constant, while the functions in the metric and gauge fields take the form
$$e^{U} = \frac{r}{R_1}\ , \qquad e^{V} = \frac{r\, R_2}{R_1}\ , \qquad \tq^\Lam = r\, \tq_0^\Lam\ ,$$
with $\tq_0^\Lam$ constant. The BPS equations are of course much simpler: in particular, they are all algebraic, and there are additional superconformal symmetries.
There are the two Dirac quantization conditions
$$\begin{aligned}
p^\Lam P_\Lam^3 &=& 1 \ \[pP\] ,\\
p^\Lam k_\Lam^u &=& 0 \ \[pk\] ,\end{aligned}$$
and (\[qP\]) and (\[qk\]) give two constraints on the electric component of the gauge field
$$\begin{aligned}
\tq_0^\Lam P^x_\Lam &=& 0 \ \[tqP\] ,\\
\tq_0^\Lam k^u_\Lam &=& 0 \ . \ \[tqk\]\end{aligned}$$
The radii are given by (\[UEq\]) and (\[VEq\])
$$\begin{aligned}
\frac{1}{R_1} &=& 2\, \cL_i^\Lam P^3_\Lam \ \[R1hor\] ,\\
\frac{R_2^2}{R_1} &=& -2\, \Re\left( e^{-i\psi}\, \cZ \right) \ . \ \[R2hor\]\end{aligned}$$
In addition, the algebraic constraint (\[Alg1\]) becomes
$$\Im\left( e^{-i\psi}\, \cZ \right) = 0$$
and the hyperino variation gives
$$\cL_i^\Lam k^u_\Lam = 0\ . \ \[hyphor\]$$
Finally, combining the relations above, we can express the charges in terms of the scalar fields
$$\begin{aligned}
p^\Lam &=& -\cL_i^\Lam - \frac{R_2^2}{2}\, \cI^{\Lam\Sig}\, P_\Sig^3 \ \[phor\] ,\\
q_\Lam &=& -\cM_{i\,\Lam} - \frac{R_2^2}{2}\, \cR_{\Lam\Sig}\, \cI^{\Sig\Delta}\, P^3_\Delta\ , \ \[qhor\]\end{aligned}$$
with $\cM_{i\,\Lam}=\Im (e^{-i\psi} M_\Lam)$. These are the gauged supergravity analogue of the *attractor equations*.
It is of interest to solve explicitly for the spectrum of horizon geometries in any given gauged supergravity theory. In particular this should involve inverting (\[phor\]) and (\[qhor\]) to express the scalar fields in terms of the charges. Even in the ungauged case this is in general not possible analytically, and the equations here are considerably more complicated. Nonetheless one can determine the dimension of the solution space and, for any particular set of charges, one can numerically solve the horizon equations to determine the values of the various scalars. In this way one can check the regularity of the solutions.
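The numerical step described in the last paragraph amounts to root-finding on a system of algebraic equations. As a schematic sketch only, the two-equation system below is an invented toy stand-in for the actual attractor system (the unknowns `v`, `R` and the equations themselves are illustrative assumptions, not the equations of this paper):

```python
import numpy as np
from scipy.optimize import fsolve

# Toy stand-in for the horizon ("attractor") equations: two algebraic
# conditions fixing a scalar v and the horizon radius R in terms of
# sample charges (p, q). The equations are invented for illustration;
# the actual system couples all vector and hypermultiplet scalars.
p, q = 1.0, 2.0

def horizon_eqs(x):
    v, R = x
    return [p - R**2 / v,   # schematic "magnetic" attractor condition
            q - R**2 * v]   # schematic "electric" attractor condition

v0, R0 = fsolve(horizon_eqs, x0=[1.0, 1.0])

# Regularity check: a good horizon needs a positive scalar and radius.
print(v0, R0)  # v0 = sqrt(2) ~ 1.4142, R0 ~ 1.1892
```

For each choice of charges one would re-run the solver and keep only roots with positive scalar vevs and real positive radii, which is precisely the regularity check mentioned above.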
Consistent Truncations of M-theory {#sec:truncations}
==================================
Having massaged the BPS equations into a neat set of bosonic equations, we now turn to particular gauged supergravity theories in order to analyze the space of black hole solutions. We want to study models which have consistent lifts to M-theory and which have an $\cN=2$ $AdS_4$ vacuum somewhere in their field space; this limits our search quite severely. Two examples known to us are the $\N=2$ truncations of the de-Wit/Nicolai $\N=8$ theory [@deWit:1981eq] and the truncations of M-theory on $SU(3)$-structure cosets [@Cassani:2012pj]. In this paper we will concentrate on some of the models constructed in [@Cassani:2012pj]. The ones of interest for us are listed in Table 1.
\[tb1\]
$M_7$ $n_v:m^2=0$ $n_v:m^2\neq0$ $n_h$
----------------------- ------------- ---------------- -------
$Q^{111}$ 2 1 1
$M^{111}$ 1 1 1
$N^{11}$ 1 2 2
$\frac{Sp(2)}{Sp(1)}$ 0 2 2
$\frac{SU(4)}{SU(3)}$ 0 1 1
: The consistent truncations on $SU(3)$-structure cosets considered in this work. $M_7$ is the 7-manifold, the second column gives the number of massless vector multiplets at the $AdS_4$ vacuum, the third column the number of massive vector multiplets, and the final column the number of hypermultiplets.
For each of these models there exists a consistent truncation to an $\N=2$ gauged supergravity with $n_v$ vector multiplets and $n_h$ hypermultiplets. We summarize here some of the features of these models, referring to [@Cassani:2012pj] for a more detailed discussion.
We denote the vector multiplet scalars by
$$z^i = b^i + i\, v^i\ , \qquad i = 1, \ldots, n_v\ ,$$
where the number of vector multiplets $n_v$ can vary from 0 to 3. Notice that all models contain some massive vector multiplets. For the hypermultiplets, we use the notation
$$\left( z^i,\ a,\ \phi,\ \xi^A,\ \txi_A \right)$$
where $a, \phi$ belong to the universal hypermultiplet. This is motivated by the structure of the quaternionic moduli spaces in these models, which can be seen as images of the c-map. The metric on quaternionic Kähler manifolds of this kind can be written in the form [@Ferrara:1989ik]
$$ds_{QK}^2 = d\phi^2 + g_{i\jbar}\, dz^i\, d\zbar^{\jbar} + \frac{e^{4\phi}}{4} \left( da + \xi^T \CC\, d\xi \right)^2 - \frac{e^{2\phi}}{2}\, d\xi^T\, \cM\, d\xi\ ,$$
where $\{z^i,\zbar^{\jbar}\,|\,i=1,\ldots,n_h-1\}$ are special coordinates on the special Kähler manifold $\cM_c$, $\{ \xi^A,\txi_A\,|\, A=1,\ldots,n_h\}$ form the symplectic vector $\xi^T=(\xi^A,\txi_A)$ and are coordinates on the axionic fibers, $\CC$ is the symplectic form and $\cM$ is the usual symmetric matrix built from the period matrix of $\cM_c$.
All these models, and more generally $\mathcal{N}=2$ actions obtained from compactifications, have a cubic prepotential for the vector multiplet scalars and both magnetic and electric gaugings of abelian isometries of the hypermultiplet scalar manifold. In ungauged supergravity the vector multiplet sector is invariant under $Sp(2n_v+2,\RR)$. The gauging typically breaks this invariance, and we can use such a transformation to find a symplectic frame where the gauging is purely electric[^6]. Since $Sp(2n_v+2,\RR)$ acts non-trivially on the prepotential $\mathcal{F}$, the rotated models we study will have a different prepotential than the original ones in [@Cassani:2012pj].
The Gaugings {#sec:gaugings}
------------
In the models we consider, the symmetries of the hypermultiplet moduli space that are gauged are non-compact shifts of the axionic fibers $\xi_A$ and $U(1)$ rotations of the special Kähler basis $z^i$. The corresponding Killing vectors are the Heisenberg vector fields
$$h^A = \partial_{\txi_A} + \frac{1}{2}\, \xi^A\, \partial_a\ , \qquad h_A = \partial_{\xi^A} - \frac{1}{2}\, \txi_A\, \partial_a\ , \qquad h = \partial_a\ ,$$
which satisfy $[h_A,h^B]=\delta_A^B h$, as well as
$$\begin{aligned}
f^A &=& \txi_A\, \partial_{\xi^A} - \xi^A\, \partial_{\txi_A}\ , \qquad (\text{indices not summed}) \\
g &=& \zbar\, \partial_{z} + z\, \partial_{\zbar}\ .\end{aligned}$$
For some purposes it is convenient to work in homogeneous coordinates on $\cM_c$,
$$\xi = \begin{pmatrix} \xi^A \\ \txi_A \end{pmatrix} , \qquad Z = \begin{pmatrix} Z^A \\ Z_A \end{pmatrix} ,$$
with $z^i = Z^i/Z^0$, and to define
$$k_\UU = (\UU Z)^A\, \partial_{Z^A} + (\bar\UU \bar Z)^A\, \partial_{\bar Z^A} + (\UU \xi)^A\, \partial_{\xi^A} + (\UU \xi)_A\, \partial_{\txi_A}\ ,$$
where $\UU$ is a $2n_h\times 2n_h$ matrix of gauging parameters. In special coordinates $k_\UU$ is a sum of the Killing vectors $f^A$ and $g$.
A general electric Killing vector field of the quaternionic Kähler manifold is given by
$$k_\Lam = k^u_\Lam\, \partial_u = \delta_{\Lam}^{\ 0}\, k_\UU + Q_{\Lam A}\, h^A + Q_\Lam^{\ A}\, h_A - e_\Lam\, h\ ,$$
where $Q_{\Lam A}$ and $Q_\Lam^A$ are also matrices of gauge parameters, while the magnetic gaugings are parameterized by [@Cassani:2012pj]
$$\tk^\Lam = -m^\Lam\, h\ .$$
For these models, the resulting Killing prepotentials can be worked out using the property
$$P^x_\Lam = k^u_\Lam\, \om^x_u\ , \qquad \tilde{P}^{x\,\Lam} = \tk^{u\,\Lam}\, \om^x_u\ , \ \[Pkw\]$$
where $\om^x_u$ is the spin connection on the quaternionic Kähler manifold [@Ferrara:1989ik]
$$\begin{aligned}
\om^1 + i\, \om^2 &=& e^{\phi + K_{c}/2}\, Z^T \CC\, d\xi\ ,\\
\om^3 &=& \frac{1}{2} \left( da + \xi^T \CC\, d\xi \right) - 2\, e^{K_c}\, Z^A\, \CC_{AB}\, d\xi^B\ .\end{aligned}$$
\^3 &=& da + \^T d- 2 e\^[K\_c]{} Z\^A\_[AB]{} d\^B . The Killing vector $k_\UU$ may contribute a constant shift to $P^3_0$, and this is indeed the case for the examples below.
As already mentioned, we will work in a rotated frame where all gaugings are electric. The form of the Killing vectors and prepotentials is the same, with the only difference that now $\tk^{\Lam}=-m^\Lam h$ and $ \tilde{P}^{x \, \Lam}$ will add an extra contribution to the electric ones.
The Models
----------
The models which we will study are summarized in Table 1. They all contain an $AdS_4$ vacuum with $\mathcal{N}=2$ supersymmetry. The vacuum corresponds to the ansatz (\[ansatz\]) with warp factors
$$e^{U} = \frac{r}{R}\ , \qquad e^{V} = \frac{r^2}{R}\ ,$$
and no electric and magnetic charges
$$p^\Lam = q_\Lam = 0\ .$$
The $AdS_4$ radius $R$ and the non-trivial scalar fields $v_i$ and $e^{-2\phi}$ are fixed in terms of the gauging parameters \[AdS4Sol\].
This is not an exact solution of the flow equations in Section \[sec:BPSflow\], which require a non-zero magnetic charge in order to satisfy (\[pP1\]). The black holes of this paper will asymptotically approach $AdS_4$ in the UV but will differ by non-normalizable terms corresponding to the magnetic charge. The corresponding asymptotic behavior has been dubbed *magnetic* $AdS$ in [@Hristov:2011ye].
### $Q^{111}$
The scalar manifolds for the $Q^{111}$ truncation are \_v=\^3, \_h= \_[2,1]{} = .
The metric on $\cM_{2,1}$ is ds\^2\_[2,1]{}=d\^2 +e\^[4]{} da+(\^0 d\_0-\_0 d\^0) \^2 + e\^[2]{}(d\^0)\^2 + d\_0\^2 , and the special Kähler base $\cM_c$ is trivial. Nonetheless we can formally use the prepotential and special coordinates on $\cM_c$ =, Z\^0=1 to construct the spin connection and Killing prepotentials.\
The natural duality frame which arises upon reduction has a cubic prepotential[^7] F=- \[FQ111\] ,\
with sections $X^\Lam = (1,z^)$ and both electric and magnetic gaugings = 0 & 4\
-4 & 0, e\_00, m\^1=m\^2=m\^3=-2. \[gaugeQ111\] Using an element $\cS_0\in Sp(8,\ZZ)$ we rotate to a frame where the gaugings are purely electric. Explicitly we have \_0=A & B\
C& D , A=D= (1,0,0,0) , B=-C= (0,-1,-1,-1)\[S0rotation\] and the new gaugings are = 0 & 4\
-4 & 0, e\_00, e\_1=e\_2=e\_3=-2. \[gaugeQ11elec\] The Freund-Rubin parameter $e_0>0$ is unfixed. In this duality frame the special geometry data are F&=&2\sqrt{X^0 X^1 X^2 X^3},\
X\^&=& (1,z\^2 z\^3,z\^1z\^3, z\^1 z\^2),\
F\_&=& (z\^1z\^2 z\^3,z\^1,z\^2,z\^3).
### $M^{111}$
The consistent truncation on $M^{111}$ has \_v=\^2, \_h= \_[2,1]{} and is obtained from the $Q^{111}$ reduction by truncating a single massless vector multiplet. This amounts to setting v\_3=v\_1, b\_3=b\_1, A\^3=A\^1. \[M110trunc\]
### $N^{11}$
The consistent truncation of M-theory on $N^{11}$ has one massless and two massive vector multiplets, along with two hypermultiplets. The scalar manifolds are \_v=\^3, \_h= \_[4,2]{} =.
The metric on $\cM_{4,2}$ is ds\_[4,2]{}\^2&=&d\^2 + +e\^[-2]{} d\^2+e\^[4]{} da+ (\^0 d\_0-\_0 d\^0+\^1 d\_1-\_1 d\^1)\
&&+e\^[2+]{}d\^0+ d\^1 \^2+e\^[2+]{}d\_0- d\_1 \^2\
&& +e\^[2-]{} d\^0- d\^1 + (d\_0- d\_1) \^2\
&& +e\^[2-]{} d\_0+ d\_1- ( d\^0+ d\^1 ) \^2 , \[SO42met\] and the special coordinate $z$ on the base is given by e\^+i = , d\^2 + e\^[-2]{} d\^2= . This differs slightly from the special coordinate used in [@Cassani:2012pj], where the metric is taken on the upper half plane instead of the disk. The prepotential and special coordinates on $\cM_c$ are given by =, Z\^A=(1,z).
The cubic prepotential on $\cM_v$ obtained from dimensional reduction is the same as for $Q^{111}$, , however the models differ because of additional gaugings Q\_1\^[ 1]{}=Q\_2\^[ 1]{}=2, Q\_3\^[ 1]{}=-4. \[QelecN11\] The duality rotation we used for the $Q^{111}$ model to make the gaugings electric would not work here since it would then make the additional gaugings magnetic. However, using the fact that $m^\Lam$ and $Q_\Lam^{\ 1} $ are orthogonal m\^Q\_\^[ 1]{}=0, we can find a duality frame where all parameters are electric and $Q_{\Lam}^{\ A}$ is unchanged. Explicitly we use \_1= \^[-1]{} where &=& 1 & 0& 0& 0\
0 & c\_& s\_&0\
0& -s\_& c\_&0\
0 & 0&0 & 1 1 & 0& 0& 0\
0 & 1& 0& 0\
0 & 0& c\_& s\_\
0& 0 & -s\_& c\_, =/4, =,\
&=& \^[-1]{} & 0\
0& ,\
&=& A & B\
C& D , A=D= (1,0,1,1), B=-C=(0,-1,0,0). The Killing vectors are then given by and .
The prepotential in this frame is rather complicated in terms of the new sections, which are in turn given as a function of the scalar fields $z^i$ by X\^&=&(3, 2z\^1-z\^2-z\^3+z\^[123]{} ,2z\^2-z\^1-z\^3+z\^[123]{} ,2z\^3-z\^1-z\^2+z\^[123]{}),\
z\^[123]{}&=& z\^1 z\^2 +z\^2z\^3 + z\^3 z\^1.
### Squashed $S^7$ $\sim\frac{Sp(2)}{Sp(1)}$
This is obtained from the $N^{11}$ model by eliminating the massless vector multiplet. Explicitly, this is done by setting v\_2=v\_1, b\_2=b\_1, A\^2=A\^1. In addition to the $\N=2$, round $S^7$ solution this model contains in its field space the squashed $S^7$ solution, although this vacuum has only $\cN=1$ supersymmetry. Thus flows from this solution lie outside the ansatz employed in this work.
### Universal $\frac{SU(4)}{SU(3)}$ Truncation
This model was first considered in [@Gauntlett:2009zw]. It contains just one massive vector multiplet and one hypermultiplet, and can be obtained from the $M^{111}$ truncation by setting v\_2=v\_1, b\_2=b\_1, A\^2=A\^1.
Horizon Geometries {#sec:hyperhorizons}
==================
We now apply the horizon equations of Section \[sec:horizonEqs\] to the models of Section \[sec:truncations\]. We find that there is a four dimensional solution space within the $Q^{111}$ model and that this governs all the other models, even though not all the other models are truncations of $Q^{111}$. The reason is that the extra gaugings present in the $N^{11}$ and squashed $S^{7}$ model can be reinterpreted as simple algebraic constraints on our $Q^{111}$ solution space.
In the following, we will use the minus sign in and subsequent equations. We also recall that $\kappa =1$ refers to $AdS_2\times S^2$ and $\kappa =-1$ to $AdS_2\times \HH^2$ horizons.
M-theory Interpretation
-----------------------
The charges of the four-dimensional supergravity theory have a clear interpretation in the eleven-dimensional theory. This interpretation is different from how the charges lift in the theory used in [@Cacciatori:2009iz], which we now review. In the consistent truncation of M-theory on $S^7$ [@deWit:1984nz; @Nicolai:2011cy] the $SO(8)$-vector fields lift to Kaluza-Klein metric modes in eleven dimensions. In the further truncation of [@Cvetic1999b; @Duff:1999gh] only the four-dimensional Cartan subgroup of $SO(8)$ is retained; the magnetic charges of the four vector fields in [@Cacciatori:2009iz] lift to the Chern numbers of four $U(1)$-bundles over $\Sig_g$. One can interpret the resulting $AdS_4$ black holes as the near horizon limit of a stack of M2-branes wrapping $\Sig_g\subset X_5$, where $X_5$ is a particular non-compact Calabi-Yau five-manifold, constructed as four line bundles over $\Sig_g$. A similar description holds for wrapped D3-branes and wrapped M5-branes in the spirit of [@Maldacena:2000mw]. The general magnetic charge configurations have been analyzed recently for D3-branes in [@Benini2013a] and M5-branes in [@Bah:2012dg]. Both these works have computed the field theory central charge and matched the gravitational calculation [^8]. This alone provides convincing evidence that the holographic dictionary works for general twists. There has not yet been any such computation performed from the quantum mechanics dual to the solutions of [@Cacciatori:2009iz], but, as long as the charges are subject to appropriate quantization so as to make $X_5$ well defined, one might imagine there exist well defined quantum mechanical duals of these solutions.
Now returning to the case at hand, the eleven-dimensional metric from which the four-dimensional theory is obtained is [@Cassani:2012pj] ds\_[11]{}\^2= e\^[2V]{} \^[-1]{} ds\_4\^2 + e\^[-V]{} ds\_[B\_6]{}\^2+ e\^[2V]{}(+A\^0)\^2 , where $B_6$ is a Kähler-Einstein six-manifold, $\tha$ is the Sasaki fiber, $V$ is a certain combination of scalar fields (not to be confused with $V$ in ), $\cK=\coeff{1}{8}e^{-K}$ with $K$ the Kähler potential, and $A^0$ is the four-dimensional graviphoton[^9]. In addition, vector fields of massless vector multiplets come from the three-form potential expanded in terms of cohomologically non-trivial two-forms $\om_i$ C\^[(3)]{}\~A\^i\_i. The truncations discussed above come from reductions with additional, cohomologically trivial two-forms, which give rise to the vector fields of massive vector multiplets. This is an important issue for our black hole solutions since only massless vector fields carry conserved charges.\
The solutions described in this section carry both electric and magnetic charges. The graviphoton will have magnetic charge $p^0$ given by , which means the eleven-dimensional geometry is really of the form AdS\_2M\_9 , where $M_9$ is a nine-manifold which can be described as a $U(1)$ fibration. The electric potential $\tq^0$ will vanish, from which we learn that this $U(1)$ is not fibered over $AdS_2$, or in other words the M2-branes that wrap $\Sig_g$ do not have momentum along this $U(1)$. In addition the charges that lift to $G^{(4)}$ correspond to the backreaction of wrapped M2 and M5-branes on $H_2(SE_7,\ZZ)$ and $H_5(SE_7,\ZZ)$.
We can check that the Chern number of this $U(1)$ fibration is quantized as follows. First we have +A\^0=d++ A\^0 where $\psi$ has periodicity $2\pi \ell$ for some $\ell\in \RR$ and $\eta$ is a Kähler potential one-form on $B_6$ which satisfies $d\eta=2J$. Such a fibration over a sphere is well defined if n= .\[nZZ\] Recalling and preempting , we see that n=p\^0 =-. For the $SE_7$ admitting spherical horizons used in this paper one has Q\^[111]{},N\^[11]{}:&&=,\
M\^[111]{}:&&= and is satisfied.
$Q^{111}$ {#sec:Q111Horizons}
---------
To describe the solution space of $AdS_2\times S^2$ or $AdS_2\times \HH^2$ solutions, we will exploit the fact that the gaugings are symmetric in the indices $i=1,2,3$. We can therefore express the solution in terms of invariant polynomials under the diagonal action of the symmetric group $\cS_3$[^10] (v\_[1]{}\^[i\_1]{}v\_2\^[i\_2]{} v\_3\^[i\_3]{}b\_[1]{}\^[i\_1]{}b\_2\^[i\_2]{} b\_3\^[i\_3]{}) =\_[S\_3]{}v\_[(1)]{}\^[i\_1]{}v\_[(2)]{}\^[i\_2]{} v\_[(3)]{}\^[i\_3]{}b\_[(1)]{}\^[i\_1]{}b\_[(2)]{}\^[i\_2]{} b\_[(3)]{}\^[i\_3]{}. First we enforce , which gives \^0=0, \_0=0 . The Killing prepotentials are then given by P\^3\_=(4-e\^[2]{}e\_0, -e\^[2]{},-e\^[2]{},-e\^[2]{}) and the non-vanishing components of the Killing vectors by k\^a\_=-(e\_0, 2,2,2).
Solving and we get two constraints on the magnetic charges p\^0=- , p\^1+p\^2+p\^3=- . \[pLamReps\] We find that the phase of the spinor is fixed = , while and are redundant (v\_1b\_2)=0.\[constr1\] Then from we get (v\_1v\_2)-(b\_1b\_2) =e\_0.\[constr2\] We can of course break the symmetry and solve the equations above for, for instance, $(b_3,v_3)$ v\_3&=& ,\
b\_3&=&- . Using we find the radius of $AdS_2$ to be R\_1\^2&=& . The algebraic constraint is nontrivial and can be used to solve for $q_0$ in terms of $(p^\Lam,q_{i},v_j,b_k)$.
Using the value of $p^0$ given in we can solve and and find e\^[2]{}&=& ,\
R\_2\^2&=& R\_1\^2 1- ,\
&&\
q\_0&=& ,\
q\_[0n]{}&=& -(v\_1\^3v\_3 b\_1\^3)+(v\_1v\_3\^3b\_1\^2b\_2) - (v\_1v\_2v\_3)\^2 (b\_1)-b\_1b\_2b\_3(v\_1\^2 b\_2\^2)+(v\_1\^2 b\_2b\_3)\
&&-v\_1v\_2v\_3 (v\_1 b\_1 b\_2\^2) -2 (v\_1 b\_2\^2 b\_3) -2 (v\_1\^2 v\_2 b\_3) ,\
&&\
p\^1&=& ,\
p\^1\_n&=& 2 v\_1\^2v\_2v\_3(v\_2\^2+v\_3\^2+v\_2v\_3) ,\
&& +v\_2 v\_3 (v\_2\^2+v\_3\^2) b\_1\^2 -2 v\_1 v\_2 v\_3(v\_2+v\_3) b\_2 b\_3 +2(v\_2\^2+v\_3\^2)b\_1\^2 b\_2 b\_3 +2 v\_1\^2 b\_2\^2 b\_3\^2\
&&-2 v\_1 v\_3\^2(v\_2+v\_3) b\_1 b\_3 +(-v\_1\^2 v\_2+2 v\_1 v\_2v\_3 + (2 v\_1+v\_2) v\_3\^2)v\_3 b\_2\^2 + (23)\
&&+ 2 v\_3\^2 b\_1b\_2\^2 b\_3 + (v\_1\^2+v\_3\^2) b\_2\^3 b\_3 + (23) ,\
&&\
q\_1&=& ,\
q\_[1n]{}&=& -v\_1v\_2v\_3 (v\_1) b\_1 -v\_1\^2b\_2 (v\_1v\_2) +(23)\
&&+2 v\_1\^2 b\_1b\_2b\_3 +v\_2\^2b\_1\^3 + 2 v\_3\^2 b\_1\^2 b\_2 + (v\_1\^2+v\_3\^2) b\_1 b\_2\^2 +(23) , where = v\_1v\_2v\_3 (v\_1) - (v\_1\^2 b\_2\^2)-( v\_1\^2 b\_2b\_2). The charges $(p^2,p^3,q_2,q_3)$ are related to $(p^1,q_1)$ by symmetry of the $i=1,2,3$ indices.
The general solution space has been parameterized by $(v_i,b_j)$ subject to the two constraints and leaving a four dimensional space. From these formulae, one can easily establish numerically the regions where the horizon geometry is regular. A key step omitted here is to invert these formulae and express the scalars $(b_i,v_j)$ in terms of the charges $(p^\Lam,q_\Lam)$. This would allow one to express the entropy and the effective $AdS_2$ radius in terms of the charges [@wip].
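The bracket used throughout the formulae above denotes the sum over the diagonal action of the symmetric group $\cS_3$ on the labels $1,2,3$. As a minimal numerical sketch of this bookkeeping (the function name and the separate exponent triples for the $v$'s and $b$'s are our own conventions, not fixed by the text), one can evaluate such symmetrized polynomials as:

```python
from itertools import permutations

def sym_bracket(v, b, iv, ib):
    """Diagonal S_3 symmetrization of v_1^{i1} v_2^{i2} v_3^{i3} b_1^{j1} b_2^{j2} b_3^{j3}:
    sum over permutations sigma acting simultaneously on the labels of the v's and b's."""
    total = 0.0
    for s in permutations(range(3)):
        term = 1.0
        for k in range(3):
            term *= v[s[k]] ** iv[k] * b[s[k]] ** ib[k]
        total += term
    return total

# e.g. the combination <v_1 b_2> entering the constraint on the scalars:
val = sym_bracket((1.0, 1.0, 1.0), (1.0, 2.0, 3.0), (1, 0, 0), (0, 1, 0))
```

Since the bracket is invariant under a simultaneous relabeling of the $v$'s and $b$'s, any expression built from it is automatically symmetric in the $i=1,2,3$ indices, which is what allows the remaining charges to be obtained from $(p^1,q_1)$ by relabeling.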
### A $Q^{111}$ simplification {#sec:Q111Simp}
The space of solutions in the $Q^{111}$ model simplifies considerably if one enforces a certain symmetry p\^1=p\^2, q\_1=-q\_2.\[Q111Simp\] One then finds a two-dimensional space of solutions part of which was found in [@Donos:2008ug; @Donos2012d] v\_2&=&v\_1, b\_3=0, b\_2=-b\_1\
b\_1&=& \_1\
e\^[2]{} &=&\
R\_1&=&\
R\_2\^2&=& R\_1\^2\
q\_0&=& 0\
q\_1&=&-\_1 \[q1Q111Simp\]\
q\_3&=& 0\
p\^0&=&-\
p\^1&=&- \[p1Q111Simp\]\
p\^3&=& -2p\^1 , where $\eps_1=\pm$ is a choice. One cannot analytically invert and to give $(v_1,v_3)$ in terms of $(p^1,q_1)$ but one can numerically map the space of charges for which regular solutions exist.
$M^{111}$ {#sec:M110Solutions}
---------
The truncation to the $M^{111}$ model does not respect the simplification . The general solution space is two-dimensional b\_3&=&b\_1, v\_3=v\_1, p\^3=p\^1, q\_3=q\_1,\
b\_1&=& \_2 ,\
b\_2&=&- ,\
e\^[2]{}&=& ,\
R\_1&=& ,\
R\_2\^2 &=& R\_1\^2 , p\^0&=&-, \[p0M110\]\
p\^2&=&-2p\^1 ,\
p\^1&=&- ,\
q\_0&=&-\
&& ,\
&&\
q\_1&=&- ,\
q\_2&=& - , \[q3M110\] where $\eps_2$ is a choice of sign.
$N^{11}$
--------
In setting $P^1_\Lam=P^2_\Lam=0 $ we get \^A=\_A=0, z\^1=\^1=0 , and so the only remaining hyper-scalars are $(\phi,a)$. With this simplification the Killing prepotentials are the same as for $Q^{111}$ P\^3\_=(4-e\^[2]{}e\_0, -e\^[2]{},-e\^[2]{},-e\^[2]{}) , while the Killing vectors have an additional component in the $\xi^1$-direction: k\^a\_&=&-(e\_0, 2,2,2),\
k\^[\^1]{}\_&=& (0,-2,-2,4).
From this one can deduce that the spectrum of horizon solutions will be obtained from that of $Q^{111}$ by imposing two additional constraints p\^k\^[\^1]{}\_&=& 0,\
\^k\^[\^1]{}\_&=& 0, which amount to p\^3&=& (p\^1+p\^2 ), \[N11constraint1\]\
v\_3&=& (v\_1+v\_2 ).\[N11constraint2\]
One can then deduce that the $AdS_2\times \Sig_g$ solution space in the $N^{11}$ model is a two-dimensional restriction of the four dimensional space from the $Q^{111}$ model. While the constraint on the scalars can easily be imposed on the general solution space, the constraint on the charges is somewhat more difficult to enforce since the charges are given in terms of the scalars. We can display explicitly a one-dimensional subspace of the $N^{11}$ family by further setting $v_3=v_1$: v\_1&=& ,\
b\_1&=& -+1 ,\
b\_3&=& --+1 ,\
R\_1\^2&=& v\_1\^[3/2]{} ,\
R\_2\^2&=& - ,\
e\^[2]{}&=& ,\
p\^1&=& ,\
p\^2&=& ,\
p\^3&=& ,\
q\_0&=& - ,\
q\_1&=& - ,\
q\_2&=&- ,\
q\_3&=& - .
$\frac{Sp(2)}{Sp(1)}$
---------------------
The truncation of M-theory on $\frac{Sp(2)}{Sp(1)}$ is obtained from the $N^{11}$ truncation by removing a massless vector multiplet. Explicitly, this is done by setting v\_2=v\_1, b\_2=b\_1, A\^2=A\^1. Alternatively one can set p\^2=p\^1, v\_2=v\_1 on the two-dimensional $M^{111}$ solution space of Section \[sec:M110Solutions\]. This leaves a unique solution, the universal solution of $\frac{SU(4)}{SU(3)}$ which we describe next.
$\frac{SU(4)}{SU(3)}$ {#sec:SU4SU3Sols}
---------------------
This solution is unique and requires $\kappa=-1$. Therefore it only exists for hyperbolic horizons: v\_1&=& ,\
b\_1&=& 0,\
R\_1&=& \^[3/4]{},\
R\_2&=& \^[3/4]{}. It is connected to the central $AdS_4$ vacuum by a flow with constant scalars, which is known analytically [@Caldarelli1999].
Black Hole solutions: numerical analysis {#numerical}
========================================
Spherically symmetric, asymptotically $AdS$ static black holes can be seen as solutions interpolating between $AdS_4$ and $AdS_2\times S^2$. We have seen that $AdS_2\times S^2$ vacua are quite generic in the consistent truncations of M-theory on Sasaki-Einstein spaces and we may expect that they arise as horizons of static black holes. In this section we will show that this is the case in various examples and we expect that this is true in general.
The system of BPS equations (\[pP1\]) - (\[psiEq\]) can be consistently truncated to the locus $$\label{hyperlocus}
\xi^A =0\, , \qquad \tilde\xi_A=0 \, ;$$ this condition is satisfied at the fixed points and enforces (\[P120\]) along the flow. The only running hyperscalar is the dilaton $\phi$. The solutions of (\[pP1\]) - (\[psiEq\]) will have a non trivial profile for the dilaton, all the scalar fields in the vector multiplets, the gauge fields and the phase of the spinor. This makes it hard to solve the equations analytically. We will find asymptotic solutions near $AdS_4$ and $AdS_2\times S^2$ by expanding the equations in series and will find an interpolating solution numerically. The problem simplifies when symmetries allow one to set all the massive gauge fields and the phase of the spinor to zero. A solution of this form can be found in the model corresponding to the truncation on $Q^{111}$. This solution is discussed in Section \[numericalQ111\] and corresponds to the class of solutions found in eleven dimensions in [@Donos2012d]. The general case is more complicated. The $M^{111}$ solution discussed in Section \[numericalM110\] is an example of the general case, with most of the fields turned on.
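The strategy just described — expand around each endpoint and tune the free coefficients until the numerical integration matches — is a standard shooting problem. As a purely illustrative sketch (a toy boundary value problem, not the BPS system; all names are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy shooting problem: y'' = -y with y(0) = 0 and y(pi/2) = 1.
# The free parameter s = y'(0) plays the role of the undetermined
# coefficients in the asymptotic expansion; it is tuned until the
# condition at the far end is met (exact answer: s = 1, y = sin t).

def endpoint_mismatch(s):
    sol = solve_ivp(lambda t, Y: [Y[1], -Y[0]], (0.0, np.pi / 2),
                    [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0  # y(pi/2) - 1

s_star = brentq(endpoint_mismatch, 0.1, 2.0)
```

For the BPS system one shoots on several IR and UV parameters at once, so a multidimensional root finder replaces `brentq`, but the logic is the same.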
Black Hole solutions in $Q^{111}$ {#numericalQ111}
---------------------------------
We now construct a black hole interpolating between the $AdS_4 \times Q^{111}$ vacuum and the horizon solutions discussed in Section \[sec:Q111Simp\] with p\^1=p\^2, q\_1=-q\_2.\[Q111Simp2\] The solution should correspond to the M-theory one found in [@Donos2012d]. Due to the high degree of symmetry of the model, we can truncate the set of fields appearing in the solution and consistently set $$v_2=v_1\,,\ \ \ b_3=0\,,\ \ \ b_2=-b_1$$ along the flow. This restriction is compatible with the following simplification on the gauge fields $$\tilde q_2(r)=-\tilde q_1(r)\,,\ \ \ \tilde q_0(r)=0\,,\ \ \ \tilde q_3(r)=0\, .$$ It follows that $$\label{simpcond}
k^a_\Lambda \, \tilde q^\Lambda =0\, , \qquad P^3_\Lambda\, \tilde q^\Lambda =0$$ for all $r$. The latter conditions lead to several interesting simplifications. $k^a_\Lambda \, \tilde q^\Lambda =0$ implies that the right hand side of Maxwell equations (\[Max1\]) vanishes and no massive vector field is turned on. Maxwell equations then reduce to conservation of the invariant electric charges $q_\Lambda$, and we can use the definition (\[maginv\]) to find an algebraic expression for $\tilde q_\Lambda$ in terms of the scalar fields. Moreover, the condition $P^3_\Lambda\, \tilde q^\Lambda =0$ implies that the phase $\psi$ of the spinor is constant along the flow. Indeed, with our choice of fields, $A_r=0$ and the equation (\[psiEq\]) reduces to $\psi^\prime =0$. The full set of BPS equations reduces to six first order equations for the six quantities $$\{ U,V,v_1,v_3,b_1,\phi \} \, .$$
For simplicity, we study the interpolating solution corresponding to the horizon solution in Section \[sec:Q111Simp\] with $v_1=v_3$. This restriction leaves a family of $AdS_2\times S^2$ solutions which can be parameterized by the value of $v_1$ or, equivalently, by the magnetic charge $p^1$. We perform our numerical analysis for the model with e\^[-2]{} = , v\_1 = v\_3 = , b\_1 = - and electric and magnetic charges p\^1=- 12 , q\_1= . We fixed $e_0=8 \sqrt{2}$. The values of the scalar fields at the $AdS_4$ point are given in (\[AdS4Sol\]).\
It is convenient to define a new radial coordinate by $dt= e^{-U} dr$. $t$ runs from $+\infty$ at the $AdS_4$ vacuum to $-\infty$ at the horizon. It is also convenient to re-define some of the scalar fields $$v_i(t) = v_i^{AdS} e^{e_i(t)}\, , \qquad \phi(t)=\phi_{AdS} -\frac12 \rho(t) \, ,$$ such that they vanish at the $AdS_4$ point. The metric functions will also be re-defined $$U(t) =u(t)+\log(R_{AdS})\, , \qquad V(t)=v(t)$$ with $u(t)=t,v(t)=2t$ at the $AdS_4$ vacuum. The BPS equations read
$$\begin{aligned}
u'&=&
e^{-e_1-\frac{e_3}{2}} - \frac{3}{4} e^{-e_1-\frac{e_3}{2}-\rho} +\frac{1}{4} e^{e_1-\frac{e_3}{2}-\rho}
+\frac{1}{2} e^{\frac{e_3}{2}-\rho} +\frac{3}{8} e^{-\frac{e_3}{2}+2 u-2 v} -\frac{3}{4} e^{-e_1+\frac{e_3}{2}+2 u-2 v}\nonumber\\
&&-\frac{1}{8} e^{e_1+\frac{e_3}{2}+2 u-2 v} - \frac{15 \sqrt{5} e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1}{32\ 2^{3/4}}+\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{16 \sqrt{2}}-\frac{3 e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1^2}{32 \sqrt{2}}\, , \nonumber\\
v'&=& 2 e^{-e_1-\frac{e_3}{2}}-\frac{3}{2} e^{-e_1-\frac{e_3}{2}-\rho}+\frac{1}{2} e^{e_1-\frac{e_3}{2}-\rho}+e^{\frac{e_3}{2}-\rho}+\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{8 \sqrt{2}}\, ,\nonumber\\
e'_1&=&
2 e^{-e_1-\frac{e_3}{2}} -\frac{3}{2} e^{-e_1-\frac{e_3}{2}-\rho} -\frac{1}{2} e^{e_1-\frac{e_3}{2}-\rho} +\frac{3}{2} e^{-e_1+\frac{e_3}{2}+2 u-2 v}
-\frac{1}{4} e^{e_1+\frac{e_3}{2}+2 u-2 v}\nonumber \\
&&+\frac{15 \sqrt{5} e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1}{16\ 2^{3/4}} +\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{8 \sqrt{2}}
+\frac{3 e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1^2}{16 \sqrt{2}} \, ,\\
e'_3&=& 2 e^{-e_1-\frac{e_3}{2}} -\frac{3}{2} e^{-e_1-\frac{e_3}{2}-\rho} +\frac{1}{2} e^{e_1-\frac{e_3}{2}-\rho} -e^{\frac{e_3}{2}-\rho}
-\frac{3}{4} e^{-\frac{e_3}{2}+2 u-2 v} -\frac{3}{2} e^{-e_1+\frac{e_3}{2}+2 u-2 v}\nonumber\\
&&-\frac{1}{4} e^{e_1+\frac{e_3}{2}+2 u-2 v} -\frac{15 \sqrt{5} e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1}{16\ 2^{3/4}}
+\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{8 \sqrt{2}} -\frac{3 e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1^2}{16 \sqrt{2}} \, ,\nonumber\\
b'_1&=& - \frac{5 \sqrt{5} e^{e_1+\frac{e_3}{2}+2 u-2 v}}{4\ 2^{1/4}} -e^{e_1-\frac{e_3}{2}-\rho} b_1- \frac{1}{2} e^{e_1+\frac{e_3}{2}+2 u-2 v} b_1 \, ,\nonumber\\
\rho'&=&
- 3 e^{-e_1-\frac{e_3}{2}-\rho}+e^{e_1-\frac{e_3}{2}-\rho}+2 e^{\frac{e_3}{2}-\rho}+\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{4 \sqrt{2}} \, .\nonumber\end{aligned}$$
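A quick sanity check on this system is to transcribe the right-hand sides and evaluate them at the UV endpoint, where $e_1=e_3=b_1=\rho=0$ and $e^{2u-2v}\to 0$: one should recover $u'=1$, $v'=2$ and vanishing derivatives for the remaining fields. A sketch of such a transcription (ours, with shorthand names for the recurring exponentials):

```python
import numpy as np

SQ2 = np.sqrt(2.0)
C1 = 15 * np.sqrt(5.0) / (32 * 2**0.75)   # coefficient of the linear-b1 terms in u'
C2 = 5 * np.sqrt(5.0) / (4 * 2**0.25)     # coefficient of the inhomogeneous term in b1'

def bps_rhs(u, v, e1, e3, b1, rho):
    """Right-hand sides of the BPS flow equations above, transcribed term by term."""
    A = np.exp(-e1 - e3/2)
    B = np.exp(-e1 - e3/2 - rho)
    C = np.exp(e1 - e3/2 - rho)
    D = np.exp(e3/2 - rho)
    E = np.exp(-e3/2 + 2*u - 2*v)
    F = np.exp(-e1 + e3/2 + 2*u - 2*v)
    G = np.exp(e1 + e3/2 + 2*u - 2*v)
    up = (A - 0.75*B + 0.25*C + 0.5*D + 0.375*E - 0.75*F - 0.125*G
          - C1*F*b1 + 3*B*b1**2/(16*SQ2) - 3*F*b1**2/(32*SQ2))
    vp = 2*A - 1.5*B + 0.5*C + D + 3*B*b1**2/(8*SQ2)
    e1p = (2*A - 1.5*B - 0.5*C + 1.5*F - 0.25*G
           + 2*C1*F*b1 + 3*B*b1**2/(8*SQ2) + 3*F*b1**2/(16*SQ2))
    e3p = (2*A - 1.5*B + 0.5*C - D - 0.75*E - 1.5*F - 0.25*G
           - 2*C1*F*b1 + 3*B*b1**2/(8*SQ2) - 3*F*b1**2/(16*SQ2))
    b1p = -C2*G - C*b1 - 0.5*G*b1
    rhop = -3*B + C + 2*D + 3*B*b1**2/(4*SQ2)
    return np.array([up, vp, e1p, e3p, b1p, rhop])
```

Evaluating `bps_rhs(0.0, 50.0, 0.0, 0.0, 0.0, 0.0)` (so that $e^{2u-2v}$ is negligible) indeed returns approximately $(1, 2, 0, 0, 0, 0)$, matching the $AdS_4$ asymptotics.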
This set of equations has two obvious symmetries. Given a solution, we can generate other ones by $$u(t)\to u(t) + d_1 \, , \qquad v(t)\to v(t) + d_1 \, , \label{simm1}$$ or by translating all fields $\phi_i$ in the solution $$\phi_i(t) \to \phi_i(t- d_2) \, , \label{simm2}$$ where $d_1$ and $d_2$ are arbitrary constants.\
We can expand the equations near the $AdS_4$ UV point. We should stress again that $AdS_4$ is not strictly a solution due to the presence of a magnetic charge at infinity. However, the metric functions $u$ and $v$ approach the $AdS_4$ value and, for large $t$, the linearized equations of motion for the scalar fields are not affected by the magnetic charge, so that we can use much of the intuition from the AdS/CFT correspondence. The spectrum of the consistent truncation around the $AdS_4$ vacuum in the absence of charges has been analyzed in detail in [@Cassani:2012pj]. It consists of two massless and one massive vector multiplet (see Table 1). By expanding the BPS equations for large $t$ we find that there exists a family of asymptotically (magnetic) $AdS$ solutions depending on three parameters, corresponding to two operators of dimension $\Delta=1$ and an operator of dimension $\Delta=4$. The asymptotic expansion of the solution is
$$\begin{aligned}
\label{expUVQ111}
u(t) &=& t+ \frac{1}{64} e^{-2 t} \left(16-6 \epsilon_1^2-3 \sqrt{2} \beta_1^2\right)+ \cdots \nonumber \\
v(t) &=& 2 t -\frac{3}{32} e^{-2 t} \left(2 \epsilon_1^2+\sqrt{2} \beta_1^2\right) +\cdots \nonumber \\
e_1(t)&=&
-\frac{1}{2} e^{-t} \epsilon_1+\frac{1}{80} e^{-2 t} \left(-100-4 \epsilon_1^2-3 \sqrt{2} \beta_1^2\right) +\cdots \non \\
&& +\frac{1}{140} e^{-4 t} \left(140 \epsilon_4+
\left(-\frac{375}{8}+ \cdots \right) t\right) +\cdots \nonumber\\
e_3(t)&=&
e^{-t} \epsilon_1+\frac{1}{80} e^{-2 t} \left(200-34 \epsilon_1^2-3 \sqrt{2} \beta_1^2\right) +\cdots \non \\
&&+e^{-4 t} \frac{1}{448} \left(1785 + 448 \epsilon_4 - 150 t+\cdots \right ) +\cdots \nonumber\\
b_1(t)&=& e^{-t} \beta_1+e^{-2 t} \left(\frac{5 \sqrt{5}}{4\ 2^{1/4}}-\epsilon_1 \beta_1\right)+ \cdots \\
\rho(t) &=& \frac{3}{40} e^{-2 t} \left(2 \epsilon_1^2-\sqrt{2} \beta_1^2\right)+\cdots +\frac{1}{17920}e^{-4 t} \left(-67575-26880 \epsilon_4
+9000 t +\cdots \right) +\cdots \, .\nonumber \end{aligned}$$
where the dots refer to exponentially suppressed terms in the expansion in $e^{-t}$ or to terms at least quadratic in the parameters $(\epsilon_1,\epsilon_4,\beta_1)$. We also set two arbitrary constant terms appearing in the expansion of $u(t)$ and $v(t)$ to zero for notational simplicity; they can be restored by applying the transformations (\[simm1\]) and (\[simm2\]). The constants $\epsilon_1$ and $\beta_1$ correspond to scalar modes of dimension $\Delta=1$ in the two different massless vector multiplets (cf. Table 7 of [@Cassani:2012pj]). The constant $\epsilon_4$ corresponds to a scalar mode with $\Delta=4$ belonging to the massive vector multiplet. A term $t e^{-4t}$ shows up at the same order as $\epsilon_4$ and is required for consistency. Notice that, although $e_1=e_3$ both at the UV and IR, the mode $e_1-e_3$ must be turned on along the flow.
In the IR, $AdS_2\times S^2$ is an exact solution of the BPS system. The relation between the two radial coordinates is $r-r_0\sim e^{ a t }$ with $a= 8 \, 2^{1/4} /3^{1/4}$, where $r_0$ is the position of the horizon. By linearizing the BPS equations around $AdS_2\times S^2$ we find three normalizable modes with behavior $e^{a \Delta t}$ with $\Delta = 0$, $\Delta=1$ and $\Delta=1.37$. The IR expansion is obtained as a double series in $e^{a t}$ and $e^{1.37 a t}$ $$\begin{aligned}
\{u(t),v(t),e_1(t),e_3(t),b_1(t),\rho(t) \} =&& \hskip -0.6truecm \{ 1.49 + a \, t , 0.85 + a\, t , -0.49, -0.49, -1.88, -0.37 \} \nonumber \\
&& \hskip -0.8truecm +\{1, 1, 0, 0, 0, 0 \} c_1 + \{-1.42, -0.53, 0.76,
0.53, -0.09, 1\} \, c_2 \, e^{a t} \nonumber \\
&& \hskip -0.8truecm +\{0.11, 0.11, 0.07,
0.93, -0.54, 1\} \, c_3 \, e^{1.37 a t}\nonumber \\ && \hskip -0.8truecm +\sum_{p,q} {\vec d}^{p,q} c_2^p c_3^q e^{(p + 1.37 q) a t}\, ,
\end{aligned}$$ where the numbers ${\vec d}^{p,q}$ can be determined numerically at any given order. The two symmetries (\[simm1\]) and (\[simm2\]) are manifest in this expression and correspond to combinations of a shift in $c_1$ and suitable rescalings of $c_2$ and $c_3$.
With a total number of six parameters for six equations we expect that the given IR and UV expansions can be matched at some point in the middle, since the equations are first order and the number of fields is equal to the number of parameters. There will be precisely one solution with the UV and IR asymptotics given above; the general solution will be obtained by applying the transformations (\[simm1\]) and (\[simm2\]). We have numerically solved the system of BPS equations and tuned the parameters in order to find an interpolating solution. The result is shown in Figure \[fig:flowQ111\].\
![Plots of $u',v'$ and $\rho$ on the left and of $e_1,e_2$ and $b_1/2$ on the right corresponding to the IR parameters $c_1=-1.208,c_2=0.989,c_3=-0.974$ and the UV parameters $\beta_1=-2.08,\epsilon_1=-1.325, \epsilon_4=5$. []{data-label="fig:flowQ111"}](PlotQ1.pdf)
![Plots of $u',v'$ and $\rho$ on the left and of $e_1,e_2$ and $b_1/2$ on the right corresponding to the IR parameters $c_1=-1.208,c_2=0.989,c_3=-0.974$ and the UV parameters $\beta_1=-2.08,\epsilon_1=-1.325, \epsilon_4=5$. []{data-label="fig:flowQ111"}](PlotQ2.pdf)
\
We would like to stress that the asymptotic expansions of the solutions contain integer powers of $r$ (and logs) in the UV ($AdS_4$) and irrational powers depending on the charges in the IR ($AdS_2\times S^2$). This suggests that it would be hard to find analytic solutions of the system of BPS equations (\[pP1\]) - (\[psiEq\]) with running hypermultiplets. By contrast, the static $AdS_4$ black holes in theories without hypermultiplets [@Cacciatori:2009iz] depend only on rational functions of $r$, which made it possible to find an explicit analytic solution.
Black Hole solutions in $M^{111}$ {#numericalM110}
---------------------------------
Whenever we cannot enforce any symmetry on the flow, things are much harder. This is the case for the interpolating solutions for $M^{111}$ which we now discuss. The solution can also be embedded in the $Q^{111}$ model and is a general prototype of the generic interpolating solution between $AdS_4$ and the horizon solutions discussed in Section \[sec:hyperhorizons\].\
Let us consider an interpolating solution corresponding to the horizon discussed in Section \[sec:M110Solutions\]. The conditions (\[simpcond\]) cannot be imposed along the flow. As a consequence, the phase of the spinor will run and a massive gauge field will be turned on. Moreover, the IR conditions $b_2=-2 b_1$ and $\tilde q_0=\tilde q_3=0, \tilde q_2=-\tilde q_1$ do not hold for finite $r$ and all gauge and vector scalar fields are turned on. The only simplification comes from the fact that on the locus (\[hyperlocus\]) the right hand side of Maxwell equations (\[Max1\]) is proportional to $k^a_\Lambda$. For $M^{111}$, $k^a_1=k^a_2$ and we still have two conserved electric charges $$( q_1 -q_2 )' =0\, , \qquad ( k^a_1 q_0 -k^a_0 q_1)' =0 \, .$$ In other words, two Maxwell equations can be reduced to first order constraints while the third remains second order. It is convenient to transform the latter equation into a pair of first order constraints. This can be done by introducing $q_0$ as a new independent field and by using one component of Maxwell equations and the definition (\[maginv\]) of $q_\Lambda$ as a set of four first order equations for ($\tilde q_0,\tilde q_1,\tilde q_2, q_0$). The set of BPS and Maxwell equations consists of twelve first order equations for twelve variables $$\{ U,V,v_1,v_2,b_1,b_2,\phi, \psi ,\tilde q_0,\tilde q_1,\tilde q_2, q_0 \} \, .$$
A major simplification arises if we integrate out the gauge fields using (\[qLr\]). The system collapses to a set of eight first order equations for eight unknowns. The resulting set of equations has singular denominators, so it is convenient to keep the extra field $q_0$ and study a system of nine first order equations for $$\{ U,V,v_1,v_2,b_1,b_2,\phi, \psi , q_0 \} \, .$$ The final system has an integral of motion which would allow one to eliminate $q_0$ algebraically in terms of the other fields.\
The system of BPS equations is too long to be reported here but it can be studied numerically and by power series near the UV and the IR. We will study the flow to the one-parameter family of horizon solutions with $v_1=v_2$ and $b_2=-2 b_1$. These horizons can be parametrized by the value of $v_1$ or, equivalently, by the magnetic charge $p^2$. We perform our numerical analysis for the model with e\^[-2]{} = , v\_1 = v\_2 = 2\^[1/4]{} , b\_1 = 2\^[1/4]{} and electric and magnetic charges p\^2=- 2 , q\_2= . We fixed $e_0=24 \sqrt{2}$. The values of the scalar fields at the $AdS_4$ point are given in (\[AdS4Sol\]). As in the previous section, it is also convenient to define a new radial coordinate by $dt= e^{-U} dr$ and to re-define some of the scalar fields and metric functions $$v_i(t) = v_i^{AdS} e^{e_i(t)}\, , \,\,\, \phi(t)=\phi_{AdS} -\frac12 \rho(t)\, , \,\,\, U(t) =u(t)+\log(R_{AdS})\, , \,\,\, V(t)=v(t) \, .$$
In the absence of charges, the spectrum of the consistent truncation around the $AdS_4$ vacuum consists of one massless and one massive vector multiplet [@Cassani:2012pj] (see Table 1). By expanding the BPS equations for large $t$ we find that there exists a family of asymptotically (magnetic) $AdS$ solutions depending on three parameters corresponding to operators of dimension $\Delta=1$, $\Delta=4$ and $\Delta=5$. The asymptotic expansion of the solution is
$$\begin{aligned}
u(t) &=& t -\frac{1}{64} e^{-2 t} \left(-16+24 \epsilon_1^2+3 \sqrt{2} \beta_1^2\right)+\cdots \nonumber\\
v(t) &=& 2 t -\frac{3}{32} e^{-2 t} \left(8 \epsilon_1^2+\sqrt{2} \beta_1^2\right)+\cdots \nonumber\\
e_1(t) &=& e^{-t} \epsilon_1-\frac{1}{80} e^{-2 t} \left(-60+16 \epsilon_1^2+3 \sqrt{2} \beta_1^2\right) +\cdots \non \\
&& -\frac{e^{-4 t} (1317+7168 \rho_4+864 t +\cdots )}{10752} +\cdots \nonumber\\
e_2(t) &=&
-2 e^{-t} \epsilon_1-\frac{1}{80} e^{-2 t} \left(120+136 \epsilon_1^2+3 \sqrt{2} \beta_1^2\right) +\cdots \non \\
&&-\frac{e^{-4 t} (6297+3584 \rho_4+432 t+\cdots )}{5376} +\cdots \nonumber\\
b_1(t) &=& e^{-t} \beta_1-\frac{1}{4} e^{-2 t} \left(3\ 2^{1/4} \sqrt{3}+4 \epsilon_1 \beta_1\right)+\cdots +\frac{1}{12} e^{-5 t}( m_3 +\cdots ) +\cdots \nonumber\\
b_2(t) &=& -2 e^{-t} \beta_1+\frac{1}{2} e^{-2 t} \left(3\ 2^{1/4} \sqrt{3}+10 \epsilon_1 \beta_1 \right)+\cdots +\frac{1}{12} e^{-5 t}( m_3 +\cdots ) +\cdots \nonumber\\
\rho(t) &=&
\frac{3}{40} e^{-2 t} \left(8 \epsilon_1^2-\sqrt{2} \beta_1^2\right) +\cdots + \frac{1}{224} e^{-4 t} (224 \rho_4+27 t +\cdots) +\cdots \nonumber\\
\theta(t) &=&
-\frac{15}{64} \sqrt{3} e^{-2 t}+\frac{9}{40} e^{-3 t} \left(3 \sqrt{3} \epsilon_1+2^{3/4} \beta_1\right)+\cdots \nonumber\\
&+ &\!\!\!\!\! \frac{e^{-5 t} \! \left(12 \sqrt{3} \epsilon_1 (2529+3312 t)+2^{1/4} 7\left(160 \sqrt{2} m_3-9 \sqrt{2} \beta_1 (-157+264 t)\right) +\cdots \right)}{35840}+\cdots \nonumber\\
q_0(t) &=&
-\frac{15 \sqrt{3}}{8\ 2^{3/4}}+\frac{27}{5} e^{-t} \left(2^{1/4} \sqrt{3} \epsilon_1-\beta_1\right)+\cdots \non \\
&&+\frac{1}{140} e^{-3 t} \left(140 m_3+27 \left(92\ 2^{1/4} \sqrt{3} \epsilon_1-77 \beta_1\right) t\right)+\cdots \nonumber \end{aligned}$$
where the dots refer to exponentially suppressed terms in the expansion in $e^{-t}$ or to terms at least quadratic in the parameters $(\epsilon_1,\rho_4,\beta_1,m_3)$. As for the $Q^{111}$ black hole, we set two arbitrary constant terms in the expansion of $u(t)$ and $v(t)$ to zero for notational simplicity; they can be restored by applying the transformations (\[simm1\]) and (\[simm2\]). The parameters $\epsilon_1$ and $\beta_1$ are associated with two modes with $\Delta=1$ belonging to the massless vector multiplet, while the parameters $\rho_4$ and $m_3$ correspond to a scalar with $\Delta=4$ and a gauge mode with $\Delta=5$ in the massive vector multiplet (cf. Table 7 of [@Cassani:2012pj]).
Around the $AdS_2\times S^2$ vacuum there are four normalizable modes with behavior $e^{a \Delta t}$, where $\Delta = 0$, $\Delta=1$, $\Delta=1.44$ and $\Delta=1.58$, with $a=4 \sqrt{2}$. At linear order the corresponding fluctuations are given by modes $(U,V,v_1,v_2,b_1,b_2,\phi, \psi , q_0 )$ proportional to $$\begin{aligned}
&& \{1,\; 1,\; 0,\; 0,\; 0,\; 0,\; 0,\; 0,\; 0\} \nonumber \\
&& \{-2.45,\; -0.97,\; 1.22,\; 0.31,\; -0.09,\; 0.40,\; 0.82,\; -0.09,\; 1\} \nonumber \\
&& \{0.05,\; 0.05,\; 0.30,\; -0.39,\; -0.17,\; -0.64,\; 0.26,\; -0.41,\; 1\} \nonumber \\
&& \{-0.27,\; -0.27,\; -1.85,\; 2.62,\; -4.81,\; -2.22,\; -1.23,\; -3.22,\; 1\}\end{aligned}$$ The mode with $\Delta=0$ is just a common shift in the metric functions corresponding to the symmetry (\[simm1\]). The other modes give rise to a triple expansion in powers $$\sum_{p,q,r} d_{p,q,r}\; c_1^p\, c_2^q\, c_3^r\; e^{(p + 1.44\, q + 1.58\, r)\, a\, t}$$ of all the fields.
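For bookkeeping purposes, the hierarchy of exponents entering this triple expansion can be enumerated explicitly. The short script below is purely illustrative (the coefficients $d_{p,q,r}$ themselves require solving the equations order by order); it lists the lowest values of $\Delta_{p,q,r} = p + 1.44\, q + 1.58\, r$:

```python
# Enumerate the lowest exponents Delta_{p,q,r} = p + 1.44 q + 1.58 r
# of the triple IR expansion (illustrative bookkeeping only).
deltas = sorted(
    (p + 1.44 * q + 1.58 * r, (p, q, r))
    for p in range(4) for q in range(4) for r in range(4)
    if (p, q, r) != (0, 0, 0)
)
for val, pqr in deltas[:6]:
    # the three linear modes come first, then their products
    print(pqr, round(val, 2))
```

The first entries reproduce the three linear modes, followed by the quadratic combinations that source the subleading terms of the expansion.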
We have a total of eight parameters for nine equations, which possess an algebraic integral of motion. We thus expect that the given IR and UV expansions can be matched at finite $t$. With some pain, and using a precision much greater than the one given in the text above, we have numerically solved the system of BPS equations and found an interpolating solution. The result is shown in Figure \[fig:flowM110\].
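The matching of the UV and IR expansions at finite $t$ is a standard shooting problem. The toy sketch below is purely illustrative (a single field with a cubic flow equation stands in for the actual nine-field BPS system): a "UV" parameter is tuned by bisection so that the numerical solution lands on an unstable fixed point instead of running away, in analogy with tuning $(\epsilon_1,\rho_4,\beta_1,m_3)$ against the IR constants.

```python
# Illustrative shooting problem (not the actual BPS system): tune the
# initial condition of y' = y - y^3 so that the flow ends on the
# unstable fixed point y = 0 rather than running away to y = +-1.

def rk4_step(f, y, t, h):
    # one fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def flow(t, y):
    return y - y**3  # toy "beta function"

def shoot(y0, t_max=20.0, n=2000):
    # integrate from t = 0 to t_max and return the deep-"IR" value
    t, y, h = 0.0, y0, t_max / n
    for _ in range(n):
        y = rk4_step(flow, y, t, h)
        t += h
    return y

# bisect on the tunable parameter: positive initial data overshoots
# toward y = +1, negative data toward y = -1
lo, hi = -0.5, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0:
        hi = mid
    else:
        lo = mid
tuned = 0.5 * (lo + hi)
```

A generic initial condition such as `shoot(0.4)` flows to the stable point $y=1$, while the tuned value keeps the solution on the fixed point; in the actual problem several parameters must be adjusted simultaneously with high-precision arithmetic, as noted in the text.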
![Plots of $u',v', (2 b_1+b_2)/3,\rho$ on the left and of $(b_2-b_1)/3, e_1,e_2,\pi-\psi$ on the right, corresponding to the values $c_1=1.7086$, $c_2=-2.4245$, $c_3=0.6713$, $c_4=-3.7021$. The UV expansion is matched up to the transformations (\[simm1\]) and (\[simm2\]).[]{data-label="fig:flowM110"}](PlotM1.pdf)
![Plots of $u',v', (2 b_1+b_2)/3,\rho$ on the left and of $(b_2-b_1)/3, e_1,e_2,\pi-\psi$ on the right, corresponding to the values $c_1=1.7086$, $c_2=-2.4245$, $c_3=0.6713$, $c_4=-3.7021$. The UV expansion is matched up to the transformations (\[simm1\]) and (\[simm2\]).[]{data-label="fig:flowM110"}](PlotM2.pdf)
[**Acknowledgements**]{} A.Z. is supported in part by INFN, the MIUR-FIRB grant RBFR10QS5J “String Theory and Fundamental Interactions”, and by the MIUR-PRIN contract 2009-KHZKRX. We would like to thank I. Bah, G. Dall’Agata, J. Gauntlett, K. Hristov, D. Klemm, J. Simon and B. Wecht for useful discussions and comments. M. P. would like to thank the members of the Theory Group at Imperial College for their kind hospitality and support while this work was being completed.
Four Dimensional Gauged Supergravity {#gsugra}
====================================
In this Appendix, in order to fix notation and conventions, we recall a few basic facts about $\mathcal{N}=2$ gauged supergravity. We use the standard conventions of [@Andrianopoli:1996vr; @Andrianopoli:1996cm].
The fields of $\N=2$ supergravity are arranged into one graviton multiplet, $n_v$ vector multiplets and $n_h$ hypermultiplets. The graviton multiplet contains the metric, the graviphoton $A_\mu^0$, and an $SU(2)$ doublet of gravitinos of opposite chirality, ($ \psi_\mu^A, \psi_{\mu \, A} $), where $A=1,2$ is an $SU(2)$ index. The vector multiplets consist of a vector $A^I_\mu$, two spin-1/2 fermions of opposite chirality, transforming as an $SU(2)$ doublet, ($\lambda^{i \,A}, \lambda^{\bar{i}}_A$), and one complex scalar $z^i$. $A=1,2$ is the $SU(2)$ index, while $I$ and $i$ run over the number of vector multiplets, $I= 1, \dots, n_{\rm V}$, $i= 1, \dots, n_{\rm V}$. Finally, the hypermultiplets contain two spin-1/2 fermions of opposite chirality, ($\zeta_\alpha, \zeta^\alpha$), and four real scalar fields, $q_u$, where $\alpha = 1, \dots, 2 n_{\rm H}$ and $u = 1, \ldots, 4 n_{\rm H}$.
The scalars in the vector multiplets parametrise a special Kähler manifold of complex dimension $n_{\rm V}$, $\mathcal{M}_{\rm SK}$, with metric $$g_{i \bar{j}} = -\,\partial_i \partial_{\bar{j}} K(z, \bar{z}) \, ,$$ where $ K(z, \bar{z})$ is the Kähler potential on $\mathcal{M}_{\rm SK}$. This can be computed by introducing homogeneous coordinates $X^\Lambda(z)$ and defining a holomorphic prepotential $\mathcal{F}(X)$, which is a homogeneous function of degree two, $$\label{Kpotdef} K(z, \bar{z}) = -\log \left[ i \left( \bar{X}^\Lambda F_\Lambda - X^\Lambda \bar{F}_\Lambda \right) \right] \, ,$$ where $F_\Lambda = \del_\Lambda \mathcal{F}$. In the paper we will use both the holomorphic sections $(X^\Lambda, F_\Lambda)$ and the symplectic sections $$(L^\Lambda, M_\Lambda) = e^{K/2} (X^\Lambda, F_\Lambda) \, .$$
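As a simple illustration of how (\[Kpotdef\]) works (a toy example of our own, not one of the truncations studied in the text), consider a single vector multiplet with the quadratic prepotential $\mathcal{F} = -i X^0 X^1$:

```latex
% Toy example (illustrative only): F = -i X^0 X^1, so that
F_0 = -i X^1 \,, \qquad F_1 = -i X^0 \,, \qquad
i \left( \bar{X}^\Lambda F_\Lambda - X^\Lambda \bar{F}_\Lambda \right)
   = 2 \left( \bar{X}^0 X^1 + X^0 \bar{X}^1 \right) .
% In special coordinates X^0 = 1, X^1 = z this gives
K = - \log \left[ 2 \left( z + \bar{z} \right) \right] \,, \qquad
\partial_z \partial_{\bar{z}} K = \frac{1}{(z + \bar{z})^2} \,,
```

the Kähler potential of the Poincaré half-plane, well defined for ${\rm Re}\, z > 0$.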
The scalars in the hypermultiplets parametrise a quaternionic manifold of real dimension $4 n_{\rm H}$, $\mathcal{M}_{\rm Q}$, with metric $h_{uv}$.
The bosonic Lagrangian is $$\label{boslag} \mathcal{L}_{\rm bos} = -\frac{1}{2} R + i \left( \bar{\mathcal{N}}_{\Lambda\Sigma}\, \mathcal{F}^{-\,\Lambda}_{\mu\nu} \mathcal{F}^{-\,\Sigma\,\mu\nu} - \mathcal{N}_{\Lambda\Sigma}\, \mathcal{F}^{+\,\Lambda}_{\mu\nu} \mathcal{F}^{+\,\Sigma\,\mu\nu} \right) + g_{i \bar{j}}\, \nabla^\mu z^i \nabla_\mu \bar{z}^{\bar{j}} + h_{u v}\, \nabla^\mu q^u \nabla_\mu q^{v} - V(z, \bar{z}, q) \, ,$$ where $\Lambda, \Sigma = 0, 1, \ldots, n_{\rm V}$. The gauge field strengths are defined as $$\mathcal{F}^{\pm\,\Lambda}_{\mu\nu} = \frac{1}{2} \left( F^\Lambda_{\mu\nu} \pm \frac{i}{2}\, \epsilon_{\mu\nu\rho\sigma} F^{\Lambda\,\rho\sigma} \right) \, ,$$ with $F^\Lambda_{\mu \nu} = \frac{1}{2} (\partial_\mu A^\Lambda_\nu - \partial_\nu A^\Lambda_\mu)$. In this notation, $A^0$ is the graviphoton and $A^\Lambda$, with $\Lambda = 1, \ldots, n_{\rm V}$, denote the vectors in the vector multiplets. The matrix $\cN_{\Lambda \Sigma}$ of the gauge kinetic term is a function of the vector multiplet scalars, $$\label{periodmat} \mathcal{N}_{\Lambda\Sigma} = \bar{F}_{\Lambda\Sigma} + 2 i\, \frac{ {\rm Im}(F_{\Lambda\Gamma})\, X^\Gamma\; {\rm Im}(F_{\Sigma\Delta})\, X^\Delta }{ X^\Gamma\, {\rm Im}(F_{\Gamma\Delta})\, X^\Delta } \, .$$
The covariant derivatives are defined as $$\label{scalarder} \begin{aligned} \nabla_\mu z^i &= \partial_\mu z^i + k^i_{\Lambda}\, A^\Lambda_{\mu} \, ,\\ \nabla_\mu q^u &= \partial_\mu q^u + k^u_{\Lambda}\, A^\Lambda_{\mu} \, , \end{aligned}$$ where $k^i_\Lambda$ and $k^u_\Lambda$ are the Killing vectors associated to the isometries of the vector and hypermultiplet scalar manifold that have been gauged. In this paper we will only gauge (electrically) abelian isometries of the hypermultiplet moduli space. The Killing vectors corresponding to quaternionic isometries have associated prepotentials: these are a set of real functions in the adjoint of $SU(2)$, satisfying $$\Omega^x_{uv}\, k^u_\Lambda = -\nabla_v P^x_\Lambda \, ,$$ where $\Om^x_{uv} = d \omega^x + 1/2 \epsilon^{x y z} \omega^y \wedge \omega^z$ and $\nabla_v$ are the curvature and covariant derivative on $\cM_{{\rm Q}}$. In the specific models we consider in the text, one can show that the Killing vectors preserve the connection $\omega^x$ and the curvature $\Omega^x_{uv}$. This allows us to simplify the prepotential equations, which reduce to $$P^x_\Lambda = k^u_\Lambda\, \omega^x_u \, .$$
Typically in models obtained from $M$/string theory compactifications, the scalar fields have both electric and magnetic charges under the gauge symmetries. However, by a symplectic transformation of the sections $(X^\Lambda, F_\Lambda)$, it is always possible to put the theory in a frame where all scalars are electrically charged. Such a transformation[^11] leaves the Kähler potential invariant, but changes the period matrix and the prepotential $\mathcal{F}(X)$.
The models we consider in this paper [@Cassani:2012pj] are of this type: they have a cubic prepotential and both electric and magnetic gaugings of some isometries of the hypermultiplet moduli space. The idea is then to perform a symplectic rotation to a frame with purely electric gaugings, allowing for sections $(\tX^{\Lam},\widetilde{F}_{\Lam})$ which are a general symplectic rotation of those obtained from the cubic prepotential.
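The effect of such a frame rotation can be checked numerically. The sketch below (our own illustration; the matrix size and variable names are arbitrary) builds a symplectic matrix out of elementary generators and verifies that the fractional transformation of the period matrix, $\cN \rightarrow (C + D\,\cN)(A + B\,\cN)^{-1}$, preserves its symmetry:

```python
import numpy as np

# Numerical check (illustrative, not from the paper): a symplectic
# rotation S = [[A, B], [C, D]] acts on the period matrix by
# N -> (C + D N)(A + B N)^{-1}, which must keep N symmetric.

rng = np.random.default_rng(0)
n = 3  # number of sections, chosen arbitrarily

Om = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.eye(n), np.zeros((n, n))]])

# elementary symplectic generators: a symmetric shift, a GL block, J
Csym = rng.normal(size=(n, n)); Csym = Csym + Csym.T
shift = np.block([[np.eye(n), np.zeros((n, n))],
                  [Csym, np.eye(n)]])
M = rng.normal(size=(n, n)) + 3 * np.eye(n)  # well-conditioned
gl = np.block([[M, np.zeros((n, n))],
               [np.zeros((n, n)), np.linalg.inv(M.T)]])
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])

S = shift @ gl @ J
assert np.allclose(S.T @ Om @ S, Om)  # S is symplectic

A, B = S[:n, :n], S[:n, n:]
C, D = S[n:, :n], S[n:, n:]

# a symmetric complex "period matrix" with definite imaginary part
X = rng.normal(size=(n, n)); Y = rng.normal(size=(n, n))
N = (X + X.T) - 1j * (Y @ Y.T + np.eye(n))
Np = (C + D @ N) @ np.linalg.inv(A + B @ N)
assert np.allclose(Np, Np.T)  # symmetry survives the rotation
```

The same check applied to the imaginary part of the rotated matrix can be used to confirm that the gauge kinetic term keeps a sensible sign in the new frame.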
The scalar potential couples the hyper- and vector multiplets, and is given by $$V(z, \bar{z}, q) = \left( g_{i \bar{j}}\, k^i_\Lambda k^{\bar j}_\Sigma + 4\, h_{u v}\, k^u_\Lambda k^v_\Sigma \right) \bar{L}^\Lambda L^\Sigma + \left( f_i^\Lambda\, g^{i \bar{j}}\, \bar{f}^{\Sigma}_{\bar{j}} - 3\, \bar{L}^\Lambda L^\Sigma \right) P^x_\Lambda P^x_\Sigma \, ,$$ where $L^\Lambda$ are the symplectic sections on $\mathcal{M}_{\rm SK}$, $f_i^\Lambda= (\partial_i + \frac{1}{2} \partial_i K) L^\Lambda$ and $P^x_\Lambda$ are the Killing prepotentials.
Maxwell’s equation is $$\label{Maxwelleq} \nabla_\mu \left( \mathcal{I}_{\Lambda\Sigma}\, F^{\Sigma\,\mu\nu} + \mathcal{R}_{\Lambda\Sigma}\, {}^*\!F^{\Sigma\,\mu\nu} \right) = h_{uv}\, k^u_\Lambda\, \nabla^\nu q^v \, ,$$ where, for simplicity of notation, we have defined the matrices $$\mathcal{R}_{\Lambda\Sigma} = {\rm Re}\, \mathcal{N}_{\Lambda\Sigma} \, , \qquad \mathcal{I}_{\Lambda\Sigma} = {\rm Im}\, \mathcal{N}_{\Lambda\Sigma} \, .$$
The full Lagrangian is invariant under $\cN =2$ supersymmetry. In the electric frame, the variations of the fermionic fields are given by $$\begin{aligned}
\label{gravitinoeq} \delta \psi_{\mu\, A} &= \nabla_\mu \varepsilon_A + i\, S_{AB}\, \gamma_\mu\, \varepsilon^B + 2 i\, \mathcal{I}_{\Lambda\Sigma}\, L^{\Sigma}\, \mathcal{F}^{-\,\Lambda}_{\mu\nu}\, \gamma^\nu\, \epsilon_{AB}\, \varepsilon^B \, ,\\
\label{gauginoeq} \delta \lambda^{iA} &= i\, \nabla_\mu z^i\, \gamma^\mu\, \varepsilon^A - g^{i \bar{j}}\, \bar{f}^{\,\Lambda}_{\bar{j}}\, \mathcal{I}_{\Lambda\Sigma}\, \mathcal{F}^{-\,\Sigma}_{\mu\nu}\, \gamma^{\mu\nu}\, \epsilon^{AB}\, \varepsilon_B + W^{i A B}\, \varepsilon_B \, ,\\
\label{hyperinoeq} \delta \zeta_\alpha &= i\, \mathcal{U}^{B\beta}_u\, \nabla_\mu q^u\, \gamma^\mu\, \varepsilon^A\, \epsilon_{AB}\, \mathbb{C}_{\alpha\beta} + N^{A}_\alpha\, \varepsilon_A \, ,\end{aligned}$$ where $ \cU^{B\beta}_u$ are the vielbeine on the quaternionic manifold and $$\begin{aligned}
S_{AB} &= \frac{i}{2}\, (\sigma_x)_A{}^{C}\, \epsilon_{BC}\, P^x_\Lambda\, L^\Lambda \, ,\\
\label{pesamatrice} W^{iAB} &= \epsilon^{AB}\, k^i_\Lambda\, \bar{L}^\Lambda + i\, (\sigma_x)_C{}^{B}\, \epsilon^{CA}\, P^x_\Lambda\, g^{i \bar{j}}\, \bar{f}^{\,\Lambda}_{\bar{j}} \, ,\\
N^A_\alpha &= 2\, \mathcal{U}^{A}_{\alpha\, u}\, k^u_\Lambda\, \bar{L}^\Lambda \, .\end{aligned}$$
Notice that the covariant derivative on the spinors $$\nabla_\mu \varepsilon_A = D_\mu \varepsilon_A + \frac{i}{2}\, (\sigma^x)_A{}^{\,B}\, A^\Lambda_\mu\, P^x_\Lambda\, \varepsilon_B$$ contains a contribution from the gauge fields, from the vector-$U(1)$ connection, $$\label{DepsA} D_\mu \varepsilon_A = \left( \partial_\mu + \frac{i}{2}\, A_\mu \right) \varepsilon_A + \frac{i}{2}\, \omega^x_\mu\, (\sigma^x)_A{}^{\,B}\, \varepsilon_B \, ,$$ the hyper-$SU(2)$ connection and the gaugings (see eqs. 4.13, 7.57, 8.5 in [@Andrianopoli:1996cm]) $$\begin{aligned}
\omega_\mu^x &= \nabla_\mu q^u\, \omega^x_u \, ,\\
\label{A1} A_\mu &= -\frac{i}{2} \left( K_i\, \nabla_\mu z^i - K_{\bar{i}}\, \nabla_\mu \bar{z}^{\bar{i}} \right) \, .\end{aligned}$$
Derivation of the BPS Equations {#sec:BPSEqs}
===============================
In this section we consider an ansatz for the metric and the gauge fields that allows for black holes with spherical or hyperbolic horizons, and we derive the general conditions for 1/4-BPS solutions. The metric and the gauge fields are taken to be $$\begin{aligned}
\label{ansApp} ds^2 &= e^{2U} dt^2 - e^{-2U} dr^2 - e^{2(V-U)} \left( d\theta^2 + F(\theta)^2 d\phi^2 \right) \, ,\\
A^\Lambda &= \tilde{q}^{\Lambda}(r)\, dt - p^{\Lambda}(r)\, F'(\theta)\, d\phi \, ,\end{aligned}$$ where the warp factors $U$ and $V$ are functions of the radial coordinate $r$ and $$F(\theta) = \begin{cases} \sin\theta & \quad S^2 \ (\kappa=1) \, ,\\ \sinh\theta & \quad \mathbb{H}^2 \ (\kappa=-1) \, . \end{cases}$$ The modifications needed for the flat case are discussed at the end of Section \[sec:BPSflow\].
We also assume that all scalars in the vector and hypermultiplets, as well as the Killing spinors $\epsilon_A$ are functions of the radial coordinate only.
To derive the BPS conditions it is useful to introduce the central charge $$\begin{aligned}
\mathcal{Z} &= \kappa\, p^\Lambda M_\Lambda - q_\Lambda L^\Lambda \nonumber\\
&= L^\Lambda\, \mathcal{I}_{\Lambda\Sigma} \left( e^{2(V-U)}\, \tilde{q}'^{\,\Sigma} + i\, \kappa\, p^\Sigma \right) \, ,\end{aligned}$$ where $q_\Lam$ is defined in (\[maginv\]) and its covariant derivative is $$D_i \mathcal{Z} = f_i^{\,\Lambda}\, \mathcal{I}_{\Lambda\Sigma} \left( e^{2(V-U)}\, \tilde{q}'^{\,\Sigma} + i\, \kappa\, p^\Sigma \right) \, .$$
In the case of flat space we need to replace $\kappa p^\Lam \rightarrow - p^\Lam$ in the definition (\[maginv\]) of $q_\Lam$ and in the above expression for ${\cal Z}$.
Gravitino Variation
-------------------
With the ansatz , the gravitino variations become 0&=&\^[1]{}\_A + e\^[-U]{}\^P\^x\_ \^0(\^x)\_A\^[ B]{} \_B +iS\_[AB]{}\^B - e\^[2 (U- V)]{} \_+ \_[AB]{} \^B \[gr1\] ,\
0&=&\^1\_1\_A +i S\_[AB]{}\^B - e\^[2 (U- V)]{} \_- \_[AB]{} \^B \[gr2\] ,\
0&=& (V’-U’)e\^U \^1\_A+ i S\_[AB]{}\^B + e\^[2 (U- V)]{} \_- \_[AB]{} \^B \[gr3\] ,\
0&=&e\^[U-V]{} \^2 \_A+(V’-U’)e\^U \^1 \_A - e\^[U-V]{} p\^P\^x\_ \^3(\^x)\_A\^[ B]{} \_B +iS\_[AB]{}\^B\
&& + e\^[2 (U- V)]{} \_+ \_[AB]{} \^B \[gr4\] , where, to simplify notations, we introduced the quantity \_= \^[01]{} i \^[02]{} (F\^[-1]{} F\^ \_ L\^p’\^) .
Let us consider first . The term proportional to $F^\prime$ must be separately zero, since it is the only $\tha$-dependent one. This implies $$\label{algc} \mathcal{I}_{\Lambda\Sigma}\, L^\Lambda\, p'^{\,\Sigma} = 0 \, .$$
Similarly, setting to zero the $\tha$-dependent terms in , which is the usual statement of [*setting the gauge connection equal to the spin connection*]{}, gives the projector \[proj1\] | | \_A = - p\^P\^x\_(\^x)\_A\^[ B]{} \^[01]{} \_B . This constraint also holds in the case of flat horizon if we set $\kappa=0$. The $\tha$-independent parts of and are equal and give a second projector \[proj2\] S\_[AB]{}\^B = (V’-U’)e\^U \^1\_A - e\^[2 (U-V)]{} \^[01]{} \_[AB]{} \^B . Subtracting the $\tha$ independent parts of and gives a third projector \[proj3\] \_A= - e\^[U-2V]{} \_[AB]{}\^[0]{} \^B + e\^[-2U]{} \^P\^x\_ \^[01]{}(\^x)\_A\^[ B]{} \_B .
Finally, subtracting and we obtain an equation for the radial dependence of the spinor \_1 \_A=\_A + e\^[-U]{}\^P\^x\_ \^[01]{}(\^x)\_A\^[ B]{} \_B. \[raddep\] In total we get three projectors, - , one differential relation on the spinor and one algebraic constraint . The idea is to further simplify these equations so as to ensure that we end up with two projectors. From now on we will specialize to the case of spherical or hyperbolic symmetry, since this is what we will use in the paper. In order to reduce the number of projectors we impose the constraint $$\label{pcq} \tilde{q}^\Lambda P^x_\Lambda = c\, e^{2U}\, p^\Lambda P^x_\Lambda \, , \qquad x=1,2,3 \, ,$$ for some real function $c$. By squaring we obtain the algebraic condition $$\left( p^\Lambda P^x_\Lambda \right)^2 = 1 \, ,$$ which can be used to rewrite (\[pcq\]) as $$\label{ceq} c = e^{-2U}\, \tilde{q}^\Lambda P^x_\Lambda\; p^\Sigma P^x_\Sigma \, .$$
Substituting in and using , we obtain the projector $$\label{projNa} \varepsilon_A = -\,\frac{\mathcal{Z}}{|\mathcal{Z}|}\, \epsilon_{A B}\, \gamma^0\, \varepsilon^B \, ,$$ which, squared, gives the norm of $\cZ$ $$\label{normZ} |\mathcal{Z}|^2 = e^{4V - 2U} \left[ (2U'-V')^2 + c^2 \right] \, .$$
Then we can rewrite as \_A=ie\^[i ]{} \_[AB]{}\^[0]{} \^B \[proj4\] , where $e^{i \psi}$ is the relative phase between $\cZ$ and $2 U^\prime - V^\prime - i c$ \[phase\] e\^[i ]{} = - . Using the definition of $S_{AB}$ given in and the projectors and , we can reduce to a scalar equation i \^P\^x\_ p\^P\_\^x = e\^[2(U-V)]{} e\^[- i]{} -(V’-U’)e\^[U]{} , \[LPrel\] where we defined \^= e\^[- i]{}L\^=\_r+ i \_i. Combining and , we can also write two equations for the warp factors && e\^U U’= -i \^P\^x\_ p\^P\_\^x - e\^[2 (U- V)]{} e\^[-i ]{} + i c e\^[U]{} ,\
&& e\^U V’ = -2i \^P\^x\_ p\^P\_\^x + i c e\^[U]{} .
Using the projectors above, becomes \_r \_A=-A\_r\_A -\^x\_[r]{}(\^x)\_A\^[ B]{}\_B+\_A - \_A.
Gaugino Variation
-----------------
The gaugino variation is \[zdot1\] i e\^[U]{} z’\^i \^1 \^A +e\^[2 (U-V)]{} g\^[i |[j]{}]{} \^[AB]{}\_B + W\^[i A B]{} \_B = 0 . $\cM$ is the only $\tha$-dependent term and must be set to zero separately, giving $$\label{Bpfbar} \bar{f}^{\,\Lambda}_{\bar{j}}\, \mathcal{I}_{\Lambda\Sigma}\, p'^{\,\Sigma} = 0 \, .$$
Combining and , and using standard orthogonality relations between the sections $X^\Lambda$, we conclude that $$p'^{\,\Lambda} = 0 \, .$$
Continuing with , we use again and to obtain $$\label{gauge3} e^{-i \psi}\, e^{U}\, z'^{\,i} = e^{2 (U-V)}\, g^{i \bar{j}}\, D_{\bar{j}} \bar{\mathcal{Z}} - i\, g^{i \bar{j}}\, \bar{f}^{\,\Lambda}_{\bar{j}}\, P^x_\Lambda\; p^\Sigma P^x_\Sigma \, .$$
Hyperino Variation
------------------
The hyperino variation gives i \_ \^[B]{}\_u e\^U \^1 q’\^u + \^k\^u\_ e\^[-U]{} \^0 -F\^[-1]{}F’ e\^[U-V]{} p\^k\^u\_\^3 \_[AB]{}\^A+ 2 [U]{}\_[u]{}\^A k\^u\_ \^ \_A = 0 . First off, we need to set the $\tha$-dependent part to zero, $$\label{kqzero1} k^u_\Lambda\, p^\Lambda = 0 \, .$$ The projectors and can be used to simplify the remaining equation - e\^[U]{} q’\^u \^[B]{}\_[ u]{} p\^P\^x\_(\^x)\_B\^C \_C + \^[A]{}\_[ u]{} ( 2 k\^u\_ \^ - e\^[-U]{} \^k\^u\_) \^A = 0 , which can then be reduced to a scalar equation $$\label{hyper1} -i\, h_{uv}\, q'^{\,u} + e^{-2U}\, p^\Lambda P^y_\Lambda\; \tilde{q}^\Sigma\, \nabla_v P^y_\Sigma - 2\, e^{-U}\, p^\Lambda P^x_\Lambda\; \nabla_v \!\left( \hat{L}^\Sigma P^x_\Sigma \right) = 0 \, .$$ Using the standard relations (we use the conventions of [@Andrianopoli:1996vr]) $$\begin{aligned}
-i\, \Omega_{u}{}^{x\,v}\, \mathcal{U}_{v}^{\beta A} &= \mathcal{U}^{B\beta}_{u}\, (\sigma^x)_B{}^{A} \, ,\\
\Omega^x_{uw}\, \Omega^{y\,w}{}_{v} &= -\delta^{xy}\, h_{uv} - \epsilon^{xyz}\, \Omega^z_{uv} \, ,\\
k^u_\Lambda\, \Omega^x_{uv} &= -\nabla_v P^x_\Lambda \, ,\end{aligned}$$ we can reduce to $$-i\, h_{uv}\, q'^{\,u} + e^{-2U}\, p^\Lambda P^y_\Lambda\; \tilde{q}^\Sigma\, \nabla_v P^y_\Sigma - 2\, e^{-U}\, p^\Lambda P^x_\Lambda\; \nabla_v \!\left( \hat{L}^\Sigma P^x_\Sigma \right) = 0 \, .$$ The real and imaginary parts give $$\begin{aligned}
q'^{\,u} &= 2\, e^{-U}\, h^{uv}\, p^\Lambda\, \nabla_v P^x_\Lambda\; \hat{L}_i^{\Sigma}\, P^x_\Sigma \, ,\\
0 &= \tilde{q}^\Lambda\, k^u_\Lambda - 2\, e^{U}\, \hat{L}_r^{\Lambda}\, k^u_\Lambda \, .\end{aligned}$$
Summary of BPS Flow Equations
-----------------------------
It is worthwhile at this point to summarize the BPS equations. The algebraic equations are $$\begin{aligned}
p'^{\,\Lambda} &= 0 \, , \label{BPS1}\\
\left( p^\Lambda P^x_\Lambda \right)^2 &= 1 \, , \label{BPS2}\\
k^u_\Lambda\, p^\Lambda &= 0 \, , \label{BPS3}\\
\tilde{q}^\Lambda P^x_\Lambda &= c\, e^{2U}\, p^\Lambda P^x_\Lambda \, , \label{BPS4}\\
\tilde{q}^\Lambda k^u_\Lambda &= 2\, e^{U}\, \hat{L}_r^{\Lambda}\, k^u_\Lambda \, , \label{BPS5}\end{aligned}$$ while the differential equations are $$\begin{aligned}
e^{U}\, U' &= -i\, \hat{L}^\Lambda P^x_\Lambda\; p^\Sigma P^x_\Sigma + e^{2(U-V)}\, e^{-i\psi}\, \mathcal{Z} + i\, c\, e^{U} \, ,\\
e^{U}\, V' &= -2 i\, \hat{L}^\Lambda P^x_\Lambda\; p^\Sigma P^x_\Sigma + i\, c\, e^{U} \, ,\\
e^{-i\psi}\, e^{U}\, z'^{\,i} &= e^{2(U-V)}\, g^{i \bar{j}}\, D_{\bar{j}} \bar{\mathcal{Z}} - i\, g^{i \bar{j}}\, \bar{f}^{\,\Lambda}_{\bar{j}}\, P^x_\Lambda\; p^\Sigma P^x_\Sigma \, , \label{gaugino}\\
q'^{\,u} &= 2\, e^{-U}\, h^{uv}\, p^\Lambda\, \nabla_v P^x_\Lambda\; \hat{L}_i^{\Sigma}\, P^x_\Sigma \, .\end{aligned}$$
In the case of flat horizon equation (\[BPS2\]) is replaced by $( p^\Lam P_\Lam^x)^2=0$.
Maxwell’s Equation
------------------
Maxwell’s equation is $$\nabla_\mu \left( \mathcal{I}_{\Lambda\Sigma}\, F^{\Sigma\,\mu\nu} + \mathcal{R}_{\Lambda\Sigma}\, {}^*\!F^{\Sigma\,\mu\nu} \right) = h_{uv}\, k^u_\Lambda\, \nabla^\nu q^v \, ,$$ which gives q’\_-e\^[2(V-U)]{} \_ ’\^+\_ p\^’= 2 e\^[2V-4U]{}h\_[uv]{} k\^u\_k\^v\_\^
In the case of flat horizon we need to replace $\kappa p^\Lam\rightarrow - p^\Lam$.
[10]{}
S. L. Cacciatori and D. Klemm, “[Supersymmetric AdS(4) black holes and attractors]{},” [*JHEP*]{} [**1001**]{} (2010) 085, [[0911.4926]{}](http://arXiv.org/abs/0911.4926).
G. Dall’Agata and A. Gnecchi, “[Flow equations and attractors for black holes in N = 2 U(1) gauged supergravity]{},” [*JHEP*]{} [**1103**]{} (2011) 037, [[1012.3756]{}](http://arXiv.org/abs/1012.3756). K. Hristov and S. Vandoren, “[Static supersymmetric black holes in AdS4 with spherical symmetry]{},” [*JHEP*]{} [**1104**]{} (2011) 047, [[1012.4314]{}](http://arXiv.org/abs/1012.4314). M. Cvetic, M. Duff, P. Hoxha, J. T. Liu, H. Lu, [*et al.*]{}, “[Embedding AdS black holes in ten-dimensions and eleven-dimensions]{},” [*Nucl.Phys.*]{} [ **B558**]{} (1999) 96–126, [[hep-th/9903214]{}](http://arXiv.org/abs/hep-th/9903214). M. J. Duff and J. T. Liu, “Anti-de sitter black holes in gauged n = 8 supergravity,” [*Nucl. Phys.*]{} [**B554**]{} (1999) 237–253, [[hep-th/9901149]{}](http://arXiv.org/abs/hep-th/9901149). B. de Wit and H. Nicolai, “N=8 supergravity with local so(8) x su(8) invariance,” [*Phys. Lett.*]{} [**B108**]{} (1982) 285. J. P. Gauntlett and O. Varela, “[Consistent Kaluza-Klein reductions for general supersymmetric AdS solutions]{},” [*Phys.Rev.*]{} [**D76**]{} (2007) 126007, [[0707.2315]{}](http://arXiv.org/abs/0707.2315).
J. P. Gauntlett, S. Kim, O. Varela, and D. Waldram, “[Consistent supersymmetric Kaluza–Klein truncations with massive modes]{},” [*JHEP*]{} [**04**]{} (2009) 102, [[arXiv:0901.0676]{}](http://arXiv.org/abs/arXiv:0901.0676). D. Cassani, P. Koerber, and O. Varela, “[All homogeneous N=2 M-theory truncations with supersymmetric AdS4 vacua]{},” [*JHEP*]{} [**1211**]{} (2012) 173, [[1208.1262]{}](http://arXiv.org/abs/1208.1262). A. Donos, J. P. Gauntlett, N. Kim, and O. Varela, “[Wrapped M5-branes, consistent truncations and AdS/CMT]{},” [*JHEP*]{} [**1012**]{} (2010) 003, [[1009.3805]{}](http://arXiv.org/abs/1009.3805).
D. Cassani and P. Koerber, “[Tri-Sasakian consistent reduction]{},” [*JHEP*]{} [**1201**]{} (2012) 086, [[1110.5327]{}](http://arXiv.org/abs/1110.5327). A.-K. Kashani-Poor and R. Minasian, “Towards reduction of type ii theories on su(3) structure manifolds,” [*JHEP*]{} [**03**]{} (2007) 109, [[hep-th/0611106]{}](http://arXiv.org/abs/hep-th/0611106). A.-K. Kashani-Poor, “[Nearly Kaehler Reduction]{},” [*JHEP*]{} [**11**]{} (2007) 026, [[arXiv:0709.4482]{}](http://arXiv.org/abs/arXiv:0709.4482). J. P. Gauntlett and O. Varela, “[Universal Kaluza-Klein reductions of type IIB to N=4 supergravity in five dimensions]{},” [*JHEP*]{} [**1006**]{} (2010) 081, [[1003.5642]{}](http://arXiv.org/abs/1003.5642).
K. Skenderis, M. Taylor, and D. Tsimpis, “[A Consistent truncation of IIB supergravity on manifolds admitting a Sasaki-Einstein structure]{},” [ *JHEP*]{} [**1006**]{} (2010) 025, [[ 1003.5657]{}](http://arXiv.org/abs/1003.5657).
D. Cassani, G. Dall’Agata, and A. F. Faedo, “[Type IIB supergravity on squashed Sasaki-Einstein manifolds]{},” [*JHEP*]{} [**1005**]{} (2010) 094, [[1003.4283]{}](http://arXiv.org/abs/1003.4283).
J. T. Liu, P. Szepietowski, and Z. Zhao, “[Supersymmetric massive truncations of IIb supergravity on Sasaki-Einstein manifolds]{},” [*Phys.Rev.*]{} [ **D82**]{} (2010) 124022, [[1009.4210]{}](http://arXiv.org/abs/1009.4210).
I. Bena, G. Giecold, M. Grana, N. Halmagyi, and F. Orsi, “[Supersymmetric Consistent Truncations of IIB on $T^{1,1}$]{},” [*JHEP*]{} [**1104**]{} (2011) 021, [[arXiv:1008.0983]{}](http://arXiv.org/abs/arXiv:1008.0983).
D. Cassani and A. F. Faedo, “[A Supersymmetric consistent truncation for conifold solutions]{},” [*Nucl.Phys.*]{} [**B843**]{} (2011) 455–484, [[1008.0883]{}](http://arXiv.org/abs/1008.0883).
J. M. Maldacena and C. Nunez, “Supergravity description of field theories on curved manifolds and a no go theorem,” [*Int. J. Mod. Phys.*]{} [**A16**]{} (2001) 822–855, [[hep-th/0007018]{}](http://arXiv.org/abs/hep-th/0007018). E. Witten, “Topological sigma models,” [*Commun. Math. Phys.*]{} [**118**]{} (1988) 411. D. Gaiotto, “[N=2 dualities]{},” [*JHEP*]{} [**1208**]{} (2012) 034, [[0904.2715]{}](http://arXiv.org/abs/0904.2715). D. Gaiotto, G. W. Moore, and A. Neitzke, “[Wall-crossing, Hitchin Systems, and the WKB Approximation]{},” [[0907.3987]{}](http://arXiv.org/abs/0907.3987). F. Benini, Y. Tachikawa, and B. Wecht, “[Sicilian gauge theories and N=1 dualities]{},” [*JHEP*]{} [**1001**]{} (2010) 088, [[0909.1327]{}](http://arXiv.org/abs/0909.1327). I. Bah, C. Beem, N. Bobev, and B. Wecht, “[Four-Dimensional SCFTs from M5-Branes]{},” [*JHEP*]{} [**1206**]{} (2012) 005, [[1203.0303]{}](http://arXiv.org/abs/1203.0303). M. M. Caldarelli and D. Klemm, “[Supersymmetry of Anti-de Sitter black holes]{},” [*Nucl.Phys.*]{} [**B545**]{} (1999) 434–460, [[hep-th/9808097]{}](http://arXiv.org/abs/hep-th/9808097). J. P. Gauntlett, N. Kim, S. Pakis, and D. Waldram, “[Membranes wrapped on holomorphic curves]{},” [*Phys.Rev.*]{} [**D65**]{} (2002) 026003, [[hep-th/0105250]{}](http://arXiv.org/abs/hep-th/0105250). A. Donos, J. P. Gauntlett, and N. Kim, “[AdS Solutions Through Transgression]{},” [*JHEP*]{} [**0809**]{} (2008) 021, [[0807.4375]{}](http://arXiv.org/abs/0807.4375).
A. Donos and J. P. Gauntlett, “[Supersymmetric quantum criticality supported by baryonic charges]{},” [*JHEP*]{} [**1210**]{} (2012) 120, [[1208.1494]{}](http://arXiv.org/abs/1208.1494). O. Aharony, O. Bergman, D. L. Jafferis, and J. Maldacena, “[N=6 Superconformal Chern-Simons-matter Theories, M2-branes and Their Gravity Duals]{},” [ *JHEP*]{} [**0810**]{} (2008) 091, [[0806.1218]{}](http://arXiv.org/abs/0806.1218). D. Fabbri [*et al.*]{}, “3d superconformal theories from sasakian seven-manifolds: New nontrivial evidences for ads(4)/cft(3),” [*Nucl. Phys.*]{} [**B577**]{} (2000) 547–608, [[hep-th/9907219]{}](http://arXiv.org/abs/hep-th/9907219). D. L. Jafferis and A. Tomasiello, “[A simple class of N=3 gauge/gravity duals]{},” [[arXiv:0808.0864]{}](http://arXiv.org/abs/arXiv:0808.0864). A. Hanany and A. Zaffaroni, “[Tilings, Chern-Simons Theories and M2 Branes]{},” [*JHEP*]{} [**0810**]{} (2008) 111, [[ 0808.1244]{}](http://arXiv.org/abs/0808.1244).
D. Martelli and J. Sparks, “[Notes on toric Sasaki-Einstein seven-manifolds and $AdS_4/CFT_3$]{},” [[arXiv:0808.0904]{}](http://arXiv.org/abs/arXiv:0808.0904). A. Hanany, D. Vegh, and A. Zaffaroni, “[Brane Tilings and M2 Branes]{},” [ *JHEP*]{} [**0903**]{} (2009) 012, [[ 0809.1440]{}](http://arXiv.org/abs/0809.1440).
D. Martelli and J. Sparks, “[AdS4/CFT3 duals from M2-branes at hypersurface singularities and their deformations]{},” [*JHEP*]{} [**12**]{} (2009) 017, [[0909.2036]{}](http://arXiv.org/abs/0909.2036). S. Franco, I. R. Klebanov, and D. Rodriguez-Gomez, “[M2-branes on Orbifolds of the Cone over Q\*\*1,1,1]{},” [*JHEP*]{} [**0908**]{} (2009) 033, [[0903.3231]{}](http://arXiv.org/abs/0903.3231). F. Benini, C. Closset, and S. Cremonesi, “[Chiral flavors and M2-branes at toric CY4 singularities]{},” [*JHEP*]{} [**1002**]{} (2010) 036, [[0911.4127]{}](http://arXiv.org/abs/0911.4127). K. Hristov, A. Tomasiello, and A. Zaffaroni, “[Supersymmetry on Three-dimensional Lorentzian Curved Spaces and Black Hole Holography]{},” [[1302.5228]{}](http://arXiv.org/abs/1302.5228). L. Andrianopoli [*et al.*]{}, “[General Matter Coupled N=2 Supergravity]{},” [*Nucl. Phys.*]{} [**B476**]{} (1996) 397–417, [[hep-th/9603004]{}](http://arXiv.org/abs/hep-th/9603004). L. Andrianopoli [*et al.*]{}, “[N = 2 supergravity and N = 2 super Yang-Mills theory on general scalar manifolds: Symplectic covariance, gaugings and the momentum map]{},” [*J. Geom. Phys.*]{} [**23**]{} (1997) 111–189, [[hep-th/9605032]{}](http://arXiv.org/abs/hep-th/9605032). B. de Wit, H. Samtleben, and M. Trigiante, “[Magnetic charges in local field theory]{},” [*JHEP*]{} [**09**]{} (2005) 016, [[hep-th/0507289]{}](http://arXiv.org/abs/hep-th/0507289). S. Ferrara and S. Sabharwal, “Quaternionic manifolds for type ii superstring vacua of calabi-yau spaces,” [*Nucl. Phys.*]{} [**B332**]{} (1990) 317. K. Hristov, C. Toldo, and S. Vandoren, “[On BPS bounds in D=4 N=2 gauged supergravity]{},” [*JHEP*]{} [**1112**]{} (2011) 014, [[1110.2688]{}](http://arXiv.org/abs/1110.2688). B. de Wit, H. Nicolai, and N. P. Warner, “[The Embedding Of Gauged N=8 Supergravity Into d = 11 Supergravity]{},” [*Nucl. Phys.*]{} [**B255**]{} (1985) 29. H. Nicolai and K. Pilch, “[Consistent Truncation of d = 11 Supergravity on AdS$_4 \times S^7$]{},” [*JHEP*]{} [**1203**]{} (2012) 099, [[1112.6131]{}](http://arXiv.org/abs/1112.6131). F. Benini and N. Bobev, “[Two-dimensional SCFTs from wrapped branes and c-extremization]{},” [[1302.4451]{}](http://arXiv.org/abs/1302.4451). P. Szepietowski, “[Comments on a-maximization from gauged supergravity]{},” [*JHEP*]{} [**1212**]{} (2012) 018, [[1209.3025]{}](http://arXiv.org/abs/1209.3025).
P. Karndumri and E. O Colgain, “[Supergravity dual of c-extremization]{},” [*Phys. Rev.*]{} [**D 87**]{} (2013) 101902, [[1302.6532]{}](http://arXiv.org/abs/1302.6532).
N. Halmagyi, M. Petrini, and A. Zaffaroni, “[work in progress]{},”.
[^1]: To be precise, the black holes we are discussing will asymptotically approach $AdS_4$ in the UV but will differ by non-normalizable terms corresponding to some magnetic charge. We will nevertheless refer to them as asymptotically $AdS_4$ black holes.
[^2]: Other M-theory reductions have been studied in [@Donos:2010ax; @Cassani:2011fu] and similar reductions have been performed in type IIA/IIB; see for example [@Kashani-Poor:2006si; @KashaniPoor:2007tr; @Gauntlett:2010vu; @Skenderis:2010vz; @Cassani:2010uw; @Liu:2010pq; @Bena:2010pr; @Cassani:2010na].
[^3]: For a discussion of these compactifications from the point of view of holography and recent results in identifying the dual field theories see [@Fabbri:1999hw; @Jafferis:2008qz; @Hanany:2008cd; @Martelli:2008rt; @Hanany:2008fj; @Martelli:2009ga; @Franco:2009sp; @Benini:2009qs].
[^4]: For a recent discussion from the point of view of holography see [@Hristov:2013spa].
[^5]: For the models studied in this paper, this also implies $\omhat_\mu^x=0$ in (\[DepsA\]).
[^6]: This is always possible when the gauging is abelian [@deWit:2005ub].
[^7]: We slightly abuse notation by often referring to the components of $z^i$ as $(v_i,b_i)$. This is not meant to imply that the metric has been used to lower the index.
[^8]: One can also identify holographically the exact R-symmetry [@Szepietowski:2012tb; @Karndumri:2013iqa].
[^9]: There is a factor of $\sqrt{2}$ between $A^\Lam$ here and in [@Cassani:2012pj], see footnote 10 of that paper.
[^10]: For example $\sig(v_1^2b_2)=v_1^2b_2+v_2^2b_1+v_3^2b_2+v_1^2b_3+v_2^2b_3+v_3^2b_1$ and $\sig(v_1v_2)=2(v_1v_2+v_2v_3+v_1v_3)$
[^11]: An $Sp( 2 + 2 n_{\rm V}, \mathbb{R})$ transformation of the sections $$(X^\Lambda, F_\Lambda) \rightarrow (\tilde{X}^\Lambda, \tilde{F}_\Lambda) = \begin{pmatrix} A & B \\ C & D \end{pmatrix} (X^\Lambda, F_\Lambda) \, ,$$ acts on the period matrix $\cN_{\Lambda \Sigma}$ by a fractional transformation $$\cN_{\Lambda\Sigma} (X, F) \rightarrow \cN_{\Lambda\Sigma} (\tilde{X}, \tilde{F}) = \left( C + D\, \cN (X, F) \right) \left( A + B\, \cN (X, F) \right)^{-1} \, .$$
Dave Davis (bowler)
Dave Davis (born April 28, 1942) is a former American professional ten-pin bowler and former member of the Professional Bowlers Association (PBA). He grew up in Sweet Valley, Pennsylvania, and now resides in Lake Placid, Florida.
Beginning his PBA career in 1964, the left-hander won 18 PBA Tour titles, including four majors. He was inducted into the PBA Hall of Fame in 1978. Davis won multiple titles in a season four times, including six titles in the 1967 season alone. The 1967 season would see him win the PBA National Championship on his way to Player of the Year honors. He also won the PBA National Championship in 1965, plus two PBA Tournament of Champions titles (1968 and 1975). As a PBA Senior Tour bowler, Davis won back-to-back titles in the USBC Senior Masters (1995 and 1996).
In addition, Davis served the PBA in various positions on the Executive Board and Tournament Committee. He was ranked #19 on the PBA's 2008 list of "50 Greatest Players of the Last 50 Years."
For a brief period, Davis spent time in the TV broadcast booth, alongside play-by-play announcer Chris Schenkel. After the death of Schenkel's long-time broadcast partner, Billy Welu, in 1974, Davis and Dick Weber shared analyst duties on ABC-TV's Professional Bowlers Tour until Nelson Burton Jr. was hired as a full-time replacement in 1975.
Davis also appeared regularly on the 1970s version of Celebrity Bowling as an analyst and cohost.
References
Category:1942 births
Category:Living people
Category:American ten-pin bowling players
Category:Sportspeople from Hackensack, New Jersey
Category:Bowling broadcasters
---
abstract: 'Unprecedentedly precise cosmic microwave background (CMB) data are expected from ongoing and near-future CMB Stage-III and IV surveys, which will yield reconstructed CMB lensing maps with effective resolution approaching several arcminutes. The small-scale CMB lensing fluctuations receive non-negligible contributions from nonlinear structure in the late-time density field. These fluctuations are not fully characterized by traditional two-point statistics, such as the power spectrum. Here, we use $N$-body ray-tracing simulations of CMB lensing maps to examine two higher-order statistics: the lensing convergence one-point probability distribution function (PDF) and peak counts. We show that these statistics contain significant information not captured by the two-point function, and provide specific forecasts for the ongoing Stage-III Advanced Atacama Cosmology Telescope (AdvACT) experiment. Considering only the temperature-based reconstruction estimator, we forecast 9$\sigma$ (PDF) and 6$\sigma$ (peaks) detections of these statistics with AdvACT. Our simulation pipeline fully accounts for the non-Gaussianity of the lensing reconstruction noise, which is significant and cannot be neglected. Combining the power spectrum, PDF, and peak counts for AdvACT will tighten cosmological constraints in the $\Omega_m$-$\sigma_8$ plane by $\approx 30\%$, compared to using the power spectrum alone.'
author:
- 'Jia Liu$^{1,2}$'
- 'J. Colin Hill$^{2}$'
- 'Blake D. Sherwin$^{3}$'
- 'Andrea Petri$^{4}$'
- 'Vanessa Böhm$^{5}$'
- 'Zoltán Haiman$^{2,6}$'
bibliography:
- 'paper.bib'
title: 'CMB Lensing Beyond the Power Spectrum: Cosmological Constraints from the One-Point PDF and Peak Counts'
---
Introduction {#sec:intro}
============
After its first detection in cross-correlation nearly a decade ago [@Smith2007; @Hirata2008] and subsequent detection in auto-correlation five years ago [@das2011; @sherwin2011], weak gravitational lensing of the cosmic microwave background (CMB) is now reaching maturity as a cosmological probe [@Hanson2013; @Das2013; @PolarBear2014a; @PolarBear2014b; @BICEPKeck2016; @Story2014; @Ade2014; @vanEngelen2014; @vanEngelen2015; @planck2015xv]. On their way to the Earth, CMB photons emitted at redshift $z=1100$ are deflected by the intervening matter, producing new correlations in maps of CMB temperature and polarization anisotropies. Estimators based on these correlations can be applied to the observed anisotropy maps to reconstruct a noisy estimate of the CMB lensing potential [@Zaldarriaga1998; @Zaldarriaga1999; @HuOkamoto2002; @Okamoto2003]. CMB lensing can probe fundamental physical quantities, such as the dark energy equation of state and neutrino masses, through its sensitivity to the geometry of the universe and the growth of structure (see Refs. [@Lewis2006; @Hanson2010] for a review).
In this paper, we study the non-Gaussian information stored in CMB lensing observations. The Gaussian approximation to the density field breaks down due to nonlinear evolution on small scales at late times. Thus, non-Gaussian statistics (i.e., statistics beyond the power spectrum) are necessary to capture the full information in the density field. Such work has been previously performed (theoretically and observationally) on weak gravitational lensing of galaxies, where galaxy shapes, instead of CMB temperature/polarization patterns, are distorted (hereafter “galaxy lensing”). Several research groups have found independently that non-Gaussian statistics can tighten cosmological constraints when they are combined with the two-point correlation function or angular power spectrum.[^1] Such non-Gaussian statistics have also been applied in the CMB context to the Sunyaev-Zel’dovich signal, including higher-order moments [@Wilson2012; @Hill2013; @Planck2013tSZ; @Planck2015tSZ], the bispectrum [@Bhattacharya2012; @Crawford2014; @Planck2013tSZ; @Planck2015tSZ], and the one-point probability distribution function (PDF) [@Hill2014b; @Planck2013tSZ; @Planck2015tSZ]. In all cases, substantial non-Gaussian information was found, yielding improved cosmological constraints.
The motivation to study non-Gaussian statistics of CMB lensing maps is three-fold. First, the CMB lensing kernel is sensitive to structures at high redshift ($z\approx2.0$, compared to $z\approx0.4$ for typical galaxy lensing samples); hence CMB lensing non-Gaussian statistics probe early nonlinearity that is beyond the reach of galaxy surveys. Second, CMB lensing does not suffer from some challenging systematics that are relevant to galaxy lensing, including intrinsic alignments of galaxies, photometric redshift uncertainties, and shape measurement biases. Therefore, a combined analysis of galaxy lensing and CMB lensing will be useful for building a tomographic picture of nonlinear structure evolution, as well as for calibrating systematics in both galaxy and CMB lensing surveys [@Liu2016; @Baxter2016; @Schaan2016; @Singh2016; @Nicola2016]. Finally, CMB lensing measurements have recently entered a regime of sufficient sensitivity and resolution to detect the (stacked) lensing signals of halos [@Madhavacheril2014; @Baxter2016; @Planck2015cluster]. This suggests that statistics sensitive to the nonlinear growth of structure, i.e., non-Gaussian statistics, will also soon be detectable. We demonstrate below that this is indeed the case, taking as a reference experiment the ongoing Advanced Atacama Cosmology Telescope (AdvACT) survey [@Henderson2016].
Non-Gaussian aspects of the CMB lensing field have recently attracted attention, both as a potential signal and a source of bias in CMB lensing power spectrum estimates. Considering the lensing non-Gaussianity as a signal, a recent analytical study of the CMB lensing bispectrum by Ref. [@Namikawa2016] forecasted its detectability to be 40$\sigma$ with a CMB Stage-IV experiment. Ref. [@Bohm2016] performed the first calculation of the bias induced in CMB lensing power spectrum estimates by the lensing bispectrum, finding non-negligible biases for Stage-III and IV CMB experiments. Refs. [@Pratten2016] and [@Marozzi2016] considered CMB lensing effects arising from the breakdown of the Born approximation, with the former study finding that post-Born terms substantially alter the predicted CMB lensing bispectrum, compared to the contributions from nonlinear structure formation alone. We emphasize that the $N$-body ray-tracing simulations used in this work naturally capture such effects — we do not use the Born approximation. However, we consider only the lensing potential $\phi$ or convergence $\kappa$ here (related by $\kappa = -\nabla^2 \phi/2$), leaving a treatment of the curl potential or image rotation for future work (Ref. [@Pratten2016] has demonstrated that the curl potential possesses non-trivial higher-order statistics). In a follow-up paper, the simulations described here are used to more precisely characterize CMB lensing power spectrum biases arising from the bispectrum and higher-order correlations [@Sherwin2016].
We consider the non-Gaussianity in the CMB lensing field as a potential signal. We use a suite of 46 $N$-body ray-tracing simulations to investigate two non-Gaussian statistics applied to CMB lensing convergence maps — the one-point PDF and peak counts. We examine the deviation of the convergence PDF and peak counts from those of Gaussian random fields. We then quantify the power of these statistics to constrain cosmological models, compared with using the power spectrum alone.
The paper is structured as follows. We first introduce CMB lensing in Sec. \[sec:formalism\]. We then describe our simulation pipeline in Sec. \[sec:sim\] and analysis procedures in Sec. \[sec:analysis\]. We show our results for the power spectrum, PDF, peak counts, and the derived cosmological constraints in Sec. \[sec:results\]. We conclude in Sec. \[sec:conclude\].
CMB lensing formalism {#sec:formalism}
=====================
To lowest order, the lensing convergence ($\kappa$) is a weighted projection of the three-dimensional matter overdensity $\delta=\delta\rho/\bar{\rho}$ along the line of sight,
$$\label{eq.kappadef}
\kappa(\thetaB) = \int_0^{\infty} dz W(z) \delta(\chi(z)\thetaB, z),$$
where $\chi(z)$ is the comoving distance and the kernel $W(z)$ indicates the lensing strength at redshift $z$ for sources with a redshift distribution $p(z_s)=dn(z_s)/dz$. For CMB lensing, there is only one source plane at the last scattering surface $z_\star=1100$; therefore, $p(z_s)=\delta_D(z_s-z_\star)$, where $\delta_D$ is the Dirac delta function. For a flat universe, the CMB lensing kernel is
$$\begin{aligned}
W^{{\kappa_{\rm cmb}}}(z) &=& \frac{3}{2}\Omega_{m}H_0^2 \frac{(1+z)}{H(z)} \frac{\chi(z)}{c} \nonumber\\
&\times& \frac{\chi(z_\star)-\chi(z)}{\chi(z_\star)},\end{aligned}$$
where $\Omega_{m}$ is the matter density as a fraction of the critical density at $z=0$, $H(z)$ is the Hubble parameter at redshift $z$, with a present-day value $H_0$, and $c$ is the speed of light. $W^{{\kappa_{\rm cmb}}}(z)$ peaks at $z\approx2$ for canonical cosmological parameters ($\Omega_{m}\approx0.3$ and $H_0\approx70$ km/s/Mpc, [@planck2015xiii]). Note that Eq. (\[eq.kappadef\]) assumes the Born approximation, but our simulation approach described below does not — we implement full ray-tracing to calculate $\kappa$.
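For intuition, the kernel above can be evaluated numerically. The following sketch is not part of the paper's pipeline; it assumes a flat $\Lambda$CDM background with illustrative parameters $\Omega_m = 0.3$ and $H_0 = 70$ km/s/Mpc, and confirms that $W^{{\kappa_{\rm cmb}}}(z)$ peaks near $z \approx 2$:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat LambdaCDM background: Omega_m = 0.3, H0 = 70 km/s/Mpc
Om, H0, c = 0.3, 70.0, 2.998e5            # H0 in km/s/Mpc, c in km/s
H = lambda z: H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)   # H(z) for matter + Lambda
chi = lambda z: quad(lambda zp: c / H(zp), 0.0, z)[0]  # comoving distance [Mpc]

chi_star = chi(1100.0)                     # distance to the last-scattering surface

def W_cmb(z):
    """CMB lensing kernel of Eq. (2) for a flat universe (per unit redshift)."""
    return (1.5 * Om * H0**2 * (1 + z) / H(z) * chi(z) / c
            * (chi_star - chi(z)) / chi_star)

zs = np.linspace(0.1, 10.0, 100)
Ws = np.array([W_cmb(z) for z in zs])
print("kernel peaks near z =", zs[np.argmax(Ws)])
```

The broad peak of the kernel at $z\approx2$ is what makes CMB lensing sensitive to earlier structure than typical galaxy lensing samples.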
Simulations {#sec:sim}
===========
Our simulation procedure includes five main steps: (1) the design (parameter sampling) of cosmological models, (2) $N$-body simulations with Gadget-2,[^2] (3) ray-tracing from $z=0$ to $z=1100$ to obtain (noiseless) convergence maps using the Python code LensTools [@Petri2016],[^3] (4) lensing simulated CMB temperature maps by the ray-traced convergence field, and (5) reconstructing (noisy) convergence maps from the CMB temperature maps after including noise and beam effects.
Simulation design
-----------------
We use an irregular grid to sample parameters in the $\Omega_m$-$\sigma_8$ plane, within the range of $\Omega_m \in [0.15, 0.7]$ and $\sigma_8 \in [0.5, 1.0]$, where $\sigma_8$ is the rms amplitude of linear density fluctuations on a scale of 8 Mpc/$h$ at $z=0$. An optimized irregular grid has a smaller average distance between neighboring points than a regular grid, and no parameters are duplicated. Hence, it samples the parameter space more efficiently. The procedure to optimize our sampling is described in detail in Ref. [@Petri2015].
The 46 cosmological models sampled are shown in Fig. \[fig:design\]. Other cosmological parameters are held fixed, with $H_0=72$ km/s/Mpc, dark energy equation of state $w=-1$, spectral index $n_s=0.96$, and baryon density $\Omega_b=0.046$. The design can be improved in the future by posterior sampling, where we first run only a few models to generate a low-resolution probability plane, and then sample more densely in the high-probability region.
We select the model that is closest to the standard concordance values of the cosmological parameters (e.g., [@planck2015xiii]) as our fiducial model, with $\Omega_m=0.296$ and $\sigma_8=0.786$. We create two sets of realizations for this model, one for covariance matrix estimation, and another one for parameter interpolation. This fiducial model is circled in red in Fig. \[fig:design\].
![\[fig:design\] The design of cosmological parameters used in our simulations (46 models in total). The fiducial cosmology ($\Omega_m=0.296, \sigma_8=0.786$) is circled in red. The models for which AdvACT-like lensing reconstruction is performed are circled in blue. Other cosmological parameters are fixed at $H_0=72$ km/s/Mpc, $w=-1$, $n_s=0.96$, and $\Omega_b=0.046$.](plot/plot_design.pdf){width="48.00000%"}
$N$-body simulation and ray-tracing {#sec:nbody}
-----------------------------------
We use the public code Gadget-2 to run $N$-body simulations with $N_{\rm particles}=1024^3$ and box size = 600 Mpc/$h$ (corresponding to a mass resolution of $1.4\times10^{10} M_\odot/h$). To initialize each simulation, we first obtain the linear matter power spectrum with the Einstein-Boltzmann code CAMB.[^4] The power spectrum is then fed into the initial condition generator N-GenIC, which generates initial snapshots (the input of Gadget-2) of particle positions at $z=100$. The $N$-body simulation is then run from $z=100$ to $z=0$, and we record snapshots every 144 Mpc$/h$ in comoving distance between $z\approx45$ and $z=0$. The choice of $z\approx45$ is determined by requiring that the redshift range covers 99% of the $W^{{\kappa_{\rm cmb}}}(z)D(z)$ kernel, where we use the linear growth factor $D(z)\sim 1/(1+z)$.
We then use the Python code LensTools [@Petri2016] to generate CMB lensing convergence maps. We first slice the simulation boxes to create potential planes (3 planes per box, 200 Mpc/$h$ in thickness), where particle density is converted into gravitational potential using the Poisson equation. We track the trajectories of 4096$^2$ light rays from $z=0$ to $z=1100$, where the deflection angle and convergence are calculated at each potential plane. This procedure automatically captures so-called “post-Born” effects, as we never assume that the deflection angle is small or that the light rays follow unperturbed geodesics.[^5] Finally, we create 1,000 convergence map realizations for each cosmology by randomly rotating/shifting the potential planes [@Petri2016b]. For the fiducial cosmology only, we generate 10,000 realizations for the purpose of estimating the covariance matrix. The convergence maps are 2048$^2$ pixels and 12.25 deg$^2$ in size, with square pixels of side length 0.1025 arcmin. The maps generated at this step correspond to the physical lensing convergence field only, i.e., they have no noise from CMB lensing reconstruction. Therefore, they are labeled as “noiseless” in the following sections and figures.
![\[fig:theory\_ps\] Comparison of the CMB lensing convergence power spectrum from the HaloFit model and that from our simulation (1024$^3$ particles, box size 600 Mpc/$h$, map size 12.25 deg$^2$), for our fiducial cosmology. We also show the prediction from linear theory. Error bars are the standard deviation of 10,000 realizations.](plot/plot_theory_comparison.pdf){width="48.00000%"}
We test the power spectra from our simulated maps against standard theoretical predictions. Fig. \[fig:theory\_ps\] shows the power spectrum from our simulated maps versus that from the HaloFit model [@Smith2003; @Takahashi2012] for our fiducial cosmology. We also show the linear-theory prediction, which deviates from the nonlinear HaloFit result at $\ell \gtrsim 700$. The simulation error bars are estimated using the standard deviation of 10,000 realizations. The simulated and (nonlinear) theoretical results are consistent within the error bars for multipoles $\ell<2,000$, which is sufficient for this work, as current and near-future CMB lensing surveys are limited to roughly this $\ell$ range due to their beam size and noise level (the filtering applied in our analysis below effectively removes all information on smaller angular scales). We find similar consistency between theory and simulation for the other 45 simulated models. We test the impact of particle resolution using a smaller box of 300 Mpc/$h$, while keeping the same number of particles (i.e. 8 times higher resolution), and obtain excellent agreement at scales up to $\ell=3,000$. The lack of power on large angular scales is due to the limited size of our convergence maps, while the missing power on small scales is due to our particle resolution. On very small scales ($\ell \gtrsim 5 \times 10^4$), excess power due to finite-pixelization shot noise arises, but this effect is negligible on the scales considered in our analysis.
CMB lensing reconstruction {#sec:recon}
--------------------------
{width="\textwidth"}
In order to obtain CMB lensing convergence maps with realistic noise properties, we generate lensed CMB temperature maps and reconstruct noisy estimates of the convergence field. First, we generate Gaussian random field CMB temperature maps based on a $\Lambda$CDM concordance model temperature power spectrum computed with CAMB. We compute deflection field maps from the ray-traced convergence maps described in the previous sub-section, after applying a filter that removes power in the convergence maps above $\ell \approx 4,000$.[^6] These deflection maps are then used to lens the simulated primary CMB temperature maps. The lensing simulation procedure is described in detail in Ref. [@Louis2013].
After obtaining the lensed temperature maps, we apply instrumental effects consistent with specifications for the ongoing AdvACT survey [@Henderson2016]. In particular, the maps are smoothed with a FWHM $=1.4$ arcmin beam, and Gaussian white noise of amplitude 6$\mu$K-arcmin is then added.
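In map space, these two instrumental effects amount to a Gaussian convolution followed by additive white noise. A minimal sketch, assuming square pixels and uniform map noise (`observe` is an illustrative helper, not a function from the actual pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def observe(t_map, pix_arcmin, fwhm_arcmin=1.4, noise_uk_arcmin=6.0, seed=0):
    """Convolve a temperature map [uK] with a Gaussian beam and add white noise."""
    # FWHM -> Gaussian sigma, converted to pixel units
    sigma_beam = (fwhm_arcmin / np.sqrt(8.0 * np.log(2.0))) / pix_arcmin
    beamed = gaussian_filter(t_map, sigma_beam)
    # white-noise level in uK-arcmin -> per-pixel rms for square pixels
    sigma_pix = noise_uk_arcmin / pix_arcmin
    rng = np.random.default_rng(seed)
    return beamed + rng.normal(0.0, sigma_pix, t_map.shape)

# sanity check: a blank map acquires the expected per-pixel noise rms
noisy = observe(np.zeros((256, 256)), pix_arcmin=0.1025)
```

With the paper's 0.1025 arcmin pixels, the 6 $\mu$K-arcmin level corresponds to a per-pixel rms of about 59 $\mu$K before any smoothing.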
We subsequently perform lensing reconstruction on these beam-convolved, noisy temperature maps using the quadratic estimator of Ref. [@HuOkamoto2002], but with the replacement of unlensed with lensed CMB temperature power spectra in the filters, which gives an unbiased reconstruction to higher order [@Hanson2010]. The final result is a noisy estimate of the CMB lensing convergence field, with 1,000 realizations for each cosmological model (10,000 for the fiducial model).
We consider only temperature-based reconstruction in this work, leaving polarization estimators for future consideration. The temperature estimator is still expected to contribute more significantly than the polarization to the signal-to-noise for Stage-III CMB experiments like AdvACT, but polarization will dominate for Stage-IV (via $EB$ reconstruction). For the AdvACT-like experiment considered here, including polarization would increase the predicted signal-to-noise on the lensing power spectrum by $\approx 35$%. More importantly, polarization reconstruction allows the lensing field to be mapped out to smaller scales than temperature reconstruction [@HuOkamoto2002], and is more immune to foreground-related biases at high-$\ell$ [@vanEngelen2014b]. Thus, it could prove extremely useful for higher-order CMB lensing statistics, which are sourced by non-Gaussian structure on small scales. Clearly these points are worthy of future analysis, but we restrict this work to temperature reconstruction for simplicity.
In addition to the fiducial model, we select the nearest eight points in the sampled parameter space (points circled in blue in Fig. \[fig:design\]) for the reconstruction analysis. We determine this selection by first reconstructing the nearest models in parameter space, and then broadening the sampled points until the interpolation is stable and the forecasted contours (see Sec. \[sec:constraints\]) are converged for AdvACT-level noise. At this noise level, the other points in model space are sufficiently distant to contribute negligibly to the forecasted contours. In total, nine models are used to derive parameter constraints from the reconstructed, noisy maps. For completeness, we perform a similar convergence test using forecasted constraints from the noiseless maps, finding excellent agreement between contours derived using all 46 models and using only these nine models.
In Fig. \[fig:sample\_maps\], we show an example of a convergence map from the fiducial cosmology before (“noiseless”) and after (“noisy”) reconstruction. Prominent structures seen in the noiseless maps remain obvious in the reconstructed, noisy maps.
Gaussian random field
---------------------
We also reconstruct a set of Gaussian random fields (GRF) in the fiducial model. We generate a set of GRFs using the average power spectrum of the noiseless $\kappa$ maps. We then lens simulated CMB maps using these GRFs, following the same procedure as outlined above, and subsequently perform lensing reconstruction, just as for the reconstructed $N$-body $\kappa$ maps. These noisy GRF-only reconstructions allow us to examine the effect of reconstruction (in particular the non-Gaussianity of the reconstruction noise itself), as well as to determine the level of non-Gaussianity in the noisy $\kappa$ maps.
Interpolation
-------------
![\[fig:interp\] Fractional differences between interpolated and “true” results for the fiducial power spectrum (top), PDF (middle), and peak counts (bottom). Here, we have built the interpolator using results for the other 45 cosmologies, and then compared the interpolated prediction at the fiducial parameter values to the actual simulated results for the fiducial cosmology. The error bars are scaled by $1/\sqrt{N_{\rm sims}}$, where the number of simulations $N_{\rm sims}=1,000$. The agreement for all three statistics is excellent.](plot/plot_interp.pdf){width="48.00000%"}
To build a model at points where we do not have simulations, we interpolate from the simulated points in parameter space using the Clough-Tocher interpolation scheme [@alfeld1984; @farin1986], which triangulates the input points and then minimizes the curvature of the interpolating surface; the interpolated points are guaranteed to be continuously differentiable. In Fig. \[fig:interp\], we show a test of the interpolation using the noiseless $\kappa$ maps: we build the interpolator using all of the simulated cosmologies except for the fiducial model (i.e., 45 cosmologies), and then compare the interpolated results at the fiducial parameter values with the true, simulated results for that cosmology. The agreement for all three statistics is excellent, with deviations $\lesssim$ few percent (and well within the statistical precision). Finally, to check the robustness of the interpolation scheme, we also run our analysis using linear interpolation, and obtain consistent results.[^7]
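The same piecewise-cubic, $C^1$ scheme on a Delaunay triangulation is available as `scipy.interpolate.CloughTocher2DInterpolator`. A toy sketch, with a smooth analytic "observable" standing in for the simulated statistics (the function and sample counts here are illustrative, not the paper's):

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

rng = np.random.default_rng(0)

# Irregularly sampled points in an (Omega_m, sigma_8)-like plane
pts = np.column_stack([rng.uniform(0.15, 0.7, 200), rng.uniform(0.5, 1.0, 200)])

# Smooth stand-in observable, e.g. an S8-like amplitude (purely illustrative)
f = lambda om, s8: s8 * np.sqrt(om / 0.3)
vals = f(pts[:, 0], pts[:, 1])

# Clough-Tocher: piecewise cubic, continuously differentiable interpolant
interp = CloughTocher2DInterpolator(pts, vals)

# Evaluate at an interior point and compare to the true function
est, true = float(interp(0.3, 0.8)), f(0.3, 0.8)
print(abs(est - true))
```

Points outside the convex hull of the samples return `nan`, which is one reason the sampled design must bracket the high-probability region of parameter space.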
Analysis {#sec:analysis}
========
In this section, we describe the analysis of the simulated CMB lensing maps, including the computation of the power spectrum, peak counts, and PDF, and the likelihood estimation for cosmological parameters. These procedures are applied in the same way to the noiseless and noisy (reconstructed) maps.
Power spectrum, PDF, and peak counts
------------------------------------
To compute the power spectrum, we first estimate the two-dimensional (2D) power spectrum of CMB lensing maps ($M_{\kappa}$) using $$\begin{aligned}
\label{eq: ps2d}
C^{\kappa \kappa}(\ellB) = \hat M_{\kappa}(\ellB)^*\hat M_{\kappa}(\ellB) \,,\end{aligned}$$ where $\ellB$ is the 2D multipole with components $\ell_1$ and $\ell_2$, $\hat M_{\kappa}$ is the Fourier transform of $M_{\kappa}$, and the asterisk denotes complex conjugation. We then average over all the pixels within each $|\ellB|\in[\ell-\Delta\ell, \ell+\Delta\ell)$ bin, for 20 log-spaced bins in the range of $100<\ell<2,000$, to obtain the one-dimensional power spectrum.
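A minimal flat-sky implementation of this estimator is sketched below, with map dimensions matching the paper's 12.25 deg$^2$ (3.5 deg on a side) maps; the Fourier normalization convention is our assumption:

```python
import numpy as np

def binned_power_spectrum(kappa, side_deg=3.5, n_bins=20, ell_range=(100.0, 2000.0)):
    """Azimuthally averaged C_ell of a square map (12.25 deg^2 -> side_deg = 3.5)."""
    n = kappa.shape[0]
    area = np.deg2rad(side_deg)**2                       # map area [sr]
    ps2d = np.abs(np.fft.fft2(kappa))**2 * area / n**4   # flat-sky 2D spectrum
    freq = np.fft.fftfreq(n, d=np.sqrt(area) / n)        # cycles per radian
    lx, ly = np.meshgrid(2 * np.pi * freq, 2 * np.pi * freq, indexing="ij")
    ell2d = np.hypot(lx, ly)                             # |ell| of each Fourier pixel
    # average the 2D spectrum over log-spaced |ell| annuli
    edges = np.logspace(*np.log10(ell_range), n_bins + 1)
    idx = np.digitize(ell2d.ravel(), edges) - 1
    ok = (idx >= 0) & (idx < n_bins)
    sums = np.bincount(idx[ok], weights=ps2d.ravel()[ok], minlength=n_bins)
    counts = np.bincount(idx[ok], minlength=n_bins)
    cl = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return np.sqrt(edges[:-1] * edges[1:]), cl           # bin centers, C_ell

# white-noise sanity check: C_ell should be flat at sigma^2 * Omega_pix
rng = np.random.default_rng(1)
sigma, n = 0.1, 256
ells, cl = binned_power_spectrum(rng.normal(0.0, sigma, (n, n)))
```

For uncorrelated pixel noise of variance $\sigma^2$, this normalization returns a flat spectrum equal to $\sigma^2 \Omega_{\rm pix}$, which provides a quick check of the pipeline.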
The one-point PDF is the number of pixels with values in \[$\kappa-\Delta\kappa$, $\kappa+\Delta\kappa$) as a function of $\kappa$. We use 50 linear bins with edges listed in Table \[tab: bins\], and normalize the resulting PDF such that its integral is unity. The PDF is a simple observable (a histogram of the data), but captures the amplitude of all (zero-lag) higher-order moments in the map. Thus, it provides a potentially powerful characterization of the non-Gaussian information.
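This measurement reduces to a normalized histogram of the map pixels; a sketch, with the bin range matching the noisy-map row of Table \[tab: bins\] and a Gaussian toy map standing in for a real realization:

```python
import numpy as np

def kappa_pdf(kappa, edges):
    """One-point PDF: pixel histogram normalized so its integral is unity."""
    return np.histogram(kappa.ravel(), bins=edges, density=True)[0]

# 50 linear bins over [-0.12, +0.12], as for the noisy maps in Table 1
edges = np.linspace(-0.12, 0.12, 51)
rng = np.random.default_rng(2)
pdf = kappa_pdf(rng.normal(0.0, 0.03, (512, 512)), edges)
```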
Peaks are defined as local maxima in a $\kappa$ map. In a pixelized map, they are pixels with values higher than the surrounding 8 (square) pixels. Similar to cluster counts, peak counts are sensitive to the most nonlinear structures in the Universe. For galaxy lensing, they have been found to be associated with halos along the line of sight in both simulations [@Yang2011] and observations [@LiuHaiman2016]. We record peaks on smoothed $\kappa$ maps, in 25 linearly spaced bins with edges listed in Table \[tab: bins\].
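The 8-neighbor peak criterion can be implemented with a $3\times3$ maximum filter. A sketch, not the paper's code, using a Gaussian toy map and the noisy-map bin edges of Table \[tab: bins\]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def peak_counts(kappa, edges, fwhm_pix=None):
    """Histogram of local maxima (pixels exceeding their 8 neighbors) of a map."""
    if fwhm_pix is not None:
        # optional Gaussian smoothing; FWHM -> sigma in pixels
        kappa = gaussian_filter(kappa, fwhm_pix / np.sqrt(8.0 * np.log(2.0)))
    interior = np.zeros(kappa.shape, dtype=bool)
    interior[1:-1, 1:-1] = True           # require a full ring of 8 neighbors
    # a pixel is a peak if it equals (i.e. attains) the local 3x3 maximum
    is_peak = (kappa == maximum_filter(kappa, size=3)) & interior
    return np.histogram(kappa[is_peak], bins=edges)[0]

rng = np.random.default_rng(3)
counts = peak_counts(rng.normal(0.0, 0.05, (512, 512)),
                     np.linspace(-0.06, 0.14, 26), fwhm_pix=8)
```

For a continuous noise field, ties with neighbors have measure zero, so the equality test above is equivalent to the strict 8-neighbor condition.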
----------------------- ------------------ -----------------------
Smoothing scale PDF bins edges Peak counts bin edges
(arcmin) (50 linear bins) (25 linear bins)
0.5 (noiseless) \[-0.50, +0.50\] \[-0.18, +0.36\]
1.0 (noiseless) \[-0.22, +0.22\] \[-0.15, +0.30\]
2.0 (noiseless) \[-0.18, +0.18\] \[-0.12, +0.24\]
5.0 (noiseless) \[-0.10, +0.10\] \[-0.09, +0.18\]
8.0 (noiseless) \[-0.08, +0.08\] \[-0.06, +0.12\]
1.0, 5.0, 8.0 (noisy) \[-0.12, +0.12\] \[-0.06, +0.14\]
----------------------- ------------------ -----------------------
: \[tab: bins\] PDF and peak counts bin edges for each smoothing scale (the full-width-half-maximum of the Gaussian smoothing kernel applied to the maps).
Cosmological constraints
------------------------
We estimate cosmological parameter confidence level (C.L.) contours assuming a constant (cosmology-independent) covariance and Gaussian likelihood, $$\begin{aligned}
P (\DB | \pB) = \frac{1}{(2\pi)^{N_{\rm bins}/2}|\CB|^{1/2}} \exp\left[-\frac{1}{2}(\DB-\muB)^T\CB^{-1}(\DB-\muB)\right],\end{aligned}$$ where $\DB$ is the data array, $\pB$ is the input parameter array, $\muB=\muB(\pB)$ is the interpolated model, and $\CB$ is the covariance matrix estimated using the fiducial cosmology, with determinant $|\CB|$. The correction factor for an unbiased inverse covariance estimator [@dietrich2010] is negligible in our case, with $(N_{\rm sims}-N_{\rm bins}-2)/(N_{\rm sims}-1) = 0.99$ for $N_{\rm sims} =10,000$ and $N_{\rm bins}=95$. We leave an investigation of the impact of cosmology-dependent covariance matrices and a non-Gaussian likelihood for future work.
Due to the limited size of our simulated maps, we must rescale the final error contour by a ratio ($r_{\rm sky}$) of simulated map size (12.25 deg$^2$) to the survey coverage (20,000 deg$^2$ for AdvACT). Two methods allow us to achieve this — rescaling the covariance matrix by $r_{\rm sky}$ before computing the likelihood plane, or rescaling the final C.L. contour by $r_{\rm sky}$. These two methods yield consistent results. In our final analysis, we choose the former method.
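A sketch of this likelihood evaluation, with the covariance rescaled by $r_{\rm sky}$ before the likelihood is computed (the option adopted here); the Gaussian log-likelihood form is standard:

```python
import numpy as np

def log_likelihood(data, model, cov, r_sky=12.25 / 20000.0):
    """Gaussian log-likelihood with cov rescaled by the map-to-survey area ratio."""
    c = cov * r_sky                         # larger sky coverage shrinks the covariance
    d = data - model
    sign, logdet = np.linalg.slogdet(c)     # stable log|C|
    return -0.5 * (d @ np.linalg.solve(c, d) + logdet + d.size * np.log(2.0 * np.pi))
```

Evaluating this on a grid of interpolated models in the $\Omega_m$-$\sigma_8$ plane and exponentiating gives the confidence-level contours.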
Results {#sec:results}
=======
Non-Gaussianity in noiseless maps {#sec:non-gauss}
---------------------------------
{width="48.00000%"} {width="48.00000%"}
{width="48.00000%"} {width="48.00000%"}
We show the PDF of noiseless $N$-body $\kappa$ maps (PDF$^\kappa$) for the fiducial cosmology in Fig. \[fig:noiseless\_PDF\], as well as that of GRF $\kappa$ maps (PDF$^{\rm GRF}$) generated from a power spectrum matching that of the $N$-body-derived maps. To better demonstrate the level of non-Gaussianity, we also show the fractional difference of PDF$^\kappa$ from PDF$^{\rm GRF}$. The error bars are scaled to AdvACT sky coverage (20,000 deg$^2$), though note that no noise is present here.
The departure of PDF$^\kappa$ from the Gaussian case is significant for all smoothing scales examined (FWHM = 0.5–8.0 arcmin), with increasing significance towards smaller smoothing scales, as expected. The excess in high $\kappa$ bins is expected as the result of nonlinear gravitational evolution, echoed by the deficit in low $\kappa$ bins.
We show the comparison of the peak counts of $N$-body $\kappa$ maps (${\rm N}^\kappa_{\rm peaks}$) versus that of GRFs (${\rm N}^{\rm GRF}_{\rm peaks}$) in Fig. \[fig:noiseless\_pk\]. The difference between ${\rm N}^\kappa_{\rm peaks}$ and ${\rm N}^{\rm GRF}_{\rm peaks}$ is less significant than the PDF, because the number of peaks is much smaller than the number of pixels — hence, the peak counts have larger Poisson noise. A similar trend of excess (deficit) of high (low) peaks is also seen in $\kappa$ peaks, when compared to the GRF peaks.
Covariance matrix {#sec:covariance}
-----------------
![\[fig:corr\_mat\] Correlation coefficients determined from the full noiseless (top) and noisy (bottom) covariance matrices. Bins 1-20 are for the power spectrum (labeled “PS”); bins 21-70 are for the PDF; and bins 71-95 are for peak counts.](plot/corr_mat.pdf "fig:"){width="48.00000%"} ![\[fig:corr\_mat\] Correlation coefficients determined from the full noiseless (top) and noisy (bottom) covariance matrices. Bins 1-20 are for the power spectrum (labeled “PS”); bins 21-70 are for the PDF; and bins 71-95 are for peak counts.](plot/corr_mat_noisy.pdf "fig:"){width="48.00000%"}
Fig. \[fig:corr\_mat\] shows the correlation coefficients of the total covariance matrix for both the noiseless and noisy maps, $$\begin{aligned}
\rhoB_{ij} = \frac{\CB_{ij}}{\sqrt{\CB_{ii}\CB_{jj}}}\end{aligned}$$ where $i$ and $j$ denote the bin number, with the first 20 bins for the power spectrum, the next 50 bins for the PDF, and the last 25 bins for peak counts.
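In terms of the simulation ensemble, the covariance and correlation coefficients follow directly from the stack of per-realization data vectors. A toy sketch (illustrative dimensions; the paper uses 10,000 realizations of a 95-bin vector):

```python
import numpy as np

# stack of data vectors, one row per realization (toy: 5000 realizations, 3 bins)
rng = np.random.default_rng(4)
true_cov = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
vectors = rng.multivariate_normal(np.zeros(3), true_cov, size=5000)

cov = np.cov(vectors, rowvar=False)        # sample covariance C_ij
diag = np.sqrt(np.diag(cov))
rho = cov / np.outer(diag, diag)           # correlation coefficients rho_ij
```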
In the noiseless case, the power spectrum shows little covariance in both its own off-diagonal terms ($<10\%$) and cross-covariance with the PDF and peaks ($<20\%$), hinting that the PDF and peaks contain independent information that is beyond the power spectrum. In contrast, the PDF and peak statistics show higher correlation in both self-covariance (i.e., the covariance within the sub-matrix for that statistic only) and cross-covariance, with strength almost comparable to the diagonal components. They both show strong correlation between nearby $\kappa$ bins (especially in the moderate-$|\kappa|$ regions), which arises from contributions due to common structures amongst the bins (e.g., galaxy clusters). Both statistics show anti-correlation between positive and negative $\kappa$ bins. The anti-correlation may be due to mass conservation — e.g., large amounts of mass falling into halos would result in large voids in surrounding regions.
In the noisy case, the off-diagonal terms are generally smaller than in the noiseless case. Moreover, the anti-correlation seen previously between the far positive and negative $\kappa$ tails in the PDF is now a weak positive correlation — we attribute this difference to the complex non-Gaussianity of the reconstruction noise. Interestingly, the self-covariance of the peak counts is significantly reduced compared to the noiseless case, while the self-covariance of the PDF persists to a reasonable degree.
Effect of reconstruction noise {#sec:recon_noise}
------------------------------
![\[fig:recon\] We demonstrate the effect of reconstruction noise on the power spectrum (top), the PDF (middle), and peak counts (bottom) by using Gaussian random field $\kappa$ maps (rather than $N$-body-derived maps) as input to the reconstruction pipeline. The noiseless (solid curves) and noisy/reconstructed (dashed curves) statistics are shown. All maps used here have been smoothed with a Gaussian kernel of FWHM $= 8$ arcmin.](plot/plot_reconstruction.pdf){width="48.00000%"}
To disentangle the effect of reconstruction noise from that of nonlinear structure growth, we compare the three statistics before (noiseless) and after (noisy) reconstruction, using only the GRF $\kappa$ fields. Fig. \[fig:recon\] shows the power spectra, PDFs, and peak counts for both the noiseless (solid curves) and noisy (dashed curves) GRFs, all smoothed with a FWHM $= 8$ arcmin Gaussian window. The reconstructed power spectrum has significant noise on small scales, as expected (this is dominated by the usual “$N^{(0)}$” noise bias).
The post-reconstruction PDF shows skewness, defined as $$\label{eq.skewdef}
S=\left\langle
\left( \frac {\kappa-\bar{\kappa}}{\sigma_\kappa}\right)^3 \right\rangle,$$ which is not present in the input GRFs. In other words, the reconstructed maps have a non-zero three-point function, even though the input GRF $\kappa$ maps in this case do not. While this may seem surprising at first, we recall that the three-point function of the reconstructed map corresponds to a six-point function of the CMB temperature map (in the quadratic estimator formalism). Even for a Gaussian random field, the six-point function contains non-zero Wick contractions (those that reduce to products of two-point functions). Propagating such terms into the three-point function of the quadratic estimator for $\kappa$, we find that they do not cancel to zero. This result is precisely analogous to the usual “$N^{(0)}$ bias” on the CMB lensing power spectrum, in which the two-point function of the (Gaussian) primary CMB temperature gives a non-zero contribution to the temperature four-point function. The result in Fig. \[fig:recon\] indicates that the similar PDF “$N^{(0)}$ bias” contains a negative skewness (in addition to non-zero kurtosis and higher moments). While it should be possible to derive this result analytically, we defer the full calculation to future work. If we filter the reconstructed $\kappa$ maps with a large smoothing kernel, the skewness in the reconstructed PDF is significantly decreased (see Fig. \[fig:skew\]). We briefly investigate the PDF of the Planck 2015 CMB lensing map [@planck2015xv] and do not see clear evidence of such skewness — we attribute this to the low effective resolution of the Planck map (FWHM $\sim$ few degrees). Finally, we note that a non-zero three-point function of the reconstruction noise could potentially alter the forecasted $\kappa$ bispectrum results of Ref. [@Namikawa2016] (where the reconstruction noise was taken to be Gaussian). The non-Gaussian properties of the small-scale reconstruction noise were noted in Ref. 
[@HuOkamoto2002], who pointed out that the quadratic estimator at high-$\ell$ is constructed from progressively fewer arcminute-scale CMB fluctuations.
Similarly, the $\kappa$ peak count distribution also displays skewness after reconstruction, although it is less dramatic than that seen in the PDF. The peak of the distribution shifts to a higher $\kappa$ value due to the additional noise in the reconstructed maps. We note that the shape of the peak count distribution becomes somewhat rough when large smoothing kernels are applied to the maps, due to the small number of peaks present in this situation (e.g., $\approx 29$ peaks in a 12.25 deg$^2$ map with FWHM = 8 arcmin Gaussian window).
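The skewness of Eq. (4) is straightforward to measure directly from the map pixels. As a sanity check of the sketch below, a Gaussian sample gives $S \approx 0$ while a positively skewed (here, lognormal) sample gives $S > 0$:

```python
import numpy as np

def skewness(kappa):
    """Sample skewness S = <((kappa - <kappa>) / sigma_kappa)^3>, as in Eq. (4)."""
    x = np.asarray(kappa, dtype=float).ravel()
    d = x - x.mean()
    return np.mean(d**3) / x.std()**3

rng = np.random.default_rng(5)
g = rng.normal(0.0, 1.0, 10**6)
s_gauss = skewness(g)               # should be consistent with zero
s_lognorm = skewness(np.exp(0.5 * g))  # lognormal: positively skewed
```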
Non-Gaussianity in reconstructed maps {#sec:non-gauss_recon}
-------------------------------------
{width="48.00000%"} {width="48.00000%"}
{width="48.00000%"} {width="48.00000%"}
We show the PDF and peak counts of the reconstructed $\kappa$ maps in Figs. \[fig:noisyPDF\] and \[fig:noisypk\], respectively. The left panels of these figures show the results using maps with an 8 arcmin Gaussian smoothing window. We further consider a Wiener filter, which is often used to filter out noise based on some known information in a signal (i.e., the noiseless power spectrum in our case). The right panels show the Wiener-filtered results, where we inverse-variance weight each pixel in Fourier space, i.e., each Fourier mode is weighted by the ratio of the noiseless power spectrum to the noisy power spectrum (c.f. Fig. \[fig:recon\]), $$\begin{aligned}
f^{\rm Wiener} (\ell) = \frac{C_\ell^{\rm noiseless}}{C_\ell^{\rm noisy}} \,.\end{aligned}$$
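A sketch of applying this filter mode-by-mode in the flat-sky Fourier domain (the $\ell$-gridding and interpolation conventions here are our assumptions, not the paper's implementation):

```python
import numpy as np

def wiener_filter(kappa, ells, cl_signal, cl_total, side_deg=3.5):
    """Weight each Fourier mode by C_ell^noiseless / C_ell^noisy, as in Eq. (6)."""
    n = kappa.shape[0]
    freq = np.fft.fftfreq(n, d=np.deg2rad(side_deg) / n)   # cycles per radian
    lx, ly = np.meshgrid(2 * np.pi * freq, 2 * np.pi * freq, indexing="ij")
    ell2d = np.hypot(lx, ly)
    # interpolate the 1D inverse-variance weight onto the 2D |ell| grid
    weight = np.interp(ell2d, ells, cl_signal / cl_total)
    return np.real(np.fft.ifft2(np.fft.fft2(kappa) * weight))

# identity check: a flat (unit) filter leaves the map unchanged
rng = np.random.default_rng(6)
kappa = rng.normal(0.0, 0.05, (64, 64))
ells, flat = np.linspace(0.0, 1.0e5, 100), np.ones(100)
filtered = wiener_filter(kappa, ells, flat, flat)
```

Since the weight approaches unity where the map is signal-dominated and zero where noise dominates, this down-weights the small scales where reconstruction noise blows up.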
Compared to the noiseless results shown in Figs. \[fig:noiseless\_PDF\] and \[fig:noiseless\_pk\], the differences between the PDF and peaks from the $N$-body-derived $\kappa$ maps and those from the GRF-derived $\kappa$ maps persist, but with less significance. For the Wiener-filtered maps, the deviations of the $N$-body-derived $\kappa$ statistics from the GRF case are 9$\sigma$ (PDF) and 6$\sigma$ (peaks), where we derived the significances using the simulated covariance from the $N$-body maps [^8]. These deviations capture the influence of both nonlinear evolution and post-Born effects.
![\[fig:skew\] Top panel: the skewness of the noiseless (triangles) and reconstructed, noisy (diamonds: $N$-body $\kappa$ maps; circles: GRF) PDFs. Bottom panel: the fractional difference between the skewness of the reconstructed $N$-body $\kappa$ and the reconstructed GRF. The error bars are for our map size (12.25 deg$^2$), and are only shown in the top panel for clarity.](plot/plot_skewness3.pdf){width="48.00000%"}
While the differences between the $N$-body and GRF cases in Figs. \[fig:noisyPDF\] and \[fig:noisypk\] are clear, understanding their detailed structure is more complex. First, note that the GRF cases exhibit the skewness discussed in Sec. \[sec:recon\_noise\], which arises from the reconstruction noise itself. We show the skewness of the reconstructed PDF (for both the $N$-body and GRF cases) compared with that of the noiseless ($N$-body) PDF for various smoothing scales in Fig. \[fig:skew\]. The noiseless $N$-body maps are positively skewed, as physically expected. The reconstructed, noisy maps are negatively skewed, for both the $N$-body and GRF cases. However, the reconstructed $N$-body results are less negatively skewed than the reconstructed GRF results (bottom panel of Fig. \[fig:skew\]), presumably because the $N$-body PDF (and peaks) contain contributions from the physical skewness, which is positive (see Figs. \[fig:noiseless\_PDF\] and \[fig:noiseless\_pk\]). However, the physical skewness is not large enough to overcome the negative “$N^{(0)}$”-type skewness coming from the reconstruction noise. We attribute the somewhat-outlying point at FWHM $=8$ arcmin in the bottom panel of Fig. \[fig:skew\] to a noise fluctuation, as the number of pixels at this smoothing scale is quite low (the deviation is consistent with zero). The decrease in $|S|$ between the FWHM $=2$ arcmin and 1 arcmin cases in the top panel of Fig. \[fig:skew\] for the noisy maps is due to the large increase in $\sigma_{\kappa}$ between these smoothing scales, as the noise is blowing up on small scales. The denominator of Eq. (\[eq.skewdef\]) thus increases dramatically, compared to the numerator.
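For reference, the skewness statistic plotted in Fig. \[fig:skew\] can be computed as the standardized third moment (the form Eq. (\[eq.skewdef\]) is assumed to take here); the toy lognormal field below stands in for a positively skewed $N$-body $\kappa$ map:

```python
import numpy as np

def skewness(kappa):
    """Standardized third moment S = <(k - <k>)^3> / sigma^3."""
    d = kappa - kappa.mean()
    return (d**3).mean() / d.std()**3

rng = np.random.default_rng(2)
grf = rng.normal(size=100_000)           # Gaussian field: S consistent with 0
nbody_like = np.exp(0.5 * grf) - 1.0     # toy lognormal field: S > 0
s_grf, s_nl = skewness(grf), skewness(nbody_like)
```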
Comparisons between the reconstructed PDF in the $N$-body case and GRF case are further complicated by the fact that higher-order “biases” arise due to the reconstruction. For example, the skewness of the reconstructed $N$-body $\kappa$ receives contributions from many other terms besides the physical skewness and the “$N^{(0)}$ bias” described above — there will also be Wick contractions involving combinations of two- and four-point functions of the CMB temperature and $\kappa$ (and perhaps an additional bias coming from a different contraction of the three-point function of $\kappa$, analogous to the “$N^{(1)}$” bias for the power spectrum [@Hanson2011]). So the overall “bias” on the reconstructed skewness will differ from that in the simple GRF case. This likely explains why we do not see an excess of positive $\kappa$ values over the GRF case in the PDFs shown in Fig. \[fig:noisyPDF\]. While this excess is clearly present in the noiseless case (Fig. \[fig:noiseless\_PDF\]), and it matches physical intuition there, the picture in the reconstructed case is not simple, because there is no guarantee that the reconstruction biases in the $N$-body and GRF cases are exactly the same. Thus, a comparison of the reconstructed $N$-body and GRF PDFs contains a mixture of the difference in the biases and the physical difference that we expect to see. Similar statements hold for comparisons of the peak counts.
Clearly, a full accounting of all such individual biases would be quite involved, but the key point here is that all these effects are fully present in our end-to-end simulation pipeline. While an analytic understanding would be helpful, it is not necessary for the forecasts we present below.
Cosmological constraints {#sec:constraints}
------------------------
Before we proceed to present the cosmological constraints from non-Gaussian statistics, it is necessary to do a sanity check by comparing the forecasted contour from our simulated power spectra to that from an analytic Fisher estimate, $$\begin{aligned}
\FB_{\alpha \beta}=\frac{1}{2} {\rm Tr}
\left\{\CB^{-1}_{\rm Gauss}
\left[\left(\frac {\partial C_\ell}{\partial p_\alpha} \right)
\left(\frac {\partial C_\ell}{\partial p_\beta}\right)^T+
\left(\alpha\leftrightarrow\beta \right)
\right]\right\},\end{aligned}$$ where $\left\{ \alpha,\beta \right\} = \left\{ \Omega_m,\sigma_8 \right\}$ and the trace is over $\ell$ bins. $\CB_{\rm Gauss}$ is the Gaussian covariance matrix, with off-diagonal terms set to zero, and diagonal terms equal to the Gaussian variance, $$\begin{aligned}
\sigma^2_\ell=\frac{2(C_\ell+N_\ell)^2}{f_{\rm sky}(2\ell+1)\Delta\ell} \,.\end{aligned}$$
We compute the theoretical power spectrum $C_\ell$ using the HaloFit model [@Smith2003; @Takahashi2012], with fractional parameter variations of $+1$% to numerically obtain $\partial C_\ell / \partial p$. $N_\ell$ is the reconstruction noise power spectrum, originating from primordial CMB fluctuations and instrumental/atmospheric noise (note that we only consider white noise here). The sky fraction $f_{\rm sky}=0.485$ corresponds to the 20,000 deg$^2$ coverage expected for AdvACT. $(F^{-1}_{\alpha\alpha})^{\frac{1}{2}}$ is the marginalized error on parameter $\alpha$. Both theoretical and simulated contours use the power spectrum within the $\ell$ range of \[100, 2,000\]. The comparison is shown in Fig. \[fig:contour\_fisher\]. The contour from full $N$-body simulations shows good agreement with the analytical Fisher contour. This result indicates that approximations made in current analytical CMB lensing power spectrum forecasts are accurate, in particular the neglect of non-Gaussian covariances from nonlinear growth. A comparison of the analytic and reconstructed power spectra will be presented in Ref. [@Sherwin2016].
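The Fisher estimate above, with its diagonal Gaussian covariance and $+1$% finite-difference derivatives, can be sketched as follows. The two-parameter $C_\ell$ model here is purely illustrative (a simple amplitude/tilt toy standing in for HaloFit), with AdvACT-like $f_{\rm sky}$ and an assumed white noise floor:

```python
import numpy as np

def gaussian_fisher(cl_model, p0, ells, dell, fsky, nl, eps=0.01):
    """Fisher matrix with a diagonal Gaussian covariance; derivatives are
    one-sided finite differences with a +1% parameter step, as in the text."""
    p0 = np.asarray(p0, dtype=float)
    cl0 = cl_model(p0)
    var = 2.0 * (cl0 + nl)**2 / (fsky * (2.0 * ells + 1.0) * dell)
    derivs = []
    for a in range(p0.size):
        p = p0.copy()
        p[a] *= 1.0 + eps
        derivs.append((cl_model(p) - cl0) / (eps * p0[a]))
    return np.array([[np.sum(da * db / var) for db in derivs] for da in derivs])

# Illustrative model: amplitude ~ sigma8^2, tilt ~ Omega_m (not HaloFit).
ells = np.arange(100.0, 2000.0, 40.0)

def cl_model(p):
    om, s8 = p
    return 1.0e-7 * s8**2 * (ells / 500.0) ** (-1.0 - om)

F = gaussian_fisher(cl_model, [0.3, 0.8], ells, dell=40.0, fsky=0.485, nl=1.0e-8)
marg_err = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma errors
```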
![\[fig:contour\_fisher\] 68% C.L. contours from an AdvACT-like CMB lensing power spectrum measurement. The excellent agreement between the simulated and analytic results confirms that non-Gaussian covariances arising from nonlinear growth and reconstruction noise do not strongly bias current analytic CMB lensing power spectrum forecasts (up to $\ell = 2,000$).](plot/plot_contour_fisher.pdf){width="48.00000%"}
Fig. \[fig:contour\_noiseless\] shows contours derived using noiseless maps for the PDF and peak count statistics, compared with that from the noiseless power spectrum. We compare three different smoothing scales (1.0, 5.0, 8.0 arcmin), and find that smaller smoothing scales have stronger constraining power. However, even with the smallest smoothing scale (1.0 arcmin), the PDF contour is still significantly larger than that of the power spectrum. Peak counts using 1.0 arcmin smoothing show constraining power almost equivalent to that of the power spectrum. We note, however, that 1.0 arcmin smoothing is not a fair comparison to the power spectrum with a cutoff at $\ell<2,000$, because in reality the beam size and instrument noise are likely to smear out signals below the few-arcmin scale (see below).
At first, it may seem surprising that the PDF is not at least as constraining as the power spectrum in Fig. \[fig:contour\_noiseless\], since the PDF contains the information in the variance. However, this only captures an overall amplitude of the two-point function, whereas the power spectrum contains scale-dependent information.[^9] We illustrate this in Fig. \[fig:cell\_diff\], where we compare the fiducial power spectrum to that with a 1% increase in $\Omega_m$ or $\sigma_8$ (while keeping other parameters fixed). While $\sigma_8$ essentially re-scales the power spectrum by a factor $\sigma_8^2$, apart from a steeper dependence at high-$\ell$ due to nonlinear growth, $\Omega_m$ has a strong shape dependence. This is related to the change in the scale of matter-radiation equality [@planck2015xv]. Thus, for a noiseless measurement, the shape of the power spectrum contains significant additional information about these parameters, which is not captured by a simple change in the overall amplitude of the two-point function. This is the primary reason that the power spectrum is much more constraining than the PDF in Fig. \[fig:contour\_noiseless\].
![\[fig:cell\_diff\] Fractional difference of the CMB lensing power spectrum after a 1% increase in $\Omega_m$ (thick solid line) or $\sigma_8$ (thin solid line), compared to the fiducial power spectrum. Other parameters are fixed at their fiducial values.](plot/plot_Cell_diff.pdf){width="48.00000%"}
![\[fig:contour\_comb\] 68% C.L. contours derived using two combinations of the power spectrum, PDF, and peak counts, compared to using the power spectrum alone. Reconstruction noise corresponding to an AdvACT-like survey is included. The contours are scaled to AdvACT sky coverage of 20,000 deg$^2$.](plot/plot_contour_noisy_comb_clough.pdf){width="48.00000%"}
Fig. \[fig:contour\_noisy\] shows contours derived using the reconstructed, noisy $\kappa$ maps. We show results for three different filters: Gaussian windows of 1.0 and 5.0 arcmin, and the Wiener filter. The 1.0 arcmin contour is the weakest of the three, as noise dominates at this scale. The 5.0 arcmin-smoothed and Wiener-filtered contours show similar constraining power. Using the PDF or peak counts alone, we do not achieve better constraints than using the power spectrum alone, but the parameter degeneracy directions of the statistics are slightly different. This is likely because the PDF and peak counts probe nonlinear structure, and thus have a different dependence on the combination $\sigma_8(\Omega_m)^\gamma$ than the power spectrum does, where $\gamma$ specifies the degeneracy direction.
Combination $\Delta \Omega_m$ $\Delta \sigma_8 $
------------------ ------------------- --------------------
PS only 0.0065 0.0044
PDF + Peaks 0.0076 0.0035
PS + PDF + Peaks 0.0045 0.0030
: \[tab: constraints\] Marginalized constraints on $\Omega_m$ and $\sigma_8$ for an AdvACT-like survey from combinations of the power spectrum (PS), PDF, and peak counts, as shown in Fig. \[fig:contour\_comb\].
The error contour derived using all three statistics is shown in Fig. \[fig:contour\_comb\], where we use the 5.0 arcmin Gaussian smoothed maps. The one-dimensional marginalized errors are listed in Table \[tab: constraints\]. The combined contour shows moderate improvement ($\approx 30\%$ smaller error contour area) compared to the power spectrum alone. The improvement is due to the slightly different parameter degeneracy directions for the statistics, which break the $\sigma_8$-$\Omega_m$ degeneracy somewhat more effectively when combined. It is worth noting that we have not included information from external probes that constrain $\Omega_m$ (e.g., baryon acoustic oscillations), which can further break the $\Omega_m$-$\sigma_8$ degeneracy.
Conclusion {#sec:conclude}
==========
In this paper, we use $N$-body ray-tracing simulations to explore the additional information in CMB lensing maps beyond the traditional power spectrum. In particular, we investigate the one-point PDF and peak counts (local maxima in the convergence map). We also apply realistic reconstruction procedures that take into account primordial CMB fluctuations and instrumental noise for an AdvACT-like survey, with sky coverage of 20,000 deg$^2$, noise level 6 $\mu$K-arcmin, and $1.4$ arcmin beam. Our main findings are:
1. We find significant deviations of the PDF and peak counts of $N$-body-derived $\kappa$ maps from those of Gaussian random field $\kappa$ maps, both in the noiseless and noisy reconstructed cases (see Figs. \[fig:noiseless\_PDF\], \[fig:noiseless\_pk\], \[fig:noisyPDF\], and \[fig:noisypk\]). For AdvACT, we forecast the detection of non-Gaussianity to be $\approx$ 9$\sigma$ (PDF) and 6$\sigma$ (peak counts), after accounting for the non-Gaussianity of the reconstruction noise itself. The non-Gaussianity of the noise has been neglected in previous estimates, but we show that it is non-negligible (Fig. \[fig:recon\]).
2. We confirm that current analytic forecasts for CMB lensing power spectrum constraints are accurate when confronted with constraints derived from our $N$-body pipeline that include the full non-Gaussian covariance (Fig. \[fig:contour\_fisher\]).
3. An improvement of $\approx 30\%$ in the forecasted $\Omega_m$-$\sigma_8$ error contour is seen when the power spectrum is combined with PDF and peak counts (assuming AdvACT-level noise), compared to using the power spectrum alone. The covariance between the power spectrum and the other two non-Gaussian statistics is relatively small (with cross-covariance $< 20\%$ of the diagonal components), meaning that the latter two statistics are complementary to the power spectrum.
4. For noiseless $\kappa$ maps (i.e., ignoring primordial CMB fluctuations and instrumental/atmospheric noise), a smaller smoothing kernel can help extract the most information from the PDF and peak counts (Fig. \[fig:contour\_noiseless\]). For example, peak counts of 1.0 arcmin Gaussian smoothed maps alone can provide equally tight constraints as from the power spectrum.
5. We find non-zero skewness in the PDF and peak counts of reconstructed GRFs, which is absent from the input noiseless GRFs by definition. This skewness is the result of the quadratic estimator used for CMB lensing reconstruction from the temperature or polarization maps. Future forecasts for non-Gaussian CMB lensing statistics should include these effects, as we have here, or else the expected signal-to-noise could be overestimated.
In this work, we have only considered temperature-based reconstruction estimators, but in the near future polarization-based estimators will have equal (and, eventually, higher) signal-to-noise. Moreover, the polarization estimators allow the lensing field to be mapped out to smaller scales, which suggests that they could be even more useful for non-Gaussian statistics.
In summary, there is rich information in CMB lensing maps that is not captured by two-point statistics, especially on small scales where nonlinear evolution is significant. In order to extract this information from future data from ongoing CMB Stage-III and near-future Stage-IV surveys, such as AdvACT, SPT-3G [@Benson2014], Simons Observatory[^10], and CMB-S4 [@Abazajian2015], non-Gaussian statistics must be studied and modeled carefully. We have shown that non-Gaussian statistics will already contain useful information for Stage-III surveys, which suggests that their role in Stage-IV analyses will be even more important. The payoff of these efforts could be significant, such as a quicker route to a neutrino mass detection.
We thank Nick Battaglia, Francois Bouchet, Simone Ferraro, Antony Lewis, Mark Neyrinck, Emmanuel Schaan, and Marcel Schmittfull for useful discussions. We acknowledge helpful comments from an anonymous referee. JL is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1602663. This work is partially supported by a Junior Fellowship from the Simons Foundation to JCH and a Simons Fellowship to ZH. BDS is supported by a Fellowship from the Miller Institute for Basic Research in Science at the University of California, Berkeley. This work is partially supported by NSF grant AST-1210877 (to ZH) and by a ROADS award at Columbia University. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF grant ACI-1053575. Computations were performed on the GPC supercomputer at the SciNet HPC consortium. SciNet is funded by the Canada Foundation for Innovation under the auspices of Compute Canada, the Government of Ontario, the Ontario Research Fund — Research Excellence, and the Univ. of Toronto.
[^1]: For example, higher order moments [@Bernardeau1997; @Hui1999; @vanWaerbeke2001; @Takada2002; @Zaldarriaga2003; @Kilbinger2005; @Petri2015], three-point functions [@Takada2003; @Vafaei2010], bispectra [@Takada2004; @DZ05; @Sefusatti2006; @Berge2010], peak counts , Minkowski functionals [@Kratochvil2012; @Shirasakiyoshida2014; @Petri2013; @Petri2015], and Gaussianized power spectrum [@Neyrinck2009; @Neyrinck2014; @Yu2012].
[^2]: <http://wwwmpa.mpa-garching.mpg.de/gadget/>
[^3]: <https://pypi.python.org/pypi/lenstools/>
[^4]: <http://camb.info/>
[^5]: While the number of potential planes could be a limiting factor in our sensitivity to these effects, we note that our procedure uses $\approx 40$-70 planes for each ray-tracing calculation (depending on the cosmology), which closely matches the typical number of lensing deflections experienced by a CMB photon.
[^6]: We find that this filter is necessary for numerical stability (and also because our simulated $\kappa$ maps do not recover all structure on these small scales, as seen in Fig. \[fig:theory\_ps\]), but our results are unchanged for moderate perturbations to the filter scale.
[^7]: Due to our limited number of models, linear interpolation is slightly more vulnerable to sampling artifacts than the Clough-Tocher method, because the linear method only utilizes the nearest points in parameter space. The Clough-Tocher method also uses the derivative information. Therefore, we choose Clough-Tocher for our analysis.
[^8]: We note that the signal-to-noise ratios predicted here are comparable to the $\approx 7\sigma$ bispectrum prediction that would be obtained by rescaling the SPT-3G result from Table I of Ref. [@Pratten2016] to the AdvACT sky coverage (which is a slight overestimate given AdvACT’s higher noise level). The higher significance for the PDF found here could be due to several reasons: (i) additional contributions to the signal-to-noise for the PDF from higher-order polyspectra beyond the bispectrum; (ii) inaccuracy of the nonlinear fitting formula used in Ref. [@Pratten2016] on small scales, as compared to the N-body methods used here; (iii) reduced cancellation between the nonlinear growth and post-Born effects in higher-order polyspectra (for the bispectrum, these contributions cancel to a large extent, reducing the signal-to-noise [@Pratten2016]).
[^9]: Note that measuring the PDF or peak counts for different smoothing scales can recover additional scale-dependent information as well.
[^10]: <http://www.simonsobservatory.org/>
<UIView; frame = (0 0; 1112 834); autoresize = W+H; layer = <CALayer>>
| <UILabel; frame = (528.333 20; 55.6667 20.3333); text = 'What's'; userInteractionEnabled = NO; layer = <_UILabelLayer>>
| <UILabel; frame = (0 417; 25 20.3333); text = 'the'; userInteractionEnabled = NO; layer = <_UILabelLayer>>
| <UILabel; frame = (1073 417; 39 20.3333); text = 'point'; userInteractionEnabled = NO; layer = <_UILabelLayer>>
| <UILabel; frame = (552.333 816; 7.66667 18); text = '?'; userInteractionEnabled = NO; layer = <_UILabelLayer>>
---
abstract: 'We are carrying out a search for all radio loud Active Galactic Nuclei observed with [*XMM-Newton*]{}, including targeted and field sources to perform a multi-wavelength study of these objects. We have cross-correlated the Verón-Cetty & Verón (2010) catalogue with the [*XMM-Newton*]{} Serendipitous Source Catalogue (2XMMi) and found around 4000 sources. A literature search provided radio, optical, and X-ray data for 403 sources. This poster summarizes the first results of our study.'
---
Introduction and Sample
=======================
![Radio loud (green) and radio quiet (blue and red) AGN. Rx is the ratio between 5 GHz and 2-10 keV emission. R is the ratio between 5 GHz and B-band emission. Boundaries from Panessa et al. 2007. Blue points are sources classified as radio quiet by both the R and Rx parameters. Red points are sources classified as radio quiet by the R parameter and radio loud by the Rx parameter. []{data-label="loudquiet"}](logr_logrx.ps){width="3.4in"}
Bianchi et al. (2009a,b) presented the Catalogue of Active Galactic Nuclei (AGN) in the [*XMM-Newton*]{} Archive (CAIXA). They focused on the radio-quiet, X-ray unobscured (NH $< 2\times\ 10^{22}$ cm$^{-2}$) AGN observed by XMM-Newton in targeted observations. We are carrying out a similar multiwavelength study, for both targeted and field radio-loud AGN observed by [*XMM Newton*]{}. We cross-correlated the Verón-Cetty & Verón (2010) catalogue (Quasars and Active Galactic Nuclei, 13th edition) with the [*XMM-Newton*]{} Serendipitous Source Catalogue (2XMMi, Watson et al., 2009) Third Data Release, and obtained a list of around 4000 sources. However, only 10% of the sources have published optical and radio data. Our sample consists of all AGN (403 total, Figures \[loudquiet\] and \[lxlb\]) with available X-ray (2-10 keV), optical (B-band) and radio (5 GHz) data.
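The two radio-loudness parameters of Fig. \[loudquiet\] follow directly from the three luminosities. A minimal sketch, where the threshold values and the toy luminosities are illustrative placeholders (not the actual Panessa et al. 2007 boundaries or measured fluxes):

```python
import numpy as np

def radio_loudness(l_5ghz, l_b, l_x, log_r_cut=1.0, log_rx_cut=-4.5):
    """Compute log R = log(L_5GHz / L_B) and log Rx = log(L_5GHz / L_2-10keV)
    and flag radio loudness by each parameter separately. The two cut values
    are placeholders, not the Panessa et al. (2007) boundaries."""
    log_r = np.log10(l_5ghz / l_b)
    log_rx = np.log10(l_5ghz / l_x)
    return log_r, log_rx, log_r > log_r_cut, log_rx > log_rx_cut

# Hypothetical source that is radio loud by both parameters:
log_r, log_rx, loud_r, loud_rx = radio_loudness(1.0e41, 1.0e38, 1.0e43)
```

Sources where the two flags disagree (red points in Fig. \[loudquiet\]) are exactly the intermediate population discussed in the text.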
![X-ray versus B-band luminosities. For optical luminosities higher than 10$^{-6}$ erg/s/Hz radio loud (green) AGN are brighter in X-rays than radio quiet (red and blue). This effect is stronger at higher luminosities (10$^{-5}$-10$^{-4}$ erg/s/Hz), where radio loud AGN deviate from the low luminosities correlation. X-ray emission in radio loud sources could have larger contributions from the jet.[]{data-label="lxlb"}](lumin210_luminb.ps){width="3.4in"}
First results and ongoing work
==============================
Radio-loud sources show a jet contribution to the optical and X-ray emission, and are brighter in X-rays than radio-quiet sources. The optical and X-ray emission is AGN dominated, with only a small contribution from the host. For optical luminosities higher than 10$^{-6}$ erg/s/Hz, radio-loud AGN are brighter in X-rays than radio-quiet ones. This effect increases at higher luminosities ($10^{-5}-10^{-4}$ erg/s/Hz), where radio-loud AGN deviate from the low-luminosity correlation. X-rays in radio-loud sources could have a larger contribution from the jet. The sample seems to be missing faint, radio-loud AGN, although at this point it is not clear whether this is due to selection or astrophysical effects.
While X-rays in radio-loud AGN seem to come mainly from jets, other X-ray emission mechanisms (e.g., ADAF) are also being studied. A complete optical and X-ray spectral analysis, also including the 0.2-2 keV band, will shed light on the origin of the X-rays in these sources. We are currently studying the sample properties according to different classifications (Seyfert/QSO or FR I/FR II morphologies), and will include IR data when available, with the goal of carrying out a systematic analysis in as many wavelengths as possible.
, S., [Bonilla]{}, N. F., [Guainazzi]{}, M., [Matt]{}, G., & [Ponti]{}, G. 2009, [A&A]{}, 501, 915
, S., [Guainazzi]{}, M., [Matt]{}, G., [Fonseca Bonilla]{}, N., & [Ponti]{}, G. 2009, [A&A]{}, 495, 421
, F., [Barcons]{}, X., & [Bassani]{}, L., [et al.]{} 2007, [A&A]{}, 467, 519
, M. & [V[é]{}ron]{}, P. 2010, [A&A]{}, 518, A10
, M. G., [Schr[ö]{}der]{}, A. C., [Fyfe]{}, D., [et al.]{} 2009, [A&A]{}, 493, 339
Senior Precision Rotary Microtome (latest Spencer 320 Type), which we trade, supply, export, and manufacture, is used for precise sectioning of tissues down to a thickness of 1 micron. Its interior mechanism rests on a heavy cast-iron base that is covered with a full-swing protective cover for easy cleaning and lubrication. The Senior Precision Rotary Microtome (latest Spencer 320 Type) is known for its independent feed mechanism with automatic safety device, a universal knife holder with lateral movement permitting use of the entire knife edge, and a universal vice-type object holder for accurate centering of the specimen. It is fabricated from fine-grain steel tested for micro-structure, heat treated for optimum rigidity and sharpness, and has a long service life.
---
author:
- 'A. Gallenne'
- 'A. Mérand'
- 'P. Kervella'
- 'O. Chesneau'
- 'J. Breitfelder'
- 'W. Gieren'
bibliography:
- './bibliographie.bib'
date: 'Received July 11, 2013; accepted August 30, 2013'
subtitle: 'IV. T Monocerotis and X Sagittarii from mid-infrared interferometry with VLTI/MIDI[^1]'
title: Extended envelopes around Galactic Cepheids
---
[We study the close environment of nearby Cepheids using high spatial resolution observations in the mid-infrared with the VLTI/MIDI instrument, a two-beam interferometric recombiner.]{} [We obtained spectra and visibilities for the classical Cepheids X Sgr and T Mon. We fitted the MIDI measurements, supplemented by $B, V, J, H, K$ literature photometry, with the numerical transfer code `DUSTY` to determine the dust shell parameters. We used a typical dust composition for circumstellar environments.]{} [We detect an extended dusty environment in the spectra and visibilities for both stars, although T Mon might suffer from thermal background contamination. We attribute this to the presence of a circumstellar envelope (CSE) surrounding the Cepheids. This is optically thin for X Sgr ($\tau_\mathrm{0.55\mathrm{\mu m}} = 0.008$), while it appears to be thicker for T Mon ($\tau_\mathrm{0.55\mathrm{\mu m}} = 0.15$). They are located at about 15–20 stellar radii. Following our previous work, we derived a likely period-excess relation in the VISIR PAH1 filter, $ f_\mathrm{8.6\,\mu m}$\[%\]$ = 0.81(\pm0.04)P$\[day\]. We argue that the impact of CSEs on the mid-IR period–luminosity (P–L) relation cannot be negligible because they can bias the Cepheid brightness by up to about 30%. For the $K$-band P–L relation, the CSE contribution seems to be lower ($< 5$%), but the sample needs to be enlarged to firmly conclude that the impact of the CSEs is negligible in this band.]{}
Introduction
============
A significant fraction of classical Cepheids exhibits an infrared excess, that is probably caused by a circumstellar envelope (CSE). The discovery of the first CSE around the Cepheid $\ell$ Car made use of near- and mid-infrared interferometric observations [@Kervella_2006_03_0]. Similar detections were subsequently reported for other Cepheids [@Merand_2007_08_0; @Merand_2006_07_0; @Barmby_2011_11_0; @Gallenne_2011_11_0], leading to the hypothesis that maybe all Cepheids are surrounded by a CSE. These envelopes are interesting from several aspects. Firstly, they might be related to past or ongoing stellar mass loss and might be used to trace the Cepheid evolution history. Secondly, their presence might induce a bias to distance determinations made with Baade-Wesselink methods and bias the calibration of the IR period–luminosity (P–L) relation. Our previous works [@Gallenne_2011_11_0; @Merand_2007_08_0; @Merand_2006_07_0; @Kervella_2006_03_0] showed that these CSEs have an angular size of a few stellar radii and a flux contribution to the photosphere ranging from a few percent to several tens of percent. While in the near-IR the CSE flux emission might be negligible compared with the photospheric continuum, this is not the case in the mid- and far-IR, where the CSE emission dominates [@Gallenne_2011_11_0; @Kervella_2009_05_0].
Interestingly, a correlation starts to appear between the pulsation period and the CSE brightness in the near- and mid-IR bands: long-period Cepheids seem to show relatively brighter CSEs than short-period Cepheids, indicating that the mass-loss mechanism could be linked to stellar pulsation [@Gallenne_2011_11_0; @Merand_2007_08_0]. Cepheids with long periods have higher masses and larger radii, therefore if we assume that the CSE IR brightness is an indicator of the mass-loss rate, this would mean that heavier stars experience higher mass-loss rates. This behavior could be explained by the stronger velocity fields in longer-period Cepheids and shock waves at certain pulsation phases [@Nardetto_2008_10_0; @Nardetto_2006_07_0]. Studying this correlation between the pulsation period and the IR excess is vital for calibrating relations between the Cepheids’ fundamental parameters and their pulsation periods. If CSEs substantially influence the observational estimation of these fundamental parameters (luminosity, mass, radius, etc.), such a correlation will lead to a biased calibration. It is therefore essential to continue studying and characterizing these CSEs and to increase the statistical sample to confirm their properties.
We present new spatially resolved VLTI/MIDI interferometric observations of the classical Cepheids X Sgr (HD 161592, $P = 7.01$ days) and T Mon (HD 44990, $P = 27.02$ days). The paper is organized as follows. Observations and data reduction procedures are presented in Sect. \[section\_\_observation\]. The data modeling and results are reported in Sect. \[section\_\_cse\_modeling\]. In Sect. \[section\_\_period\_excess\_relation\] we address the possible relation between the pulsation period and the IR excess. We then discuss our results in Sect. \[section\_\_discussion\] and conclude in Sect. \[section\_\_conclusion\].
VLTI/MIDI observations {#section__observation}
======================
Observations
------------
The observations were carried out in 2008 and 2009 with the VLT Unit Telescopes and the MIDI instrument [@Leinert_2003__0]. MIDI combines the coherent light coming from two telescopes in the $N$ band ($\lambda = 8-13\,\mu$m) and provides the spectrum and spectrally dispersed fringes with two possible spectral resolutions ($R = \Delta \lambda / \lambda = 30, 230$). For the observations presented here, we used the prism that provides the lowest spectral resolution. During the observations, the secondary mirrors of the two Unit Telescopes (UT1-UT4) were chopped with a frequency of 2 Hz to properly sample the sky background. MIDI has two photometric calibration modes: HIGH\_SENS, in which the flux is measured separately after the interferometric observations, and SCI\_PHOT, in which the photometry is measured simultaneously with the interference fringes. The observations reported here were obtained in HIGH\_SENS mode because of the relatively low thermal IR brightness of our Cepheids.
To remove the instrumental and atmospheric signatures, calibrators of known intrinsic visibility were observed immediately before or after the Cepheid. They were chosen from the @Cohen_1999_04_0 catalog, and are almost unresolved at our projected baselines ($V > 95$%, except for HD 169916, for which $V = 87$%). The systematic uncertainty associated with their a priori angular diameter error bars is negligible compared with the typical precision of the MIDI visibilities (10–15%). The uniform-disk angular diameters for the calibrators as well as the corresponding IRAS 12$\mu$m flux and the spectral type are given in Table \[table\_\_calibrators\].
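The quoted calibrator visibilities follow from the standard uniform-disk model, $V = 2 J_1(x)/x$ with $x = \pi B \theta / \lambda$. A short sketch (numpy/scipy; the 110 m baseline and 10 $\mu$m wavelength are round illustrative values for the UT1-UT4 configuration):

```python
import numpy as np
from scipy.special import j1

def v_uniform_disk(theta_mas, baseline_m, lam_m):
    """Visibility amplitude of a uniform disk: V = 2 J1(x)/x,
    with x = pi * B * theta / lambda."""
    theta_rad = theta_mas * np.pi / (180.0 * 3600.0 * 1000.0)  # mas -> rad
    x = np.pi * baseline_m * theta_rad / lam_m
    return 2.0 * j1(x) / x

# HD 49293 (theta_UD = 1.91 mas) on a ~110 m baseline at 10 microns is
# nearly unresolved, while the larger HD 169916 (4.24 mas) is less so.
v_small = v_uniform_disk(1.91, 110.0, 10.0e-6)
v_large = v_uniform_disk(4.24, 110.0, 10.0e-6)
```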
The log of the MIDI observations is given in Table \[table\_\_journal\]. Observations \#1, \#2, and \#5–\#10 were not used because of low interferometric or/and photometric flux, possibly due to a temporary burst of very bad seeing or thin cirrus clouds.
-------- ---------------------- ----------------- -------------------- ----------
HD $\theta_\mathrm{UD}$ $f_\mathrm{W3}$ $f_\mathrm{Cohen}$ Sp. Type
(mas) (Jy) (Jy)
49293 $1.91 \pm 0.02$ $4.3 \pm 0.1$ $4.7 \pm 0.1$ K0IIIa
48433 $2.07 \pm 0.03$ $6.5 \pm 0.1$ $5.5 \pm 0.1$ K0.5III
168592 $2.66 \pm 0.05$ $8.3 \pm 0.1$ $7.4 \pm 0.1$ K4-5III
169916 $4.24 \pm 0.05$ $25.9 \pm 0.4$ $21.1 \pm 0.1$ K1IIIb
-------- ---------------------- ----------------- -------------------- ----------
: Properties of our calibrator stars.
\[table\_\_calibrators\]
---- ----------- -------- ---------- ---------------- ----------- ------
\# MJD $\phi$ Target $B_\mathrm{p}$ PA AM
(m) ($\degr$)
1 54 813.26 0.12 T Mon 129.6 63.7 1.20
2 54 813.27 0.12 T Mon 130.0 63.5 1.21
3 54 813.27 130.0 63.4 1.15
4 54 813.29 0.12 T Mon 130.0 62.6 1.26
5 54 813.30 0.12 T Mon 129.6 62.2 1.29
6 54 813.31 130.0 61.6 1.40
7 54 842.10 0.18 T Mon 108.7 62.6 1.26
8 54 842.11 HD 49293 110.1 59.7 1.21
9 54 842.13 0.18 T Mon 118.8 64.4 1.19
10 54 842.14 HD 49293 120.8 62.1 1.14
11 54 900.07 0.33 T Mon 128.5 61.4 1.33
12 54 900.09 HD 48433 128.8 59.9 1.49
13 54 905.39 126.6 39.3 1.18
14 54 905.40 0.76 X Sgr 126.4 49.2 1.06
15 54 905.41 122.6 45.7 1.11
16 54 905.43 0.76 X Sgr 129.4 55.3 1.02
---- ----------- -------- ---------- ---------------- ----------- ------
: Log of the observations.
\[table\_\_journal\]
Data reduction
--------------
To reduce these data we used two different reduction packages, MIA and EWS[^2]. MIA, developed at the Max-Planck-Institut für Astronomie, implements an incoherent method in which the power spectral density function of each scan is integrated to obtain the squared visibility amplitudes, which are then integrated over time. EWS, developed at the Leiden Observatory, is a coherent analysis that first aligns the interferograms before co-adding them, which results in a better signal-to-noise ratio of the visibility amplitudes. The data reduction results obtained with the MIA and EWS packages agree well within the uncertainties.
The choice of the detector mask for extracting the source spectrum and estimating the background can be critical for the data quality. The latest version of the software uses adaptive masks, where shifts in positions and the width of the mask can be adjusted by fitting the mask for each target. To achieve the best data quality, we first used MIA to fit a specific mask for each target (also allowing a visual check of the data and the mask), and then applied it in the EWS reduction.
Photometric templates from @Cohen_1999_04_0 were employed to perform an absolute calibration of the flux density. We finally averaged the data for a given target. This is justified because the MIDI uncertainties are on the order of 7-15% [@Chesneau_2007_10_0], and the projected baseline and PA are not significantly different for separate observing dates. The uncertainties of the visibilities are mainly dominated by the photometric calibration errors, which are common to all spectral channels; we accordingly chose the standard deviation over a $1\,\mu$m range as error bars.
Flux and visibility fluctuations between datasets {#subsection__flux_and_visibility_fluctuations_between_datasets}
-------------------------------------------------
MIDI is strongly sensitive to the atmospheric conditions and can misestimate the thermal flux density and visibility. This can be even worse for datasets combined from different observing nights, as is the case for T Mon here. Another source of variance between different datasets can arise from the calibration process, that is, from a poor absolute flux and visibility calibration. In our case, each Cepheid observation was calibrated with a different calibrator (i.e., \#3-4 and \#11-12 for T Mon and \#13-14 and \#15-16 for X Sgr), which enabled us to check the calibrated data.
To quantify the fluctuations, we estimated the spectral relative variation of the flux density and visibility, that is, the ratio of the standard deviation to the mean value at each wavelength between two different calibrated observations. For X Sgr, the average variation (over all $\lambda$) is lower than 5% in the spectral flux and lower than 1.5% in the visibility. This is slightly higher for T Mon because the data were acquired on separate nights; we measured an average variation lower than 8% in the spectral flux and lower than 4% in the visibility.
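The relative-variation metric described above can be sketched as follows. The spectra below are synthetic placeholders (not the actual MIDI data), chosen only to illustrate the computation:

```python
import numpy as np

def relative_variation(spec_a, spec_b):
    """Average over wavelength of the per-channel ratio of the standard
    deviation to the mean of two calibrated spectra, in percent."""
    stacked = np.vstack([spec_a, spec_b])
    ratio = np.std(stacked, axis=0) / np.mean(stacked, axis=0)
    return 100.0 * np.mean(ratio)

# Two synthetic calibrated spectra differing by a 3% scale factor:
wavelength = np.linspace(8.0, 12.0, 50)   # microns
flux_1 = 5.0 + 0.1 * wavelength           # Jy, illustrative only
flux_2 = 1.03 * flux_1
variation = relative_variation(flux_1, flux_2)   # ~1.5%
```

A pure 3% flux-scale offset between two observations yields an average relative variation of about 1.5%, well below the 5% level quoted for X Sgr.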
Circumstellar envelope modeling {#section__cse_modeling}
===============================
Visibility and spectral energy distribution {#subsection__visibility_and_sed}
-------------------------------------------
The averaged calibrated visibility and spectral energy distribution (SED) are shown with blue dots in Figs. \[graph\_\_visibility\_xsgr\] and \[graph\_\_visibility\_tmon\]. The quality of the data in the window $9.4 < \lambda < 10\,\mu$m deteriorates significantly because of the water and ozone absorption in the Earth’s atmosphere. Wavelengths longer than $12\,\mathrm{\mu m}$ were not used because of low sensitivity. We therefore only used the spectra outside these wavelengths.
The photosphere of the stars is considered to be unresolved by the interferometer ($V > 98\,\%$), therefore the visibility profile is expected to be equal to unity for all wavelengths. However, we noticed a decreasing profile for both stars. This behavior is typical of emission from a circumstellar envelope (or disk), in which the size of the emitting region grows with wavelength. This effect can be interpreted as emission at longer wavelengths coming from cooler material that is located at larger distances from the Cepheid than the warmer material that emits at shorter wavelengths. @Kervella_2009_05_0 previously observed the same trend for $\ell$ Car and RS Pup.
Assuming that the CSE is resolved by MIDI, the flux contribution of the dust shell is estimated to be about 50% at $10.5\,\mathrm{\mu m}$ for T Mon and 7% for X Sgr. It is worth mentioning that the excess is significantly higher for the longer-period Cepheid, T Mon, adding further evidence for the previously suspected correlation between the pulsation period and the CSE brightness [@Gallenne_2011_11_0; @Merand_2007_08_0]. The CSE is also detected in the SED, with a contribution progressively increasing with wavelength. Compared with Kurucz atmosphere models [@Castelli_2003__0 solid black curve in Fig. \[graph\_\_visibility\_xsgr\] and \[graph\_\_visibility\_tmon\]], we notice that the CSE contribution becomes significant around $8\,\mathrm{\mu m}$ for X Sgr, while for T Mon it seems to start at shorter wavelengths. The Kurucz models were interpolated at $T_\mathrm{eff} = 5900$K, $\log g = 2$, and $V_\mathrm{t} = 4\,\mathrm{km~s^{-1}}$ for X Sgr [@Usenko_2012__0]. For T Mon, which was observed at two different pulsation phases, the stellar temperature varies only from $\sim 5050$K ($\phi = 0.33$) to $\sim 5450$K ($\phi = 0.12$); we therefore chose the stellar parameters $T_\mathrm{eff} = 5200$K, $\log g = 1$, and $V_\mathrm{t} = 4\,\mathrm{km~s^{-1}}$ [@Kovtyukh_2005_01_0] for an average phase of 0.22. This has an effect of a few percent on the fitted parameters (see Sect. \[subsubsection\_\_tmon\]).
Given the limited amount of data and the lack of features that could be easily identified (apart from the alumina shoulder, see below), the investigation of the dust content and of the geometrical properties of the dust grains is limited by a high level of degeneracy. We therefore restricted the range of dust compounds to refractory species and those most frequently encountered around evolved stars.
The wind launched by Cepheids is not expected to be enriched compared with the native composition of the star. Therefore, the formation of carbon grains in the vicinity of these stars is highly improbable. The polycyclic aromatic hydrocarbons (PAHs) detected around some Cepheids by Spitzer/IRAC and MIPS have an interstellar origin and result from a density enhancement at the interface between the wind and the interstellar medium that leads to a bow shock [@Marengo_2010_12_0]. It is noteworthy that no signature of PAHs is observed in the MIDI spectrum or the MIDI visibilities (see Figs. \[graph\_\_visibility\_xsgr\] and \[graph\_\_visibility\_tmon\]).
The sublimation temperature of iron is higher than that of alumina and rapidly increases with density. Hence, iron is the dust species most likely to form in dense (shocked) regions with temperatures higher than 1500 K [@Pollack_1994_02_0]. Moreover, alumina has a high sublimation temperature in the range of 1200-2000K (depending on the local density), and its presence is generally inferred from a shoulder of emission between 10 and $15\,\mathrm{\mu m}$ [@Chesneau_2005_06_0; @Verhoelst_2009_04_0]. Such a shoulder is identified in the spectrum and visibility of X Sgr, suggesting that this compound is definitely present. Yet, it must be kept in mind that the low aluminum abundance at solar metallicity prevents the formation of a large amount of this type of dust. No marked shoulder is observed in the spectrum and visibilities of T Mon, which is indicative of a lower content. Silicates are easily identified owing to their signature at $10\,\mathrm{\mu m}$. This signature is not clearly detected in the MIDI data.
Radiative transfer code: `DUSTY`
--------------------------------
To model the thermal-IR SED and visibility, we performed radiative transfer calculations for a spherical dust shell. We used the public-domain simulation code `DUSTY` [@Ivezic_1997_06_0; @Ivezic_1999_11_0], which solves the radiative transfer problem in a circumstellar dusty environment by analytically integrating the radiative-transfer equation in planar or spherical geometries. The method is based on a self-consistent equation for the spectral energy density, including dust scattering, absorption, and emission. To solve the radiative transfer problem, the following parameters for the central source and the dusty region are required:
- the spectral shape of the central source’s radiation,
- the dust grain properties: chemical composition, grain size distribution, and dust temperature at the inner radius,
- the density distribution of the dust and the relative thickness, and
- the radial optical depth at a reference wavelength.
`DUSTY` then provides the SED, the surface brightness at specified wavelengths, the radial profiles of density, optical depth and dust temperature, and the visibility profile as a function of the spatial frequency for the specified wavelengths.
Single dust shell model {#subsection__single_dust_shell_model}
-----------------------
We performed a simultaneous fit of the MIDI spectrum and visibilities with various `DUSTY` models to check the consistency with our data. The central source was represented with Kurucz atmosphere models [@Castelli_2003__0] with the stellar parameters listed in Sect. \[subsection\_\_visibility\_and\_sed\]. In the absence of strong dust features, we focused on typical dust species encountered in circumstellar envelopes and according to the typical abundances of Cepheid atmospheres, that is, amorphous alumina [Al$_2$O$_3$ compact, @Begemann_1997_02_0], iron [Fe, @Henning_1995_07_0], warm silicate [W-S, @Ossenkopf_1994_11_0], olivine [MgFeSiO$_4$, @Dorschner_1995_08_0], and forsterite [Mg$_2$SiO$_4$, @Jager_2003_09_0]. We present in Fig. \[graph\_\_dust\_efficiency\] the optical efficiency of these species for the MIDI wavelength region. We see in this plot for instance that the amorphous alumina is optically more efficient around $11\,\mathrm{\mu m}$. We also notice that forsterite, olivine, and warm silicate have a similar optical efficiency, but as we cannot differentiate these dust species with our data, we decided to use warm silicates only.
We used a grain size distribution following the standard Mathis-Rumpl-Nordsieck (MRN) relation [@Mathis_1977_10_0], that is, $n(a) \propto a^{-3.5}$ for $0.005 \leqslant a \leqslant 0.25\,\mathrm{\mu m}$. We chose a spherical density distribution in the shell following a radiatively driven wind, because Cepheids are giant stars and might lose mass via stellar winds [@Neilson_2008_09_0]. In this case, `DUSTY` computes the density structure by solving the hydrodynamics equations, coupled to the radiative transfer equations. The shell thickness is the only input parameter required. It is worth mentioning that we do not know the dust density profile in the Cepheid outflow, and we chose the hydrodynamic calculation in `DUSTY` as a reasonable assumption.
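The steep MRN exponent implies that the size distribution is heavily weighted toward the smallest grains. A minimal numerical check of the mean grain radius under $n(a) \propto a^{-3.5}$, using the size bounds quoted above:

```python
# Mean grain radius <a> for an MRN distribution n(a) ∝ a^(-3.5)
# between 0.005 and 0.25 microns: <a> = ∫ a·n(a) da / ∫ n(a) da.
a_min, a_max, q = 0.005, 0.25, -3.5

def power_integral(p):
    # Closed form of ∫ a^(q+p) da over [a_min, a_max]; q + p + 1 != 0 here.
    e = q + p + 1.0
    return (a_max**e - a_min**e) / e

mean_a = power_integral(1) / power_integral(0)   # ≈ 0.0083 micron
```

The mean radius lands very close to the lower size cutoff, as expected for such a steep power law.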
For both stars, we also added $B, V, J, H$ and $K$ photometric light curves from the literature to our mid-IR data to better constrain the stellar parameters (@Moffett_1984_07_0 [@Berdnikov_2008_04_0; @Feast_2008_06_0] for X Sgr, @Moffett_1984_07_0 [@Coulson_1985__0; @Berdnikov_2008_04_0; @Laney_1992_04_0] for T Mon). To avoid phase mismatch, the curves were fitted with a cubic spline function and were interpolated at our pulsation phase. We then used these values in the fitting process. The conversion from magnitude to flux takes into account the photometric system and the filter bandpass. During the fitting procedure, all flux densities $< 3\,\mathrm{\mu m}$ were corrected for interstellar extinction $A_\lambda = R_\lambda E(B - V)$ using the total-to-selective absorption ratios $R_\lambda$ from @Fouque_2003__0 and @Hindsley_1989_06_0. The mid-IR data were not corrected for the interstellar extinction, which we assumed to be negligible.
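The phase interpolation described above can be sketched with a periodic cubic spline. The light curve below is synthetic (a sine wave standing in for the published photometry, which we do not reproduce here):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic phased light curve (phase in [0, 1], magnitude):
phase = np.linspace(0.0, 1.0, 21)
mag = 4.5 + 0.3 * np.sin(2.0 * np.pi * phase)
mag[-1] = mag[0]   # enforce exact periodicity for the spline

# Periodic boundary conditions avoid edge artifacts at phase 0/1.
spline = CubicSpline(phase, mag, bc_type='periodic')
m_interp = float(spline(0.22))   # magnitude at the observing phase
```

A periodic boundary condition is the natural choice here, since a pulsation light curve wraps around at phase 1.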
The free parameters are the stellar luminosity ($L_\star$), the dust temperature at the inner radius ($T_\mathrm{in}$), the optical depth at $0.55\,\mathrm{\mu m}$ ($\tau_\mathrm{0.55\mu m}$), and the color excess $E(B - V)$. Then we extracted from the output files of the best-fitted `DUSTY` model the shell internal diameter ($\theta_\mathrm{in}$), the stellar diameter ($\theta_\mathrm{LD}$), and the mass-loss rate $\dot{M}$. The stellar temperature of the Kurucz model ($T_\mathrm{eff}$), the shell’s relative thickness and the dust abundances were fixed during the fit. We chose $R_\mathrm{out}/R_\mathrm{in} = 500$ for the relative thickness as it is not constrained with our mid-IR data. The distance of the star was also fixed to 333.3pc for X Sgr [@Benedict_2007_04_0] and 1309.2pc for T Mon [@Storm_2011_10_0].
Results
-------
### X Sgr
The increase of the SED around $11\,\mathrm{\mu m}$ led us to investigate a CSE composed of Al$_2$O$_3$, which is optically efficient at this wavelength. After trying several dust species, we finally found a good agreement with a CSE composed of 100% amorphous alumina (model \#1 in Table \[table\_\_fit\_result\]). The fitted parameters are listed in Table \[table\_\_fit\_result\] and are plotted in Fig. \[graph\_\_visibility\_xsgr\]. However, a dust composed of 70% Al$_2$O$_3$ + 30% W-S (model \#4), or dust including some iron (model \#5), are also statistically consistent with our observations. Consequently, we chose to take as final parameters and uncertainties the average values and standard deviations (including their own statistical errors added quadratically) between models \#1, \#4, and \#5. The final adopted parameters are listed in Table \[table\_\_fit\_results\_final\]. It is worth mentioning that for these models all parameters have the same order of magnitude. The error on the stellar angular diameter was estimated from the luminosity and distance uncertainties.
The CSE of X Sgr is optically thin ($\tau_\mathrm{0.55\mu m} = 0.0079 \pm 0.0021$) and has an internal shell diameter of $\theta_\mathrm{in} = 15.6 \pm 2.9$mas. The condensation temperature we found is in the range expected for this dust composition (1200-1900K). The stellar angular diameter (and in turn the luminosity) is also consistent with the value estimated from the surface-brightness method at that pulsation phase [@Storm_2011_10_0 $1.34 \pm 0.03$mas] and agrees with the average diameter measured by @Kervella_2004_03_0 [$1.47 \pm 0.03$mas]. The relative CSE excess in the VISIR PAH1 filter of $13.3 \pm 0.5$% also agrees with the one estimated by @Gallenne_2011_11_0 [$11.7 \pm 4.7\,\%$]. Our derived color excess $E(B-V)$ is within $1\sigma$ of the average value $0.227 \pm 0.013$ estimated from photometric, spectroscopic, and space reddenings [@Fouque_2007_12_0; @Benedict_2007_04_0; @Kovtyukh_2008_09_0].
### T Mon {#subsubsection__tmon}
The CSE around this Cepheid has a stronger contribution than that of X Sgr. The large excess around $8\,\mathrm{\mu m}$ enables us to exclude a CSE composed of 100% Al$_2$O$_3$, because of its low efficiency in this wavelength range. We first considered dust composed of iron. However, other species probably contribute to the opacity enhancement. As shown in Fig. \[graph\_\_visibility\_tmon\], a 100% Fe dust composition is not consistent with our observations. We therefore used a mixture of W-S, Al$_2$O$_3$, and Fe to take into account the optical efficiency at all wavelengths. The best model that agrees with the visibility profile and the SED is model \#5, including 90% Fe + 5% Al$_2$O$_3$ + 5% W-S. The fitted parameters are listed in Table \[table\_\_fit\_result\] and are plotted in Fig. \[graph\_\_visibility\_tmon\]. However, because no specific dust features are present to constrain the models, other dust compositions are also consistent with the observations. We therefore chose the average values and standard deviations (including their own statistical errors added quadratically) between models \#2, \#4, and \#5 as final parameters and uncertainties. The final adopted parameters are listed in Table \[table\_\_fit\_results\_final\].
The choice of a stellar temperature at $\phi = 0.33$ or 0.12 in the fitting procedure (instead of an average pulsation phase, as cited in Sect. \[subsection\_\_visibility\_and\_sed\]) changes the derived parameters by at most 10% (the variation of the temperature is lower in the mid-IR). To be conservative, we quadratically added this relative error to all parameters of Table \[table\_\_fit\_results\_final\].
The CSE of T Mon appears to be thicker than that of X Sgr, with $\tau_\mathrm{0.55\mu m} = 0.151 \pm 0.042$ and an internal shell diameter of $\theta_\mathrm{in} = 15.9 \pm 1.7$mas. The derived stellar diameter agrees well with the $1.01 \pm 0.03$mas estimated by @Storm_2011_10_0 [at $\phi = 0.22$]. The deduced color excess $E(B-V)$ agrees within $1\sigma$ with the average value $0.181 \pm 0.010$ estimated from photometric, spectroscopic, and space reddenings [@Fouque_2007_12_0; @Benedict_2007_04_0; @Kovtyukh_2008_09_0]. We derived a particularly high IR excess in the VISIR PAH1 filter of $87.8 \pm 9.9$%, which might make this Cepheid a special case. It is worth mentioning that we were at the sensitivity limit of MIDI for this Cepheid, and the flux might be biased by a poor subtraction of the thermal sky background. However, the clear decreasing trend in the visibility profile as a function of wavelength cannot be attributed to a background emission, and we argue that this is the signature of a CSE. In Sect. \[section\_\_discussion\] we make a comparative study to remove the thermal sky background and qualitatively estimate the unbiased IR excess.
------------------------------------ ------------------ ------------------ ---------------------- ------------------- ----------------- ---------------------- --------------------------- ---------------------- ---------- --------------------- ---- --
Model $L_\star$ $T_\mathrm{eff}$ $\theta_\mathrm{LD}$ $E(B - V)$ $T_\mathrm{in}$ $\theta_\mathrm{in}$ $\tau_\mathrm{0.55\mu m}$ $\dot{M}$ $\alpha$ $\chi^2_\mathrm{r}$ \#
$(L_\odot)$ (K) (mas) (K) (mas) ($\times10^{-3}$) ($M_\odot\,yr^{-1}$) (%)
Al$_2$O$_3$ $2151 \pm 34$ 5900 $1.24 \pm 0.08$ $0.199 \pm 0.009$ $1732 \pm 152$ 13.2 $6.5 \pm 0.8$ $5.1\times10^{-8}$ 13.0 0.78 1
W-S $2155 \pm 40$ 5900 $1.24 \pm 0.08$ $0.200 \pm 0.011$ $1831 \pm 141$ 14.7 $11.8 \pm 0.2$ $6.4\times10^{-8}$ 15.0 1.53 2
Fe $2306 \pm 154$ 5900 $1.28 \pm 0.09$ $0.230 \pm 0.031$ $1456 \pm 605$ 29.3 $8.9 \pm 4.4$ $6.2\times10^{-8}$ 9.5 4.30 3
70% Al$_2$O$_3$ + 30% W-S $2153 \pm 31$ 5900 $1.24 \pm 0.08$ $0.200 \pm 0.009$ $1519 \pm 117$ 19.6 $6.6 \pm 0.7$ $5.6\times10^{-8}$ 13.7 0.58 4
60% Al$_2$O$_3$ + 20% W-S + 20% Fe $2160 \pm 35$ 5900 $1.24 \pm 0.08$ $0.201 \pm 0.009$ $1802 \pm 130$ 13.9 $10.6 \pm 1.2$ $6.1\times10^{-8}$ 13.3 0.68 5
Fe $12~453 \pm 775$ 5200 $0.98 \pm 0.03$ $0.183 \pm 0.049$ $1190 \pm 59$ 29.4 $113 \pm 21$ $5.0\times10^{-7}$ 99.5 5.36 1
80% Fe + 10% W-S + 10% Al$_2$O$_3$ $11~606 \pm 434$ 5200 $0.94 \pm 0.03$ $0.144 \pm 0.028$ $1418 \pm 42$ 16.6 $126 \pm 13$ $4.4\times10^{-7}$ 82.2 1.52 2
90% Fe + 10% W-S $11~696 \pm 667$ 5200 $0.95 \pm 0.03$ $0.149 \pm 0.044$ $1389 \pm 54$ 18.0 $147 \pm 23$ $5.0\times10^{-7}$ 94.6 3.04 3
90% Fe + 10% Al$_2$O$_3$ $11~455 \pm 580$ 5200 $0.94 \pm 0.03$ $0.137 \pm 0.040$ $1439 \pm 49$ 15.9 $158 \pm 21$ $5.0\times10^{-7}$ 87.5 2.21 4
90% Fe + 5% Al$_2$O$_3$ + 5% W-S $11~278 \pm 597$ 5200 $0.93 \pm 0.04$ $0.125 \pm 0.042$ $1458 \pm 48$ 15.3 $170 \pm 23$ $5.2\times10^{-7}$ 93.6 2.22 5
------------------------------------ ------------------ ------------------ ---------------------- ------------------- ----------------- ---------------------- --------------------------- ---------------------- ---------- --------------------- ---- --
\[table\_\_fit\_result\]
X Sgr T Mon
---------------------------------------------- ------------------- ------------------- -- -- -- -- -- --
$L_\star$ $(L_\odot)$ $2155 \pm 58$ $11~446 \pm 1486$
$T_\mathrm{eff}$ (K) 5900 5200
$\theta_\mathrm{LD}$ (mas) $1.24 \pm 0.14$ $0.94 \pm 0.11$
$E(B - V)$ $0.200 \pm 0.032$ $0.135 \pm 0.066$
$T_\mathrm{in}$ (K) $1684 \pm 225$ $1438 \pm 166$
$\theta_\mathrm{in}$ (mas) $15.6 \pm 2.9$ $15.9 \pm 1.7$
$\tau_\mathrm{0.55\mu m}$ ($\times10^{-3}$) $7.9 \pm 2.1$ $151 \pm 42$
$\dot{M}$ ($\times10^{-8} M_\odot\,yr^{-1}$) $5.6 \pm 0.6$ $48.7 \pm 5.9$
$\alpha$ (%) $13.3 \pm 0.7$ $87.8 \pm 9.9$
: Final adopted parameters.
\[table\_\_fit\_results\_final\]
Period-excess relation {#section__period_excess_relation}
======================
@Gallenne_2011_11_0 presented a probable correlation between the pulsation period and the CSE relative excess in the VISIR PAH1 filter. From our fitted `DUSTY` model, we estimated the CSE relative excess by integrating over the PAH1 filter profile. This provides an independent check of this correlation. X Sgr was part of the sample of @Gallenne_2011_11_0 and can be directly compared with our result, while T Mon is a new case. This correlation is plotted in Fig. \[graph\_\_excess\], with the measurements of this work as red triangles. The IR excess for X Sgr agrees very well with our previous measurements [@Gallenne_2011_11_0]. The excess for T Mon is extremely high and does not seem to follow the suspected linear correlation.
Fig. \[graph\_\_excess\] shows that longer-period Cepheids have higher IR excesses. This excess is probably linked to past or ongoing mass-loss phenomena. Consequently, this correlation shows that long-period Cepheids have higher mass-loss rates than shorter-period, less massive stars. This behavior might be explained by the stronger velocity fields in longer-period Cepheids and the presence of shock waves at certain pulsation phases [@Nardetto_2006_07_0; @Nardetto_2008_10_0]. This scenario is consistent with the theoretically predicted range, $10^{-10}$–$10^{-7} M_\odot\,yr^{-1}$, of @Neilson_2008_09_0, based on a pulsation-driven mass-loss model. @Neilson_2011_05_0 also found that a pulsation-driven mass-loss model combined with moderate convective-core overshooting provides an explanation for the Cepheid mass discrepancy, in which stellar evolution masses differ by 10-20% from stellar pulsation calculations.
We fitted the measured mid-IR excess with a linear function of the form $$f_\mathrm{8.6\,\mu m} = \alpha_\mathrm{8.6\,\mu m}P,$$ with $f$ in % and $P$ in days. We used a general weighted least-squares minimization, using the errors on each measurement as weights. We found a slope $\alpha_\mathrm{8.6\,\mu m} = 0.83 \pm 0.04\,\mathrm{\%.d^{-1}}$, including T Mon, and $\alpha_\mathrm{8.6\,\mu m} = 0.81 \pm 0.04\,\mathrm{\%.d^{-1}}$ without. The linear relation is plotted in Fig. \[graph\_\_excess\].
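A weighted least-squares fit through the origin has a simple closed form. The sketch below uses illustrative period/excess values, not the measured sample:

```python
import numpy as np

def weighted_slope(P, f, sigma):
    """Weighted least squares for f = alpha * P (no intercept),
    with weights w = 1/sigma^2:
        alpha = sum(w P f) / sum(w P^2),  err = sqrt(1 / sum(w P^2))."""
    P, f = np.asarray(P, float), np.asarray(f, float)
    w = 1.0 / np.asarray(sigma, float) ** 2
    alpha = np.sum(w * P * f) / np.sum(w * P * P)
    return alpha, np.sqrt(1.0 / np.sum(w * P * P))

# Illustrative periods (days) and excesses (%), not the published data:
alpha, err = weighted_slope([7.0, 10.0, 27.0],
                            [5.8, 8.2, 22.4],
                            [0.5, 0.7, 2.0])
```

With these placeholder values the recovered slope is close to $0.8\,\mathrm{\%.d^{-1}}$, the order of magnitude found for the real sample.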
Discussion {#section__discussion}
==========
Since the first detection around $\ell$ Car [@Kervella_2006_03_0], CSEs have been detected around many other Cepheids [@Gallenne_2011_11_0; @Merand_2007_08_0; @Merand_2006_07_0]. Our work using IR and mid-IR high-angular-resolution techniques leads to the hypothesis that all Cepheids might be surrounded by a CSE. The mechanism for their formation is still unknown, but it is very likely a consequence of mass loss during the pre-Cepheid evolution stage or during the multiple crossings of the instability strip. The period–excess relation favors the latter scenario, because long-period Cepheids have higher masses and cross the instability strip up to three times.
Other mid- and far-IR extended emissions have also been reported by @Barmby_2011_11_0 around a significant fraction of their sample (29 Cepheids), based on Spitzer telescope observations. The case of $\delta$ Cep was extensively discussed in @Marengo_2010_12_0. From IRAS observations, @Deasy_1988_04_0 also detected IR excesses and estimated mass-loss rates ranging from $10^{-10}$ to $10^{-6}M_\odot\,yr^{-1}$. The values given by our `DUSTY` models agree with this range. They are also consistent with the predicted mass-loss rates from @Neilson_2008_09_0, ranging from $10^{-10}$ to $10^{-7}M_\odot\,yr^{-1}$.
These CSEs might have an impact on the Cepheid distance scale through the photometric contribution of the envelopes. While at visible and near-IR wavelengths the CSE flux contribution might be negligible ($< 5$%), this is not the case in the mid-IR domain [see @Kervella_2013_02_0 for a more detailed discussion]. This is particularly critical because near- and mid-IR P–L relations are preferred due to the diminished impact of dust extinction. Recently, @Majaess_2013_08_0 re-examined the 3.6 and 4.5$\mathrm{\mu m}$ Spitzer observations and observed a nonlinear trend in the period-magnitude diagrams for LMC and SMC Cepheids. They found that longer-period Cepheids are slightly brighter than short-period ones. This trend is compatible with our period-excess relation observed for Galactic Cepheids. @Monson_2012_11_0 derived Galactic P–L relations at 3.6 and 4.5$\mathrm{\mu m}$ and found a strong color variation for Cepheids with $P > 10$days, but they attributed this to enhanced CO absorption at $4.5\,\mathrm{\mu m}$. From their light curves, we estimated the magnitudes expected at our observation phase for X Sgr and T Mon [using the ephemeris from @Samus_2009_01_0] to check the consistency with the values given by our `DUSTY` models (integrated over the filter bandpass). For X Sgr, our models give averaged magnitudes $m_\mathrm{3.6\,\mathrm{\mu m}} = 2.55 \pm 0.06$ and $m_\mathrm{4.5\,\mathrm{\mu m}} = 2.58 \pm 0.05$ (taking into account the 5% flux variations of Sect. \[subsection\_\_flux\_and\_visibility\_fluctuations\_between\_datasets\]), to be compared with $2.54 \pm 0.02$ and $2.52 \pm 0.02$ from @Monson_2012_11_0. For T Mon, we have $m_\mathrm{3.6\,\mathrm{\mu m}} = 2.94 \pm 0.14$ and $m_\mathrm{4.5\,\mathrm{\mu m}} = 2.94 \pm 0.14$ from the models (taking into account the 8% flux variations of Sect. \[subsection\_\_flux\_and\_visibility\_fluctuations\_between\_datasets\] and a 10% flux error for the phase mismatch), to be compared with $3.29 \pm 0.08$ and $3.28 \pm 0.05$ (with the rms between phases 0.12 and 0.33 as uncertainty). Our estimated magnitudes are consistent for X Sgr, while they differ by about $2\sigma$ for T Mon. As we describe below, we suspect a sky-background contamination in the MIDI data. The estimated excesses from the model at 3.6 and $4.5\,\mathrm{\mu m}$ are $6.0 \pm 0.5$% and $6.3 \pm 0.5$% for X Sgr, and $46 \pm 5$% and $58 \pm 6$% for T Mon (errors estimated from the standard deviation of each model). This substantial photometric contribution probably affects the Spitzer/IRAC P–L relations derived by @Monson_2012_11_0 and the calibration of the Hubble constant by @Freedman_2012_10_0.
We also compared our models with the Spitzer 5.8 and 8.0$\mathrm{\mu m}$ magnitudes of @Marengo_2010_01_0, which are available only for T Mon. However, their measurements correspond to the pulsation phase 0.65, so we have to take a phase mismatch into account. According to the light curves of @Monson_2012_11_0, the maximum amplitude decreases from 0.42mag at 3.6$\mathrm{\mu m}$ to 0.40mag at 4.5$\mathrm{\mu m}$. As the light-curve amplitude decreases with wavelength, we can safely assume a maximum amplitude at 5.8 and 8.0$\mathrm{\mu m}$ of 0.25mag. We took this value as the highest uncertainty and added it quadratically to the measurements of @Marengo_2010_01_0, which leads to $m_\mathrm{5.8\,\mathrm{\mu m}} = 3.43 \pm 0.25$ and $m_\mathrm{8.0\,\mathrm{\mu m}} = 3.32 \pm 0.25$. Integrating our models on the Spitzer filter profiles, we obtained $m_\mathrm{5.8\,\mathrm{\mu m}} = 2.85 \pm 0.14$ and $m_\mathrm{8.0\,\mathrm{\mu m}} = 2.67 \pm 0.14$, which differ by about $2\sigma = 0.5\,$mag from the empirical values at $8\,\mathrm{\mu m}$. A possible explanation of this discrepancy would be a background contamination in our MIDI measurements. Indeed, due to its faintness, T Mon is at the sensitivity limit of the instrument, and the sky background can contribute to the measured IR flux (it contributes only to the incoherent flux). Assuming that this $2\sigma$ discrepancy is due to the sky background emission, we can estimate the contribution of the CSE with the following approach. The flux measured by @Marengo_2010_01_0 corresponds to $f_\star + f_\mathrm{env}$, that is, the contribution of the star and the CSE, while MIDI measured an additional term corresponding to the background emission, $f_\star + f_\mathrm{env} + f_\mathrm{sky}$. From our derived `DUSTY` flux ratio (Table \[table\_\_fit\_results\_final\]) and the magnitude difference between MIDI and Spitzer, we have the following equations: $$\label{eq__1}
\dfrac{f_\mathrm{env} + f_\mathrm{sky}}{f_\star} = \alpha,\ \mathrm{and}$$ $$\label{eq__2}
2\sigma = -2.5\,\log \left( \dfrac{f_\star + f_\mathrm{env}}{f_\star + f_\mathrm{env} + f_\mathrm{sky}} \right),$$ where $2\sigma$ is the magnitude difference between the Spitzer and MIDI observations. Combining Eqs. \[eq\_\_1\] and \[eq\_\_2\], we estimate the real flux ratio to be $f_\mathrm{env}/f_\star \sim 19\,$%. Interestingly, this is also more consistent with the expected period-excess relation plotted in Fig. \[graph\_\_excess\], although in a different filter.
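Combining Eqs. \[eq\_\_1\] and \[eq\_\_2\] gives the closed form $f_\mathrm{env}/f_\star = (1+\alpha)\,10^{-0.4\,\Delta m} - 1$, which can be evaluated directly. The numbers below use $\alpha = 0.878$ from the final adopted parameters and the $\Delta m \approx 0.5$mag offset quoted above:

```python
def envelope_flux_ratio(alpha, delta_mag):
    """Sky-corrected CSE-to-star flux ratio, combining
    (f_env + f_sky) / f_star = alpha with the magnitude offset
    delta_mag = -2.5 log10((f_star + f_env) / (f_star + f_env + f_sky))."""
    return (1.0 + alpha) * 10.0 ** (-0.4 * delta_mag) - 1.0

# T Mon: alpha = 0.878, delta_mag ~ 0.5 mag
ratio = envelope_flux_ratio(0.878, 0.5)   # ~0.19, i.e. the ~19% in the text
```

This recovers the $\sim 19\,$% flux ratio quoted in the text once the sky-background term is removed.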
We also derived the IR excess in the $K$ band to check the possible impact on the usual P–L relation for these two stars. Our models give a relative excess of $\sim 24.3 \pm 2.7$% for T Mon, and for X Sgr we found $4.3 \pm 0.3$%. However, caution is required with the excess of T Mon, since it might suffer from sky-background contamination. Therefore, we conclude that the bias on the $K$-band P–L relation might be negligible compared with the intrinsic dispersion of the P–L relation itself.
Conclusion {#section__conclusion}
==========
Based on mid-IR observations with the MIDI instrument of the VLTI, we have detected the circumstellar envelopes around the Cepheids X Sgr and T Mon. We used the numerical radiative transfer code `DUSTY` to simultaneously fit the SED and visibility profile to determine physical parameters related to the stars and their dust shells. We confirm the IR emission previously detected by @Gallenne_2011_11_0 for X Sgr with an excess of 13.3%, and we estimate a $\sim 19\,$% excess for T Mon at $8\,\mathrm{\mu m}$.
As the investigation of the dust content and of the geometrical properties of the dust grains is limited by a high level of degeneracy, we restricted ourselves to typical dust compositions for circumstellar environments. We found optically thin envelopes with an internal dust shell radius in the range of 15-20mas. The relative CSE excess becomes significant ($> 10$%) from $8\,\mathrm{\mu m}$ onward, depending on the pulsation period, while at shorter wavelengths the photometric contribution might be negligible. Therefore, the impact on the $K$-band P–L relation is low ($\lesssim 5$%), but it is considerable for the mid-IR P–L relations [@Ngeow_2012_09_0; @Monson_2012_11_0], where the bias due to the presence of a CSE can reach more than 30%. Although still not statistically significant, we derived a linear period-excess relation, showing that longer-period Cepheids exhibit a higher IR excess than shorter-period Cepheids.
It is now necessary to increase the statistical sample and to investigate whether CSEs are a global phenomenon among Cepheids. Interferometric imaging with the second-generation instrument VLTI/MATISSE [@Lopez_2006_07_0] will also be useful for imaging these CSEs and probing their possible asymmetries.
The authors thank the ESO-Paranal VLTI team for supporting the MIDI observations. We also thank the referee for the comments that helped to improve the quality of this paper. A.G. acknowledges support from FONDECYT grant 3130361. W.G. gratefully acknowledges financial support for this work from the BASAL Centro de Astrofísica y Tecnologías Afines (CATA) PFB-06/2007. This research received the support of PHASE, the high angular resolution partnership between ONERA, Observatoire de Paris, CNRS, and University Denis Diderot Paris 7. This work made use of the SIMBAD and VIZIER astrophysical databases from CDS, Strasbourg, France, and of bibliographic information from the NASA Astrophysics Data System.
[^1]: Based on observations made with ESO telescopes at Paranal observatory under program ID 082.D-0066
[^2]: The MIA+EWS software package is available at http://www.strw.leidenuniv.nl/$\sim$nevec/MIDI/index.html.
VIOLENT/NON-CONSENSUAL SEX WARNING/DISCLAIMER: This story portrays a Conqueror/slave relationship, so it may appear non-consensual at first. As for sexual violence, there are scenes (in Parts 3 and 4) which are detailed and graphic, and may not suit some readers.
Lord Conqueror of the Realm
Written by WarriorJudge
Part 19
In northern Greece, in the tavern on the border between Philippi and Macedonia, Nobleman Verosus and Nobleman Marton met with Domitia in a room they had rented. The two Noblemen could not afford to be overheard or even seen in public with the lass.
"I don't understand. What did you do wrong?" the frustrated Nobleman Marton shouted at poor Domitia, who through no fault of her own found herself in this impossible and dangerous position. It was all Nobleman Marton could do not to resort to physical violence.
"I did exactly as I'd been told…" the young woman tried to defend herself.
Nobleman Verosus sent his fist through the wall. "Then the Conqueror should have been all over you… in and out of you!" he yelled and his eyebrows nearly touched together.
"The Conqueror wouldn't touch me," said Domitia.
Both Noblemen were still waiting for a reasonable explanation for this brilliant failure.
"Perhaps the Conqueror loves the Queen," she suggested quietly and shrugged.
Both men burst into laughter.
"Young women… All soft in the head… some of them never learn…" said Nobleman Verosus .
"Silly child," said Nobleman Marton, "the Conqueror doesn't love. The Conqueror lusts, lusts after power, lusts after blood and lusts after women, that is all. That is the source of her power. That's what sets her ever so highly above the rest of her sex. She feels no emotions and so she isn't governed by them."
"Well, the Lord Conqueror did marry the Queen," argued Domitia.
"She only married her concubine to spite us, to show us who truly rules the Empire. It is common knowledge even amongst complete idiots!"
Nobleman Marton turned to Nobleman Verosus and said, "We must consider the possibility that the Conqueror didn't take this silly girl over here because she realized it was all a ploy."
"By the Gods… what shall we do? Should we run?" Terror began to tighten its grip over Nobleman Verosus and he began fidgeting like a skittish horse.
"We are governors, we can't just disappear. Besides, there is no escaping the Conqueror. There is no place to hide, no place out of the Conqueror's reach. If we run now, the Conqueror will know we're guilty. Let me think…" Nobleman Marton said.
After some time had elapsed in silence with both men pacing restlessly from wall to wall, Nobleman Marton continued: "Lady Messalina won't say anything. She's neck deep in this and she has too much to lose."
"The Lord Conqueror knows nothing more than my name, and I am hardly the only Domitia in the Realm," she said. "And I wore nothing that would imply my station."
"That's very good. We might just come out of it alive," he said.
***
Two days had gone by. The Conqueror and the Queen were taking a stroll in the Imperial gardens, near the lily pond that the Queen adored so much. As they walked together side by side, enjoying the morning sun, the odor of blossoms and the light exercise, Gabrielle recalled the days when she had been a slave. How she used to walk in these magnificent gardens, trying to understand her Lord's moods and actions. It felt like a lifetime ago. As if to remind herself that she was in a different place now, that those days were over, Gabrielle reached for her Lord and interlaced her arm with the Conqueror's.
"They are all waiting for us in the Great Hall," Gabrielle said.
"Let them wait," the Conqueror smiled and looked at her Queen, while pressing a gentle hand over the pregnant Queen's back for support.
"There is one thing that isn't clear to me, why didn't Lady Messalina wait until after nightfall to tell me about the girl?"
"Whoever set up this entire subterfuge didn't take two things into account. I wasn't familiar with the informant who disclosed Perous' whereabouts. I wasn't sure whether I could trust him or not, and I wasn't about to march blindly into a trap on the 'say so' of an informant I knew nothing about. First, I sent a scout to check the area and to confirm that Perous was indeed there and that he was alone. That took time," explained the Conqueror.
"And the second thing?"
"That I would return from Cyra alone and leave my forces behind… My desire to see you was too great. I couldn't wait."
The Queen rose to stand on her toes and placed a warm heartfelt kiss on the Conqueror's jaw, the highest place she could reach.
"You know, my Lady, you are the Realm's Sovereign."
"I know, my Lord," the Queen said and wondered why her Lord chose this time to remind her of that fact.
"And Lady Messalina is one of your ladies in waiting. She is your responsibility," the Conqueror said.
The reason for the Conqueror's words began to become apparent and clear to her. "I assume treason is punishable by death, my Lord?"
"It is, my Lady."
As they were nearing the gates of the palace, the Queen turned to the Conqueror, "My Lord?"
"Hmmm…?"
"Death is the most severe penalty for treason, is it not?" the Queen asked.
The Conqueror smiled for she understood the meaning and the reason for the Queen's question.
"It is, my Lady."
***
"The Lord Conqueror and her Majesty the Queen," the ceremony master announced as the Conqueror and the Queen entered the Great Hall.
As the Conqueror and the Queen made their way to their thrones, all present in the Great Hall bowed before them until they reached their destination and seated themselves.
"Noblemen and Ladies of the Realm," the Conqueror exclaimed, "We have summoned you all here due to a grave matter which has come to our attention and requires further investigation."
The noblemen and the ladies of the Realm began to look at one another agitatedly to see if anyone had any idea as to what the Conqueror was referring to.
"Lady Messalina," the Queen called.
Lady Messalina approached the thrones. "Your Majesties," she said and bowed before them.
As she stood before them, the Conqueror leaned over and whispered something in the Queen's ear.
"Lady Messalina, is it not true that just before noon on the day of my Lord's return from Cyra, you informed me that a young lass had been seen entering the Imperial tent?"
Lady Messalina's blood drained from her face and she grew as pale as a sheet. "It is true, your Majesty," she admitted.
"And how did you come by this bit of information?" the Queen inquired further.
"I… I can't remember, your Majesty," replied the nervous lady.
"Is it not true, that the lass in question is your very own daughter?"
Lady Messalina nearly fainted. The crowd around her gasped in surprise and walked backwards away from her, as if trying to disassociate themselves from her.
"It is, your Majesty." At this stage, lady Messalina had already realized there was no point in lying.
"Was it not your intention to cause dispute between my Lord and myself?"
Lady Messalina threw herself at the Queen's feet and began kissing them.
"You will stand up," the Queen ordered and her assertiveness gave pause to her subjects.
Lady Messalina rose back to her feet.
"You will answer the question."
"I will your Majesty," Lady Messalina replied.
"Did you act on your own volition?"
"No, your Majesty."
"Who put you up to this?" asked the Queen.
"Please, your gracious Majesty, I beg you please don't make me…"
"Nobleman Verosus and Nobleman Marton!" the Conqueror exclaimed.
Both men made their way through the crowd, mortified, joined their accomplice and bowed before the thrones.
"What have you got to say for yourselves?" the Conqueror's voice was ominous.
"Indeed not, but when her Majesty the Queen asked the question, Lady Messalina threw a glance at the two of you," said the Conqueror. "That confirmed my suspicions."
Noblemen Marton and Verosus confessed to the specifics of their scheme for all to hear, by order of the Conqueror, without trying to cast responsibility on one another or to minimize their own involvement in the traitorous conspiracy.
"Is my Lady prepared to render her verdict in the matter of Lady Messalina?" the Conqueror asked.
"I am, my Lord," the Queen replied.
"Lady Messalina, you have handled yourself poorly and reprehensibly. Being a Queen's lady in waiting is a sacred duty. It has been proven to my satisfaction that you have betrayed that duty and my trust. You have been disloyal to me and disloyal to my Lord and to the Realm. You've tried by despicable means to come between my Lord and myself. This offense I cannot and will not pardon. However, I am satisfied that there are mitigating circumstances since you were extorted. Desperation deprives some of rational thought and drives them to take desperate measures. Therefore, it is my verdict that you should be stripped of your station and be banished from the Realm forthwith for my Lord's pleasure." The Queen's voice was steady, firm and confident.
"Noblemen Marton and Verosus, greed and malice are no defense against treason. Your actions solicited, financed and facilitated an act of rebellion against us and against this Realm, which resulted in the death of several subjects and warriors of the Realm. Moreover, you have extorted her Majesty the Queen's lady in waiting and exploited her innocent daughter. You and your families will be stripped of your station and possessions. Marton and Verosus, you shall suffer a quick death in three days' time. As for Macedonia, I hereby appoint Lila of Potidaea as the new governor to Macedonia and a Lady of this Realm. As for Philippi, I hereby appoint her Majesty the Queen's lady in waiting, Satrina, as the new governor to Philippi, if it pleases you, your Majesty," the Conqueror asked the Queen.
"It does, my gracious Lord," smiled the Queen.
As the guards came to remove the condemned men from the Great Hall, Lady Satrina scurried to bow before the Conqueror and the Queen.
"Your Majesties, I cannot thank you enough for the infinite kindness, honor and generosity your Majesties have shown me, and I am grateful with all my heart and soul for the great trust you place in me, but I pray you, if I may," she said, and her excitement was evident in her voice.
"You may," granted the Queen.
"With your Majesties' permission, and if it pleases you, I wish to remain in her Majesty the Queen's presence and service for I am so very contented and happy with my life here in the palace," she said. "I could not have hoped to serve a kinder, nobler Sovereign than our benevolent Queen."
The Queen glanced over at the Conqueror with questioning eyes and the Conqueror, who was the one who first granted the honor, nodded her consent. Their subjects could not help but notice the silent exchange between them.
"As you wish, Lady Satrina and thank you," the Queen said and did her best to remain formal and regal and not let her own excitement be known in the forum.
"Captain Cornelius of the Imperial Guard," announced the ceremony master.
The Queen wasn't familiar with the name.
With wide determined strides, fitting a military man, Captain Cornelius approached the thrones and bowed before his Rulers.
"Your Majesties," he greeted.
It was then that the Queen recognized whom he was and fought an urge to move uncomfortably on her throne.
"With your permission, your Majesty," he humbly said and turned his attention to the Queen.
"Granted," said the Queen.
"I come before your gracious Majesty, a humble servant, to beg for forgiveness. In the past your Majesty showed me great kindness and granted excellent remedy, which I, I am ashamed to say, repaid with gross disrespect."
He chose this grand forum to offer his genuine remorse, rather than offer his apologies in private. In his mind, since he disrespected the Queen in the presence of the healer and others in the infirmary, it was only just that he should surrender his pride to the Queen in public.
He was also careful not to divulge any specifics of his transgression, including the fact that he was referring to the times back when the Queen had been a slave, so as not to cause the Queen either discomfort or embarrassment.
"I am sorry to say, I was foolish and a proud brute and I know in my heart I am not worthy of your Majesty's pardon. I assure your Majesty that as a result of your Majesty's dignity, generosity and supreme conduct towards me, which I didn't deserve I have mended my ways. I submit myself before you, your Majesty to punish as your Majesty deems fit," he said and knelt before the Queen.
"Stand up, Captain," she ordered and he obeyed.
"Your past misdeeds towards me are pardoned," the Queen said, then covered her mouth and whispered a private question in her Lord's ear, to which the latter nodded her agreement.
"You have exhibited candor and great honor, which leads me to believe your repentance is true and sincere. I hereby appoint you a nobleman to the Realm and a governor to Philippi ," the Queen said.
He lowered his head in humility and thanked his Queen for the bounty she had bestowed upon him.
"That concludes our business here today, Nobleman and Ladies of the Realm," the Conqueror stated, stood up and offered her arm to assist her pregnant Queen to her feet.
Standing in front of their subjects, the Conqueror went on to say, "As I trust you all know, today I have shown great leniency towards Marton and Verosus for their appalling treachery. By no means must you perceive it as any form of a precedent. I shall see no further division in this Realm."
As the Queen and her Lord made their way out of the Great Hall, their subjects bowed before them then began clapping their hands and chanting, " Hail to the Queen. "
Whilst strolling along the corridor that led to the Imperial chambers, the curious Queen asked, "How did my Lord know that the lass in Cyra was Messalina's daughter?"
"They have the same shade of hair color and the shape of their eyes and chins are exactly alike," the Conqueror explicated.
Alone in the privacy of their chambers, the Conqueror turned to her Queen took her small hands in hers and said with bright eyes, "I am so very proud of you, my Lady," and adorned the thin fingers with tender kisses.
***
After three days had passed, Marton and Verosus were brought to the gallows upon a wagon, which resembled one that was fit to carry small livestock. In the square stood a large crowd, as with any execution. The Conqueror always believed that even regular people, non-warriors were fascinated by death and were curious to see life as it was leaving the body. If someone else did the actual killing, then all the better.
Heavily guarded, the two men were escorted up the podium to face their Ruler and executioner. Verosus's neck was first to be stretched out and presented before the Conqueror.
As he was waiting, trembling on his knees and mumbling unintelligible words, the Conqueror unsheathed her sword, which was resting over her chiseled thigh in a leather scabbard. The polished, long and well-whetted blade caught the sun's rays.
The crowd cursed at the condemned men and cheered for their Sovereign, goading her on. It wasn't a novelty. The Conqueror knew that once she laid the deadly strike, the cheers and the cursing would halt.
With one strike, the Conqueror put an end to his mumbling, and his severed head rolled over the floor of the podium, which was covered with sawdust to absorb the spilt blood, and his headless corpse slumped to the ground next to it.
Then came Marton's turn. Before he was shoved down to his knees by the guards and into his accomplice's pool of freshly spilt blood, the Conqueror leaned slightly towards him and whispered into his ear: “You do realize this is not retribution for some silly, inconsequential rebellion, which could have been handled quickly by a single battalion of my forces. This is mainly for trying to come between me and my Queen.”
His shocked expression was still frozen on his face when the Conqueror removed his head from his shoulders.
As the Conqueror wiped the blood off her sword and looked at Marton's head next to her boots, her mind strayed back to another execution she had performed, that of the British Captain, who had raped and killed a body slave whose name the Conqueror couldn't even remember now.
Before she had sent him to his death, the Conqueror had desired to make it perfectly clear to the Captain the true and exact reason for his chastisement. When he had extended his head forward before her, whilst on his knees, she'd hissed at him, “This is for putting your filthy hands on what's mine. The slave you've raped and killed was just an excuse.”
When Rudy Gay left the game with a left knee injury late in the first quarter, memories of the Sacramento Kings’ (16-22) recent poor play minus a star resurfaced. The thought came to fruition as DeMarcus Cousins joined him on the sidelines in the waning seconds of regulation, and the short-handed Kings fell to the visiting Dallas Mavericks (27-12), 108-104.
The Kings are currently 2-2 on their six-game home stand and return to action on Friday in a contest against the Miami Heat. Join Cowbell Kingdom’s James Ham as he recaps the action from the floor of Sleep Train Arena.
Golden State Warriors Projected Starters (31-22)
What to watch
1. Can the Kings win without DeMarcus Cousins?
The Kings are 0-7 without their starting center, and it looks like Cousins will miss another game on Wednesday with a strained left hip flexor. Andrew Bogut is questionable for the Warriors with left shoulder inflammation, as is reserve Jermaine O’Neal (sore back). This game might turn into a track meet, which doesn’t bode well for Sacramento.
2. Can the Kings defend the 3-point line?
Sacramento ranks 28th in the league against the long ball. The Warriors’ starting backcourt of Curry and Thompson has already shot close to 800 3-pointers on the season. If the Kings don’t stay with Golden State’s shooters, they have very little chance of pulling off the upset.
3. How do the Kings players handle the trade rumors?
The trade deadline is 12pm PST on Thursday and the rumors are swirling. Do the Kings players crumble under the pressure or do they come out swinging in what might be their last game in Sacramento?
According to an NBA source, Sacramento Kings point guard Isaiah Thomas underwent an MRI earlier Tuesday on his left wrist. Contrary to other media reports, the results of the tests were negative and Thomas is not expected to miss any time with the injury.
Since taking over the starting position 35 games ago, Thomas is averaging 21.5 points, 6.9 assists and 1.3 steals per game in 37.5 minutes. But rumors that he was having some discomfort in his wrist began a few weeks back.
Recently, his shooting numbers have taken a dramatic dip, beginning in January when he shot just 41.2 percent from the field and 32.7 percent from long range. Thomas’s overall field goal percentage has bounced back in the month of February, but his 3-point percentage for the seven games this month is 24.1 percent.
Thomas and rookie guard Ben McLemore were the subject of a trade rumor on Monday, but coach Michael Malone and general manager Pete D’Alessandro refuted the reports following practice on Tuesday afternoon.
“The report that was, I think on Yahoo!, about our offer to Boston was so erroneous and I don’t know where it came from,” Malone told reporters on Tuesday. “We dispel the rumors that are out there that we know are not true, but at the same time, this is a business and you have no idea what can happen up until trade deadline. I think all of our players realize that.”
With injuries and possible trade rumors swirling, it should be a wild couple of days in Sacramento.
DeMarcus Cousins Injury Update
Thomas wasn’t the only Kings player to undergo an MRI today. For the second straight day, center DeMarcus Cousins made a trip to the doctor’s office for testing. Results of the first MRI were inconclusive, but a second test confirmed the Kings medical staff’s earlier diagnosis of a strained left hip flexor.
Cousins has been unable to participate in practice since returning from the All-Star break. He is listed as day-to-day, but considered doubtful for Wednesday’s match-up against the Golden State Warriors.
Hamady Ndiaye out of Rutgers and DeQuan Jones out of Florida are the only late additions. Ndiaye was in camp last season with Sacramento and left a solid impression. After being waived by the Kings, the 26-year old center spent last season playing for Tianjin Ronggang Golden Lions of the Chinese Basketball Association.
Jones played in 63 games last season with the Orlando Magic, including 17 starts. He averaged 3.7 points per game in a little under 13 minutes a game.
Last season it was high ropes courses in Colorado Springs, Co. This year, the Sacramento Kings open training camp away from home again, but instead of the Team USA practice facility in Colorado, it will be on the sandy beaches of Santa Barbara, CA. Camp will run from Oct. 1-6 at the Pavilion Gym on the University of California, Santa Barbara campus.
The team will head back to Northern California for their pre-season opener on the road against the Golden State Warriors on October 7, before heading to Las Vegas to take on the Lakers on Oct. 10.
After the initial week away, the Kings will continue camp in Sacramento at the team’s practice facility in Natomas.
Cowbell Kingdom has grown exponentially since its founding in 2009 and we want to make sure we know our audience. The information you provide in this brief survey will be used to help us better serve you. For your participation, you will be automatically entered into a contest to win a copy of the 2013-14 Sacramento Kings Dancers calendar and a “Blackout” t-shirt commemorating last season’s first home game.
But there’s probably no other player more overlooked and underrated on this season’s roster than the fourth-year guard. Just look no further than ESPN.com’s annual NBA Rank, which appraises the value of the league’s top 500 players. The 25-year-old guard moved up just five spots (no. 136 in 2011 to no. 131 in 2012) in this year’s rankings. These were the five players ranked just ahead of Thornton in the 2012 forecast:
Such is life on a bad team with little to no national exposure. However, those who follow the Kings closely know just how valuable Thornton is, especially his competition.
“He’s become an outstanding scorer in this league,” said Dallas Mavericks guard Darren Collison back in January of his former New Orleans Hornets teammate. “He’s definitely made a niche in this league as far as (being) a big time scorer.
“He can shoot the ball extremely well and he can do a lot of different things off the pick and roll,” added Collison. “And he’s exceptionally quick too.”
In their rookie year, Collison and Thornton formed an explosive and exciting young backcourt for the Hornets. Though they’ve since gone their separate ways, the two remain close. Thornton worked out last offseason with Collison in Los Angeles during the lockout.
The fourth-year guard out of UCLA thinks Sacramento is a good fit for his old teammate. He believes Thornton will only continue to improve with the Kings’ green nucleus.
“This is a young team that’s going to be good in the near future,” Collison said. “He has a starting role here, so anytime you have a starting role, it’s always a good fit. And he’s one of their best scorers, too.”
Averaging 18.7 points per game, Thornton led the Kings in scoring last season and usually found himself as their go-to guy in clutch situations. The next step for Thornton, according to another former teammate, is becoming an accomplished defender.
“He’s always been a capable scorer,” said Indiana Pacers big man David West. “Key for him has always been for him to play as hard defensively as he does offensively.”
As explosive as he is with the ball, Thornton could stand to see some improvement on the defensive end. The Louisiana native finished in the bottom three among his 15 teammates in defensive rating.
“We would challenge him to do the same thing on the defensive end,” said West of his days with Thornton in New Orleans. “Make him more of a complete ball player.”
However like Collison, West thinks Thornton will continue to find success in the league.
“He’s a strong-minded, tough-minded kid,” West said. “I knew that once he got an opportunity to just get in a system that worked for him and bring out his best skills, he’d do well.”
The Kings may not belong to Marcus Thornton. But his importance to their success can’t be overstated.
Twenty-five years ago today, Sacramento Kings Head Coach Keith Smart hit a shot that changed his life forever.
No matter where I go, people talk about it. Once they recognize me or see a nametag on my bag or something like that, they start talking about “The Shot”. So it’s a great moment and I’m glad it went in, but wasn’t just something for me.
We just had our 25 year championship reunion. And we all got together and it wasn’t so much what we all did in the tournament and our careers. It was a friendship and a relationship that we have now that that moment brings us all together.
Diehard Sacramento Kings fan Kevin Fippin wanted to propose to his long-time girlfriend Lydia Nicolaisen. So before he popped the question on New Year’s Eve, he recruited the services of a Sacramento Kings fan favorite.
---
author:
- '$^{1}$Ryosuke Akashi[^1] and $^{1,2}$Ryotaro Arita'
title: 'Density Functional Theory for Plasmon-assisted Superconductivity'
---
Introduction
============
Superconductivity has been one of the most fascinating fields in condensed matter physics ever since its discovery in the early twentieth century. After the success of its description by the Bardeen-Cooper-Schrieffer theory,[@BCS] particular attention has been paid to the material dependence of the superconducting transition temperature ($T_{\rm c}$): that is, why do some materials, such as the celebrated cuprates,[@Bednorz-Muller] exhibit high $T_{\rm c}$ while others do not? Since superconductivity emerges as a result of subtle interplay and competition of interactions between atoms and electrons having much larger energy scales, $T_{\rm c}$ is extremely sensitive to details of the electronic and crystal structure. Thus, an accurate quantitative treatment is essential to understand the emergence of high values of $T_{\rm c}$.
For the conventional phonon-mediated mechanism, quantitative calculations have been performed within the Migdal-Eliashberg (ME) theory[@Migdal-Eliashberg] implemented with the first-principles method based on the Kohn-Sham density functional theory[@Kohn-Sham-eq]: In a variety of systems, phonon properties are well reproduced by the density functional perturbation theory[@Baroni-review] or the total-energy method[@Kunc-Martin-frozen] within the local density approximation[@Ceperley-Alder; @PZ81]. By using the calculated phonon spectrum and electron-phonon coupling as inputs, it has been shown that the ME theory explains the qualitative tendency of $T_{\rm c}$ for various materials[@Savrasov-Savrasov; @Choi-MgB2]. However, the ME formalism is not suitable for fully *ab initio* calculations, since it is difficult to treat the electron-electron interaction nonempirically. When we calculate $T_{\rm c}$ by solving the Eliashberg equation or using related approximate formulae such as the McMillan equation,[@McMillan; @AllenDynes] we vary the value of the parameter $\mu^{\ast}$ representing the effective electron-electron Coulomb interaction, which suppresses the Cooper-pair formation, and examine whether the range of the resulting $T_{\rm c}$ covers the experimentally observed value. With such a semi-empirical framework, the material dependence of the electron-electron interaction cannot be understood quantitatively.
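As a concrete illustration of this semi-empirical procedure, the McMillan equation in its Allen-Dynes form estimates $T_{\rm c}$ from the electron-phonon coupling $\lambda$, a logarithmic average phonon frequency $\omega_{\log}$, and the empirical pseudopotential $\mu^{\ast}$; scanning $\mu^{\ast}$ over its customary range then brackets $T_{\rm c}$. The numerical inputs below are illustrative placeholders, not parameters of any specific material.

```python
import math

def mcmillan_tc(omega_log, lam, mu_star):
    """McMillan equation with the Allen-Dynes prefactor.

    omega_log: logarithmic average phonon frequency in K
    lam:       electron-phonon coupling constant lambda
    mu_star:   empirical Coulomb pseudopotential mu*
    Returns Tc in K.
    """
    return (omega_log / 1.2) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

# Scan mu* over a customary empirical range, as described in the text
for mu_star in (0.10, 0.13, 0.16):
    tc = mcmillan_tc(omega_log=300.0, lam=0.9, mu_star=mu_star)
    print(f"mu* = {mu_star:.2f}  ->  Tc = {tc:.1f} K")
```

Because $\mu^{\ast}$ is scanned rather than computed, the output is a range of $T_{\rm c}$ rather than a single prediction, which is precisely the limitation that a nonempirical treatment of the electron-electron interaction removes.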
The recent progress in the density functional theory for superconductors (SCDFT)[@Oliveira; @Kreibich; @GrossI] has changed the situation. There, a non-empirical scheme describing the physics in the ME theory was formulated: Based on the Kohn-Sham orbital, it treats the weak-to-strong electron-phonon coupling, the screened electron-electron interaction within the static approximation, and the retardation effect[@Morel-Anderson] due to the difference in the energy ranges of these interactions. This scheme has been demonstrated to reproduce experimental $T_{\rm c}$s of various conventional phonon-mediated superconductors with deviation less than a few K.[@GrossII; @Floris-MgB2; @Sanna-CaC6; @Bersier-CaBeSi] More recently, it has been employed to examine the validity of the ME theory in fully gapped superconductors with high $T_{\rm c}$ such as layered nitrides[@Akashi-MNCl] and alkali-doped fullerides.[@Akashi-fullerene] Through these applications, the current SCDFT has proved to be an informative method well-suited to investigate the nontrivial effects of the electron-electron interaction behind superconducting phenomena.
Although the electron-electron interaction just suppresses the pairing in the ME theory, possibilities of superconductivity induced by the electron-electron interaction have long been explored as well. Since the discovery of the cuprates,[@Bednorz-Muller] superconductivity induced by short-range Coulomb interaction has been extensively investigated.[@Scalapino-review2012] On the other hand, there have been many proposals of superconducting mechanisms concerning the long-range Coulomb interaction since the seminal work of Kohn and Luttinger.[@Kohn-Luttinger] In particular, there is a class of mechanisms that exploit the dynamical structure of the screened Coulomb interaction represented by the frequency-dependent dielectric function $\varepsilon(\omega)$: e.g., the plasmon[@Radhakrishnan1965; @Frohlich1968; @Takada1978; @Rietschel-Sham1983] and exciton[@Little1967] mechanisms. Interestingly, such mechanisms can cooperate with the conventional phonon mechanism. Since they usually favor $s$-wave pairing, they have a chance to enhance $s$-wave superconductivity together with the phonon mechanism. Taking this possibility into account, these mechanisms are important even where they do not alone induce superconductivity. Therefore, they are expected to be relevant to a broader range of systems than originally expected in the early studies. In fact, for a variety of systems having low-energy electronic excitations, theoretical model calculations addressing such a cooperation have been performed: SrTiO$_{3}$ [@Koonce-Cohen1967; @Takada-SrTiO3] with small plasmon frequencies due to small electron densities, $s$-$d$ transition metals [@Garland-sd] where “demon” acoustic plasmons have been discussed,[@Pines-demon; @Ihm-Cohen1981] metals sandwiched by small-gap semiconductors [@Ginzburg-HTSC; @ABB1973], and layered systems where two-dimensional acoustic plasmons are proposed to become relevant [@Kresin1987; @Bill2002-2003].
Moreover, recent experimental discoveries of high-temperature superconductivity in doped band insulators have stimulated more quantitative analyses on effects of the cooperation [@Yamanaka1998; @Bill2002-2003; @Taguchi2006; @Taniguchi2012; @Ye2012].
These considerations call for an *ab initio* theory that treats the phonon-mediated interaction and the dynamical screened Coulomb interaction together, with which one can study, on an equal footing, superconductors governed by phonons, by the dynamical Coulomb interaction, or by their cooperation. The aim of our present study is to establish such a theory by extending the applicability of SCDFT. In this paper, we review the recent theoretical extension to include the plasmon-induced dynamical screened Coulomb interaction.[@Akashi-plasmon] In Sec. \[sec:theory\], we present the theoretical formulation and its practical implementation, and discuss how plasmons can enhance superconductivity. Section \[sec:appl-Li\] describes the application to elemental lithium under high pressures, for which the plasmon effect is expected to be substantial because of its relatively dilute electron density. In Sec. \[sec:summary\] we summarize our results and give concluding remarks.
Formulation {#sec:theory}
===========
General formalism {#subsec:theory-general}
-----------------
Let us start from a brief review of SCDFT.[@GrossI] The current SCDFT employs the gap equation $$\begin{aligned}
\Delta_{n{\bf k}}\!=\!-\mathcal{Z}_{n\!{\bf k}}\!\Delta_{n\!{\bf k}}
\!-\!\frac{1}{2}\!\sum_{n'\!{\bf k'}}\!\mathcal{K}_{n\!{\bf k}\!n'{\bf k}'}
\!\frac{\mathrm{tanh}[(\!\beta/2\!)\!E_{n'{\bf k'}}\!]}{E_{n'{\bf k'}}}\!\Delta_{n'\!{\bf k'}}
\label{eq:gap-eq}\end{aligned}$$ to obtain $T_{\rm c}$, which is determined as the temperature where the calculated gap function $\Delta_{n{\bf k}}$ vanishes. Here, $n$ and ${\bf k}$ denote the band index and crystal momentum, respectively, and $\beta$ is the inverse temperature. The energy $E_{n {\bf k}}$ is defined as $E_{n {\bf k}}$=$\sqrt{\xi_{n {\bf k}}^{2}+\Delta_{n {\bf k}}^{2}}$, where $\xi_{n {\bf k}}=\epsilon_{n {\bf k}}-\mu$ is the one-electron energy measured from the chemical potential $\mu$, and $\epsilon_{n {\bf k}}$ is obtained by solving the normal Kohn-Sham equation of density functional theory $
\mathcal{H}_{\rm KS}|\varphi_{n{\bf k}}\rangle=\epsilon_{n{\bf k}}
|\varphi_{n{\bf k}}\rangle
$ with $\mathcal{H}_{\rm KS}$ and $|\varphi_{n{\bf k}}\rangle$ being the Kohn-Sham Hamiltonian and the Kohn-Sham state, respectively. The functions $\mathcal{Z}$ and $\mathcal{K}$, called the exchange-correlation kernels, describe the effects of all the interactions involved: They are defined as second functional derivatives of the free energy with respect to the anomalous electron density. A formulation of the free energy based on the Kohn-Sham perturbation theory enables practical calculation of the exchange-correlation kernels from the Kohn-Sham eigenvalues and eigenfunctions obtained with standard *ab initio* methods.
The nondiagonal exchange-correlation kernel $\mathcal{K}$ is composed of two parts $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el}$ representing the electron-phonon and electron-electron interactions, whereas the diagonal kernel $\mathcal{Z}$ consists of one contribution $\mathcal{Z}$$=$$\mathcal{Z}^{\rm ph}$ representing the mass renormalization of the normal-state band structure due to the electron-phonon coupling. The phonon parts, $\mathcal{K}^{\rm ph}$ and $\mathcal{Z}^{\rm ph}$, properly treat the conventional strong-coupling superconductivity. The electron-electron contribution $\mathcal{K}^{\rm el}$ is the matrix element of the *static* screened Coulomb interaction $\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|\varepsilon^{-1}(0)V|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$, with $V$ being the bare Coulomb interaction. So far, the Thomas-Fermi approximation and the random-phase approximation (RPA) have been applied to the static inverse dielectric function $\varepsilon^{-1}(0)$.[@Massidda] With these settings, the two parts of the nondiagonal kernel have different Kohn-Sham energy dependences: $\mathcal{K}^{\rm ph}$ has large values only for the states within the phonon energy scale, whereas $\mathcal{K}^{\rm el}$ decays slowly on the electronic energy scale. Through this Kohn-Sham-state dependence, the retardation effect[@Morel-Anderson] is quantitatively treated. Thus, within the framework of density functional theory, the SCDFT accurately captures the physics of the Migdal-Eliashberg theory, which is based on the Green's-function formalism.
![(Color online) Diagram corresponding to the electron nondiagonal kernel, $\mathcal{K}^{\rm el}$. The solid line with arrows running in opposite directions denotes the electronic anomalous propagator [@GrossI]. The blue wavy line denotes the screened electronic Coulomb interaction, which is a product of the inverse dielectric function $\varepsilon^{-1}$ and the bare Coulomb interaction $V$.[]{data-label="fig:diagram"}](diagrams_ed_130926.jpg)
The current setting $\mathcal{K}^{\rm el}$ $=$ $\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|$ $\varepsilon^{-1}(0)V$ $|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ corresponds to the anomalous exchange contribution from the screened Coulomb interaction represented in Fig. \[fig:diagram\] with the $\omega$ dependence of $\varepsilon$ omitted. To incorporate the effects of the plasmon on the interaction, we retain its frequency dependence. The diagram then yields the following form $$\begin{aligned}
\hspace{-30pt}&&
\mathcal{K}^{\rm el, dyn}_{n{\bf k},n'{\bf k}}
\!=\!
\lim_{\{\Delta_{n{\bf k}}\}\rightarrow 0}
\frac{1}{{\rm tanh}[(\beta /2 ) E_{n{\bf k}}]}
\frac{1}{{\rm tanh}[(\beta /2) E_{n'{\bf k}'}]}
\nonumber \\
\hspace{-10pt}&&
\hspace{10pt}\times
\frac{1}{\beta^{2}}
\sum_{\tilde{\omega}_{1}\tilde{\omega}_{2}}
F_{n{\bf k}}({\rm i}\tilde{\omega}_{1})
F_{n'{\bf k}'}({\rm i}\tilde{\omega}_{2})
W_{n{\bf k}n'{\bf k}'}[{\rm i}(\tilde{\omega}_{1}\!\!-\!\!\tilde{\omega}_{2})]
,
\label{eq:kernel-dyn}\end{aligned}$$ where $F_{n{\bf k}}({\rm i}\tilde{\omega})$ $=$ $\frac{1}{{\rm i}\tilde{\omega}\!+\!E_{n{\bf k}}}
\!-\!
\frac{1}{{\rm i}\tilde{\omega}\!-\!E_{n{\bf k}}}
$ and $\tilde{\omega}_{1}$ and $\tilde{\omega}_{2}$ denote fermionic Matsubara frequencies. The function $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$$\equiv$$\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|\varepsilon^{-1}({\rm i}\omega)V|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ is the screened Coulomb interaction. We then apply the RPA[@RPA] to the $\omega$-dependent dielectric function, which is a standard approximation for describing plasmons in a crystal field. Formally, the present RPA kernel can also be derived from the RPA free energy defined by Eq. (13) in Ref. : The set of terms of order $O(FF^{\dagger})$ (i.e., the set of the diagrams having only one anomalous bubble taken from Fig. 2 in Ref. ) corresponds to the present kernel.
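A useful sanity check on Eq. (\[eq:kernel-dyn\]) is its static limit: for a frequency-independent interaction $W({\rm i}\nu)=W_{0}$, the double Matsubara sum factorizes and, since $(1/\beta)\sum_{\tilde{\omega}}F_{n{\bf k}}({\rm i}\tilde{\omega})={\rm tanh}[(\beta/2)E_{n{\bf k}}]$, the kernel collapses to $W_{0}$. The following Python sketch verifies this numerically with toy parameters (not part of the original implementation):

```python
import numpy as np

# Static limit of the dynamical kernel, Eq. (kernel-dyn): for a constant
# interaction W(i*nu) = W0 the double fermionic Matsubara sum must give K = W0,
# because (1/beta) * sum_w F(i*w) = tanh(beta*E/2). Parameters are illustrative.
beta = 2.0          # inverse temperature
E1, E2 = 1.5, 0.8   # quasiparticle energies E_{nk}, E_{n'k'}
W0 = 0.7            # constant (static) interaction

N = 4000  # Matsubara cutoff; the summand decays as 1/w^2, so truncation is mild
n = np.arange(-N, N)
w = (2 * n + 1) * np.pi / beta  # fermionic Matsubara frequencies

def F(w, E):
    # F(i*w) = 1/(i*w + E) - 1/(i*w - E) = 2E/(w^2 + E^2)  (purely real)
    return 2.0 * E / (w**2 + E**2)

# (1/beta^2) sum_{w1,w2} F(w1) F(w2) W0 factorizes for a constant W
s1 = F(w, E1).sum() / beta
s2 = F(w, E2).sum() / beta
kernel = s1 * s2 * W0 / (np.tanh(beta * E1 / 2) * np.tanh(beta * E2 / 2))

print(kernel)  # ~ 0.7 = W0: the static kernel is recovered
```

The same factorization fails once $W$ depends on $\tilde{\omega}_{1}-\tilde{\omega}_{2}$, which is exactly where the plasmon correction $\Delta\mathcal{K}^{\rm el}$ enters.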
The Coulomb interaction $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu)$ is practically calculated using a certain set of basis functions. Let us here summarize the plane-wave representation, which has been employed in our studies: $$\begin{aligned}
&&\hspace{-30pt}W_{n{\bf k}n'{\bf k}'}({\rm i}\nu)
\nonumber \\
&& =
\frac{4\pi}{\Omega}\!
\sum_{{\bf G}\!{\bf G}'}\!
\frac{
\!\rho^{n{\bf k}}_{n'{\bf k}'}(\!{\bf G}\!)\tilde{\varepsilon}^{-1}_{{\bf G}{\bf G}'}(\!{\bf k}\!-\!{\bf k}'\!;{\rm i}\nu)\!\{\rho^{n{\bf k}}_{n'{\bf k}'}(\!{\bf G}'\!)\!\}^*
}
{
|{\bf k}-{\bf k}'+{\bf G}||{\bf k}-{\bf k}'+{\bf G}'|
}\!,
\label{eq:K-el-RPA}\end{aligned}$$ with $\tilde{\varepsilon}_{{\bf G}{\bf G}'}(\!{\bf k}\!-\!{\bf k}'\!;{\rm i}\nu)$ being the symmetrized dielectric matrix,[@Hybertsen-Louie] defined by $$\begin{aligned}
\tilde{\varepsilon}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu)
\!\!\!\!&=&\!\!\!\!\!\!
\delta_{{\bf G}{\bf G}'}
\nonumber \\
&&\!\!\!-4\pi\frac{1}{|{\bf K}\!+\!{\bf G}|}\chi^{0}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu)\frac{1}{|{\bf K}\!+\!{\bf G}'|}
.\end{aligned}$$ The independent-particle polarization $\chi^{0}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu)$ denotes $$\begin{aligned}
\chi^{0}_{{\bf G}{\bf G}'}({\bf K};{\rm i}\nu)
&\!\!\!\!=&\!\!\!\!
\frac{2}{\Omega}
\sum_{{\bf k}}\sum_{\substack{n:{\rm unocc}\\n':{\rm occ}}}
[\rho^{n{\bf k}+{\bf K}}_{n'{\bf k}}({\bf G})]^{\ast}\rho^{n{\bf k}+{\bf K}}_{n'{\bf k}}({\bf G}')
\nonumber \\
&& \hspace{-35pt}\times
[\frac{1}{{\rm i}\nu \!-\! \epsilon_{n {\bf k}+{\bf K}} \!+\! \epsilon_{n' {\bf k}}}
-
\frac{1}{{\rm i}\nu \!+\! \epsilon_{n {\bf k}+{\bf K}} \!-\! \epsilon_{n' {\bf k}}}]
,
\label{eq:chi-def}\end{aligned}$$ where the band indices $n$ and $n'$ run through the unoccupied bands and occupied bands for each **k**, respectively. The matrix $\rho^{n'{\bf k}'}_{n{\bf k}}({\bf G})$ is defined by $$\begin{aligned}
\rho^{n'{\bf k}'}_{n{\bf k}}({\bf G})
&=&
\int_{\Omega} d{\bf r}
\varphi^{\ast}_{n'{\bf k}'}({\bf r})
e^{{\rm i}({\bf k}'-{\bf k}+{\bf G})\cdot{\bf r}}
\varphi_{n{\bf k}}({\bf r}).
\label{eq:rho}\end{aligned}$$ So far, we have ignored the intraband (Drude) contribution to $\tilde{\varepsilon}$ for ${\bf k}-{\bf k}'=0$: The kernel including this contribution diverges as $({\bf k}-{\bf k}')^{-2}$, whereas the total contribution from small ${\bf k}-{\bf k}'$ to $T_{\rm c}$ should scale as $({\bf k}-{\bf k}')^{1}$ because of the ${\bf k}'$ integration in Eq. (\[eq:gap-eq\]).
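The scaling argument can be made explicit. Even if the kernel including the Drude term diverges as $q^{-2}$ with $q=|{\bf k}-{\bf k}'|$, the three-dimensional ${\bf k}'$ integration in Eq. (\[eq:gap-eq\]) supplies a phase-space factor $q^{2}\,dq$, so the net contribution of a small sphere of radius $q_{c}$ around ${\bf k}$ vanishes linearly with $q_{c}$ (here $C$ is a schematic constant standing for the $q$-independent strength):

```latex
\int_{q<q_{c}} d^{3}q \;
  \mathcal{K}_{n{\bf k}\,n'{\bf k}+{\bf q}}
  \sim 4\pi \int_{0}^{q_{c}} dq\, q^{2}\,\frac{C}{q^{2}}
  = 4\pi C\, q_{c} .
```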
![(Color online) (a) Energy dependence of nondiagonal kernels entering the gap equation. Phonon-induced attraction, static Coulomb repulsion, and the plasmon-induced high-energy Coulomb repulsion are indicated in red, green, and blue, respectively. (b) Approximate solution of the gap equation solved with the phonon and static Coulomb parts. (c) Energy dependence of the kernels in a case where the phonon part is negligibly small and the plasmon part is dominant.[]{data-label="fig:interaction"}](interactions_130920.jpg)
The physical meaning of the present dynamical correction to the previous static kernel is as follows. In real systems, screening by charge fluctuations is ineffective for interactions with large energy exchanges \[i.e., $\varepsilon(\omega) \xrightarrow{\omega \rightarrow \infty} 1$\], whereas it becomes significant as the energy exchange becomes small compared with the typical energies of charge excitations. The conventional static approximation ignores this energy dependence of the screening by extrapolating the static value of the interaction to high energies, and thereby underestimates the screened Coulomb repulsion with large energy exchanges. The present extension corrects this underestimation and gives an additional repulsive contribution to the Coulomb matrix elements between Cooper pairs with very different energies.
Interestingly, this additional contribution can raise $T_{\rm c}$. Let us discuss this point in terms of the interaction kernel entering the energy-averaged gap equation $$\begin{aligned}
\Delta(\xi)
=
-\frac{1}{2}N(0)
\int \!\! d\xi' \!
\mathcal{K}(\xi\!,\xi')\frac{{\rm tanh}[(\beta/2)\xi']}{\xi'}\Delta(\xi')
,
\label{eq:gap-eq-ave}\end{aligned}$$ where we define the averaged nondiagonal kernel as $\mathcal{K}(\xi,\xi')=\frac{1}{N(0)^{2}}\sum_{n{\bf k}n'{\bf k}'}\delta(\xi-\xi_{n{\bf k}})\delta(\xi'-\xi_{n'{\bf k}'})K_{n{\bf k}n'{\bf k}'}$, with $N(0)$ being the electronic density of states at the Fermi level; the diagonal kernel is omitted for simplicity. This equation qualitatively describes coherent Cooper pairs represented by $\Delta(\xi)$ scattered by the pairing interactions. Suppose $\mathcal{K}=\mathcal{K}^{\rm ph}+\mathcal{K}^{\rm el}$, $N(0)\mathcal{K}^{\rm ph}(\xi,\xi')=-\lambda$ within the Debye frequency $\omega_{\rm ph}$ and $N(0)\mathcal{K}^{\rm el}(\xi,\xi')=\mu$ within a certain electronic energy range such as $E_{\rm F}$ \[corresponding to the red and green parts in panel (a) of Fig. \[fig:interaction\]\]. Solving this equation by assuming $\Delta(\xi)$ to be nonzero and constant only within $\omega_{\rm ph}$, we obtain the BCS-type $T_{\rm c}$ formula $T_{\rm c}\propto \omega_{\rm ph}$$\times$$ {\rm exp}[-1/(\lambda-\mu)]$ for $\mu-\lambda<0$. However, if we allow $\Delta(\xi)$ to have nonzero constant values for $|\xi|>\omega_{\rm ph}$, we instead obtain $T_{\rm c}\propto \omega_{\rm ph}$$\times$${\rm exp}[-1/(\lambda-\mu^{\ast})]$ with $\mu^{\ast}=\mu/(1+\mu{\rm ln}[E_{\rm F}/\omega_{\rm ph}])<\mu$, and then the resulting values of $\Delta(\xi)$ have opposite signs for $|\xi|<\omega_{\rm ph}$ and $|\xi|>\omega_{\rm ph}$ \[panel (b) in Fig. \[fig:interaction\]\]. Here, even if the total low-energy interaction $\mu-\lambda$ is repulsive, a superconducting state is realized if $\mu^{\ast}-\lambda<0$.
This weakening of the effective Coulomb repulsion is the celebrated retardation effect,[@Morel-Anderson] and its origin is the negative values of the high-energy gap function: Since the scattering by repulsion between Cooper pairs having $\Delta$ with opposite signs is equivalent to the scattering by attraction between those with same signs, there is a gain of the condensation energy.[@Kondo-PTP1963] Next, let us add the plasmon contribution \[blue part in panel (a)\], which enhances the repulsion by $\Delta \mu$ for $\xi$ with an energy scale of plasmon frequency $\omega_{\rm pl}$. Then, more condensation energy can be gained by enhancing the high-energy negative gap function, which increases $T_{\rm c}$. As an extreme situation, one can also consider the case where the phonon-induced attraction is negligible and the plasmon-induced repulsion is dominant \[panel (c)\]. Obviously, a superconducting solution exists even in this case because the discussion about the above $T_{\rm c}$ formula is also valid with the transformation $\lambda$$\rightarrow$$\Delta \mu$, $\mu$$\rightarrow$$\mu+\Delta \mu$ and $\omega_{\rm ph}$$\rightarrow$$\omega_{\rm pl}$. These discussions illustrate that the plasmon contribution can increase $T_{\rm c}$ by enhancing the high-energy repulsion.
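The competition described above can be made concrete with a few lines of arithmetic. The sketch below evaluates the BCS-type formulas quoted in the text for illustrative, non-material-specific parameters, including the plasmon-only case of panel (c) obtained by the transformation $\lambda\rightarrow\Delta\mu$, $\mu\rightarrow\mu+\Delta\mu$, $\omega_{\rm ph}\rightarrow\omega_{\rm pl}$:

```python
import math

# Toy evaluation of the BCS-type Tc formulas discussed in the text.
# All parameters are illustrative, not fitted to any material.
lam  = 0.5     # phonon attraction lambda
mu   = 0.2     # static Coulomb repulsion N(0)*K^el
w_ph = 300.0   # phonon energy scale (K)
E_F  = 5e4     # electronic energy scale (K)

# retardation: mu -> mu* = mu / (1 + mu * ln(E_F / w_ph))
mu_star = mu / (1 + mu * math.log(E_F / w_ph))

tc_no_retardation = w_ph * math.exp(-1.0 / (lam - mu)) if lam > mu else 0.0
tc_retarded       = w_ph * math.exp(-1.0 / (lam - mu_star))

# plasmon-only case (panel (c)): lambda -> d_mu, mu -> mu + d_mu, w_ph -> w_pl
d_mu = 0.5     # plasmon-induced extra repulsion
w_pl = 8e3     # plasmon energy scale (K)
mu_star_pl = (mu + d_mu) / (1 + (mu + d_mu) * math.log(E_F / w_pl))
tc_plasmon_only = (w_pl * math.exp(-1.0 / (d_mu - mu_star_pl))
                   if d_mu > mu_star_pl else 0.0)

print(mu_star, tc_retarded, tc_plasmon_only)
```

Even with a fixed net low-energy interaction, allowing the high-energy (negative) gap raises $T_{\rm c}$ by orders of magnitude, and a superconducting solution survives with no phonon attraction at all provided $\Delta\mu>\mu^{\ast}$.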
To the authors’ knowledge, the plasmon mechanism of the above-mentioned type to enhance $T_{\rm c}$ was originally studied by Takada[@Takada1978] on the basis of the Green’s-function formalism for the two- and three-dimensional homogeneous electron gas. Using the gap equation he derived, he also performed calculations of $T_{\rm c}$ considering both phonons and plasmons for doped SrTiO$_{3}$ (Ref. ) and metal-intercalated graphites.[@Takada-graphite1982; @Takada-graphite2009] Our present formalism, which treats the local-field effects of the inhomogeneous electron distribution underlying the phonons and plasmons, is a DFT-based counterpart of his theory.[@comment-counterpart]
Multipole plasmon approximation {#subsec:plasmon-pole}
-------------------------------
Next we present a formulation to calculate $T_{\rm c}$ using the extended kernel. Evaluating Eq. (\[eq:kernel-dyn\]) requires performing the double discrete Matsubara summation up to the electronic energy scale, which is impractically demanding. We therefore carry out the summations analytically by approximating $W_{n{\bf k}n'{\bf k'}}$ by a simple function. For this purpose, we employ a multipole plasmon approximation $$\begin{aligned}
\tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\tilde{\nu}_{m})
\!\!\!\!&=&\!\!\!\!
W_{n{\bf k}n'{\bf k}'}(0)
\nonumber \\
&&+
\sum^{N_{\rm p}}_{i}
a_{i;n{\bf k}n'{\bf k}'}
g_{i;n{\bf k}n'{\bf k}'}(\tilde{\nu}_{m})
,
\label{eq:W-tilde}\end{aligned}$$ with $g_{i;n{\bf k}n'{\bf k}'}$ being $$\begin{aligned}
g_{i;n{\bf k}n'{\bf k}'}(x)
=
\frac{2}{\omega_{i;n{\bf k}n'{\bf k}'}}
-\frac{2\omega_{i;n{\bf k}n'{\bf k}'}}{x^{2}\!+\!\omega^{2}_{i;n{\bf k}n'{\bf k}'}}
.\end{aligned}$$ Here, $\tilde{\nu}_{m}$ denotes a bosonic Matsubara frequency. In contrast with the case of the uniform electron gas, inhomogeneous systems can host a variety of plasmon modes, and our aim is to treat these modes in a unified manner. Substituting Eq. (\[eq:W-tilde\]) into Eq. (\[eq:kernel-dyn\]), we finally obtain $\mathcal{K}^{\rm el,dyn}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$ with $\mathcal{K}^{\rm el,stat}_{n{\bf k}n'{\bf k}'}$$=$$W_{n{\bf k}n'{\bf k}'}(0)$ and $$\begin{aligned}
\hspace{-10pt}
\Delta\mathcal{K}^{\rm el}_{n{\bf k},n'{\bf k}}
&\!\!\!\!\!\!\!=&\!\!\!\!\!\!
\sum_{i}^{N_{\rm p}}\!2a_{i;n{\bf k}n'{\bf k}'} \!\left[
\frac{1}
{\omega_{i;n{\bf k}n'{\bf k}'}} \right.
\nonumber \\
&&
\hspace{-50pt}
\left.
+
\frac{
I\!(\xi_{n{\bf k}}\!,\!\xi_{n'{\bf k}'}\!,\omega_{i;n{\bf k}n'{\bf k}'}\!)
\!\!-\!\!
I\!(\xi_{n{\bf k}}\!,-\!\xi_{n'{\bf k}'}\!,\omega_{i;n{\bf k}n'{\bf k}'}\!)
}{{\rm tanh}[(\beta/2) \xi_{n{\bf k}}]{\rm tanh}[(\beta/2) \xi_{n'{\bf k}'}]}
\right]
,
\label{eq:Delta-kernel}\end{aligned}$$ where the function $I$ is defined by Eq. (55) in Ref. .
In order to calculate Eq. (\[eq:Delta-kernel\]), we determine the plasmon coupling coefficients $a_{i;n{\bf k}n'{\bf k}'}$ and the plasmon frequencies $\omega_{i;n{\bf k}n'{\bf k}'}$ by the following procedure: (i) calculate the screened Coulomb interaction for the [*real*]{} frequency grid $W_{n{\bf k}n'{\bf k}'}(\nu_{j}\!+\!{\rm i}\eta)$, where $\{\nu_{j}\}$ ($j=1, 2, \ldots, N_{\omega}$) specifies the frequency grid on which the numerical calculation is performed and $\eta$ is a small positive parameter, (ii) determine the plasmon frequencies $\{\omega_{i;n{\bf k}n'{\bf k}'}\}$ from the positions of the peaks up to the $N_{\rm p}$-th largest in ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}\!+\!{\rm i}\eta)$, (iii) calculate the screened Coulomb interaction for the [*imaginary*]{} frequency grid $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$, and (iv) using the calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$, determine the plasmon coupling coefficients $\{a_{i;n{\bf k}n'{\bf k}'}\}$ via least-squares fitting by $\tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$.
For the fitting, the variance to be minimized is defined as $$\begin{aligned}
S_{n{\bf k}n'{\bf k}'}
\!\!\!\!&=&\!\!\!\!
\sum^{N_{\omega }}_{j}
\delta \omega_{j}\biggl[
W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})
-W_{n{\bf k}n'{\bf k}'}(0)
\nonumber \\
&&
-
\sum^{N_{\rm p}}_{i}
a_{i;n{\bf k}n'{\bf k}'}
g_{i;n{\bf k}n'{\bf k}'}(\nu_{j})
\biggr]^{2}
,\end{aligned}$$ and we have introduced a weight $\delta \omega_{j}$ satisfying $\sum^{N_{\omega }}_{j}\delta \omega_{j}$$=$$1$. With all the plasmon frequencies given, the extremum conditions $\frac{\partial S}{\partial a_{i}}=0$ ($i=1, \ldots, N_{\rm p}$) read $$\begin{aligned}
\begin{pmatrix}
a_{1}\\
a_{2}\\
\vdots
\end{pmatrix}
\!\!\!\!&=&\!\!\!\!
\begin{pmatrix}
V^{gg}_{11} & V^{gg}_{12} & \cdots \\
V^{gg}_{21} & V^{gg}_{22} & \cdots \\
\vdots & \vdots & \ddots
\end{pmatrix}^{-1}
\begin{pmatrix}
V^{Wg}_{1} \\
V^{Wg}_{2} \\
\vdots
\end{pmatrix}
.
\label{eq:fit-coeff}\end{aligned}$$ Here, $V^{Wg}$ and $V^{gg}$ are defined by $$\begin{aligned}
V^{Wg}_{i}
&\!\!\!\!\!=&\!\!\!\!\!
\sum_{j=1}^{N_{\omega}}
\delta\omega_{j}[W_{j}-W(0)]
g_{i}(\nu_{j})
,\\
V^{gg}_{ij}
&\!\!\!\!\!=&\!\!\!\!\!
\sum_{k=1}^{N_{\omega}}
\delta\omega_{k}
g_{i}(\nu_{k})
g_{j}(\nu_{k})
.\end{aligned}$$ For arbitrary frequency grids, we define the weight as $$\begin{aligned}
\delta\omega_{j}\propto
\left\{
\begin{array}{cl}
0 & (j=1, N_{\omega}) \\
(\nu_{j+1}\!-\!\nu_{j-1})p_{j} & (j\neq 1, N_{\omega}) \\
\end{array}
\right.
.\end{aligned}$$ The factor $p_{j}$ is a weight for the variance function introduced for generality, and we set $p_{j}=1$ in Secs. \[sec:theory\] and \[sec:appl-Li\]. When a negative plasmon coupling appears, we fix the corresponding coupling to zero, recalculate Eq. (\[eq:fit-coeff\]), and repeat this procedure until all the coupling coefficients become nonnegative, so that the positive definiteness of the loss function is guaranteed.
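A minimal sketch of steps (iii)–(iv), including the nonnegativity iteration, is given below. It assumes the plasmon frequencies are already known (step (ii)) and uses synthetic data in place of the *ab initio* $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$:

```python
import numpy as np

# Least-squares determination of the plasmon couplings a_i in the multipole
# model, Eqs. (W-tilde)-(fit-coeff), with the iterative zeroing of negative
# couplings described in the text. All input data here are synthetic.

def g(nu, wi):
    # plasmon-pole basis function g_i(x) = 2/w_i - 2*w_i/(x^2 + w_i^2)
    return 2.0 / wi - 2.0 * wi / (nu**2 + wi**2)

nu = np.linspace(0.0, 20.0, 200)          # imaginary-frequency grid
omega = np.array([1.0, 2.5])              # plasmon frequencies (from step ii)
a_true = np.array([0.3, 0.8])             # "exact" couplings for the test data
dW = sum(a * g(nu, w) for a, w in zip(a_true, omega))  # W(i*nu) - W(0)

# trapezoid-like weights with vanishing endpoints, normalized to 1
dw = np.zeros_like(nu)
dw[1:-1] = nu[2:] - nu[:-2]
dw /= dw.sum()

def fit(active):
    G = np.array([g(nu, omega[i]) for i in active])   # basis on the grid
    Vgg = (G * dw) @ G.T                              # overlap matrix V^gg
    VWg = (G * dw) @ dW                               # projections V^Wg
    return np.linalg.solve(Vgg, VWg)                  # Eq. (fit-coeff)

# iterate: drop poles whose coupling comes out negative, refit the rest
active = list(range(len(omega)))
a = np.zeros(len(omega))
while active:
    sol = fit(active)
    if (sol >= 0).all():
        a[active] = sol
        break
    active = [i for i, s in zip(active, sol) if s >= 0]

print(a)  # recovers a_true for this noise-free synthetic input
```

With noise-free input and the exact pole positions, the linear system recovers the couplings directly; in the actual calculation the fit is only approximate, which is why the residual cusp structure discussed below remains.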
![(Color online) Screened Coulomb interaction $W_{n{\bf k}n'{\bf k}'}$ and the corresponding approximate function $\tilde{W}_{n{\bf k}n'{\bf k}'}$ for fcc lithium under 14GPa calculated along the real frequency axis \[(a), (c)\], and the imaginary frequency axis \[(b), (d)\]. The band indices $n$ and $n'$ specify the partially occupied band. ${\bf k}$ and ${\bf k}'$ are $(2\pi/a)(1/7,1/7,1/7)$ and $(0,0,0)$ for (a)–(b), whereas $(2\pi/a)(2/7,2/7,6/7)$ and $(0,0,0)$ for (c)–(d).[]{data-label="fig:fit"}](Li_fcc_14GPa_Wnknk_k7_fit_w_ed_130926_2.jpg)
For the determination of the plasmon frequencies, the calculated spectrum of ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\eta)$ is examined for each $\{n,{\bf k},n',{\bf k}'\}$. We have implemented a simple algorithm as follows: First, the peaks are identified as the points where the gradient of ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}+{\rm i}\eta)$ turns from negative to positive; next, the identified peaks are sampled in order of their weighted values $p_{j}{\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}+{\rm i}\eta)$. By increasing $N_{\rm p}$, we can expect that all the relevant plasmon modes are properly taken into account.
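The peak-selection step can be sketched as follows. The spectrum here is synthetic, and we adopt the convention that a plasmon resonance appears as a dip of ${\rm Im}W$ (a peak of the loss function), consistent with the negative-to-positive gradient criterion above:

```python
import numpy as np

# Minimal version of peak selection (step ii): locate grid points where the
# gradient changes from negative to positive, then keep the N_p strongest
# features. In the actual procedure the spectrum is p_j * Im W(nu_j + i*eta);
# here it is a synthetic two-pole loss spectrum.
nu = np.linspace(0.0, 10.0, 500)
spec = -1.0 / ((nu - 3.0)**2 + 0.1) - 0.4 / ((nu - 7.0)**2 + 0.2)

grad = np.diff(spec)
# gradient turns negative -> positive at a dip of Im W (a loss peak)
idx = np.where((grad[:-1] < 0) & (grad[1:] > 0))[0] + 1

N_p = 2
order = np.argsort(spec[idx])          # strongest (most negative) dips first
peaks = nu[idx[order][:N_p]]
print(sorted(peaks))  # ~ [3.0, 7.0]
```

As the text notes, this simple ranking by peak height is exactly what can miss a physically relevant but weak mode until $N_{\rm p}$ is made large enough.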
We show in Fig. \[fig:fit\] the results of the fitting for fcc Li under 14GPa as typical cases where the fitting is straightforward \[panels (a) and (b)\] and difficult \[(c) and (d)\]. The peaks used for the fitting are indicated by arrows. For the former, an accurate fitting function was obtained with $N_{\rm p}$$=$$2$: The derived fitting function and its analytic continuation $\tilde{W}_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$, indicated by thick blue lines, reproduce the numerically calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ and $W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ quite well. For the latter, on the other hand, good agreement between $\tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ and $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ was not achieved with $N_{\rm p}$$\leq$7, where $a_{i;n{\bf k}n'{\bf k}'}$ for the peaks indicated by the smaller arrows were zero. This is because one of the relevant plasmon modes, indicated by the larger arrows, was only the eighth largest in peak height. The convergence of $T_{\rm c}$ with respect to $N_{\rm p}$ can be slow due to this feature, though it becomes serious only for $\{n, {\bf k}, n', {\bf k}'\}$ where the dynamical structure is blurred by strong plasmon damping \[see the vertical axes in panels (a) and (c)\].
We here also note possible systematic errors in the present algorithm. First, multiple plasmon peaks in $W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ may mutually overlap due to their peak broadening. Then, some plasmon modes are hidden by large broad peaks and cannot be specified even if we increase $N_{\rm p}$. We have assumed that these hidden modes are negligible because of their small spectral weight and strong damping. Next, the variance does not exactly converge to zero since the numerically calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ shows a weak cusplike structure at ${\rm i}\omega=0$ \[see panel (d) in Fig. \[fig:fit\]\]. This structure probably originates from the finite lifetime of the plasmon modes. Its effect is not included by the plasmon-pole approximation, and will be examined in future studies.
\[tab:Tc\]

|                                  | Al         | Li (14GPa) | Li (20GPa) | Li (25GPa) | Li (30GPa) |
|----------------------------------|------------|------------|------------|------------|------------|
| $\lambda$                        | 0.417      | 0.522      | 0.623      | 0.722      | 0.812      |
| $\lambda^{a}$                    |            | 0.49       | 0.66       |            | 0.83       |
| $\omega_{\rm ln}$ \[K\]          | 314        | 317        | 316        | 308        | 304        |
| $r_{s}$                          | 2.03       | 2.71       | 2.64       | 2.59       | 2.55       |
| $\Omega_{\rm p}$ \[eV\]          | 16.2       | 8.23       | 8.44       | 8.51       | 8.58       |
| $T_{\rm c}^{\rm ph}$ \[K\]       | 5.9        | 10.0       | 15.2       | 19.0       | 23.3       |
| $T_{\rm c}^{\rm stat}$ \[K\]     | 0.8        | 0.7        | 1.8        | 3.2        | 5.0        |
| $T_{\rm c}^{N_{\rm p}=1}$ \[K\]  | 1.4        | 2.2        | 4.1        | 6.5        | 9.1        |
| $T_{\rm c}^{N_{\rm p}=2}$ \[K\]  | 1.4        | 2.2        | 4.4        | 6.8        | 9.1        |
| $T_{\rm c}^{\rm expt.}$ \[K\]    | 1.20$^{b}$ | $<$4       |            |            |            |
![(Color online) Our calculated $T_{\rm c}$ (solid squares and circles) for aluminum and fcc lithium under high pressures compared with the experimentally observed values. The open symbols represent the experiments: Ref. (open inverted triangle), Ref. (open squares), Ref. (open circles), Ref. (open regular triangles), and Ref. (open diamonds). []{data-label="fig:Tc-expt"}](Al_Li_fcc_pressure_Tc_compare_expts_ed_130930.jpg)
Application to lithium under pressures {#sec:appl-Li}
======================================
The above formalism, based on the plasmon-pole approximation, is expected to be valid for nearly uniform electron systems. We here present the recent application to the elemental superconductor Li. Lithium is known to exhibit superconductivity with $T_{\rm c}$$\gtrsim$10 K under high pressure.[@Shimizu2002; @Struzhkin2002; @Deemyad2003; @Lin-Dunn] Early *ab initio* calculations[@Christensen-Novikov; @Tse; @Kusakabe2005; @Kasinathan; @Jishi] including one based on the SCDFT[@Profeta-pressure] reproduced the experimentally observed pressure dependence of $T_{\rm c}$ quantitatively. However, a later sophisticated calculation[@Bazhirov-pressure] using the Wannier interpolation technique[@Giustino-Wannier-elph] showed that the numerically converged electron-phonon coupling coefficient is far smaller than the previously reported values. On the other hand, the plasmon effect is expected to be substantial because the density of conducting electrons $n$, which determines the typical plasmon frequency through $\propto\sqrt{n}$, is relatively small in Li due to the large ionic radius and the small number of valence electrons. Therefore, it is interesting to see whether the newly included plasmon contribution fills the gap between theory and experiment. It is also important to examine whether the present *ab initio* method works successfully for conventional superconductors whose $T_{\rm c}$s have already been well reproduced by the conventional SCDFT. For that reason, we also applied the present method to aluminum.
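The $\sqrt{n}$ scaling can be checked against Table \[tab:Tc\] with the free-electron estimate $\Omega_{\rm p}=\sqrt{4\pi n}$ in Hartree atomic units, i.e., $\Omega_{\rm p}=\sqrt{3}/r_{s}^{3/2}$ Hartree for $n=3/(4\pi r_{s}^{3})$. The sketch below applies this textbook formula to the tabulated $r_{s}$ values; for Al it reproduces the tabulated $\Omega_{\rm p}$ closely, while for Li the tabulated value lies below the free-electron estimate, which may reflect band-structure effects beyond the free-electron picture:

```python
import math

# Free-electron plasma energy from the density parameter r_s:
# Omega_p = sqrt(4*pi*n) a.u. with n = 3/(4*pi*r_s^3)
#         = sqrt(3)/r_s^{3/2} Hartree.
HARTREE_EV = 27.211  # 1 Hartree in eV

def omega_p_eV(rs):
    return math.sqrt(3.0) / rs**1.5 * HARTREE_EV

print(omega_p_eV(2.03))  # Al (r_s = 2.03): ~16.3 eV, vs. tabulated 16.2 eV
print(omega_p_eV(2.71))  # Li at 14GPa (r_s = 2.71): ~10.6 eV, vs. tabulated 8.23 eV
```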

Calculation with small $N_{\rm p}$ {#subsec:small-Np}
------------------------------
In Ref. , we performed calculations for fcc Li under pressures of 14, 20, 25, and 30GPa. All our calculations were carried out within the local-density approximation [@Ceperley-Alder; @PZ81] using the [*ab initio*]{} plane-wave pseudopotential code [Quantum Espresso]{} [@Espresso; @Troullier-Martins] (see Ref. for further details). The phonon contributions to the SCDFT exchange-correlation kernels ($\mathcal{K}^{\rm ph}$ and $\mathcal{Z}^{\rm ph}$) were calculated using the energy-averaged approximation [@GrossII], whereas the electron contributions ($\mathcal{K}^{\rm el,stat}$ and $\Delta\mathcal{K}^{\rm el}$) were calculated by Eq. (13) in Ref. and Eq. (\[eq:Delta-kernel\]) to evaluate the plasmon effect. The SCDFT gap equation was solved with the random sampling scheme given in Ref. , with which the sampling error in the calculated $T_{\rm c}$ was not more than a few percent. In addition to the typical plasmon, an extra plasmon due to a band-structure effect has been discussed for Li[@Karlsson-Aryasetiawan; @Silkin2007] and Al[@Hoo-Hopfield; @Sturm-Oliveira1989]. We therefore carried out the calculation for $N_{\rm p}$$=$$1$ and $2$.
In Table \[tab:Tc\], we summarize our calculated $T_{\rm c}$ values with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$ ($T_{\rm c}^{\rm ph}$), $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$ ($T_{\rm c}^{\rm stat}$), and $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$ ($T_{\rm c}^{N_{\rm p}=1}$ and $T_{\rm c}^{N_{\rm p}=2}$). The estimated electron-phonon coupling coefficient $\lambda$, the logarithmic average of the phonon frequencies $\omega_{\rm ln}$, the density parameter $r_{s}$, and the typical plasma frequency $\Omega_{\rm p}$ are also given. Instead of using the Wannier-interpolation technique, we carried out the Fermi-surface integration for the input Eliashberg functions[@Migdal-Eliashberg] with broad smearing functions, [@Akashi-plasmon] and we obtained $\lambda$ consistent with the latest calculation [@Bazhirov-pressure], which is smaller than the earlier estimates [@Tse; @Kusakabe2005; @Profeta-pressure; @Kasinathan; @Christensen-Novikov; @Jishi]. The material and pressure dependence of the theoretical $T_{\rm c}$ follows that of $\lambda$. With $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$, $T_{\rm c}$ is estimated to be of the order of 10 K. While it is significantly suppressed by including $\mathcal{K}^{\rm el,stat}$, it is increased again by introducing $\Delta\mathcal{K}^{\rm el}$. We do not see a significant $N_{\rm p}$ dependence here, which is further examined in Sec. \[subsec:large-Np\].
The calculated values of $T_{\rm c}$ are compared with the experimental values in Fig. \[fig:Tc-expt\]. With the static approximation (red squares), the general trend of the experimentally observed $T_{\rm c}$ is well reproduced: Aluminum exhibits the lowest $T_{\rm c}$, and $T_{\rm c}$ in Li increases as the pressure becomes higher. However, the calculated $T_{\rm c}$ for Li is significantly lower than the experimental one, which demonstrates that the conventional phonon theory is quantitatively insufficient to explain the origin of the high $T_{\rm c}$ in Li under high pressures. In the previous *ab initio* calculations, this insufficiency was not recognized because either a too strong electron-phonon coupling or a too weak electron-electron Coulomb interaction was used. With the plasmon contribution (blue circles), the resulting $T_{\rm c}$ systematically increases compared with the static-level one and becomes quantitatively consistent with the experiment. For Al, in contrast, the accuracy is acceptable with both $T^{\rm stat}_{\rm c}$ and $T^{N_{\rm p}=2}_{\rm c}$, and the increase of $T_{\rm c}$ by $\Delta\mathcal{K}^{\rm el}$ is relatively small. These results indicate the following: First, the plasmon contribution is essential for the high $T_{\rm c}$ in fcc Li under pressure; second, our scheme gives accurate estimates of $T_{\rm c}$ regardless of whether the dynamical effects are strong or weak.
![(Color online) (a) Decomposition of the nondiagonal exchange-correlation kernel $\mathcal{K}_{n{\bf k}n'{\bf k}'}$ at $T$$=$$0.01$K calculated for fcc lithium under pressure of 14GPa, averaged by equal-energy surfaces for $n'{\bf k}'$. (b) The corresponding gap function calculated with (darker) and without (lighter) $\Delta\mathcal{K}^{\rm el}$.[]{data-label="fig:kernel-gap"}](Li_fcc_14GPa_Kernel_ph_el_K1_gap_ed_130926_2.jpg)
We discuss the origin of the enhancement of $T_{\rm c}$ by the dynamical effect in terms of partially energy-averaged nondiagonal kernels $\mathcal{K}_{n{\bf k}}(\xi)$$\equiv$$\frac{1}{N(\xi)}\sum_{n'{\bf k}'}\mathcal{K}_{n{\bf k}n'{\bf k}'}\delta(\xi-\xi_{n'{\bf k}'})$. With $n{\bf k}$ chosen as a certain point near the Fermi energy, we plotted the averaged kernel for fcc Li under pressure of 14GPa and Al with $N_{\rm p}$$=$$2$ in Fig. \[fig:kernel\]. The total kernel is decomposed into $\mathcal{K}^{\rm ph}$ (solid red line), $\mathcal{K}^{\rm el,stat}$ (dotted green line), and $\Delta\mathcal{K}^{\rm el}$ (dashed blue line). Generally, the total kernel becomes slightly negative within the energy scale of the phonons due to $\mathcal{K}^{\rm ph}$, whereas it becomes positive outside this energy scale mainly because of $\mathcal{K}^{\rm el,stat}$. The $\Delta\mathcal{K}^{\rm el}$ contribution is positive, but nearly zero on the low-energy scale. As discussed in Sec. \[sec:theory\], this high-energy enhancement of the repulsion increases $T_{\rm c}$ through the retardation effect. Remarkably, $\Delta\mathcal{K}^{\rm el}$ sets in from an energy far smaller than the typical plasmon frequency (see Table \[tab:Tc\]), and its absolute value is of the same order as $\mathcal{K}^{\rm el,stat}$. These features can also be seen in the homogeneous electron gas studied by Takada [@Takada1978]. Regarding the difference between Li and Al \[panels (a) and (b)\], we see that the contribution of $\Delta\mathcal{K}^{\rm el}$ in Al is noticeably smaller than that in Li. Also, the energy scale of the structure of $\Delta\mathcal{K}^{\rm el}$ \[inset of (b)\], which correlates with $\Omega_{\rm p}$ (see Table \[tab:Tc\]), is small (large) for Li (Al). These differences explain why the effect of $\Delta\mathcal{K}^{\rm el}$ is more significant in Li.
The enhanced retardation effect by the plasmon is seen more clearly from the gap functions plotted together with the non-diagonal kernel in Fig. \[fig:kernel-gap\]. Indeed, we observe substantial enhancement of the negative gap value in the high-energy region, where the additional repulsion due to $\Delta\mathcal{K}^{\rm el}$ is strong. This clearly demonstrates that the plasmon mechanism indeed enhances $T_{\rm c}$, as is described in Sec. \[sec:theory\].
We did not find a nonzero solution for the gap equation Eq. (\[eq:gap-eq\]) with only the electron-electron contributions ($\mathcal{K}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$) down to $T=0.01$ kelvin, but did with the electron-phonon and the static electron-electron contribution ($\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$). Hence, while the driving force of the superconducting transition in Li is the phonon effect, the plasmon effect is essential to realize high $T_{\rm c}$.
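The way high-energy repulsion raises $T_{\rm c}$ through retardation can be illustrated, far more crudely than with the SCDFT kernels above, by the textbook two-square-well (Morel–Anderson) estimate, in which repulsion acting up to a large electronic energy scale is logarithmically renormalized at the phonon scale. The parameters below ($\lambda$, $\mu$, the energy-scale ratio) are illustrative choices for this sketch, not values from this work:

```python
import math

def coulomb_pseudopotential(mu, energy_ratio):
    """Morel-Anderson renormalization: bare repulsion mu acting up to a
    high electronic cutoff is reduced to mu* at the phonon scale."""
    return mu / (1.0 + mu * math.log(energy_ratio))

def tc_over_omega_d(lam, mu_star):
    """BCS-like estimate T_c ~ omega_D * exp(-1/(lambda - mu*))."""
    return math.exp(-1.0 / (lam - mu_star))

# A larger electronic-to-phononic energy ratio (stronger retardation)
# gives a smaller effective repulsion mu* and hence a higher T_c.
for ratio in (10.0, 100.0, 1000.0):
    mu_star = coulomb_pseudopotential(0.3, ratio)
    print(ratio, mu_star, tc_over_omega_d(0.5, mu_star))
```

Enlarging the ratio of electronic to phononic energy scales plays, in this toy model, the role of the plasmon-enhanced high-energy repulsion: $\mu^{*}$ shrinks and the $T_{\rm c}$ estimate grows.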
Finally, we also examined the effect of energy dependence of electronic density of states (DOS) on $\mathcal{Z}^{\rm ph}$. Since the form for $\mathcal{Z}^{\rm ph}$ used above \[Eq. (24) in Ref. \] only treats the constant component of the density of states, we also employed a form generalized for the nonconstant density of states \[Eqs. (40) in Ref. \]. The calculated $T_{\rm c}$ changes by approximately 2% with the nonconstant component, indicating that the constant-DOS approximation for the phonon contributions is valid for the present systems.
![(Color online) $N_{\rm p}$ dependence of the calculated gap function $\Delta_{n{\bf k}}$ at the Fermi level at $T$=0.01 K for (a) fcc lithium under pressure of 25GPa and (b) aluminum.[]{data-label="fig:gap-conv"}](Al_Li_fcc_25GPa_gap_conv_wrt_peak_nointerpol_ed_130928.jpg){width="7cm"}
\[tab:Tc-2\]
--------------------------------- --------- --------- --------- --------- ----------
                                     Al        Li 14GPa  Li 20GPa  Li 25GPa  Li 30GPa
$T_{\rm c}^{N_{\rm p}=1}$ \[K\] 1.4,1.5 2.2,2.8 4.1,5.2 6.5,7.4 9.1,11.1
$T_{\rm c}^{N_{\rm p}=2}$ \[K\] 1.4,1.6 2.2,3.1 4.4,5.5 6.8,8.0 9.1,10.7
$T_{\rm c}^{N_{\rm p}=5}$ \[K\] 1.6 3.8 6.5 9.2 12.0
$T_{\rm c}^{N_{\rm p}=8}$ \[K\] 1.6 3.8 6.5 9.2 12.0
--------------------------------- --------- --------- --------- --------- ----------
: Calculated $T_{\rm c}$ with different $N_{\rm p}$ using the procedure described in the text. For $N_{\rm p}$$=$1 and 2, the calculated values in Table \[tab:Tc\] are given together for comparison (left values).
$N_{\rm p}$ dependence of $T_{\rm c}$ {#subsec:large-Np}
-------------------------------------
Here we investigate the convergence of $T_{\rm c}$ with respect to the number of plasmon peaks $N_{\rm p}$. To address this problem, on top of the procedure described in Secs. \[sec:theory\] and \[subsec:small-Np\], we employed a slightly different algorithm. The difference is as follows. First, in the previous procedure, the plasmon frequencies $\omega_{i;n{\bf k}n'{\bf k}'}$ and coupling coefficients $a_{i;n{\bf k}n'{\bf k}'}$ for a set of sampling points were calculated from linear interpolation using the *ab initio* data on the equal grid, where the interpolation was independently carried out for each $i$-th largest branch. Since such an algorithm becomes unstable for damped peaks, we did not carry out this interpolation here, but rather determined $\omega_{i;n{\bf k}n'{\bf k}'}$ and $a_{i;n{\bf k}n'{\bf k}'}$ simply from the *ab initio* values on the neighboring grid point. Second, the weight for the variance and the ordering of the peak $p_{j}$ (see Sec. \[subsec:plasmon-pole\]) was set to unity in the previous procedure, but here we adopted $p_{j}= \nu_{j}^{-(1/3)}$: In an analytic $T_{\rm c}$ formula for three-dimensional electron gas derived by Takada \[Eq. (2.28) in Ref. \], the coefficient $\langle F \rangle$ in the exponent depends on the typical plasmon frequency by $\Omega_{\rm p}^{-(1/3)}$, so that we determined $p_{j}$ accordingly. We have indeed found that this setting of $p_{j}$ accelerates the convergence of the calculated gap function with respect to $N_{\rm p}$, as demonstrated by Fig. \[fig:gap-conv\].[@comment-accelerate]
Carrying out the above procedure,[@comment-recalc] we calculated $T_{\rm c}$ for Al and for Li under pressure. The calculated result for $N_{\rm p}$$=$1, 2, 5 and 8 is summarized in Table \[tab:Tc-2\] together with that of Sec. \[subsec:small-Np\]. For $N_{\rm p}$$=$1 and 2, the previous and present procedures give slightly different values of $T_{\rm c}$, which originates mainly from the difference in the interpolation of $\omega_{i;n{\bf k}n'{\bf k}'}$ and $a_{i;n{\bf k}n'{\bf k}'}$. Within the present results, the calculated $T_{\rm c}$ for Al shows little $N_{\rm p}$ dependence, whereas $N_{\rm p}$ has to be larger than 5 for Li to achieve convergence within 0.1K. This indicates that the damped dynamical structure of the Coulomb interaction ignored with $N_{\rm p}$$=$1 and 2 also has a nonnegligible effect. We note that the general numerical trend observed in the results in Sec. \[subsec:small-Np\] is also valid for the calculated values with $N_{\rm p}\geq 5$.
Summary and Conclusion {#sec:summary}
======================
We reviewed the recent progress by the authors in the SCDFT to address non-phonon superconducting mechanisms.[@Akashi-plasmon] An exchange-correlation kernel entering the SCDFT gap equation has been formulated within the dynamical RPA so that the plasmons in solids are considered. Through the retardation effect, plasmons can induce superconductivity, which has been studied for more than 35 years as the plasmon-induced pairing mechanism. A practical method to calculate $T_{\rm c}$ considering the plasmon effect has been implemented and applied to fcc Li. We have shown that the plasmon effect considerably raises $T_{\rm c}$ by cooperating with the conventional phonon-mediated pairing interaction, which is essential to understand the high $T_{\rm c}$ in Li under high pressures.
The recent application suggests a general possibility that plasmons have a substantial effect on $T_{\rm c}$, even in cases where they do not alone induce a superconducting transition. It is then interesting to apply the present formalism to “other high-temperature superconductors"[@Pickett-review-other] such as layered nitrides, fullerides, and the bismuth perovskite. Effects of the electron-electron and electron-phonon interactions in these systems have recently been examined from various viewpoints, particularly with *ab initio* calculations.[@Meregalli-Savrasov-BKBO; @Heid-Bohnen2005; @Yin-Kotliar-PRX; @Antropov-Gunnarsson-C60; @Janssen-Cohen-C60; @Akashi-MNCl; @Akashi-fullerene; @Nomura-C60-cRPA] Since they have a nodeless superconducting gap, plasmons may play a crucial role in realizing their high $T_{\rm c}$.[@Bill2002-2003] More generally, there can be other situations: (i) the phonon effect does not dominate over the static Coulomb repulsion, but the plasmon effect does (i.e., a superconducting solution is not found with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$, but is found with $\mathcal{K}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$), and (ii) neither of the two effects induces the transition independently, but their cooperation does (i.e., a superconducting solution is found with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$). Searching for superconducting systems of such kinds is another interesting future subject, for which our scheme provides a powerful tool based on the density functional theory.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors thank Kazuma Nakamura and Yoshiro Nohara for providing subroutines for calculating the RPA dielectric functions. This work was supported by Funding Program for World-Leading Innovative R & D on Science and Technology (FIRST Program) on “Quantum Science on Strong Correlation,” JST-PRESTO, Grants-in-Aid for Scientific Research (No. 23340095), and the Next Generation Super Computing Project and Nanoscience Program from MEXT, Japan.
[999]{} J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. **108**, 1175 (1957). J. G. Bednorz and K. A. Müller, Z. Phys. B **64**, 189 (1986). A. B. Migdal, Sov. Phys. JETP **7**, 996 (1958); G. M. Eliashberg, Sov. Phys. JETP **11**, 696 (1960); D. J. Scalapino, in [*Superconductivity*]{} edited by R. D. Parks, (Marcel Dekker, New York, 1969) VOLUME 1; J. R. Schrieffer,[*Theory of superconductivity; Revised Printing*]{}, (Westview Press, Colorado, 1971); P. B. Allen and B. Mitrović, in *Solid State Physics*, edited by H. Ehrenreich, F. Seitz, and D. Turnbull (Academic, New York, 1982), Vol. 37, p. 1. W. Kohn and L. J. Sham, Phys. Rev. **140**, A1133 (1965). S. Baroni, S. deGironcoli, A. Dal Corso, and P. Giannozzi, Rev. Mod. Phys. **73**, 515(2001). K. Kunc and R. M. Martin, in *Ab initio Calculation of Phonon Spectra*, edited by J. T. Devreese, V. E. van Doren, and P. E. van Camp (Plenum, New York, 1983), p. 65. D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. **45**, 566 (1980). J. P. Perdew and A. Zunger, Phys. Rev. B **23**, 5048 (1981). S. Y. Savrasov and D. Y. Savrasov, Phys. Rev. B **54**, 16487 (1996). H. J. Choi, D. Roundy, H. Sun, M. L. Cohen, and S. G. Louie, Nature (London) **418**, 758 (2002); Phys. Rev. B **66**, 020513(R) (2002). W. L. McMillan, Phys. Rev. [**167**]{}, 331 (1968). P. B. Allen and R. C. Dynes, Phys. Rev. B [**12**]{}, 905 (1975). P. Morel and P. W. Anderson, Phys. Rev. **125**, 1263 (1962); N. N. Bogoliubov, V. V. Tolmachev, and D. V. Shirkov, [*A New Method in the Theory of Superconductivity*]{} (1958) (translated from Russian: Consultants Bureau, Inc., New York, 1959). L. N. Oliveira, E. K. U. Gross, and W. Kohn, Phys. Rev. Lett. **60**, 2430 (1988). T. Kreibich and E. K. U. Gross, Phys. Rev. Lett. **86**, 2984 (2001). M. Lüders, M. A. L. Marques, N. N. Lathiotakis, A. Floris, G. Profeta, L. Fast, A. Continenza, S. Massidda, and E. K. U. Gross, Phys. Rev. B **72**, 024545 (2005). M. A. L. Marques, M. Lüders, N. N. 
Lathiotakis, G. Profeta, A. Floris, L. Fast, A. Continenza, E. K. U. Gross, and S. Massidda, Phys. Rev. B **72**, 024546 (2005). A. Floris, G. Profeta, N. N. Lathiotakis, M. Lüders, M. A. L. Marques, C. Franchini, E. K. U. Gross, A. Continenza, and S. Massidda, Phys. Rev. Lett. **94**, 037004 (2005). A. Sanna, G. Profeta, A. Floris, A. Marini, E. K. U. Gross, and S. Massidda, Phys. Rev. B **75**, 020511(R) (2007). C. Bersier, A. Floris, A. Sanna, G. Profeta, A. Continenza, E. K. U. Gross, and S. Massidda, Phys. Rev. B **79**, 104503 (2009). R. Akashi, K. Nakamura, R. Arita, and M. Imada, Phys. Rev. B **86**, 054513 (2012). R. Akashi and R. Arita, Phys. Rev. B **88**, 054510 (2013). D. J. Scalapino, Rev. Mod. Phys. **84**, 1383 (2012). W. Kohn and J. M. Luttinger, Phys. Rev. Lett. **15**, 524 (1965). V. Radhakrishnan, Phys. Lett. **16**, 247 (1965). H. Fröhlich, J. Phys. C: Solid State Phys. **1**, 544 (1968). Y. Takada, J. Phys. Soc. Jpn. **45**, 786 (1978). H. Rietschel and L. J. Sham, Phys. Rev. B **28**, 5100 (1983). W. A. Little, Phys. Rev. **134**, A1416 (1967). C. S. Koonce, M. L. Cohen, J. F. Schooley, W. R. Hosler, and E. R. Pfeiffer, Phys. Rev. **163**, 380 (1967). Y. Takada, J. Phys. Soc. Jpn. **49**, 1267 (1980). J. W. Garland, Jr., Phys. Rev. Lett. **11**, 111 (1963). D. Pines, Can. J. Phys. **34**, 1379 (1956). J. Ihm, M. L. Cohen, and S. F. Tuan, Phys. Rev. B **23**, 3258 (1981). V. L. Ginzburg, Sov. Phys. Usp. **13**, 335 (1970). D. Allender, J. Bray, and J. Bardeen, Phys. Rev. B **7**, 1020 (1973). V. Z. Kresin, Phys. Rev. B **35**, 8716 (1987). A. Bill, H. Morawitz, and V. Z. Kresin, Phys. Rev. B **66**, 100501(R) (2002); Phys. Rev. B **68**, 144519 (2003). S. Yamanaka, K. Hotehama, and H. Kawaji, Nature (London) **392**, 580 (1998). Y. Taguchi, A. Kitora, and Y. Iwasa, Phys. Rev. Lett. **97**, 107001 (2006). K. Taniguchi, A. Matsumoto, H. Shimotani, and H. Takagi, Appl. Phys. Lett. **101**, 042603 (2012). J. T. Ye, Y. J. Zhang, R. Akashi, M. S. 
Bahramy, R. Arita, and Y. Iwasa, Science **338**, 1193 (2012). R. Akashi and R. Arita, Phys. Rev. Lett. **111**, 057006 (2013). S. Massidda, F. Bernardini, C. Bersier, A. Continenza, P. Cudazzo, A. Floris, H. Glawe, M. Monni, S. Pittalis, G. Profeta, A. Sanna, S. Sharma, and E. K. U. Gross, Supercond. Sci. Technol. **22**, 034006 (2009). D. Pines, *Elementary Excitations in Solids* (Benjamin, New York, 1963). S. Kurth, M. Marques, M. Lüders, and E. K. U. Gross, Phys. Rev. Lett. **83**, 2628 (1999). M. S. Hybertsen and S. G. Louie, Phys. Rev. B **35**, 5585 (1987); M. S. Hybertsen and S. G. Louie, Phys. Rev. B **35**, 5602 (1987). J. Kondo, Prog. Theor. Phys. **29**, 1 (1963). Y. Takada, J. Phys. Soc. Jpn, **51**, 63 (1982). Y. Takada, J. Phys. Soc. Jpn, **78**, 013703 (2009). His theory also treats the coupling between phonons and plasmons, which is not considered in our method. K. Shimizu, H. Ishikawa, D. Takao, T. Yagi, and K. Amaya, Nature (London) **419**, 597 (2002). V.V. Struzhkin, M. I. Eremets, W. Gan, H. K. Mao, and R. J. Hemley, Science **298**, 1213 (2002). S. Deemyad and J. S. Schilling, Phys. Rev. Lett. **91**, 167001 (2003). T. H. Lin and K. J. Dunn, Phys. Rev. B **33**, 807 (1986). J. S. Tse, Y. Ma, and H. M. Tütüncü, J. Phys. Condens. Matter **17**, S911 (2005); Y. Yao, J. Tse, K. Tanaka, F. Marsiglio, Y. Ma, Phys. Rev. B **79**, 054524 (2009). S. U. Maheswari, H. Nagara, K. Kusakabe, and N. Suzuki, J. Phys. Soc. Jpn. **74**, 3227 (2005). D. Kasinathan, J. Kuneš, A. Lazicki, H. Rosner, C. S. Yoo, R. T. Scalettar, and W. E. Pickett, Phys. Rev. Lett. **96**, 047004 (2006); D. Kasinathan, K. Koepernik, J. Kuneš, H. Rosner, and W. E. Pickett, Physica C **460-462**, 133 (2007). N. E. Christensen and D. L. Novikov, Phys. Rev. B **73**, 224508 (2006). R. A. Jishi, M. Benkraouda, and J. Bragin, J. Low Temp. Phys. **147**, 549 (2007). G. Profeta, C. Franchini, N. N. Lathiotakis, A. Floris, A. Sanna, M. A. L. Marques, M. Lüders, S. Massidda, E. K. U. 
Gross, and A. Continenza, Phys. Rev. Lett. **96**, 047003 (2006). T. Bazhirov, J. Noffsinger, and M. L. Cohen, Phys. Rev. B **82**, 184509 (2010). F. Giustino, M. L. Cohen, and S. G. Louie, Phys. Rev. B **76**, 165108 (2007). P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. Fabris, G. Fratesi, S. de Gironcoli, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, J. Phys.: Condens. Matter **21**, 395502 (2009); http://www.quantum-espresso.org/. N. Troullier and J. L. Martins, Phys. Rev. B **43**, 1993 (1991). V. M. Silkin, A. Rodriguez-Prieto, A. Bergara, E. V. Chulkov, and P. M. Echenique, Phys. Rev. B **75**, 172102 (2007). K. Karlsson and F. Aryasetiawan, Phys. Rev. B **52**, 4823 (1995). E. N. Foo and J. J. Hopfield, Phys. Rev. **173**, 635 (1968). K. Sturm and L. E. Oliveira, Phys. Rev. B **40**, 3672 (1989). N. W. Ashcroft and N. D. Mermin, *Solid State Physics* (Thomson Learning, Singapore, 1976). R. Akashi and R. Arita, Phys. Rev. B **88**, 014514 (2013). The two settings of $p_{j}$ give slightly different values of the variance for $N_{\rm p}\rightarrow \infty$ due to the cusplike structure discussed in Sec. \[subsec:plasmon-pole\], though the difference is invisibly small here. Also, we used Kohn-Sham energy eigenvalues on an auxiliary $21^{3}$ k-point grid for the ${\bf k}$ integration in Eq. (\[eq:chi-def\]) when the both $n$ and $n$’ corresponds to partially-occupied bands, with which some plasmon peaks became sharp. W. E. Pickett, Physica B **296**, 112 (2001); Physica C **468**, 126 (2008). V. Meregalli and S. Y. Savrasov, Phys. Rev. B **57**, 14453 (1998). R. Heid and K. P. Bohnen, Phys. Rev. B **72**, 134527 (2005). Z. P. Yin, A. Kutepov, and G. 
Kotliar, Phys. Rev. X **3**, 021011 (2013). V. P. Antropov, O. Gunnarsson, and A. I. Liechtenstein, Phys. Rev. B **48**, 7651 (1993). J. Laflamme Janssen, M. Côté, S. G. Louie, and M. L. Cohen, Phys. Rev. B **81**, 073106 (2010). Y. Nomura, K. Nakamura, and R. Arita, Phys. Rev. B **85**, 155452 (2012).
[^1]: E-mail address: akashi@solis.t.u-tokyo.ac.jp
4 ideas for improving your e-commerce website
2019 is here, and the new year provides an excellent opportunity to refresh your e-commerce website, by adding new features and updating content.
Adding web banners
Web banners are a great way to keep your e-commerce website homepage looking fresh, and making viewers aware of the latest news about products and offers. They can be easily modified to serve a range of purposes, are potentially eye-catching if they are placed in the appropriate area and are a good way of promoting a specific product or offer on a homepage while also retaining the core brand visuals elsewhere.
The image below is an example of a web banner in development. We’ve put together a handy guide for creating your banners – click here.
Adding new features
New features on your e-commerce website can add value through improved functionality, which in turn enhances usability for customers. Examples include features that allow easy modification of the products customers wish to purchase, such as different colours or quantities, or a social login function that enables users to create an account with their Facebook credentials.
Such responsive features make the e-commerce process as painless and easy-to-use as possible, limiting the barriers between browsing and purchase, in turn improving conversion rates and the chances of customers returning for more in the future.
A positive experience can often leave the customer wanting more, and it’s the website’s job to ensure that its features and functionality are kept updated and fit for purpose, in response to the ever-changing demands of the modern e-commerce customer.
For example, one of the new features we’ve recently added from Amasty is the Social Login, which allows users to set up their account using login credentials from Facebook. To find out more about this feature, click here.
Improve optimisation
While you’re reading this, grab a smartphone or tablet and have a browse around your website.
How does it look?
Are the images stacked or overlapping, or is there text missing?
These issues mean your website has not been optimised for mobile devices, making it unusable for a large percentage of potential customers browsing with their iPhones or Samsungs. Users are extremely unlikely to want to fight through images and texts to find the products they want, and will quickly become frustrated and depart for a different site.
Don’t neglect these customers! Get your site optimised for different devices to reach as wide an audience as possible.
Data from 2018 from Statista.com, shown in the graph below, shows that 52.2% of all online browsing was done on a mobile device, a trend which has grown year on year. This underlines the importance of ensuring that your website is fit for use for all potential users. You’re potentially missing out on reaching these customers if your site doesn’t meet their demands and, with the trend of mobile browsing only set to rise, optimising your website is quickly becoming a necessity for online retailers.
Our Liquidshop e-commerce platform is designed to provide the best user experience for your customers, through responsive e-commerce. Optimisation on devices of all sizes allows your website to be user-friendly for as many potential visitors as possible, expanding your reach and enhancing the user experience, leading to increased sales as part of a smooth and responsive overall e-commerce experience.
Keeping branding updated and consistent
There are few things more off-putting when navigating onto an e-commerce website than a poorly designed logo at the top of the page, or old, pixelated imagery taking up the homepage. A consistent brand image across the pages improves brand recognition for customers and gives the impression of a modern, well-designed and cared-for website and business as a whole.
You can also create special themed logos for holiday times such as Christmas or winter, like we did with our logo below.
What’s most important is to put time and effort into keeping your website updated. Whether that’s imagery, information or branding, putting the time into maintaining an attractive and cohesive e-commerce site keeps your customers and new visitors engaged and ensures that there are as few barriers as possible between browsing and purchase.
---
abstract: 'In this article we relate word and subgroup growth to certain functions that arise in the quantification of residual finiteness. One consequence of this endeavor is a pair of results that equate the nilpotency of a finitely generated group with the asymptotic behavior of these functions. The second half of this article investigates the asymptotic behavior of two of these functions. Our main result in this arena resolves a question of Bogopolski from the Kourovka notebook concerning lower bounds of one of these functions for nonabelian free groups.'
author:
- 'K. Bou-Rabee[^1] and D. B. McReynolds[^2]'
title: |
**Asymptotic growth and\
least common multiples in groups**
---
1991 MSC classes: 20F32, 20E26. Keywords: *free groups, hyperbolic groups, residual finiteness, subgroup growth, word growth.*
Introduction
============
The goals of the present article are to examine the interplay between word and subgroup growth, and to quantify residual finiteness, a topic motivated and described by the first author in [@Bou]. These two goals have an intimate relationship that will be illustrated throughout this article.
Our focus begins with the interplay between word and subgroup growth. Recall that for a fixed finite generating set $X$ of $\Gamma$ with associated word metric ${\left\vert \left\vert \cdot\right\vert\right\vert}_X$, word growth investigates the asymptotic behavior of the function $$\operatorname{w}_{\Gamma,X}(n) = {\left\vert{\left\{\gamma \in \Gamma~:~ {\left\vert \left\vert \gamma\right\vert\right\vert}_X \leq n\right\}}\right\vert},$$ while subgroup growth investigates the asymptotic behavior of the function $$\operatorname{s}_\Gamma(n) = {\left\vert{\left\{\Delta \lhd \Gamma~:~ [\Gamma:\Delta]\leq n\right\}}\right\vert}.$$ To study the interaction between word and subgroup growth we propose the first of a pair of questions:
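To make the first definition concrete: for a free group $F_m$ with its standard basis, $\operatorname{w}_{F_m,X}(n)$ can be computed exactly, since each reduced word of positive length extends in $2m-1$ ways. The following sketch is our own illustration, not taken from the paper:

```python
def free_group_ball_size(rank, n):
    """w_{F_rank, X}(n): the number of reduced words of length <= n
    in the free group of the given rank (standard basis X)."""
    total, sphere = 1, 2 * rank       # the identity; then 2r words of length 1
    for _ in range(n):
        total += sphere
        sphere *= 2 * rank - 1        # a reduced word extends in 2r - 1 ways
    return total

# Exponential word growth: 2 * 3^n - 1 for rank 2.
print([free_group_ball_size(2, n) for n in range(5)])  # -> [1, 5, 17, 53, 161]
```

For rank 1 this recovers the linear growth $\operatorname{w}_{\mathbf{Z}}(n) = 2n+1$, while for $m \geq 2$ the growth is exponential, which is the regime where the interplay measured by $\operatorname{F}_{\Gamma,X}$ becomes interesting.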
**Question 1.** *What is the smallest integer $\operatorname{F}_{\Gamma,X}(n)$ such that for every word $\gamma$ in $\Gamma$ of word length at most $n$, there exists a finite index normal subgroup of index at most $\operatorname{F}_{\Gamma,X}(n)$ that fails to contain $\gamma$?*
To see that the asymptotic behavior of $\operatorname{F}_{\Gamma,X}(n)$ measures the interplay between word and subgroup growth, we note the following inequality (see Section \[Preliminary\] for a simple proof): $$\label{BasicInequality}
\log (\operatorname{w}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))\log (\operatorname{F}_{\Gamma,X}(2n)).$$ Our first result, which relies on Inequality (\[BasicInequality\]), is the following.
\[DivisibilityLogGrowth\] If $\Gamma$ is a finitely generated linear group, then the following are equivalent:
- $\operatorname{F}_{\Gamma,X}(n) \leq (\log(n))^r$ for some $r$.
- $\Gamma$ is virtually nilpotent.
For finitely generated linear groups that are not virtually nilpotent, Theorem \[DivisibilityLogGrowth\] implies $\operatorname{F}_{\Gamma,X}(n) \nleq (\log(n))^r$ for any $r >0$. For this class of groups, we can improve this lower bound. Precisely, we have the following result—see Section \[Preliminary\] for the definition of $\preceq$.
\[basiclowerbound\] Let $\Gamma$ be a group that contains a nonabelian free group of rank $m$. Then $$n^{1/3} \preceq \operatorname{F}_{\Gamma,X}(n).$$
The motivation for the proof of Theorem \[basiclowerbound\] comes from the study of $\operatorname{F}_{{\ensuremath{{\ensuremath{\mathbf{Z}}}}},X}(n)$, where the Prime Number Theorem and least common multiples provide lower and upper bounds for $\operatorname{F}_{{\ensuremath{{\ensuremath{\mathbf{Z}}}}},X}(n)$. In Section \[FreeGroupGrowth\], we extend this approach by generalizing least common multiples to finitely generated groups (a similar approach was also taken in the article of Hadad [@Hadad]). Indeed with this analogy, Theorem \[basiclowerbound\] and the upper bound of $n^3$ established in [@Bou], [@Rivin] can be viewed as a weak Prime Number Theorem for free groups since the Prime Number Theorem yields $\operatorname{F}_{\ensuremath{{\ensuremath{\mathbf{Z}}}}}(n) \simeq \log(n)$. Recently, Kassabov–Matucci [@KM] improved the lower bound of $n^{1/3}$ to $n^{2/3}$. A reasonable guess is that $\operatorname{F}_{F_m,X}(n) \simeq n$, though presently neither the upper nor the lower bound is known. We refer the reader to [@KM] for additional questions and conjectures.
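The role of least common multiples for $\Gamma = \mathbf{Z}$ can be checked directly: every index $m \leq k$ divides $\gamma = \operatorname{lcm}(1,\dots,k)$, so the smallest quotient detecting $\gamma$ has order equal to the least non-divisor of $\gamma$ (the least prime power exceeding $k$), while the Prime Number Theorem gives $\log \gamma \approx k$. A short sketch of this standard computation, added here for illustration:

```python
from math import gcd

def lcm_up_to(k):
    """lcm(1, 2, ..., k): the 'hardest' element of Z of its size to detect."""
    out = 1
    for m in range(2, k + 1):
        out = out * m // gcd(out, m)
    return out

def least_nondivisor(x):
    """D_Z(x): order of the smallest quotient Z/mZ in which x survives."""
    m = 2
    while x % m == 0:
        m += 1
    return m

for k in (5, 10, 20):
    gamma = lcm_up_to(k)
    print(k, gamma, least_nondivisor(gamma))
```

Since $\operatorname{lcm}(1,\dots,k) \approx e^{k}$ while its least non-divisor is only about $k$, elements of word length $n$ in $\mathbf{Z}$ force normal subgroups of index roughly $\log n$, matching $\operatorname{F}_{\mathbf{Z}}(n) \simeq \log(n)$.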
There are other natural ways to measure the interplay between word and subgroup growth. Let $B_{\Gamma,X}(n)$ denote the $n$–ball in $\Gamma$ for the word metric associated to the generating set $X$. Our second measurement is motivated by the following question—in the statement, $B_{\Gamma,X}(n)$ is the metric $n$–ball with respect to the word metric ${\left\vert \left\vert \cdot\right\vert\right\vert}_X$:
**Question 2.** *What is the cardinality $\operatorname{G}_{\Gamma,X}(n)$ of the smallest finite group $Q$ such that there exists a surjective homomorphism ${\varphi}{\ensuremath{\colon}}\Gamma \to Q$ with the property that ${\varphi}$ restricted to $B_{\Gamma,X}(n)$ is injective?*
We call $\operatorname{G}_{\Gamma,X}(n)$ the *residual girth function* and relate $\operatorname{G}_{\Gamma,X}(n)$ to $\operatorname{F}_{\Gamma,X}$ and $\operatorname{w}_{\Gamma,X}(n)$ for a class of groups containing non-elementary hyperbolic groups; Hadad [@Hadad] studied group laws on finite groups of Lie type, a problem that is related to residual girth and the girth of a Cayley graph for a finite group. Specifically, we obtain the following inequality (see Section \[FreeGroupGrowth\] for a precise description of the class of groups for which this inequality holds): $$\label{BasicGirthEquation}
\operatorname{G}_{\Gamma,X}(n/2) \leq \operatorname{F}_{\Gamma,X}{\left( 6n(\operatorname{w}_{\Gamma,X}(n))^{2} \right) }.$$ Our next result shows that residual girth functions enjoy the same growth dichotomy as word and subgroup growth—see [@gromov] and [@lubsegal-2003].
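For orientation, the residual girth of $\mathbf{Z}$ is easy to compute by brute force: reduction mod $m$ is injective on $B_{\mathbf{Z}}(n)=\{-n,\dots,n\}$ precisely when $m \geq 2n+1$, so $\operatorname{G}_{\mathbf{Z}}(n)=2n+1$. A sketch of the brute-force check (our own illustration, not from the paper):

```python
def residual_girth_Z(n):
    """G_Z(n): the order of the smallest quotient Z/mZ whose quotient map
    is injective on the word-metric ball {-n, ..., n}."""
    ball = range(-n, n + 1)
    m = 1
    while True:
        if len({x % m for x in ball}) == 2 * n + 1:
            return m
        m += 1

print([residual_girth_Z(n) for n in range(1, 5)])  # -> [3, 5, 7, 9]
```

The linear answer $2n+1$ for $\mathbf{Z}$ is consistent with Theorem \[GirthPolynomialGrowth\]: polynomially bounded residual girth characterizes virtually nilpotent groups.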
\[GirthPolynomialGrowth\] If $\Gamma$ is a finitely generated group then the following are equivalent.
- $\operatorname{G}_{\Gamma,X}(n) \leq n^r$ for some $r$.
- $\Gamma$ is virtually nilpotent.
The asymptotic growth of $\operatorname{F}_{\Gamma,X}(n)$, $\operatorname{G}_{\Gamma,X}(n)$, and related functions arises in quantifying residual finiteness, a topic introduced in [@Bou] (see also the recent articles of the authors [@BM], Hadad [@Hadad], Kassabov–Matucci [@KM], and Rivin [@Rivin]). Quantifying residual finiteness amounts to the study of so-called divisibility functions. Given a finitely generated, residually finite group $\Gamma$, we define the *divisibility function* $\operatorname{D}_\Gamma{\ensuremath{\colon}}\Gamma^\bullet {\longrightarrow}{\ensuremath{{\ensuremath{\mathbf{N}}}}}$ by $$\operatorname{D}_\Gamma(\gamma) = \min {\left\{[\Gamma:\Delta] ~:~ \gamma \notin \Delta\right\}}.$$ The associated *normal divisibility function* for normal, finite index subgroups is defined in an identical way and will be denoted by $\operatorname{D}_{\Gamma}^\lhd$. It is a simple matter to see that $\operatorname{F}_{\Gamma,X}(n)$ is the maximum value of $\operatorname{D}_{\Gamma}^\lhd$ over all non-trivial elements in $B_{\Gamma,X}(n)$. We will denote the associated maximum of $\operatorname{D}_\Gamma$ over this set by $\max \operatorname{D}_\Gamma (n)$.
The rest of the introduction is devoted to a question of Oleg Bogopolski, which concerns $\max \operatorname{D}_{\Gamma,X}(n)$. It was established in [@Bou] that $\log(n) \preceq \max \operatorname{D}_{\Gamma,X}(n)$ for any finitely generated group with an element of infinite order (this was also shown by [@Rivin]). For a nonabelian free group $F_m$ of rank $m$, Bogopolski asked whether $\max \operatorname{D}_{F_m,X}(n) \simeq \log(n)$ (see Problem 15.35 in the Kourovka notebook [@TheBook]). Our next result answers Bogopolski’s question in the negative—we again refer the reader to Section \[Preliminary\] for the definition of $\preceq$.
\[toughlowerbound\] If $m>1$, then $\max \operatorname{D}_{F_m,X}(n) \npreceq \log(n)$.
We prove Theorem \[toughlowerbound\] in Section \[toughlowerboundSection\] using results from Section \[FreeGroupGrowth\]. The first part of the proof of Theorem \[toughlowerbound\] utilizes the material established for the derivation of Theorem \[basiclowerbound\]. The second part of the proof of Theorem \[toughlowerbound\] is topological in nature, and involves a careful study of finite covers of the figure eight. It is also worth noting that our proof only barely exceeds the proposed upper bound of $\log(n)$. In particular, at present we cannot rule out the upper bound $(\log(n))^2$. In addition, to our knowledge the current best upper bound is $n/2 + 2$, a result established recently by Buskin [@Bus]. In comparison to our other results, Theorem \[toughlowerbound\] is the most difficult to prove and is also the most surprising. Consequently, the reader should view Theorem \[toughlowerbound\] as our main result.
#### **Acknowledgements.**
Foremost, we are extremely grateful to Benson Farb for his inspiration, comments, and guidance. We would like to thank Oleg Bogopolski, Emmanuel Breuillard, Jason Deblois, Jordan Ellenberg, Tsachik Gelander, Uzy Hadad, Frédéric Haglund, Ilya Kapovich, Martin Kassabov, Larsen Louder, Justin Malestein, Francesco Matucci, and Igor Rivin for several useful conversations and their interest in this article. Finally, we extend thanks to Tom Church, Blair Davey, and Alex Wright for reading over earlier drafts of this paper. The second author was partially supported by an NSF postdoctoral fellowship.
Divisibility and girth functions {#Preliminary}
================================
In this introductory section, we lay out some of the basic results we require in the sequel. For some of this material, we refer the reader to [@Bou Section 1].
#### **Notation.**
Throughout, $\Gamma$ will denote a finitely generated group, $X$ a fixed finite generating set for $\Gamma$, and ${\left\vert \left\vert \cdot\right\vert\right\vert}_X$ will denote the word metric. For $\gamma \in \Gamma$, ${\left< \gamma \right>}$ will denote the cyclic subgroup generated by $\gamma$ and $\overline{{\left< \gamma \right>}}$ the normal closure of ${\left< \gamma \right>}$. For any subset $S \subset \Gamma$ we set $S^\bullet = S-1$.
#### **1. Function comparison and basic facts**.
For a pair of functions $f_1,f_2{\ensuremath{\colon}}{\ensuremath{{\ensuremath{\mathbf{N}}}}}\to {\ensuremath{{\ensuremath{\mathbf{N}}}}}$, by $f_1 \preceq f_2$, we mean that there exists a constant $C$ such that $f_1(n) \leq Cf_2(Cn)$ for all $n$. In the event that $f_1 \preceq f_2$ and $f_2 \preceq f_1$, we will write $f_1 \simeq f_2$.
This notion of comparison is well suited to the functions studied in this paper. We summarize some of the basic results from [@Bou] for completeness.
\[DivisibilityAsymptoticLemma\] Let $\Gamma$ be a finitely generated group.
- If $X,Y$ are finite generating sets for $\Gamma$ then $\operatorname{F}_{\Gamma,X} \simeq \operatorname{F}_{\Gamma,Y}$.
- If $\Delta$ is a finitely generated subgroup of $\Gamma$ and $X,Y$ are finite generating sets for $\Gamma,\Delta$ respectively, then $\operatorname{F}_{\Delta,Y} \preceq \operatorname{F}_{\Gamma,X}$.
- If $\Delta$ is a finite index subgroup of $\Gamma$ with $X,Y$ as in (b), then $\operatorname{F}_{\Gamma,X} \preceq (\operatorname{F}_{\Delta,Y})^{[\Gamma:\Delta]}$.
We also have a version of Lemma \[DivisibilityAsymptoticLemma\] for residual girth functions.
\[GirthAsymptoticLemma\] Let $\Gamma$ be a finitely generated group.
- If $X,Y$ are finite generating sets for $\Gamma$, then $\operatorname{G}_{\Gamma,X} \simeq \operatorname{G}_{\Gamma,Y}$.
- If $\Delta$ is a finitely generated subgroup of $\Gamma$ and $X,Y$ are finite generating sets for $\Gamma,\Delta$ respectively, then $\operatorname{G}_{\Delta,Y} \preceq \operatorname{G}_{\Gamma,X}$.
- If $\Delta$ is a finite index subgroup of $\Gamma$ with $X,Y$ as in (b), then $\operatorname{G}_{\Gamma,X} \preceq (\operatorname{G}_{\Delta,Y})^{[\Gamma:\Delta]}$.
As the proof of Lemma \[GirthAsymptoticLemma\] is straightforward, we have opted to omit it for sake of brevity. As a consequence of Lemmas \[DivisibilityAsymptoticLemma\] and \[GirthAsymptoticLemma\], we occasionally suppress the dependence of the generating set in our notation.
#### **2. The basic inequality.**
We now derive (\[BasicInequality\]) from the introduction. For the reader’s convenience, recall (\[BasicInequality\]) is $$\log (\operatorname{w}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))\log (\operatorname{F}_{\Gamma,X}(2n)).$$
We may assume that $\Gamma$ is residually finite, as otherwise $\operatorname{F}_\Gamma(n)$ is infinite for all sufficiently large $n$ and the inequality is trivial. By definition, for each word $\gamma \in B_{\Gamma,X}^\bullet(2n)$, there exists a finite index, normal subgroup $\Delta_\gamma$ in $\Gamma$ such that $\gamma \notin \Delta_\gamma$ and $[\Gamma:\Delta_\gamma] \leq \operatorname{F}_{\Gamma,X}(2n)$. Setting $\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)$ to be the intersection of all finite index, normal subgroups of index at most $\operatorname{F}_{\Gamma,X}(2n)$, we assert that $B_{\Gamma,X}(n)$ injects into the quotient $\Gamma/\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)$. Indeed, if two elements $\gamma_1,\gamma_2 \in B_{\Gamma,X}(n)$ had the same image, the element $\gamma_1\gamma_2^{-1}$ would reside in $\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)$. However, by construction, every nontrivial element of word length at most $2n$ has nontrivial image. In particular, we see that $$\begin{aligned}
\operatorname{w}_{\Gamma,X}(n) &= {\left\vertB_{\Gamma,X}(n)\right\vert} \leq {\left\vert\Gamma/\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)\right\vert} \\
& \leq \prod_{\scriptsize{\begin{matrix} \Delta \lhd \Gamma \\ [\Gamma:\Delta]\leq \operatorname{F}_{\Gamma,X}(2n)\end{matrix}}} {\left\vert\Gamma/\Delta\right\vert} \\
&\leq \prod_{\scriptsize{\begin{matrix} \Delta \lhd \Gamma \\ [\Gamma:\Delta]\leq \operatorname{F}_{\Gamma,X}(2n)\end{matrix}}} \operatorname{F}_{\Gamma,X}(2n) \\
&\leq (\operatorname{F}_{\Gamma,X}(2n))^{\operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))}.\end{aligned}$$ Taking the log of both sides, we obtain $$\log(\operatorname{w}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))\log(\operatorname{F}_{\Gamma,X}(2n)).$$
In fact, the proof of (\[BasicInequality\]) yields the following.
Let $\Gamma$ be a finitely generated, residually finite group. Then $$\log (\operatorname{G}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n)) \log(\operatorname{F}_{\Gamma,X}(2n)).$$
#### **3. An application of (\[BasicInequality\]).**
We now derive the following as an application of (\[BasicInequality\]).
\[BasicInequalityMainProp\] Let $\Gamma$ be a finitely generated, residually finite group. If there exists $\alpha > 1$ such that $\alpha^n \preceq \operatorname{w}_{\Gamma,X}(n)$, then $\operatorname{F}_{\Gamma,X}(n) \npreceq (\log n)^r$ for any $r \in {\ensuremath{{\ensuremath{\mathbf{R}}}}}$.
Assume on the contrary that there exists $r \in {\ensuremath{{\ensuremath{\mathbf{R}}}}}$ such that $\operatorname{F}_{\Gamma,X} \preceq (\log(n))^r$. In terms of $\preceq$ notation, inequality (\[BasicInequality\]) becomes: $$\log(\operatorname{w}_{\Gamma, X} (n)) \preceq s_\Gamma (\operatorname{F}_{\Gamma, X}(n)) \log(\operatorname{F}_{\Gamma, X}(n)).$$ Taking the log of both sides, we obtain $$\log\log(\operatorname{w}_{\Gamma, X} (n)) \preceq \log(\operatorname{s}_\Gamma (\operatorname{F}_{\Gamma, X}(n)))+ \log(\log(\operatorname{F}_{\Gamma, X}(n))).$$ This inequality, in tandem with the assumptions $$\begin{aligned}
\alpha^n &\preceq \operatorname{w}_{\Gamma,X}(n), \\
\operatorname{F}_{\Gamma,X}(n) &\preceq (\log(n))^r,\end{aligned}$$ and $\log(\operatorname{s}_\Gamma(n)) \preceq (\log(n))^2$ (see [@lubsegal-2003 Corollary 2.8]) gives $$\log(n) \preceq (\log\log(n))^2 + \log\log\log(n),$$ which is impossible.
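Although it plays no role in the proof, the impossibility of the final comparison can be seen numerically: the ratio of the left side to the right side grows without bound, so no constant $C$ can witness $\log(n) \preceq (\log\log(n))^2 + \log\log\log(n)$. The following short Python computation is purely illustrative:

```python
# Illustrative check (not part of the proof): log(n) cannot satisfy
# log(n) <= C((log log n)^2 + log log log n), since the ratio of the
# left side to the right side grows without bound.
from math import log

def ratio(n):
    return log(n) / (log(log(n)) ** 2 + log(log(log(n))))

ratios = [ratio(n) for n in (10**3, 10**6, 10**12, 10**24)]
# The ratios increase steadily, so no single constant C can work.
assert all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))
```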
With Proposition \[BasicInequalityMainProp\], we can now prove Theorem \[DivisibilityLogGrowth\].
For the direct implication, we assume that $\Gamma$ is a finitely generated linear group with $\operatorname{F}_\Gamma \preceq (\log n)^r$ for some $r$. According to the Tits alternative, either $\Gamma$ is virtually solvable or $\Gamma$ contains a nonabelian free subgroup. In the latter case, $\Gamma$ visibly has exponential word growth and thus we derive a contradiction via Proposition \[BasicInequalityMainProp\]. In the case that $\Gamma$ is virtually solvable, $\Gamma$ must also have exponential word growth unless $\Gamma$ is virtually nilpotent (see [@harpe-2000 Theorem VII.27]). This in tandem with Proposition \[BasicInequalityMainProp\] implies $\Gamma$ is virtually nilpotent.
For the reverse implication, let $\Gamma$ be a finitely generated, virtually nilpotent group with finite index, nilpotent subgroup $\Gamma_0$. According to Theorem 0.2 in [@Bou], $\operatorname{F}_{\Gamma_0} \preceq (\log n)^r$ for some $r$. Combining this with Lemma \[DivisibilityAsymptoticLemma\] (c) yields $\operatorname{F}_\Gamma \preceq (\log n)^{r[\Gamma:\Gamma_0]}$.
In the next two sections, we will prove Theorem \[basiclowerbound\]. In particular, for finitely generated linear groups that are not virtually solvable, we obtain an even better lower bound for $\operatorname{F}_{\Gamma,X}(n)$ than can be obtained using (\[BasicInequality\]). Namely, $n^{1/3} \preceq \operatorname{F}_{\Gamma,X}(n)$ for such groups. The class of non-nilpotent, virtually solvable groups splits into two classes depending on whether the rank of the group is finite or not. This is not the standard notion of rank but instead $$\textrm{rk}(\Gamma) = \max{\left\{ r(\Delta)~:~ \Delta \text{ is a finitely generated subgroup of } \Gamma\right\}},$$ where $$r(\Delta) = \min{\left\{{\left\vertY\right\vert}~:~Y \text{ is a generating set for }\Delta\right\}}.$$ The class of virtually solvable groups with finite rank is known to have polynomial subgroup growth (see [@lubsegal-2003 Chapter 5]) and thus have a polynomial upper bound on normal subgroup growth. Using this upper bound with (\[BasicInequality\]) yields our next result.
If $\Gamma$ is virtually solvable, finite rank, and not nilpotent, then $n^{1/d} \preceq \operatorname{F}_{\Gamma,X}(n)$ for some $d \in {\ensuremath{{\ensuremath{\mathbf{N}}}}}$.
For a non-nilpotent, virtually solvable group of finite rank, we have the inequalities: $$\begin{aligned}
\alpha^n &\preceq \operatorname{w}_{\Gamma,X}(n) \\
\operatorname{s}_{\Gamma,X}(n) &\preceq n^m.\end{aligned}$$ Setting $d=2m$ and assuming $\operatorname{F}_{\Gamma,X}(n) \preceq n^{1/d}$, inequality (\[BasicInequality\]) yields the impossible inequality $$n \simeq \log(\alpha^n) \preceq \log(\operatorname{w}_{\Gamma,X}(n)) \preceq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(n))\log(\operatorname{F}_{\Gamma,X}(n)) \preceq (n^{1/d})^m \log(n^{1/d}) \simeq \sqrt{n}\log(n).$$
Virtually solvable groups $\Gamma$ with infinite $\textrm{rk}(\Gamma)$ cannot be handled in this way, as there exist examples with $c^{n^{1/d}} \preceq \operatorname{s}_{\Gamma,X}(n)$ for some $c>1$ and $d \in {\ensuremath{{\ensuremath{\mathbf{N}}}}}$.
Least common multiples
======================
Let $\Gamma$ be a finitely generated group and $S {\subset}\Gamma^\bullet$ a finite subset. Associated to $S$ is the subgroup $L_S$ given by $$L_S = {\bigcap}_{\gamma \in S} \overline{{\left< \gamma \right>}}.$$ We define the *least common multiple of $S$* to be the set $$\operatorname{LCM}_{\Gamma,X}(S) = {\left\{\delta \in L_S^\bullet~:~ {\left\vert \left\vert \delta\right\vert\right\vert}_X \leq {\left\vert \left\vert \eta\right\vert\right\vert}_X \text{ for all }\eta \in L_S^\bullet\right\}}.$$ That is, $\operatorname{LCM}_{\Gamma,X}(S)$ is the set of nontrivial words in $L_S$ of minimal length in a fixed generating set $X$ of $\Gamma$. Finally, we set $$\operatorname{lcm}_{\Gamma,X}(S) =
\begin{cases} {\left\vert \left\vert \delta\right\vert\right\vert}_X& \text{ if there exists }\delta \in \operatorname{LCM}_{\Gamma,X}(S), \\
0 & \text{ if }\operatorname{LCM}_{\Gamma,X}(S) = \emptyset.
\end{cases}$$
The following basic lemma shows the importance of least common multiples in the study of both $\operatorname{F}_\Gamma$ and $\operatorname{G}_\Gamma$.
\[WordLengthForLCM\] Let $S {\subset}\Gamma^\bullet$ be a finite set and $\delta \in \Gamma^\bullet$ have the following property: For any homomorphism ${\varphi}{\ensuremath{\colon}}\Gamma \to Q$, if $\ker {\varphi}\cap S \ne {\emptyset}$, then $\delta \in \ker {\varphi}$. Then $\operatorname{lcm}_{\Gamma,X}(S) \leq {\left\vert \left\vert \delta\right\vert\right\vert}_X$.
To prove this, for each $\gamma \in S$, note that ${\varphi}_\gamma{\ensuremath{\colon}}\Gamma \to \Gamma/\overline{{\left< \gamma \right>}}$ is a homomorphism for which $\ker {\varphi}_\gamma \cap S \ne {\emptyset}$. By assumption, $\delta \in \ker {\varphi}_\gamma$, and thus $\delta \in \overline{{\left< \gamma \right>}}$ for each $\gamma \in S$. Therefore, $\delta \in L_S$ and the claim now follows from the definition of $\operatorname{lcm}_{\Gamma,X}(S)$.
Lower bounds for free groups {#FreeGroupGrowth}
============================
In this section, using least common multiples, we will prove Theorem \[basiclowerbound\].
#### **1. Constructing short least common multiples.**
We begin with the following proposition.
\[FreeCandidateLemma\] Let $\gamma_1,\dots,\gamma_n \in F_m^\bullet$ and ${\left\vert \left\vert \gamma_j\right\vert\right\vert}_X \leq d$ for all $j$. Then $$\operatorname{lcm}_{F_m,X}(\gamma_1,\dots,\gamma_n) \leq 6dn^2.$$
In the proof below, the reader will see that the important fact that we utilize is the following. For a pair of non-trivial elements $\gamma_1,\gamma_2$ in a nonabelian free group, we can conjugate $\gamma_1$ by a generator $\mu \in X$ to ensure that $\mu^{-1}\gamma_1\mu$ and $\gamma_2$ do not commute. This fact will be used repeatedly.
Let $k$ be the smallest natural number such that $n \leq 2^k$ (the inequality $2^k \leq 2n$ also holds). We will construct an element $\gamma$ in $L_{{\left\{\gamma_1,\dots,\gamma_n\right\}}}$ such that $${\left\vert \left\vert \gamma\right\vert\right\vert}_X \leq 6d4^k.$$ By Lemma \[WordLengthForLCM\], this implies the inequality asserted in the statement of the proposition. To this end, we augment the set ${\left\{\gamma_1,\dots,\gamma_n\right\}}$ by adding enough additional elements $\mu \in X$ such that our new set has precisely $2^k$ elements that we label ${\left\{\gamma_1,\dots,\gamma_{2^k}\right\}}$. Note that it does not matter if the elements we add to the set are distinct. For each pair $\gamma_{2i-1},\gamma_{2i}$, we replace $\gamma_{2i}$ by a conjugate $\mu_i\gamma_{2i}\mu_i^{-1}$ for $\mu_i \in X$ such that $[\gamma_{2i-1},\mu_i\gamma_{2i}\mu_i^{-1}]\ne 1$ and, in an abuse of notation, continue to denote this by $\gamma_{2i}$. We define a new set of elements ${\left\{\gamma_i^{(1)}\right\}}$ by setting $\gamma_i^{(1)} = [\gamma_{2i-1},\gamma_{2i}]$. Note that ${\left\vert \left\vert \gamma_i^{(1)}\right\vert\right\vert}_X \leq 4(d+2)$. We have $2^{k-1}$ elements in this new set and we repeat the above, again replacing $\gamma_{2i}^{(1)}$ with a conjugate by $\mu_i^{(1)}\in X$ if necessary to ensure that $\gamma_{2i-1}^{(1)}$ and $\gamma_{2i}^{(1)}$ do not commute. This yields $2^{k-2}$ non-trivial elements $\gamma_i^{(2)}=[\gamma_{2i-1}^{(1)},\gamma_{2i}^{(1)}]$ with ${\left\vert \left\vert \gamma_i^{(2)}\right\vert\right\vert}_X \leq 4(4(d+2)+2)$. Continuing this inductively, at the $k$th stage we obtain an element $\gamma_1^{(k)} \in L_S$ such that $${\left\vert \left\vert \gamma_1^{(k)}\right\vert\right\vert}_X \leq 4^kd + a_k,$$ where $a_k$ is defined inductively by $a_0=0$ and $$a_j = 4(a_{j-1}+2).$$ The assertion $$a_j = 2{\left( \sum_{\ell=1}^j 4^\ell \right) }$$ is validated with an inductive proof.
Thus, we have $${\left\vert \left\vert \gamma_1^{(k)}\right\vert\right\vert}_X \leq 4^kd+a_k \leq 3{\left( 4^kd + 4^k \right) } \leq 6d(4^k).$$
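The recursion for $a_j$, its closed form, and the final length estimate are simple enough to check by machine. The following Python snippet, purely illustrative and not part of the argument, verifies $a_j = 2\sum_{\ell=1}^j 4^\ell$ against the recursion and confirms the bound $4^k d + a_k \leq 6d\cdot 4^k$ for small values of the parameters:

```python
# Illustrative check (not part of the proof) of the recursion a_0 = 0,
# a_j = 4(a_{j-1} + 2), its closed form a_j = 2 * sum_{l=1}^{j} 4^l,
# and the final estimate 4^k * d + a_k <= 6 * d * 4^k for d >= 1.

def a_recursive(j):
    a = 0
    for _ in range(j):
        a = 4 * (a + 2)
    return a

def a_closed(j):
    return 2 * sum(4 ** l for l in range(1, j + 1))

# The recursion agrees with the closed form.
for j in range(12):
    assert a_recursive(j) == a_closed(j)

# The final length estimate from the proof holds for small d and k.
for d in range(1, 10):
    for k in range(12):
        assert 4 ** k * d + a_recursive(k) <= 6 * d * 4 ** k
```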
An immediate corollary of Proposition \[FreeCandidateLemma\] is the following.
\[PrimeNumberTheorem\] $$\operatorname{lcm}_{F_m,X}(B_{F_m,X}^\bullet(n)) \leq 6n(\operatorname{w}_{F_m,X}(n))^2.$$
#### **2. Proof of Theorem \[basiclowerbound\].**
We now give a short proof of Theorem \[basiclowerbound\]. We begin with the following proposition.
\[freelowerbound\] Let $\Gamma$ be a nonabelian free group of rank $m$. Then $n^{1/3} \preceq \operatorname{F}_{\Gamma,X}(n)$.
For $x \in X$, set $$S = {\left\{x,x^2,\dots,x^{n}\right\}}.$$ By Proposition \[FreeCandidateLemma\], if $\delta \in \operatorname{LCM}_{F_m,X}(S)$, then $${\left\vert \left\vert \delta\right\vert\right\vert}_X \leq 6n^3.$$ On the other hand, if ${\varphi}{\ensuremath{\colon}}F_m \to Q$ is a surjective homomorphism with ${\varphi}(\delta) \ne 1$, the restriction of ${\varphi}$ to $S$ is injective. In particular, $$\operatorname{D}_{F_m,X}^\lhd(\delta) \geq n.$$ In total, this shows that $n^{1/3} \preceq \operatorname{F}_{F_m,X}$.
We now prove Theorem \[basiclowerbound\].
Let $\Gamma$ be a finitely generated group with finite generating set $X$. By assumption, $\Gamma$ contains a nonabelian free group $\Delta$. By passing to a subgroup, we may assume that $\Delta$ is finitely generated with free generating set $Y$. According to Lemma \[DivisibilityAsymptoticLemma\] (b), we know that $\operatorname{F}_{\Delta,Y}(n) \preceq \operatorname{F}_{\Gamma,X}(n)$. By Proposition \[freelowerbound\], we also have $n^{1/3} \preceq \operatorname{F}_{\Delta,Y}(n)$. The marriage of these two facts yields Theorem \[basiclowerbound\].
#### **3. The basic girth inequality.**
We are now ready to prove (\[BasicGirthEquation\]) for free groups. Again, for the reader’s convenience, recall that (\[BasicGirthEquation\]) is $$\operatorname{G}_{F_m,X}(n/2) \leq \operatorname{F}_{F_m,X}{\left( 6n (\operatorname{w}_{F_m,X}(n))^{2} \right) }.$$
Let $\delta \in \operatorname{LCM}(B_{F_m,X}^\bullet(n))$ and let $Q$ be a finite group of order $\operatorname{D}_{F_m,X}^\lhd(\delta)$ such that there exists a homomorphism ${\varphi}{\ensuremath{\colon}}F_m \to Q$ with ${\varphi}(\delta)\ne 1$. Since $\delta \in L_{B_{F_m,X}(n)}$, for each $\gamma$ in $B_{F_m,X}^\bullet(n)$, we also know that ${\varphi}(\gamma) \ne 1$. In particular, it must be that ${\varphi}$ restricted to $B_{F_m,X}^\bullet(n/2)$ is injective. The definitions of $\operatorname{G}_{F_m,X}$ and $\operatorname{F}_{F_m,X}$ with Corollary \[PrimeNumberTheorem\] yields $$\operatorname{G}_{F_m,X}(n/2) \leq \operatorname{D}^\lhd_{F_m,X}(\delta) \leq \operatorname{F}_{F_m,X}({\left\vert \left\vert \delta\right\vert\right\vert}_X)\leq \operatorname{F}_{F_m,X}(6n(\operatorname{w}_{F_m,X}(n))^2),$$ and thus the desired inequality.
#### **4. Proof of Theorem \[GirthPolynomialGrowth\].**
We are also ready to prove Theorem \[GirthPolynomialGrowth\].
We must show that a finitely generated group $\Gamma$ is virtually nilpotent if and only if $\operatorname{G}_{\Gamma,X}$ has at most polynomial growth. If $\operatorname{G}_{\Gamma,X}$ is bounded above by a polynomial in $n$, then since $\operatorname{w}_{\Gamma,X} \leq \operatorname{G}_{\Gamma,X}$, the function $\operatorname{w}_{\Gamma,X}$ is bounded above by a polynomial in $n$ as well. Hence, by Gromov’s Polynomial Growth Theorem, $\Gamma$ is virtually nilpotent.
Suppose now that $\Gamma$ is virtually nilpotent and set $\Gamma_{\textrm{Fitt}}$ to be the Fitting subgroup of $\Gamma$. It is well known (see [@Dek]) that $\Gamma_{\textrm{Fitt}}$ is torsion free and finite index in $\Gamma$. By Lemma \[GirthAsymptoticLemma\] (c), we may assume that $\Gamma$ is torsion free. In this case, $\Gamma$ admits a faithful, linear representation $\psi$ into ${\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}})$, the group of upper triangular, unipotent matrices with integer coefficients in $\operatorname{GL}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}})$ (see [@Dek]). Under this injective homomorphism, the elements in $B_{\Gamma,X}(n)$ have matrix entries with norm bounded above by $Cn^k$, where $C$ and $k$ depend only on $\Gamma$. Specifically, we have $${\left\vert(\psi(\gamma))_{i,j}\right\vert} \leq C{\left\vert \left\vert \gamma\right\vert\right\vert}_X^k.$$ This is a consequence of the Baker–Campbell–Hausdorff formula (see [@Dek]). Let $r$ be the reduction homomorphism $$r {\ensuremath{\colon}}{\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}}) {\longrightarrow}{\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}}/ 2Cn^k {\ensuremath{{\ensuremath{\mathbf{Z}}}}})$$ defined by reducing matrix coefficients modulo $2 Cn^k$. By selection, the restriction of $r \circ \psi$ to $B_{\Gamma,X}^\bullet(n)$ is injective. So we have $$\label{CardinalityInequality}
{\left\vertr(\psi(\Gamma))\right\vert} \leq {\left\vert{\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}}/ 2Cn^k {\ensuremath{{\ensuremath{\mathbf{Z}}}}})\right\vert} \leq (2Cn^k)^{d^2}.$$ This inequality gives $$\operatorname{G}_{\Gamma,X}(n) \leq (2Cn^k)^{d^2} = C_1n^{kd^2}.$$ Therefore, $\operatorname{G}_{\Gamma,X}(n)$ is bounded above by a polynomial function in $n$ as claimed.
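For concreteness, the count behind (\[CardinalityInequality\]) can be checked by brute force in a small case: ${\ensuremath{\mathbf{U}}}(3,{\ensuremath{{\ensuremath{\mathbf{Z}}}}}/m{\ensuremath{{\ensuremath{\mathbf{Z}}}}})$ has exactly $m^{3}$ elements, one free entry for each position above the diagonal, which is comfortably below the coarse bound $m^{d^2}$ used above. A Python sketch, purely illustrative:

```python
# Illustrative check (not part of the proof): enumerate U(3, Z/mZ), the
# 3x3 upper triangular unipotent matrices mod m, and verify the counts
# used in the cardinality inequality above.
from itertools import product

def unipotent_3x3(m):
    """All 3x3 upper triangular unipotent matrices over Z/mZ."""
    return [((1, a, b), (0, 1, c), (0, 0, 1))
            for a, b, c in product(range(m), repeat=3)]

def mat_mul(A, B, m):
    """Multiply two 3x3 matrices with entries reduced mod m."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(3)) % m for j in range(3))
        for i in range(3)
    )

m = 5
G = unipotent_3x3(m)
assert len(G) == m ** 3        # exactly m^{d(d-1)/2} elements for d = 3
assert len(G) <= m ** 9        # the coarser bound m^{d^2} used in the text
# The set is closed under multiplication: products stay unipotent.
S = set(G)
assert mat_mul(G[7], G[11], m) in S
```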
#### **5. Generalities.**
The results and methods for the free group in this section can be generalized. Specifically, we require the following two properties:
- $\Gamma$ has an element of infinite order.
- For all non-trivial $\gamma_1,\gamma_2 \in \Gamma$, there exists $\mu_{1,2} \in X$ such that $[\gamma_1,\mu_{1,2}\gamma_2\mu_{1,2}^{-1}]\ne 1$.
With this, we can state a general result established with an identical method taken for the free group.
Let $\Gamma$ be a finitely generated group that satisfies (i) and (ii). Then
- $\operatorname{G}_{\Gamma,X}(n/2) \leq \operatorname{F}_{\Gamma,X}{\left( 6n (\operatorname{w}_{\Gamma,X}(n))^{2} \right) }$.
- $n^{1/3} \preceq \operatorname{F}_{\Gamma,X}$.
The proof of Theorem \[toughlowerbound\] {#toughlowerboundSection}
========================================
In this section we prove Theorem \[toughlowerbound\]. For sake of clarity, before commencing with the proof, we outline the basic strategy. We will proceed via contradiction, assuming that $\max \operatorname{D}_{F_m}(n) \preceq \log n$. We will apply this assumption to a family of test elements $\delta_n$ derived from least common multiples of certain simple sets $S(n)$ to produce a family of finite index subgroups $\Delta_n$ in $F_m$. Employing the Prime Number Theorem, we will obtain upper bounds (see (\[LinearBound\]) below) for the indices $[F_m:\Delta_n]$. Using covering space theory and a simple albeit involved inductive argument, we will derive the needed contradiction by showing the impossibility of these bounds. The remainder of this section is devoted to the details.
Our goal is to show $\max \operatorname{D}_{F_m}(n) \npreceq \log(n)$ for $m \geq 2$. By Lemma 1.1 in [@Bou], it suffices to show this for $m=2$. To that end, set $\Gamma = F_2$ with free generating set $X={\left\{x,y\right\}}$, and $$S(n) = {\left\{x,x^2,\dots,x^{\operatorname{lcm}(1,\dots,n)}\right\}}.$$ We proceed by contradiction, assuming that $\max \operatorname{D}_\Gamma(n) \preceq \log(n)$. By definition, there exists a constant $C>0$ such that $\max \operatorname{D}_\Gamma(n) \leq C\log(Cn)$ for all $n$. For any $\delta_n \in \operatorname{LCM}_{\Gamma,X}(S(n))$, this implies that there exists a finite index subgroup $\Delta_n < \Gamma$ such that $\delta_n \notin \Delta_n$ and $$[\Gamma:\Delta_n] \leq C\log(C{\left\vert \left\vert \delta_n\right\vert\right\vert}_X).$$ According to Proposition \[FreeCandidateLemma\], applied with both the number of elements and the maximal word length equal to $\operatorname{lcm}(1,\dots,n)$, we also know that $${\left\vert \left\vert \delta_n\right\vert\right\vert}_X \leq D(\operatorname{lcm}(1,\dots,n))^3$$ with $D = 6$. In tandem, this yields $$[\Gamma:\Delta_n] \leq C\log(CD(\operatorname{lcm}(1,\dots,n))^3).$$ By the Prime Number Theorem, we have $$\lim_{n \to {\infty}} \frac{\log(\operatorname{lcm}(1,\dots,n))}{n} = 1.$$ Therefore, there exists $N>0$ such that for all $n \geq N$, $$\frac{n}{2} \leq \log(\operatorname{lcm}(1,\dots,n)) \leq \frac{3n}{2}.$$ Combining this with the above, we see that there exists a constant $M>0$ such that for all $n\geq N$, $$\label{LinearBound}
[\Gamma:\Delta_n] \leq C\log(CD) + \frac{9Cn}{2} \leq Mn.$$ Our task now is to show (\[LinearBound\]) cannot hold. In order to achieve the desired contradiction, we use covering space theory. With that goal in mind, let $S^1 \vee S^1$ be the wedge product of two circles and recall that we can realize $\Gamma$ as $\pi_1(S^1 \vee S^1,*)$ by identifying $x,y$ with generators for the fundamental groups of the respective pair of circles. Here, $*$ serves as both the base point and the identifying point for the wedge product. According to covering space theory, associated to the conjugacy class $[\Delta_n]$ of $\Delta_n$ in $\Gamma$, is a finite cover $Z_n$ of $S^1 \vee S^1$ of covering degree $[\Gamma:\Delta_n]$ (unique up to covering isomorphisms). Associated to a conjugacy class $[\gamma]$ in $\Gamma$ is a closed curve $c_\gamma$ on $S^1 \vee S^1$. The distinct lifts of $c_\gamma$ to $Z_n$ correspond to the distinct $\Delta_n$–conjugacy classes of $\gamma$ in $\Gamma$. The condition that $\gamma \notin \Delta_n$ implies that at least one such lift cannot be a closed loop.
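The Prime Number Theorem estimate invoked above can also be observed directly: the ratio $\log(\operatorname{lcm}(1,\dots,n))/n$ approaches $1$, and already sits inside the interval $[1/2, 3/2]$ for modest values of $n$. A Python sketch, purely illustrative:

```python
# Illustrative check (not part of the proof) of the estimate
# n/2 <= log(lcm(1, ..., n)) <= 3n/2 for all sufficiently large n.
from math import gcd, log

def log_lcm(n):
    """Return log(lcm(1, ..., n)), computed with exact integer arithmetic."""
    L = 1
    for k in range(2, n + 1):
        L = L * k // gcd(L, k)
    return log(L)

# The ratio log(lcm(1, ..., n)) / n already lies in [1/2, 3/2] here.
for n in (100, 500, 2000):
    ratio = log_lcm(n) / n
    assert 0.5 <= ratio <= 1.5
```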
Removing the edges of $Z_n$ associated to the lifts of the closed curve associated to $[y]$, we get a disjoint union of topological circles, each of which is a union of edges associated to the lifts of the loop associated to $[x]$. We call these circles $x$–cycles and say the length of an $x$–cycle is the total number of edges of the cycle. The sum of the lengths over all the distinct $x$–cycles is precisely $[\Gamma:\Delta_n]$. For an element of the form $x^\ell$, each lift of the associated curve $c_{x^\ell}$ is contained on an $x$–cycle. Using elements of the form $x^\ell$, we will produce enough sufficiently long $x$–cycles in order to contradict (\[LinearBound\]).
We begin with the element $x^{\operatorname{lcm}(1,\dots,m)}$ for $1 \leq m \leq n$. This will serve as both the base case for an inductive proof and will allow us to introduce some needed notation. By construction, some $\Gamma$–conjugate of $x^{\operatorname{lcm}(1,\dots,m)}$ is not contained in $\Delta_n$. Indeed, $x^\ell$ for any $1 \leq \ell \leq \operatorname{lcm}(1,\dots,n)$ is never contained in the intersection of all conjugates of $\Delta_n$. Setting $c_m$ to be the curve associated to $x^{\operatorname{lcm}(1,\dots,m)}$, this implies that there exists a lift of $c_m$ that is not closed in $Z_n$. Setting $C_n^{(1)}$ to be the $x$–cycle containing this lift, we see that the length of $C_n^{(1)}$ must be at least $m$. Otherwise, some power $x^\ell$ for $1 \leq \ell \leq m$ would have a closed lift for this base point and this would force this lift of $c_m$ to be closed. Setting $k_{n,m}^{(1)}$ to be the associated length, we see that $m \leq k_{n,m}^{(1)} \leq Mn$ when $n \geq N$.
Using the above as the base case, we claim the following:
**Claim.** *For each positive integer $i$, there exists a positive integer $N_i \geq N$ such that for all $n \geq 8N_i$, there exist disjoint $x$–cycles $C_n^{(1)},\dots,C_n^{(i)}$ in $Z_n$ with respective lengths $k_n^{(1)},\dots,k_n^{(i)}$ such that $k_n^{(j)} \geq n/8$ for all $1 \leq j \leq i$.*
That this claim implies the desired contradiction is clear. Indeed, if the claim holds, we have $$\frac{ni}{8} \leq \sum_{j=1}^i k_n^{(j)} \leq [\Gamma:\Delta_n]$$ for all positive integers $i$ and all $n \geq 8N_i$. Taking $i > 8M$ yields an immediate contradiction of (\[LinearBound\]). Thus, we are reduced to proving the claim.
For the base case $i=1$, we can take $N_1=N$ and $m=n$ in the above argument and thus produce an $x$–cycle of length $k_n^{(1)}$ with $n \leq k_n^{(1)}$ for any $n \geq N_1$. Proceeding by induction on $i$, we assume the claim holds for $i$. Specifically, there exists $N_i \geq N$ such that for all $n \geq 8N_i$, there exist disjoint $x$–cycles $C_n^{(1)},\dots,C_n^{(i)}$ in $Z_n$ with lengths $k_n^{(j)} \geq n/8$. By increasing $N_i$ to some $N_{i+1}$, we need to produce a new $x$–cycle $C_n^{(i+1)}$ in $Z_n$ of length $k_n^{(i+1)} \geq n/8$ for all $n \geq 8N_{i+1}$. For this, set $$\ell_{n,m} = \operatorname{lcm}(1,\dots,m)\prod_{j=1}^i k_n^{(j)}.$$ By construction, the lift of the closed curve associated to $x^{\ell_{n,m}}$ to each cycle $C_n^{(j)}$ is closed. Consequently, any lift of the curve associated to $x^{\ell_{n,m}}$ that is not closed must necessarily reside on an $x$–cycle that is disjoint from the previous $i$ cycles $C_n^{(1)},\dots, C_n^{(i)}$. In addition, we must ensure that this new $x$–cycle has length at least $n/8$. To guarantee that the curve associated to $x^{\ell_{n,m}}$ has a lift that is not closed, it is sufficient to have the inequality $$\label{NonClosedLift}
\ell_{n,m} \leq \operatorname{lcm}(1,\dots,n).$$ In addition, if $m\geq n/8$, then the length of $x$–cycle containing this lift must be at least $n/8$. We focus first on arranging (\[NonClosedLift\]). For this, since $k_n^{(j)} \leq Mn$ for all $j$, (\[NonClosedLift\]) holds if $$(Mn)^i\operatorname{lcm}(1,\dots,m) \leq \operatorname{lcm}(1,\dots,n).$$ This, in turn, is equivalent to $$\log(\operatorname{lcm}(1,\dots,m)) \leq \log(\operatorname{lcm}(1,\dots,n)) - i\log(Mn).$$ Set $N_{i+1}$ to be the smallest positive integer such that $$\frac{n}{8} - i\log(Mn) > 0$$ for all $n \geq 8N_{i+1}$. Taking $n>8N_{i+1}$ and $n/8 \leq m \leq n/4$, we see that $$\begin{aligned}
\log(\operatorname{lcm}(1,\dots,m)) &\leq \frac{3m}{2} \\
&\leq \frac{3n}{8} \\
&\leq \frac{3n}{8} + {\left( \frac{n}{8}-i\log(Mn) \right) } \\
&= \frac{n}{2} - i\log(Mn) \\
&\leq \log(\operatorname{lcm}(1,\dots,n)) - i\log(Mn).\end{aligned}$$ In particular, we produce a new $x$–cycle $C_n^{(i+1)}$ of length $k_n^{(i+1)}\geq n/8$ for all $n \geq 8N_{i+1}$.
Having proven the claim, our proof of Theorem \[toughlowerbound\] is complete.
Just as in Theorem \[basiclowerbound\], Theorem \[toughlowerbound\] can be extended to any finitely generated group that contains a nonabelian free subgroup.
Let $\Gamma$ be a finitely generated group that contains a nonabelian free subgroup. Then $$\max \operatorname{D}_{\Gamma,X}(n) \npreceq \log(n).$$
K. Bou-Rabee, *Quantifying residual finiteness*, J. Algebra, [**323**]{} (2010), 729–737.
K. Bou-Rabee and D. B. McReynolds, *Bertrand’s postulate and subgroup growth*, to appear in J. Algebra.
N. V. Buskin, *Economical separability in free groups*, Siberian Mathematical Journal, **50** (2009), 603–608.
K. Dekimpe, *Almost-Bieberbach groups: Affine and polynomial structures*, Springer-Verlag, 1996.
M. Gromov with an appendix by J. Tits, *Groups of polynomial growth and expanding maps*, Publ. Math. Inst. Hautes Étud. Sci., **53** (1981), 53–78.
P. de La Harpe, *Topics in Geometric Group Theory*, Chicago Lectures in Mathematics, Chicago 2000.
U. Hadad, *On the Shortest Identity in Finite Simple Groups of Lie Type*, preprint.
M. Kassabov and F. Matucci, *Bounding residual finiteness in free groups*, preprint.
A. Lubotzky and D. Segal, *Subgroup growth*, Birkhäuser, 2003.
V. D. Mazurov and E. I. Khukhro, editors, *The Kourovka notebook: Unsolved problems in group theory, including archive of solved problems*, sixteenth edition, Russian Academy of Sciences Siberian Division Institute of Mathematics, Novosibirsk, 2006.
I. Rivin, *Geodesics with one self-intersection, and other stories*, preprint.
Department of Mathematics\
University of Chicago\
Chicago, IL 60637, USA\
email: [khalid@math.uchicago.edu]{}, [dmcreyn@math.uchicago.edu]{}\
Jordan Morris
Jordan Perry Morris (born October 26, 1994) is an American soccer player who plays as a forward for Seattle Sounders FC in Major League Soccer, and the United States national team.
Club career
Youth, college and amateur
Morris, from Mercer Island, Washington, began his youth career with Eastside FC, playing from 2004 to 2012 (U11 through U17) on the Eastside FC B94 Red team coached by Dan Strom. He helped the team to six of its seven Washington State titles as well as two third-place finishes at the US Youth Soccer National Championships, in 2011 and 2012; he was named to the Best XI in 2011 and won the Golden Ball in 2012. Morris was also named NSCAA Washington State Player of the Year and NSCAA High School All-American in 2012.
He joined the Sounders FC youth academy and played in the U.S. Soccer Development Academy for one season. On February 6, 2012, Morris signed a letter of intent to play college soccer at Stanford University.
In his freshman year with the Cardinal, Morris appeared in all 21 matches and led all Pac-12 freshman with seven assists and 19 points and tied for the lead with six goals and helped lead his team to their first NCAA Tournament since 2009 where they would eventually fall 1–0 to #2 seed Washington in the Round of 16. He went on to be named first team All-Pac-12 that year. Morris also spent time with Seattle Sounders FC U-23 in the Premier Development League.
In his sophomore year, Morris helped lead Stanford to its first Pac-12 championship since 2001.
In his junior year, Morris scored 13 goals and had 3 assists. He led the Cardinal to both the Pac-12 and the NCAA Championships. In the NCAA tournament, Morris scored 5 of Stanford's total of 12 goals. In the championship game against Clemson, Morris scored his first of two goals in the game only 87 seconds into the contest.
On January 8, 2016, Morris was awarded the Hermann Trophy as the best player in NCAA Division I soccer.
Seattle Sounders
After winning the NCAA Division I Men's Soccer Championship, there was speculation that Morris would begin to play professionally. Coach Jürgen Klinsmann stated that Morris "obviously has to" turn pro. On January 5, 2016, Morris announced he decided to forgo his senior season at Stanford to turn pro. It was widely speculated that Morris would sign with the Sounders, the club for which his father works and which also held his amateur rights. On January 21, 2016, Morris signed with Seattle Sounders FC, being given MLS's highest-ever Homegrown Player contract worth roughly $250,000 a year. He joined the Sounders' preseason training camp in Arizona, debuting in a friendly against Celaya F.C. on February 9, 2016. On February 23, 2016, Morris made his professional debut against Club América in the CONCACAF Champions League, starting the match. The following week, he debuted in the Sounders' first Major League Soccer game of the season against Sporting Kansas City.
Morris scored his first Major League Soccer goal for the Sounders on April 16, 2016, against the Philadelphia Union. He went on to score in his next three consecutive games, matching the Seattle rookie scoring record; his next goal surpassed the rookie goalscoring record set by Steve Zakuani in 2009. He then helped his team win the MLS Cup after a run from ninth place to fourth, aided by Nicolas Lodeiro, a midseason acquisition by Seattle.
On February 22, 2018, while playing in El Salvador against Santa Tecla in the Sounders' first match of the 2018 CONCACAF Champions League, Morris collapsed untouched in the 85th minute with a torn ACL. He was expected to miss 6–9 months. After missing the entirety of the 2018 MLS season, Morris signed a five-year contract extension with the Sounders in December 2018.
Werder Bremen trial
On January 5, 2016, it was reported that Morris was set to train with Werder Bremen at their winter camp, which Bremen chief executive Thomas Eichin claimed was "an opportunity for us to get to know the player better. Nothing more and nothing less". On January 13, 2016, it was reported that Bremen extended the trial of Morris who then played in a friendly match against Inter Baku PIK and recorded an assist. On January 18, 2016, it was reported that Bremen had offered a contract to Morris, and Eichin claimed he was confident that they would sign him. However, it was later reported by Werder Bremen that Morris had turned down their offer in favor of playing in the United States.
International career
In May 2013, Morris was one of 22 players named to the U.S. under-20 squad for the Toulon Tournament where he made three appearances. He also made appearances for the U.S. under-23 national team on August 6, 2014 and scored in a 5–1 win over Barbados.
On August 28, 2014, Morris received his first senior call-up to the U.S. men's national team for a friendly against the Czech Republic, making him the first college player to be called into the squad since Chris Albright in 1999, when Albright was still playing at the University of Virginia. Although Morris remained on the bench for that match, he made his international debut in a 4–1 defeat to Ireland in November.
On April 15, 2015, he scored his first U.S. men's national team goal against Mexico in an international friendly. In the 2017 CONCACAF Gold Cup Final, Morris scored the winning goal for the United States, assuring a victory over Jamaica and becoming joint top scorer of the tournament with three goals.
International goals
As of matches played November 19, 2019. Scores and results list the United States's goal tally first.
Personal life
Morris was born in Seattle, Washington, to Michael and Leslie Morris. His father, Dr. Michael Morris, is the chief medical director of Seattle Sounders FC. He has three siblings named Christopher, Julian and Talia. He attended Mercer Island High School, where he played high school soccer prior to joining the Sounders Academy.
Morris was diagnosed with Type 1 diabetes at the age of nine and is one of the few professional athletes to play with the condition. He has said that having diabetes helped shape him. His tattoo "T1D" on his inner arm is a tribute to the armband people with diabetes have to wear.
Career statistics
International
Honors
Stanford Cardinal
NCAA Division I Men's Soccer Championship: 2015
Pac-12 Conference: 2015
Seattle Sounders
MLS Cup: 2016, 2019
Western Conference: 2016, 2017, 2019
United States
CONCACAF Gold Cup: 2017
Individual
NSCAA High School All-American: 2012
First team All-Pac-12: 2013, 2014, 2015
Pac-12 Player of the Year: 2015
Hermann Trophy: 2015
MLS Rookie of the Year: 2016
CONCACAF Gold Cup Best XI: 2017
MLS Comeback Player of the Year: 2019
References
External links
Stanford University bio
Category:1994 births
Category:Living people
Category:American soccer players
Category:Stanford Cardinal men's soccer players
Category:Seattle Sounders FC U-23 players
Category:Seattle Sounders FC players
Category:Association football forwards
Category:Soccer players from Washington (state)
Category:USL League Two players
Category:Major League Soccer players
Category:People with type 1 diabetes
Category:Hermann Trophy men's winners
Category:NCAA Division I Men's Soccer Tournament Most Outstanding Player winners
Category:United States men's under-20 international soccer players
Category:United States men's under-23 international soccer players
Category:United States men's international soccer players
Category:2017 CONCACAF Gold Cup players
Category:Sportspeople from Seattle
Category:People from Mercer Island, Washington
Category:CONCACAF Gold Cup-winning players
Category:All-American men's college soccer players
Category:2019 CONCACAF Gold Cup players
'use strict';
var dbm;
var type;
var seed;
/**
* We receive the dbmigrate dependency from dbmigrate initially.
* This enables us to not have to rely on NODE_PATH.
*/
exports.setup = function (options, seedLink) {
dbm = options.dbmigrate;
type = dbm.dataType;
seed = seedLink;
};
// Correct two misspelled office names ("Brail" -> "Brazil", "LaPaz" -> "La Paz").
exports.up = function (db) {
return Promise.all([
db.runSql('UPDATE office SET name = \'Office of Brazil and Southern Cone (WHA/BSC)\' where name=\'Office of Brail and Southern Cone (WHA/BSC)\''),
db.runSql('UPDATE office SET name = \'U.S. Embassy La Paz\' where name=\'U.S. Embassy LaPaz\''),
]);
};
// Revert the office names to their previous (misspelled) values.
exports.down = function (db) {
return Promise.all([
db.runSql('UPDATE office SET name = \'Office of Brail and Southern Cone (WHA/BSC)\' where name=\'Office of Brazil and Southern Cone (WHA/BSC)\''),
db.runSql('UPDATE office SET name = \'U.S. Embassy LaPaz\' where name=\'U.S. Embassy La Paz\''),
]);
};
Summer Flowers at Danckerts
Summer is now well and truly on its way as we come upon another Bank Holiday this weekend.
We have some lovely gardens plants and pots at the shop, as well as a new range of "Vivid Arts" garden animals on display, which are a fantastically realistic range of life size animals and birds to enhance the garden...from frogs to foxes, and rabbits to robins, pop in and take a look!
The gardens in Wednesbury are going to be coming alive with plants, animals, and barbies! The summer flower collection is now in full swing, with some delightful bouquets and vases full of Snaps, Sweet Williams, and other summer favourites.
Keep in touch via Facebook, and we'll keep you notified of any Special Offers that are coming up!
We recently had St Georges day, and the St Georges Day March was hugely popular, starting at Stone Cross, just past the Wednesbury/ West Bromwich border, and finishing up at Dartmouth Park in the Sandwell Valley.
---
author:
- 'M. C. Wyatt'
date: 'Submitted 27 September 2004, Accepted 13 December 2004'
title: 'The Insignificance of P-R Drag in Detectable Extrasolar Planetesimal Belts'
---
Introduction {#s:intro}
============
Some 15% of nearby stars exhibit more infrared emission than that expected from the stellar photosphere alone (e.g., Aumann et al. 1984). This excess emission comes from dust in orbit around the stars and its relatively cold temperature implies that it resides at large distances from the stars, 30-200 AU, something which has been confirmed for the disks which are near enough and bright enough for their dust distribution to be imaged (Holland et al. 1998; Greaves et al. 1998; Telesco et al. 2000; Wyatt et al. 2004). Because the dust would spiral inwards due to Poynting-Robertson (P-R) drag or be destroyed in mutual collisions on timescales which are much shorter than the ages of these stars, the dust is thought to be continually replenished (Backman & Paresce 1993), probably from collisions between km-sized planetesimals (Wyatt & Dent 2002). In this way the disks are believed to be the extrasolar equivalents of the Kuiper Belt in the Solar System (Wyatt et al. 2003).
These debris disks will play a pivotal role in increasing our understanding of the outcome of planet formation. Not only do these disks tell us about the distribution of planetesimals resulting from planet formation processes, but they may also provide indirect evidence of unseen planets in their systems. Models have been presented that show how planets can cause holes at the centre of the disks and clumps in the azimuthal distribution of dust, both of which are commonly observed features of debris disks. Many of these models require the dust to migrate inward due to P-R drag to be valid; e.g., in the model of Roques et al. (1994) the inner hole is caused by a planet which prevents dust from reaching the inner system which would otherwise be rapidly replenished by P-R drag (e.g., Strom, Edwards & Skrutskie 1993), and clumps arise in models when dust migrates inward due to P-R drag and becomes trapped in a planet’s resonances (Ozernoy et al. 2000; Wilner et al. 2002; Quillen & Thorndike 2002). Alternative models exist for the formation of both inner holes and clumps; e.g., in some cases inner holes may be explained by the sublimation of icy grains (Jura et al. 1998) or by the outward migration of dust to the outer edge of a gas disk (Takeuchi & Artymowicz 2001), and clumps may arise from the destruction of planetesimals which were trapped in resonance with a planet when it migrated out from closer to the star (Wyatt 2003).
The focus of the models on P-R drag is perhaps not surprising, as the dynamical evolution of dust in the solar system is undeniably dominated by the influence of P-R drag, since this is the reason the inner solar system is populated with dust (Dermott et al. 1994; Liou & Zook 1999; Moro-Martín & Malhotra 2002). However, there is no reason to expect that the physics dominating the structure of extrasolar planetesimal disks should be the same as that in the solar system. In fact the question of whether any grains in a given disk suffer significant P-R drag evolution is simply determined by how dense that disk is (Wyatt et al. 1999). It has been noted by several authors that the collisional lifetime of dust grains in the well studied debris disks is shorter than that of P-R drag (e.g., Backman & Paresce 1993; Wilner et al. 2002; Dominik & Decin 2003), a condition which means that P-R drag can be ignored in these systems.
Clearly it is of vital importance to know which physical processes are at play in debris disks to ascertain the true origin of these structures. In this paper I show that P-R drag is not an important physical process in the disks which have been detected to date because collisions occur on much shorter timescales meaning that planetesimals are ground down into dust which is fine enough to be removed by radiation pressure before P-R drag has had a chance to act. In §\[s:sm\] a simple model is derived for the spatial distribution of dust created in a planetesimal belt. In §\[s:em\] this model is used to determine the emission spectrum of these dust disks. A discussion of the influence of P-R drag in detectable and detected debris disks as well as of the implications for how structure in these disks should be modelled and interpreted is given in §\[s:disc\].
Balance of Collisions and P-R Drag {#s:sm}
==================================
In this simple model I consider a planetesimal belt at a distance of $r_0$ from a star of mass $M_\star$ which is producing particles all of the same size, $D$. The orbits of those particles are affected by the interaction of the dust grains with stellar radiation which causes a force which is inversely proportional to the square of distance from the star, and which is commonly defined by the parameter $\beta=F_{rad}/F_{grav}$ (Burns et al. 1979; Gustafson 1994). This parameter is a function of particle size and for large particles $\beta \propto 1/D$. The tangential component of this force is known as Poynting-Robertson drag, or P-R drag. This results in a loss of angular momentum from the particle’s orbit which makes it spiral in toward the star. Assuming the particle’s orbit was initially circular, the migration rate is: $$\dot{r}_{pr} = -2\alpha/r, \label{eq:rpr}$$ where $\alpha = 6.24 \times 10^{-4} (M_\star/M_\odot)\beta$ AU$^2$yr$^{-1}$ (Wyatt et al. 1999).
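As an illustrative sketch (not code from the paper), eq. \[eq:rpr\] can be integrated directly: since $\dot{r} = -2\alpha/r$, the time to spiral from $r_0$ to the star is $r_0^2/4\alpha$, which reproduces the $t_{pr}$ expression quoted later in this section.

```python
# Sketch: P-R drag inspiral time from eq. [eq:rpr].
# Integrating dr/dt = -2*alpha/r from r0 to the star gives t_pr = r0^2/(4*alpha),
# with alpha = 6.24e-4 (M_star/M_sun) * beta in AU^2/yr as defined in the text.
def t_pr_years(r0_au, beta, mstar_msun=1.0):
    alpha = 6.24e-4 * mstar_msun * beta  # AU^2 / yr
    return r0_au ** 2 / (4.0 * alpha)

# A beta = 0.5 grain released at 30 AU from a solar-mass star takes ~7e5 yr:
print(t_pr_years(30.0, 0.5))
```

This agrees with the approximate formula $t_{pr} = 400(M_\odot/M_\star)(r_0/a_\oplus)^2/\beta$ years to within the rounding of the numerical prefactor.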
On their way in, dust grains may collide with other dust grains. The mean time between such collisions depends on the dust density: $$t_{coll}(r) = t_{per}(r) / 4\pi \tau_{eff}(r), \label{eq:tcoll}$$ where $t_{per} = \sqrt{(r/a_\oplus)^3(M_\odot/M_\star)}$ is the orbital period at this distance from the star, and $\tau_{eff}$ is the effective optical depth of the disk, or the surface density of cross-sectional area of the dust (Wyatt et al. 1999). If the collisions are assumed to be destructive then the distribution of dust in the disk can be determined by considering the amount of material entering and leaving an annulus at $r$ of width $dr$. The steady state solution is that the amount entering the annulus due to P-R drag is equal to that leaving due to P-R drag and that which is lost by collisions (i.e., the continuity equation): $$d[n(r)\dot{r}_{pr}(r)]/dr = -N^{-}(r), \label{eq:cont}$$ where $n(r)$ is the one dimensional number density (number of particles per unit radius), and $N^{-}(r) = n(r)/t_{coll}(r)$ is the rate of collisional loss of $n(r)$. Since in a thin disk $\tau_{eff}(r) = 0.125 D^2 n(r)/r$, this continuity equation can be solved analytically to find the variation of effective optical depth with distance from the star (Wyatt 1999): $$\begin{aligned}
\tau_{eff}(r) & = & \frac{\tau_{eff}(r_0)}
{1+4\eta_0(1-\sqrt{r/r_0})} \label{eq:prwcoll} \\
\eta_0 & = &
5000 \tau_{eff}(r_0)\sqrt{(r_0/a_\oplus)(M_\odot/M_\star)}/\beta
\label{eq:eta0}\end{aligned}$$ where this distribution has been scaled by the boundary condition that at $r_0$, $\tau_{eff} = \tau_{eff}(r_0)$.
This distribution is shown in Fig. \[fig:teffeta0\]. The result is that in disks which are very dense, i.e., those for which $\eta_0 \gg 1$, most of the dust is confined to the region where it is produced. Very little dust in such disks makes it into the inner regions as it is destroyed in mutual collisions before it gets there. In disks which are tenuous, however, i.e., those for which $\eta_0 \ll 1$, all of the dust makes it to the star without suffering a collision. The consequence is a dust distribution with a constant surface density as expected from P-R drag since this is the solution to $d[n(r)\dot{r}_{pr}] = 0$. Dust distributions with $\eta_0 \approx 1$ have a distribution which reflects the fact that some fraction of the dust migrates in without encountering another dust grain, while other dust grains are destroyed. This can be understood by considering that $\eta_0 = 1$ describes the situation in which the collisional lifetime in the source region given by eq. \[eq:tcoll\] equals the time it takes for a dust grain to migrate from the source region to the star, which from eq. \[eq:rpr\] is $t_{pr} = 400(M_\odot/M_\star)(r_0/a_\oplus)^2/\beta$ years.
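The two regimes can be made concrete with a short numerical sketch (an illustration under the paper's assumptions, not the author's code), evaluating eqs. \[eq:prwcoll\] and \[eq:eta0\] directly:

```python
import math

# Sketch of eqs. [eq:prwcoll]-[eq:eta0]: steady-state effective optical depth
# for a belt at r0 (AU) producing grains with radiation-pressure parameter beta.
def eta0(tau0, r0_au, beta, mstar_msun=1.0):
    return 5000.0 * tau0 * math.sqrt(r0_au / mstar_msun) / beta

def tau_eff(r_au, tau0, r0_au, beta, mstar_msun=1.0):
    e0 = eta0(tau0, r0_au, beta, mstar_msun)
    return tau0 / (1.0 + 4.0 * e0 * (1.0 - math.sqrt(r_au / r0_au)))

# Tenuous belt (eta0 << 1): the profile stays flat, as for pure P-R drag.
# Dense belt (eta0 >> 1): tau_eff near the star saturates at ~tau0/(4*eta0).
for tau0 in (1e-8, 1e-4):
    print(eta0(tau0, 30.0, 0.5), tau_eff(0.001, tau0, 30.0, 0.5) / tau0)
```

Note that setting $\eta_0 = 1$ in these expressions corresponds, as stated in the text, to the collisional lifetime in the source region equalling the P-R drag migration time to the star.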
(Fig. \[fig:teffeta0\], panels (a) and (b).)
Fig. \[fig:teffeta0\]b shows the distribution of dust originating in a planetesimal belt 30 AU from a solar mass star for different dust production rates. This illustrates the fact that the density at the centre does not increase when the dust density reduces to a level at which P-R drag becomes important, because even when the disk is very dense a significant number of particles still make it into the inner system. A look at eqs. \[eq:prwcoll\] and \[eq:eta0\] shows that even in the limit of a very dense disk the effective optical depth at the centre of the disk cannot exceed $$max[\tau_{eff}(r=0)] = 5 \times 10^{-5} \beta
\sqrt{(M_\star/M_\odot)(a_\oplus/r_0)},$$ which for the belt plotted here means that the density at the centre is at most $4.6 \times 10^{-6}$.
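The quoted ceiling can be checked numerically (again a sketch, not the paper's code); it is just the dense-disk limit $\tau_{eff}(0) \rightarrow \tau_{eff}(r_0)/4\eta_0$ of eqs. \[eq:prwcoll\]-\[eq:eta0\]:

```python
# Maximum effective optical depth at the disk centre in the dense-disk limit:
# tau(0) -> tau(r0)/(4*eta0) = 5e-5 * beta * sqrt((M/Msun) * (AU/r0)).
def max_tau_center(beta, r0_au, mstar_msun=1.0):
    return 5e-5 * beta * (mstar_msun / r0_au) ** 0.5

print(max_tau_center(0.5, 30.0))  # ~4.6e-6 for the 30 AU belt quoted above
```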
Of course the situation described above is a simplification, since dust is really produced with a range of sizes. Dust of different sizes would have different migration rates, as defined by eq. \[eq:rpr\], but would also have different collisional lifetimes. Eq. \[eq:tcoll\] was derived under the assumption that the dust is most likely to collide with grains of similar size (Wyatt et al. 1999), collisions which were assumed to be destructive. In reality the collisional lifetime depends on particle size, in a way which depends on the size distribution of dust in the disk, and the size of impactor required to destroy the particle, rather than result in a non-destructive collision (e.g., Wyatt & Dent 2002). Once such a size distribution is considered, one must also consider that dust of a given size is not only destroyed in collisions, but also replenished by the destruction of larger particles. The resulting continuity equation can no longer be solved analytically, but must be solved numerically along with an appropriate model for the outcome of collisions between dust grains of different sizes. Such a solution is not attempted in this paper which is more interested in the large scale distribution of material in extrasolar planetesimal belts for which the assumption that the observations are dominated by grains of just one size is a fair first approximation, albeit one which should be explored in more detail in future work.
Emission Properties {#s:em}
===================
For simplicity the emission properties of the disk are derived under the assumption that dust at a given distance from the star is heated to black body temperatures of $T_{bb} = 278.3 (L_\star/L_\odot)^{1/4}/\sqrt{r/a_\oplus}$ K. It should be noted, however, that small dust grains tend to emit at temperatures hotter than this because they emit inefficiently at mid- to far-IR wavelengths, and temperatures above black body have been seen in debris disks (e.g., Telesco et al. 2000).
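As a quick numerical illustration of this assumption (a hypothetical helper, not from the paper):

```python
# Black-body grain temperature as assumed in the text:
# T_bb = 278.3 * (L/Lsun)^(1/4) / sqrt(r/AU)  [K]
def t_bb(r_au, lstar_lsun=1.0):
    return 278.3 * lstar_lsun ** 0.25 / r_au ** 0.5

# A grain at 30 AU from a solar-luminosity star sits near 51 K; around a more
# luminous star (here L = 54 Lsun, an assumed A0V-like value) it is hotter.
print(t_bb(30.0), t_bb(30.0, 54.0))
```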
(Fig. \[fig:fnus\].)
The emission spectra of dust from planetesimal belts around stars of different spectral types are shown in Fig. \[fig:fnus\]. The shape of these spectra can be understood qualitatively. At the longest wavelengths all of the dust is emitting in the Rayleigh-Jeans regime, leading to a spectrum $F_\nu \propto \lambda^{-2}$. At shorter wavelengths there is a regime in which $F_\nu \propto \lambda$. This emission arises from the dust which is closest to the star. Since dust which has a temperature $\ll 2898$ K $\mu$m$/\lambda$ is emitting on the Wien side of the black body curve, it contributes little to the flux at this wavelength. Thus the flux at a given wavelength comes from a region around the star extending out to a radius $\propto \lambda^2$, corresponding to an area $\propto \lambda^4$ and so an emission spectrum $F_\nu \propto \lambda$ (see also Jura et al. 1998). For dust belts in which $\eta_0 \ll 1$ the two regimes blend smoothly into one another at a wavelength corresponding to the peak of black body emission at the distance of $r_0$. For more massive disks the shorter wavelength component is much smaller, leading to a spectrum which more closely resembles black body emission at the distance of $r_0$ plus an additional hot component.
The flux presented in Fig. \[fig:fnus\] includes one contentious assumption, which is the size of the dust grains used for the parameter $\beta$. The most appropriate number to use is that for the size of grains contributing most to the observed flux from the disk. In general that corresponds to the size at which the cross-sectional area distribution peaks. In a collisional cascade size distribution the cross-sectional area is concentrated in the smallest grains in the distribution. Since dust with $\beta > 0.5$ is blown out of the system by radiation pressure as soon as it is created, this implies that $\beta = 0.5$ is the most appropriate value to use, which is what was assumed in Fig. \[fig:fnus\]. However, evolution due to P-R drag has an effect on the size distribution. Since small grains are removed faster than large grains (see eq. \[eq:rpr\]), the resulting cross-sectional area distribution peaks at large sizes (Wyatt et al. 1999; Dermott et al. 2001). Analogy with the zodiacal cloud, in which the cross-sectional area distribution peaks at a few hundred $\mu$m (Love & Brownlee 1993), implies that a much lower value of $\beta$ may be more appropriate, perhaps as low as 0.01 for disks in which $\eta_0 \ll 1$. Thus the fluxes given in Fig. \[fig:fnus\] should be regarded as upper limits to the flux expected from these disks (since $\beta$ cannot exceed 0.5 regardless). This is particularly true for fluxes at wavelengths longer than 100 $\mu$m, because even in a collisional cascade distribution the emission at sub-mm wavelengths is dominated by grains larger than a few hundred $\mu$m, since grains smaller than this emit inefficiently at long wavelengths (see e.g., Fig. 5 of Wyatt & Dent 2002). Inefficient emission at long wavelengths results in a spectrum which is steeper than $F_\nu \propto \lambda^{-2}$ in the Rayleigh-Jeans regime. For debris disks the observed spectrum is seen to fall off at a rate closer to $F_\nu \propto \lambda^{-3}$ (Dent et al. 2000).
Discussion {#s:disc}
==========
A disk’s detectability is determined by two factors. First is the question of whether the disk is bright enough to be detected in a reasonable integration time for a given instrument. For example, SCUBA observations at 850 $\mu$m have a limit of a few mJy (Wyatt, Dent & Greaves 2003) and IRAS observations at 60 and 100 $\mu$m had $3\sigma$ sensitivity limits of around 200 and 600 mJy. More important at short wavelengths, and for nearby stars, however, is how bright the disk is relative to the stellar photosphere. This is because unless a disk is resolved in imaging, or is particularly bright, its flux is indistinguishable from the stellar photosphere, the flux of which is not generally known with better precision than $\pm 10$ %. For such cases an appropriate limit for detectability is that the disk flux must be at least 0.3 times that of the photosphere.
The total flux presented in Fig. \[fig:fnus\] assumes that the star is at a distance of 10 pc. The flux from disks around stars at different distances scales proportionally with the inverse of the distance squared. However, the ratio of disk flux to stellar flux (shown with a solid line on Fig. \[fig:fnus\]) would remain the same. Given the constraints above, as a first approximation one can consider that the disks which have been detected to date are those with fluxes which lie to the upper right of the photospheric flux in Fig. \[fig:fnus\], but with the caveat that such disks can only be detected out to a certain distance which is a function of the instrument’s sensitivity.
This allows conclusions to be reached about the balance between collisions and P-R drag in the disks which could have been detected. Fundamentally this is possible because the effective optical depth and $\eta_0$ are observable parameters (see next paragraph). The first conclusion is that it is impossible to detect disks with $\eta_0 \leq 0.01$ because these are too faint with respect to the stellar photosphere. The conclusion about disks with $\eta_0 = 1$ is less clear cut. It would not be possible to detect such disks if they were, like the asteroid belt, at 3 AU from the host stars. At larger distances the disks are more readily detectable. However, detectability is wavelength dependent, with disks around G0V and M0V stars only becoming detectable longward of around 100 $\mu$m, while those around A0V stars are detectable at $>50$ $\mu$m. Disks with $\eta_0 \gg 100$ are readily detectable for all stars, although again there is some dependence on wavelength.
Since most disks known to date were discovered by IRAS at 60 $\mu$m, this implies that P-R drag is not a dominant factor governing the evolution of these disks, except perhaps for the faintest disks detected around A stars. To check this conclusion a crude estimate for the value of $\eta_0$ was made for all disks in the debris disk database (http://www.roe.ac.uk/atc/research/ddd). This database includes all main sequence stars determined in previous surveys of the IRAS catalogues to have infrared emission in excess of that of the stellar photosphere (e.g., Stencel & Backman 1991; Mannings & Barlow 1998). To calculate $\eta_0$, first only stars within 100 pc and with detections of excess emission at two IRAS wavelengths were chosen. The fluxes at the longest two of those wavelengths were then used to determine the dust temperature and so its radius by assuming black body emission. Eliminating spectra which implied the emission may be associated with background objects resulted in a list of 37 candidates, including all the well-known debris disks. A disk’s effective optical depth was then estimated from its flux at the longest wavelength: $$\tau_{eff} = F_\nu / [\Omega_{disk} B_\nu(T)], \label{eq:taueffbnu}$$ where $\Omega_{disk}$ is the solid angle subtended by the disk if seen face-on, which for a ring-like disk of radius $r$ and width $dr$ (both in AU) at a distance of $d$ in pc is $rdr/(6.8 \times 10^9 d^2)$ sr. The ring width is generally unknown and so for uniformity it was assumed to be $dr=0.1r$ for all disks. Finally, $\eta_0$ was calculated under the assumption that $\beta=0.5$. All of these stars were found to have $\eta_0 > 10$, with a median value of 80 (see Fig. \[fig:eta0teffobs\]). [^1] All 18 stars (i.e., half the sample) with $\eta_0 < 80$ are of spectral type earlier than A3V, while stars with disks with $\eta_0 > 80$ are evenly distributed in spectral type.
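The estimate just described can be sketched numerically as follows (an illustrative reconstruction, not the author's code; it interprets $\Omega_{disk}$ as the solid angle of a face-on ring with $r$ and $dr$ in AU and $d$ in pc, and adopts the stated assumptions $dr = 0.1r$ and $\beta = 0.5$):

```python
import math

def planck_jy_per_sr(wav_um, temp_k):
    """Planck function B_nu(T) converted to Jy per steradian."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    nu = c / (wav_um * 1e-6)
    bnu = 2.0 * h * nu**3 / c**2 / math.expm1(h * nu / (kb * temp_k))
    return bnu / 1e-26  # 1 Jy = 1e-26 W m^-2 Hz^-1

def eta0_from_flux(f_jy, wav_um, temp_k, d_pc,
                   mstar_msun=1.0, lstar_lsun=1.0, beta=0.5):
    # Black-body radius implied by the dust temperature (Sect. 3):
    r_au = (278.3 / temp_k) ** 2 * math.sqrt(lstar_lsun)
    dr_au = 0.1 * r_au                            # assumed ring width
    omega_sr = r_au * dr_au / (6.8e9 * d_pc**2)   # face-on ring solid angle
    tau = f_jy / (omega_sr * planck_jy_per_sr(wav_um, temp_k))
    return 5000.0 * tau * math.sqrt(r_au / mstar_msun) / beta
```

For example, `eta0_from_flux(1.0, 100.0, 50.0, 10.0)` evaluates the statistic for a hypothetical 1 Jy, 50 K excess at 100 $\mu$m around a Sun-like star at 10 pc; the input numbers here are made up for illustration, not taken from the database.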
It is worth noting that of the disks which have been resolved, those with ages $\sim 10$ Myr all have $\eta_0 > 1000$ ($\beta$ Pic, HR4796, HD141569) while those older than 100 Myr all have $\eta_0 < 100$ (Vega, $\epsilon$ Eridani, Fomalhaut, $\eta$ Corvi).
![The value of $\eta_0$ for the disks of the 37 stars in the debris disk database with excess flux measurements at two wavelengths plotted against the effective temperature of the stars. The disk around the star HD98800 falls off the plot at $\eta_0 \approx 10^{5}$.[]{data-label="fig:eta0teffobs"}](2073fig3.ps){width="3.0in"}
The fact that the debris disks which have been detected to date have $\eta_0 \gg 1$ implies that the holes at their centres are not caused by planets which prevent this dust from reaching the inner system. Rather the majority of this dust is ground down in mutual collisions until it is fine enough to be removed by radiation pressure. A similar conclusion was reached by Dominik & Decin (2003) for debris disks which were detected by ISO. This also means that azimuthal structure in the disks cannot be caused by dust migrating into the resonances of a planet (e.g., Kuchner & Holman 2003), at least not due to P-R drag alone. Models of structures in debris disks which have to invoke P-R drag should be reconsidered and would have to include the effect of collisions at the fundamental level to remain viable (e.g., Lecavelier des Etangs et al. 1996), since it appears that P-R drag can effectively be ignored in most detectable disks.
Collisions are not 100% efficient at stopping dust from reaching the star, and the small amount which does should result in a small mid-IR excess. If no such emission is detected at a level consistent with the $\eta_0$ for a given disk, then an obstacle such as a planet could be inferred. However, because of the low level of this emission with respect to the photosphere, it could only be detected in resolved imaging making such observations difficult (e.g., Liu et al. 2004). Even in disks with $\eta_0 \ll 1$, the resulting emission spectrum still peaks at the temperature of dust in the planetesimal belt itself. This means that the temperature of the dust is a good tracer of the distribution of the planetesimals and a relative dearth of warm dust really indicates a hole in the planetesimal distribution close to the star.
While uncertainties in the simple model presented here preclude hard conclusions being drawn on whether it is possible to detect disks with $\eta_0 \approx 1$, it is important to remind the reader that the fluxes plotted in Fig. \[fig:fnus\] used the most optimistic assumptions about the amount of flux emanating from a disk with a given $\eta_0$, so that the conclusions may become firmer than this once a proper analysis of the evolution of a disk with a range of particle sizes is done. However, this study does show that detecting such disks would be much easier at longer wavelengths, since photosphere subtraction is less problematic there. Disks which are too cold for IRAS to detect in the far-IR, but which are bright enough to detect in the sub-mm, have recently been found (Wyatt, Dent & Greaves 2003). Thus disks with $\eta_0 \leq 1$ may turn up in sub-mm surveys of nearby stars. They may also be detected at 160 $\mu$m by SPITZER (Rieke et al. 2004).
References
==========

Aumann, H. H., et al. 1984, , 278, L23
Backman, D. E., & Paresce, F. 1993, in Protostars and Planets III, eds. E. H. Levy & J. I. Lunine (Tucson: Univ. Arizona Press), 1253
Burns, J. A., Lamy, P. L., & Soter, S. 1979, Icarus, 40, 1
Dent, W. R. F., Walker, H. J., Holland, W. S., & Greaves, J. S. 2000, , 314, 702
Dermott, S. F., Jayaraman, S., Xu, Y. L., Gustafson, B. A. S., & Liou, J. C. 1994, , 369, 719
Dermott, S. F., Grogan, K., Durda, D. D., Jayaraman, S., Kehoe, T. J. J., Kortenkamp, S. J., & Wyatt, M. C. 2001, in Interplanetary Dust, eds. E. Grun, B. Å. S. Gustafson, S. F. Dermott, H. Fechtig (Heidelberg: Springer-Verlag), 569
Dominik, C., & Decin, G. 2003, , 598, 626
Greaves, J. S., et al. 1998, , 506, L133
Gustafson, B. Å. S. 1994, Annu. Rev. Earth Planet. Sci., 22, 553
Holland, W. S., et al. 1998, , 392, 788
Jura, M., Malkan, M., White, R., Telesco, C., Piña, R., & Fisher, R. S. 1998, , 505, 897
Kuchner, M. J., & Holman, M. J. 2003, , 588, 1110
Lecavelier des Etangs, A., Scholl, H., Roques, F., Sicardy, B., & Vidal-Madjar, A. 1996, Icarus, 123, 168
Liou, J.-C., & Zook, H. A. 1999, , 118, 580
Liu, W. M., et al. 2004, , 610, L125
Love, S. G., & Brownlee, D. E. 1993, Science, 262, 550
Mannings, V., & Barlow, M. J. 1998, , 497, 330
Moro-Martín, A., & Malhotra, R. 2002, , 124, 2305
Ozernoy, L. M., Gorkavyi, N. N., Mather, J. C., & Taidakova, T. A. 2000, , 537, L147
Quillen, A. C., & Thorndike, S. 2002, , 578, L149
Rieke, G. H., et al. 2004, , 154, 25
Roques, F., Scholl, H., Sicardy, B., & Smith, B. A. 1994, Icarus, 108, 37
Strom, S. E., Edwards, S., & Skrutskie, M. F. 1993, in Protostars and Planets III, eds. E. H. Levy & J. I. Lunine (Tucson: Univ. Arizona Press), 837
Stencel, R. E., & Backman, D. E. 1991, , 75, 905
Takeuchi, T., & Artymowicz, P. 2001, , 557, 990
Telesco, C. M., et al. 2000, , 530, 329
Wilner, D. J., Holman, M. J., Kuchner, M. J., & Ho, P. T. P. 2002, , 569, L115
Wyatt, M. C. 1999, PhD Thesis, Univ. Florida
Wyatt, M. C. 2003, , 598, 1321
Wyatt, M. C., & Dent, W. R. F. 2002, , 334, 589
Wyatt, M. C., Dermott, S. F., Telesco, C. M., Fisher, R. S., Grogan, K., Holmes, E. K., & Piña, R. K. 1999, , 527, 918
Wyatt, M. C., Dent, W. R. F., & Greaves, J. S. 2003, , 342, 876
Wyatt, M. C., Holland, W. S., Greaves, J. S., & Dent, W. R. F. 2003, Earth Moon Planets, 92, 423
Wyatt, M. C., Greaves, J. S., Dent, W. R. F., & Coulson, I. M. C. 2004, , in press
[^1]: The biggest uncertainties in the derived values of $\eta_0$ are in $r$, $dr$ and $\beta$: e.g., if black body temperatures underestimate the true radius by a factor of 2 and the width of the ring is $dr=0.5r$ then the $\eta_0$ values would have to be reduced by a factor of 10; changes to $\beta$ would increase $\eta_0$.
// Copyright (C) 2005-2006 The Trustees of Indiana University.
// Use, modification and distribution is subject to the Boost Software
// License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
// Authors: Douglas Gregor
// Andrew Lumsdaine
#ifndef BOOST_GRAPH_DETAIL_REMOTE_UPDATE_SET_HPP
#define BOOST_GRAPH_DETAIL_REMOTE_UPDATE_SET_HPP
#ifndef BOOST_GRAPH_USE_MPI
#error "Parallel BGL files should not be included unless <boost/graph/use_mpi.hpp> has been included"
#endif
#include <boost/graph/parallel/process_group.hpp>
#include <boost/type_traits/is_convertible.hpp>
#include <vector>
#include <boost/assert.hpp>
#include <boost/optional.hpp>
#include <queue>
namespace boost { namespace graph { namespace detail {
template<typename ProcessGroup>
void do_synchronize(ProcessGroup& pg)
{
using boost::parallel::synchronize;
synchronize(pg);
}
struct remote_set_queued {};
struct remote_set_immediate {};
template<typename ProcessGroup>
class remote_set_semantics
{
BOOST_STATIC_CONSTANT
(bool,
queued = (is_convertible<
typename ProcessGroup::communication_category,
boost::parallel::bsp_process_group_tag>::value));
public:
typedef typename mpl::if_c<queued,
remote_set_queued,
remote_set_immediate>::type type;
};
template<typename Derived, typename ProcessGroup, typename Value,
typename OwnerMap,
typename Semantics = typename remote_set_semantics<ProcessGroup>::type>
class remote_update_set;
/**********************************************************************
* Remote updating set that queues messages until synchronization *
**********************************************************************/
template<typename Derived, typename ProcessGroup, typename Value,
typename OwnerMap>
class remote_update_set<Derived, ProcessGroup, Value, OwnerMap,
remote_set_queued>
{
typedef typename property_traits<OwnerMap>::key_type Key;
typedef std::vector<std::pair<Key, Value> > Updates;
typedef typename Updates::size_type updates_size_type;
typedef typename Updates::value_type updates_pair_type;
public:
private:
typedef typename ProcessGroup::process_id_type process_id_type;
enum message_kind {
/** Message containing the number of updates that will be sent in
* a msg_updates message that will immediately follow. This
* message will contain a single value of type
* updates_size_type.
*/
msg_num_updates,
/** Contains (key, value) pairs with all of the updates from a
* particular source. The number of updates is variable, but will
* be provided in a msg_num_updates message that immediately
* precedes this message.
*
*/
msg_updates
};
struct handle_messages
{
explicit
handle_messages(remote_update_set* self, const ProcessGroup& pg)
: self(self), update_sizes(num_processes(pg), 0) { }
void operator()(process_id_type source, int tag)
{
switch(tag) {
case msg_num_updates:
{
// Receive the # of updates
updates_size_type num_updates;
receive(self->process_group, source, tag, num_updates);
update_sizes[source] = num_updates;
}
break;
case msg_updates:
{
updates_size_type num_updates = update_sizes[source];
BOOST_ASSERT(num_updates);
// Receive the actual updates
std::vector<updates_pair_type> updates(num_updates);
receive(self->process_group, source, msg_updates, &updates[0],
num_updates);
// Send updates to derived "receive_update" member
Derived* derived = static_cast<Derived*>(self);
for (updates_size_type u = 0; u < num_updates; ++u)
derived->receive_update(source, updates[u].first, updates[u].second);
update_sizes[source] = 0;
}
break;
};
}
private:
remote_update_set* self;
std::vector<updates_size_type> update_sizes;
};
friend struct handle_messages;
protected:
remote_update_set(const ProcessGroup& pg, const OwnerMap& owner)
: process_group(pg, handle_messages(this, pg)),
updates(num_processes(pg)), owner(owner) {
}
void update(const Key& key, const Value& value)
{
if (get(owner, key) == process_id(process_group)) {
Derived* derived = static_cast<Derived*>(this);
derived->receive_update(get(owner, key), key, value);
}
else {
updates[get(owner, key)].push_back(std::make_pair(key, value));
}
}
void collect() { }
void synchronize()
{
// Emit all updates and then remove them
process_id_type num_processes = updates.size();
for (process_id_type p = 0; p < num_processes; ++p) {
if (!updates[p].empty()) {
send(process_group, p, msg_num_updates, updates[p].size());
send(process_group, p, msg_updates,
&updates[p].front(), updates[p].size());
updates[p].clear();
}
}
do_synchronize(process_group);
}
ProcessGroup process_group;
private:
std::vector<Updates> updates;
OwnerMap owner;
};
/**********************************************************************
* Remote updating set that sends messages immediately *
**********************************************************************/
template<typename Derived, typename ProcessGroup, typename Value,
typename OwnerMap>
class remote_update_set<Derived, ProcessGroup, Value, OwnerMap,
remote_set_immediate>
{
typedef typename property_traits<OwnerMap>::key_type Key;
typedef std::pair<Key, Value> update_pair_type;
typedef typename std::vector<update_pair_type>::size_type updates_size_type;
public:
typedef typename ProcessGroup::process_id_type process_id_type;
private:
enum message_kind {
/** Contains a (key, value) pair that will be updated. */
msg_update
};
struct handle_messages
{
explicit handle_messages(remote_update_set* self, const ProcessGroup& pg)
: self(self)
{ update_sizes.resize(num_processes(pg), 0); }
void operator()(process_id_type source, int tag)
{
// Receive the # of updates
BOOST_ASSERT(tag == msg_update);
update_pair_type update;
receive(self->process_group, source, tag, update);
// Send update to derived "receive_update" member
Derived* derived = static_cast<Derived*>(self);
derived->receive_update(source, update.first, update.second);
}
private:
std::vector<updates_size_type> update_sizes;
remote_update_set* self;
};
friend struct handle_messages;
protected:
remote_update_set(const ProcessGroup& pg, const OwnerMap& owner)
: process_group(pg, handle_messages(this, pg)), owner(owner) { }
void update(const Key& key, const Value& value)
{
if (get(owner, key) == process_id(process_group)) {
Derived* derived = static_cast<Derived*>(this);
derived->receive_update(get(owner, key), key, value);
}
else
send(process_group, get(owner, key), msg_update,
update_pair_type(key, value));
}
void collect()
{
typedef std::pair<process_id_type, int> probe_type;
handle_messages handler(this, process_group);
while (optional<probe_type> stp = probe(process_group))
if (stp->second == msg_update) handler(stp->first, stp->second);
}
void synchronize()
{
do_synchronize(process_group);
}
ProcessGroup process_group;
OwnerMap owner;
};
} } } // end namespace boost::graph::detail
#endif // BOOST_GRAPH_DETAIL_REMOTE_UPDATE_SET_HPP
I listened to this one, narrated by Justine Eyre. It was about 12 hours long, but it passed by quickly with this fun read. It's not particularly deep or magical and it doesn't call life as we know it into question.
It's a nice read/listen, light and intriguing for anyone in the mood for a little escape from the disappointments that have been abounding.
Funny enough, the only problems with the book are also reasons why I liked it. Lily Kaiser's journey is a little too convenient throughout the book, but that can be just perfect sometimes. It can be exactly what I need to read or listen to in order to balance out the pressure of the world.
So, yes, the book is a little too neat. The story a little too beautiful and coincidental and works a little too well, but I didn't mind it at all. Mostly because it was also written incredibly well. It moves between times, giving insight into Rose Gallway's life that Lily doesn't readily have and lets the reader piece some of it together on our own. I do enjoy that. And then the author lays it all out and it's just perfect. A little too perfect, like in one of those rom-coms that we watch to feel good but that we all know aren't the way the world works.
I really loved that about it. It's going to be one of my comfort books, to peruse when I'm down, maybe listen to when I wanna revel in new beginnings, like the mood I re-watch Stardust in. If you've read a few too many mysteries lately, or too many books that ripped your heart out (like I have recently), then this is the perfect book to recover with. It's comforting and sweet and romantic and doesn't take itself too seriously. But it's not the book for that serious deep read. Don't expect it to be.
---
abstract: 'Inspired by the paper of Tasaka [@tasaka], we study the relations between totally odd, motivic depth-graded multiple zeta values. Our main objective is to determine the rank of the matrix $C_{N,r}$ defined by Brown [@Brown]. We will give new proofs for (conjecturally optimal) upper bounds on ${\operatorname{rank}}C_{N,3}$ and ${\operatorname{rank}}C_{N,4}$, which were first obtained by Tasaka [@tasaka]. Finally, we present a recursive approach to the general problem, which reduces the evaluation of ${\operatorname{rank}}C_{N,r}$ to an isomorphism conjecture.'
address:
- 'Charlotte Dietze, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany'
- 'Chokri Manai, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany'
- 'Christian Nöbel, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany'
- 'Ferdinand Wagner, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany'
author:
- Charlotte Dietze
- Chokri Manai
- Christian Nöbel
- Ferdinand Wagner
bibliography:
- 'mzv.bib'
date: September 2016
title: 'Totally Odd Depth-graded Multiple Zeta Values and Period Polynomials'
---
Introduction
============
In this paper we will be interested in $\mathbb{Q}$-linear relations among totally odd depth-graded multiple zeta values (MZVs), for which there conjecturally is a bijection with the kernel of a specific matrix $C_{N,r}$ connected to restricted even period polynomials (for a definition, see [@schneps] or [@gkz2006 Section 5]). For integers $n_1,\ldots,n_{r-1}\geq1$ and $n_r\geq2$, the MZV of $n_1,\ldots,n_r$ is defined as the number $$\begin{aligned}
\zeta(n_1,\ldots,n_r)\coloneqq \sum_{0<k_1<\cdots<k_r} \frac{1}{k_1^{n_1}\cdots k_r^{n_r}}{\,}.\end{aligned}$$ We call the sum $n_1+\cdots+n_r$ of arguments the weight and their number $r$ the depth of $\zeta(n_1,\ldots,n_r)$. One classical question about MZVs is counting the number of linearly independent $\mathbb Q$-linear relations between MZVs. It is highly expected, but for now seemingly out of reach to prove, that there are no relations between MZVs of different weight. Such questions become reachable when considered in the motivic setting. Motivic MZVs $\zeta^{\mathfrak m}(n_1,\ldots,n_r)$ are elements of a certain $\mathbb Q$-algebra $\mathcal H=\bigoplus_{N\geq 0}\mathcal H_N$ which was constructed by Brown in [@brownMixedMotives] and is graded by the weight $N$. Any relation fulfilled by motivic MZVs also holds for the corresponding MZVs via the period homomorphism $per\colon\mathcal H\to \mathbb R$.
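As a quick numerical illustration (our sketch, not part of the paper), the defining nested sum can be truncated at $k_r\leq K$; Euler's classical identity $\zeta(1,2)=\zeta(3)$ is then already visible from partial sums:

```python
def mzv_partial(ns, K):
    """Truncation of zeta(n_1,...,n_r): sum over 0 < k_1 < ... < k_r <= K."""
    # dynamic programming over the nested sum, processing arguments left to right
    partial = [0.0] * (K + 1)          # partial[k]: inner sums with last index k
    for k in range(1, K + 1):
        partial[k] = 1.0 / k ** ns[0]
    for n in ns[1:]:
        acc, new = 0.0, [0.0] * (K + 1)
        for k in range(1, K + 1):
            new[k] = acc / k ** n      # acc = sum of partial[j] over j < k
            acc += partial[k]
        partial = new
    return sum(partial)

z12 = mzv_partial((1, 2), 4000)   # zeta(1,2), slowly convergent
z3 = mzv_partial((3,), 4000)      # zeta(3) = 1.2020569...
```

The truncation error of $\zeta(1,2)$ decays only like $\log K/K$, so the agreement is rough but unambiguous.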
We further restrict to depth-graded MZVs: Let $\mathcal Z_{N,r}$ and $\mathcal H_{N,r}$ denote the $\mathbb Q$-vector space spanned by the real respectively motivic MZVs of weight $N$ and depth $r$ modulo MZVs of lower depth. The depth-graded MZV of $n_1,\ldots,n_r$, that is, the equivalence class of $\zeta(n_1,\ldots,n_r)$ in $\mathcal Z_{N,r}$, is denoted by $\zeta_{\mathfrak D}(n_1,\ldots,n_r)$. The elements of $\mathcal H_{N,r}$ are denoted $\zeta_{\mathfrak D}^{\mathfrak m}(n_1,\ldots,n_r)$ analogously. The dimension of $\mathcal Z_{N,r}$ is subject of the Broadhurst-Kreimer Conjecture.
The generating function of the dimension of the space $\mathcal Z_{N,r}$ is given by $$\begin{aligned}
\sum_{N,r\geq0}\dim_{\mathbb Q}\mathcal Z_{N,r}\cdot x^N y^r\overset?= \frac{1- \mathbb E(x)y}{1-\mathbb O(x)y+\mathbb S(x) y^2 - \mathbb S(x)y^4}{\,},\end{aligned}$$ where we denote $\mathbb E(x)\coloneqq \frac{x^2}{1-x^2}=x^2+x^4+x^6+\cdots$, $\mathbb O(x)\coloneqq \frac{x^3}{1-x^2}=x^3+x^5+x^7+\cdots$, and $\mathbb S(x)\coloneqq \frac{x^{12}}{(1-x^4)(1-x^6)}$.
It should be mentioned that $\mathbb S(x)=\sum_{n>0}\dim\mathcal S_n\cdot x^n$, where $\mathcal S_n$ denotes the space of cusp forms of weight $n$, for which there is an isomorphism to the space of restricted even period polynomials of degree $n-2$ (defined in [@schneps] or [@gkz2006 Section 5]).
In his paper [@Brown], Brown considered the $\mathbb Q$-vector space $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ (respectively $\mathcal H_{N,r}^{{\operatorname{odd}}}$) of totally odd (motivic) and depth-graded MZVs, that is, $\zeta_{\mathfrak D}(n_1,\ldots,n_r)$ (respectively $\zeta_{\mathfrak D}^{\mathfrak m}(n_1,\ldots,n_r)$) for $n_i\geq3$ odd, and linked them to a certain explicit matrix $C_{N,r}$, where $N=n_1+\cdots+n_r$ denotes the weight. In particular, he showed that any right annihilator $(a_{n_1,\ldots,n_r})_{(n_1,\ldots,n_r)\in S_{N,r}}$ of $C_{N,r}$ induces a relation $$\begin{aligned}
\sum_{(n_1,\ldots,n_r)\in S_{N,r}}a_{n_1,\ldots,n_r}\zeta_{\mathfrak D}^{\mathfrak m}(n_1,\ldots,n_r)=0{\,},\text{ hence also }\sum_{(n_1,\ldots,n_r)\in S_{N,r}}a_{n_1,\ldots,n_r}\zeta_{\mathfrak D}(n_1,\ldots,n_r)=0\end{aligned}$$(see Section \[sec:preliminaries\] for the notations) and conjecturally all relations in $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ arise in this way. This led to the following conjecture (the uneven part of the Broadhurst-Kreimer Conjecture).
\[con:Brown\] The generating series of the dimension of $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ and the rank of $C_{N,r}$ are given by $$\begin{aligned}
1+\sum_{N,r>0}{\operatorname{rank}}C_{N,r}\cdot x^Ny^r\overset?=1+\sum_{N,r>0}\dim_{\mathbb Q}\mathcal Z_{N,r}^{{\operatorname{odd}}}\cdot x^Ny^r\overset?=\frac1{1-\mathbb O(x)y+\mathbb S(x)y^2}{\,}.\end{aligned}$$
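The right-hand side can be expanded as a formal power series in $y$: its coefficients $T_r(x)$ satisfy $T_0=1$ and $T_r=\mathbb O(x)T_{r-1}-\mathbb S(x)T_{r-2}$. The following sketch (ours, purely formal bookkeeping with truncated power series) computes the conjecturally predicted values of ${\operatorname{rank}}C_{N,r}$ for small $N$ and $r$:

```python
M = 30  # truncation order in x

def mul(a, b):
    """Product of two power series truncated at order M."""
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < M:
                    c[i + j] += ai * bj
    return c

O = [1 if n >= 3 and n % 2 == 1 else 0 for n in range(M)]  # x^3/(1-x^2)
S = [0] * M                                                # x^12/((1-x^4)(1-x^6))
for a in range(0, M, 4):
    for b in range(0, M, 6):
        if 12 + a + b < M:
            S[12 + a + b] += 1

T = {-1: [0] * M, 0: [1] + [0] * (M - 1)}
for r in range(1, 5):  # T_r = O*T_{r-1} - S*T_{r-2}
    T[r] = [u - v for u, v in zip(mul(O, T[r - 1]), mul(S, T[r - 2]))]
```

For instance, the predicted rank of $C_{12,2}$ is the coefficient of $x^{12}$ in $T_2=\mathbb O^2-\mathbb S$, namely $4-1=3$.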
The contents of this paper are as follows. In Section \[sec:preliminaries\], we explain our notations and define the matrices $C_{N,r}$ due to Brown [@Brown] as well as $E_{N,r}$ and $E_{N,r}^{(j)}$ considered by Tasaka [@tasaka]. In Section \[sec:known results\], we briefly state some of Tasaka’s results on the matrix $E_{N,r}$. Section \[sec:main tools\] is devoted to further investigating the connection between the left kernel of $E_{N,r}$ and restricted even period polynomials, which was first discovered by Baumard and Schneps [@schneps] and appears again in [@tasaka Theorem 3.6]. In Section \[sec:main results\], we will apply our methods to the cases $r=3$ and $r=4$. The first goal of Section \[sec:main results\] will be to show
\[thm:case3\]Assume that the map from Theorem \[thm:injection\] is injective. We then have the lower bound $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,3}\cdot x^N\geq 2 \mathbb O(x)\mathbb S(x){\,},\end{aligned}$$ where $\geq$ means that for every $N>0$ the coefficient of $x^N$ on the right-hand side does not exceed the corresponding one on the left-hand side.
This was stated without proof in [@tasaka]. Furthermore, we will give a new proof by the polynomial methods developed in Section \[sec:main tools\] for the following result.
\[thm:case4\]Assume that the map from Theorem \[thm:injection\] is injective. We then have the lower bound $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,4}\cdot x^N\geq 3 \mathbb O(x)^2\mathbb S(x)-\mathbb S(x)^2{\,}.\end{aligned}$$
In the last two subsections of this paper, we will consider the case of depth 5 and give an idea for higher depths. For depth 5, we will prove that, assuming Conjecture \[con:isomorphism\] due to Tasaka ([@tasaka Section 3]), the lower bound predicted by Conjecture \[con:Brown\] holds, i.e. $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,5}\cdot x^N\geq 4 \mathbb O(x)^3\mathbb S(x)-3 \mathbb O(x)\mathbb S(x)^2{\,}.\end{aligned}$$ These bounds are conjecturally sharp (i.e. the ones given by Conjecture \[con:Brown\]). Finally, we will prove a recursion for the value of $\dim_{\mathbb Q}\ker C_{N,r}$ under the assumption of a similar isomorphism conjecture stated at the end of Section \[sec:main tools\], which was proposed by Claire Glanois.
Acknowledgments {#acknowledgments .unnumbered}
---------------
This research was conducted as part of the Hospitanzprogramm (internship program) at the Max-Planck-Institut für Mathematik (Bonn). We would like to express our deepest thanks to our mentor, Claire Glanois, for introducing us to the theory of multiple zeta values. We are also grateful to Daniel Harrer, Matthias Paulsen and Jörn Stöhler for many helpful comments.
Preliminaries {#sec:preliminaries}
=============
Notations
---------
In this section we introduce our notations and we give some definitions. As usual, for a matrix $A$ we define $\ker A$ to be the set of right annihilators of $A$. Apart from this, we mostly follow the notations of Tasaka in his paper [@tasaka]. Let $$\begin{aligned}
S_{N,r}\coloneqq \left\{(n_1,\ldots,n_r)\in\mathbb Z^r\ |\ n_1+\cdots+n_r=N,\ n_1,\ldots,n_r\geq3\text{ odd}\right\}{\,},\end{aligned}$$ where $N$ and $r$ are natural numbers. Since the elements of the set $S_{N,r}$ will be used as indices of matrices and vectors, we usually arrange them in lexicographically decreasing order. Let $$\begin{aligned}
\mathbf V_{N,r}\coloneqq \left\langle \left.x_1^{m_1-1}\cdots x_r^{m_r-1}\ \right|\ (m_1,\ldots,m_r)\in S_{N,r}\right\rangle_\mathbb{Q}\end{aligned}$$ denote the vector space of restricted totally even homogeneous polynomials of degree $N-r$ in $r$ variables. There is a natural isomorphism from $\mathbf V_{N,r}$ to the $\mathbb Q$-vector space ${\mathsf{Vect}}_{N,r}$ of tuples $(a_{n_1,\ldots,n_r})_{(n_1,\ldots,n_r)\in S_{N,r}}$ indexed by totally odd indices $(n_1,\ldots,n_r)\in S_{N,r}$, which we denote $$\begin{aligned}
\label{eq:natiso}
\begin{split}
\pi\colon\mathbf V_{N,r}&\overset{\sim\,}{\longrightarrow}{\mathsf{Vect}}_{N,r}\\
\sum_{(n_1,\ldots,n_r)\in S_{N,r}}a_{n_1,\ldots,n_r}x_1^{n_1-1}\cdots x_r^{n_r-1}&\longmapsto \left(a_{n_1,\ldots,n_r}\right)_{(n_1,\ldots,n_r)\in S_{N,r}}{\,}.
\end{split} \end{aligned}$$ We assume vectors to be row vectors by default.
Finally, let $\mathbf W_{N,r}$ be the vector subspace of $\mathbf V_{N,r}$ defined by $$\begin{aligned}
\mathbf W_{N,r}\coloneqq \left\{P\in\mathbf V_{N,r}\ |\ P(x_1,\ldots,x_r)\right.&=P(x_2-x_1,x_2,x_3,\ldots,x_r)\\&\left.\phantom=-P(x_2-x_1,x_1,x_3,\ldots,x_r)\right\}{\,}.\end{aligned}$$ That is, $P(x_1,x_2,x_3,\ldots,x_r)$ is a sum of restricted even period polynomials in $x_1,x_2$ multiplied by monomials in $x_3,\ldots,x_r$. More precisely, one can decompose $$\begin{aligned}
\mathbf W_{N,r}=\bigoplus_{\substack{1<n<N\\n\text{ even}}}\mathbf W_{n,2}\otimes \mathbf V_{N-n,r-2} \label{eq:decomposition}{\,},\end{aligned}$$ where $\mathbf W_{n,2}$ is the space of restricted even period polynomials of degree $n-2$. Since $\mathbf W_{n,2}$ is isomorphic to the space $\mathcal S_n$ of cusp forms of weight $n$ by the Eichler-Shimura correspondence (see [@zagier]), the decomposition \eqref{eq:decomposition} leads to the following dimension formula.
\[lem:wnreq\] For all $r\geq 2$, $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\mathbf W_{N,r}\cdot x^N= \mathbb O(x)^{r-2} \mathbb S(x){\,}.\end{aligned}$$
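Via the decomposition \eqref{eq:decomposition}, the lemma amounts to $\dim_{\mathbb Q}\mathbf W_{N,r}=\sum_{n}\dim\mathcal S_n\cdot|S_{N-n,r-2}|$, which is easy to spot-check numerically (our sketch, not from the paper; $\dim\mathcal S_n$ is read off from $\mathbb S(x)$):

```python
from math import comb

def dim_cusp(n):
    """Coefficient of x^n in x^12/((1-x^4)(1-x^6)), i.e. dim S_n."""
    if n < 12:
        return 0
    return sum(1 for a in range(0, n + 1, 4) for b in range(0, n + 1, 6)
               if 12 + a + b == n)

def count_S(M, r):
    """|S_{M,r}| by stars and bars (M split into r odd parts >= 3)."""
    if r == 0:
        return 1 if M == 0 else 0
    k = M - 3 * r
    return comb(k // 2 + r - 1, r - 1) if k >= 0 and k % 2 == 0 else 0

def dim_W(N, r):
    # sum over even weights n of cusp-form dimension times |S_{N-n,r-2}|
    return sum(dim_cusp(n) * count_S(N - n, r - 2) for n in range(12, N + 1, 2))
```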
Ihara action and the matrices $E_{N,r}$ and $C_{N,r}$
-----------------------------------------------------
We use Tasaka’s notation (from [@tasaka]) for the polynomial representation of the Ihara action defined by Brown [@Brown Section 6]. Let $$\begin{aligned}
{\mathbin{\underline{\circ}}}\colon\mathbb Q[x_1]\otimes\mathbb Q[x_2,\ldots,x_r]&\longrightarrow\mathbb Q[x_1,\ldots,x_r] \\
f\otimes g&\longmapsto f{\mathbin{\underline{\circ}}}g{\,},\end{aligned}$$where $f{\mathbin{\underline{\circ}}}g$ denotes the polynomial $$\begin{gathered}
(f{\mathbin{\underline{\circ}}}g)(x_1,\ldots,x_r)\coloneqq f(x_1)g(x_2,\ldots,x_r)+\sum_{i=1}^{r-1}\Bigl(f(x_{i+1}-x_i)g(x_1,\ldots,\hat x_{i+1},\ldots,x_r)\\
-(-1)^{\deg f}f(x_i-x_{i+1})g(x_1,\ldots,\hat x_i,\ldots,x_r)\Bigr){\,}.\end{gathered}$$ (the hats indicate that $x_{i+1}$ and $x_i$, respectively, are omitted in the above expression).
For integers $m_1,\ldots,m_r,n_1,\cdots,n_r \geq 1$, let furthermore the integer $e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}$ denote the coefficient of $x_1^{n_1-1}\cdots x_r^{n_r-1}$ in $x_1^{m_1-1} {\mathbin{\underline{\circ}}}\left(x_1^{m_2-1}\cdots x_{r-1}^{m_r-1}\right)$, i.e. $$\begin{aligned}
\label{eq:e}
x_1^{m_1-1} {\mathbin{\underline{\circ}}}\left(x_1^{m_2-1}\cdots x_{r-1}^{m_r-1}\right) = \sum_{\substack{n_1 + \cdots + n_r = m_1 + \cdots + m_r \\ n_1, \cdots, n_r \geq 1}} e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}x_1^{n_1-1}\cdots x_r^{n_r-1}{\,}.\end{aligned}$$ Note that $e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}=0$ if $m_1+\cdots+m_r\not=n_1+\cdots+n_r$.
One can explicitly compute the integers $e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}$ by the following formula ([@tasaka Lemma 3.1]): $$\begin{gathered}
e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}=\delta{{\textstyle\binom{m_1,\ldots,m_r}{n_1\ldots,n_r}}}+\sum_{i=1}^{r-1}\delta{{\textstyle\binom{\hat m_1,m_2,\ldots,m_i,\hat m_{i+1},m_{i+2},\ldots,m_r}{n_1,\ldots,n_{i-1},\hat n_i,\hat n_{i+1},n_{i+2},\ldots,n_r}}}\\
\cdot\left((-1)^{n_i}\binom{m_1-1}{n_i-1}+(-1)^{m_1-n_{i+1}}\binom{m_1-1}{n_{i+1}-1}\right){\,},\end{gathered}$$ (again, the hats indicate that $m_1,m_{i+1},n_i,n_{i+1}$ are omitted), where $$\begin{aligned}
\delta{{\textstyle\binom{m_1,\ldots,m_{s}}{n_1,\ldots,n_{s}}}} \coloneqq
\begin{cases}
1\quad\text{if } m_i=n_i \text{ for all }i\in\{1,\ldots,s\} \\
0\quad\text{else}
\end{cases}\end{aligned}$$ denotes the usual Kronecker delta.
\[def:enrq\] Let $N,r$ be positive integers.
- We define the $|S_{N,r}|\times |S_{N,r}|$ matrix $$\begin{aligned}
E_{N,r}\coloneqq \left(e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}\right)_{(m_1,\ldots,m_r),(n_1,\ldots,n_r)\in S_{N,r}}{\,}.\end{aligned}$$
- For integers $r\geq j\geq 2$ we also define the $|S_{N,r}|\times |S_{N,r}|$ matrix $$\begin{aligned}
E_{N,r}^{(j)}\coloneqq \left(\delta{{\textstyle\binom{m_1,\ldots,m_{r-j}}{n_1,\ldots,n_{r-j}}}}e{{\textstyle\binom{m_{r-j+1},\ldots,m_r}{n_{r-j+1},\ldots,n_r}}} \right)_{(m_1,\ldots,m_r),(n_1,\ldots,n_r)\in S_{N,r}}{\,}.\end{aligned}$$
\[def:cnr\] The $|S_{N,r}|\times |S_{N,r}|$ matrix $C_{N,r}$ is defined as $$\begin{aligned}
C_{N,r}\coloneqq E_{N,r}^{(2)}\cdot E_{N,r}^{(3)}\cdots E_{N,r}^{(r-1)}\cdot E_{N,r}{\,}.
\end{aligned}$$
Known Results {#sec:known results}
=============
Recall the map $\pi\colon\mathbf V_{N,r}\rightarrow{\mathsf{Vect}}_{N,r}$ (equation \eqref{eq:natiso}). Theorem \[thm:Schneps\] due to Baumard and Schneps [@schneps] establishes a connection between the left kernel of the matrix $E_{N,2}$ and the space $\mathbf W_{N,2}$ of restricted even period polynomials. This connection was further investigated by Tasaka [@tasaka], relating $\mathbf W_{N,r}$ and the left kernel of $E_{N,r}$ for arbitrary $r\geq2$.
\[thm:Schneps\] For each integer $N>0$ we have $$\begin{aligned}
\pi\left(\mathbf W_{N,2}\right)=\ker{\prescript{t\!}{}{E}}_{N,2}{\,}.\end{aligned}$$
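As an illustration of the theorem (our computation), take $N=12$: the space $\mathcal S_{12}$ is spanned by $\Delta$, and the corresponding restricted even period polynomial is, up to scalar, $x_1^2x_2^2(x_1^2-x_2^2)^3=x_1^8x_2^2-3x_1^6x_2^4+3x_1^4x_2^6-x_1^2x_2^8$. Its coefficient vector should annihilate $E_{12,2}$ from the left, and the rank of $E_{12,2}$ should be $|S_{12,2}|-\dim\mathcal S_{12}=3$:

```python
from math import comb
from fractions import Fraction

def e2(m, n):
    # depth-2 specialization of the explicit formula for e(m1,m2; n1,n2)
    (m1, m2), (n1, n2) = m, n
    d = 1 if m == n else 0
    return (d + (-1 if n1 % 2 else 1) * comb(m1 - 1, n1 - 1)
              + (-1 if (m1 - n2) % 2 else 1) * comb(m1 - 1, n2 - 1))

idx = [(9, 3), (7, 5), (5, 7), (3, 9)]   # S_{12,2}, lexicographically decreasing
E = [[e2(m, n) for n in idx] for m in idx]

v = [1, -3, 3, -1]                        # period polynomial coefficients
left = [sum(v[i] * E[i][j] for i in range(4)) for j in range(4)]

def rank(rows):
    """Rank over Q by Gaussian elimination with exact arithmetic."""
    rows = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk
```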
\[thm:injection\] Let $r\geq2$ be a positive integer and $F_{N,r}=E_{N,r}-{\operatorname{id}}_{{\mathsf{Vect}}_{N,r}}$. Then, the following $\mathbb Q$-linear map is well-defined: $$\begin{aligned}
\label{eq:TasakasFail}
\begin{split}
\mathbf W_{N,r}&\longrightarrow \ker{\prescript{t\!}{}{E}}_{N,r}\\
P(x_1,\ldots,x_r)&\longmapsto\pi(P)F_{N,r}{\,}.
\end{split}
\end{aligned}$$
\[con:isomorphism\]For all $r\geq2$, the map described in Theorem \[thm:injection\] is an isomorphism.
\[rem:isor2\] For now, only the case $r=2$ is known, which is an immediate consequence of Theorem \[thm:Schneps\]. In [@tasaka], Tasaka suggests a proof of injectivity, but it seems to contain a gap which, as far as the authors are aware, has not been fixed yet. However, assuming the injectivity of these morphisms, one has the following relation.
\[cor:enrineq\]For all $r\geq2$, $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker{\prescript{t\!}{}{E}}_{N,r}\cdot x^N\geq\mathbb O(x)^{r-2}\mathbb S(x){\,}.
\end{aligned}$$
Main Tools {#sec:main tools}
==========
Decompositions of $E_{N,r}^{(j)}$
---------------------------------
We use the following decomposition lemma:
\[lem:blockdia\] Let $2\leq j\leq r-1$ and arrange the indices $(m_1,\ldots,m_r),(n_1,\ldots,n_r)\in S_{N,r}$ of $E_{N,r}^{(j)}$ in lexicographically decreasing order. Then, the matrix $E_{N,r}^{(j)}$ has block diagonal structure $$\begin{aligned}
E_{N,r}^{(j)}={\operatorname{diag}}\left(E_{3r-3,r-1}^{(j)},E_{3r-1,r-1}^{(j)},\ldots,E_{N-3,r-1}^{(j)}\right){\,}.\end{aligned}$$
This follows directly from Definition \[def:enrq\].
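The block structure can be watched in a small case (our sketch): for $N=15$, $r=3$, $j=2$, the matrix $E_{15,3}^{(2)}$ assembled entry-wise from Definition \[def:enrq\] coincides with $\operatorname{diag}(E_{6,2},E_{8,2},E_{10,2},E_{12,2})$, provided the indices are arranged in lexicographically decreasing order:

```python
from math import comb

def S(N, r):  # totally odd indices, lexicographically decreasing
    if r == 1:
        return [(N,)] if N >= 3 and N % 2 == 1 else []
    top = N - 3 * (r - 1)
    if top % 2 == 0:
        top -= 1
    return [(n1,) + t for n1 in range(top, 2, -2) for t in S(N - n1, r - 1)]

def e2(m, n):
    (m1, m2), (n1, n2) = m, n
    d = 1 if m == n else 0
    return (d + (-1 if n1 % 2 else 1) * comb(m1 - 1, n1 - 1)
              + (-1 if (m1 - n2) % 2 else 1) * comb(m1 - 1, n2 - 1))

N = 15
idx = S(N, 3)
# E^{(2)}_{15,3}: Kronecker delta on the first component, e2 on the last two
E2 = [[(1 if m[0] == n[0] else 0) * e2(m[1:], n[1:]) for n in idx] for m in idx]

# assemble diag(E_{6,2}, E_{8,2}, E_{10,2}, E_{12,2}) in the same ordering
blocks = [[[e2(a, b) for b in S(N - m1, 2)] for a in S(N - m1, 2)]
          for m1 in (9, 7, 5, 3)]
size = len(idx)
D = [[0] * size for _ in range(size)]
off = 0
for B in blocks:
    for i, row in enumerate(B):
        for j, x in enumerate(row):
            D[off + i][off + j] = x
    off += len(B)
```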
\[cor:blockdia\] We have $$\begin{aligned}
E_{N,r}^{(2)}E_{N,r}^{(3)}\cdots E_{N,r}^{(r-1)}={\operatorname{diag}}\left(C_{3r-3,r-1},C_{3r-1,r-1},\ldots,C_{N-3,r-1}\right){\,}.
\end{aligned}$$
Multiplying the block diagonal representations of $E_{N,r}^{(2)},E_{N,r}^{(3)},\ldots,E_{N,r}^{(r-1)}$ block by block together with Definition \[def:cnr\] yields the desired result.
\[cor:enrjocnr\] For all $r\geq 3$, $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)\cdot x^N=\mathbb O(x)\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r-1}\cdot x^N{\,}.
\end{aligned}$$
According to Corollary \[cor:blockdia\], the matrix $E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}$ has block diagonal structure, the blocks being $C_{3r-3,r-1},C_{3r-1,r-1},\ldots,C_{N-3,r-1}$. Hence, $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)\cdot x^N &= \sum_{N>0} \left( \sum_{k=3r-3}^{N-3}\dim_{\mathbb Q}\ker C_{k,r-1} \right) \cdot x^N \\
&=\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r-1}\left(x^{N+3}+x^{N+5}+x^{N+7}+\cdots\right)\\
&=\mathbb O(x)\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r-1}\cdot x^N{\,},
\end{aligned}$$ thus proving the assertion.
Connection to polynomials
-------------------------
Motivated by Theorem \[thm:injection\], we interpret the right action of the matrices $E_{N,r}^{(2)},\ldots,E_{N,r}^{(r-1)},E_{N,r}^{(r)}=E_{N,r}$ on ${\mathsf{Vect}}_{N,r}$ as endomorphisms of the polynomial space $\mathbf V_{N,r}$. Having established this, we will prove Theorems \[thm:case3\] and \[thm:case4\] from a polynomial point of view.
\[def:phij\] The *restricted totally even part* of a polynomial $Q(x_1,\ldots,x_r)\in\mathbf V_{N,r}$ is the sum of all of its monomials, in which each exponent of $x_1,\ldots,x_r$ is even and at least 2. Let $r\geq j$. We define the $\mathbb Q$-linear map $$\begin{aligned}
{\varphi^{(r)}_{j}}\colon\mathbf V_{N,r}\longrightarrow\mathbf V_{N,r}{\,},
\end{aligned}$$ which maps each polynomial $Q(x_1,\ldots,x_r)\in \mathbf V_{N,r}$ to the restricted totally even part of $$\begin{gathered}
Q(x_1,\ldots,x_r)+\sum_{i=r-j+1}^{r-1}\Bigl(Q(x_1,\ldots,x_{r-j},x_{i+1}-x_i,x_{r-j+1},\ldots,\hat x_{i+1},\ldots,x_r)\\ -Q(x_1,\ldots,x_{r-j},x_{i+1}-x_i,x_{r-j+1},\ldots,\hat x_i,\ldots,x_r)\Bigr){\,}.
\end{gathered}$$
Note that ${\varphi^{(r)}_{1}}\equiv{\operatorname{id}}_{\mathbf V_{N,r}}$.
The following lemma shows that the map ${\varphi^{(r)}_{j}}$ corresponds to the right action of the matrix $E_{N,r}^{(j)}$ on ${\mathsf{Vect}}_{N,r}$ via the isomorphism $\pi$.
\[lem:phij\] Let $r\geq j$. Then, for each polynomial $Q\in\mathbf V_{N,r}$, $$\begin{aligned}
\pi\left({\varphi^{(r)}_{j}}(Q)\right)=\pi(Q)E_{N,r}^{(j)}{\,},\end{aligned}$$ or equivalently, the following diagram commutes:
(Diagram: the top row is ${\varphi^{(r)}_{j}}\colon\mathbf V_{N,r}\to\mathbf V_{N,r}$, the bottom row is $v\mapsto v\cdot E_{N,r}^{(j)}$ on ${\mathsf{Vect}}_{N,r}$, and both vertical maps are the isomorphism $\pi$.)
We proceed by induction on $r$. Let $r=j$ and $$\begin{aligned}
Q(x_1,\ldots,x_j)=\sum_{(n_1,\ldots,n_j)\in S_{N,j}}q_{n_1,\ldots,n_j}x_1^{n_1-1}\cdots x_r^{n_j-1}{\,}.\end{aligned}$$Then, $E_{N,j}^{(j)}=E_{N,j}$ and thus $$\begin{aligned}
\pi(Q)E_{N,j}=\left(\sum_{(m_1,\ldots,m_j)\in S_{N,j}}q_{m_1,\ldots,m_j}e{{\textstyle\binom{m_1,\ldots,m_j}{n_1,\ldots,n_j}}}\right)_{(n_1,\ldots,n_j)\in S_{N,j}}{\,}.\end{aligned}$$ By \eqref{eq:e} and linearity of the Ihara action ${\mathbin{\underline{\circ}}}$, the row vector on the right-hand side corresponds to $\pi$ applied to the restricted totally even part of the polynomial $$\begin{aligned}
\label{eq:phiihara}
\sum_{(n_1,\ldots,n_j)\in S_{N,j}}q_{n_1,\ldots,n_j}x_1^{n_1-1}{\mathbin{\underline{\circ}}}\left(x_1^{n_2-1}\cdots x_{j-1}^{n_j-1}\right){\,}.\end{aligned}$$ On the other hand, plugging $r=j$ into Definition \[def:phij\] yields that ${\varphi^{(j)}_{j}}(Q(x_1,\ldots,x_j))$ corresponds to the restricted totally even part of some polynomial, which by definition of the Ihara action ${\mathbin{\underline{\circ}}}$ coincides with the polynomial defined in \eqref{eq:phiihara}. Thus, the claim holds for $r=j$.
Now suppose that $r\geq j+1$ and the claim is proven for all smaller $r$. Let us decompose $$\begin{aligned}
Q(x_1,\ldots,x_r)=\sum_{\substack{n_1=3\\n_1\text{ odd}}}^{N-(3r-3)}x_1^{n_1-1}\cdot Q_{N-n_1}(x_2,\ldots,x_r){\,},\end{aligned}$$where the $Q_k$ are restricted totally even homogeneous polynomials in $r-1$ variables. In particular, $Q_k\in\mathbf V_{k,r-1}$ for all $k$. Arrange the indices of $\pi(Q)$ in lexicographically decreasing order. Then, by grouping consecutive entries, $\pi(Q)$ is the list-like concatenation of $\pi(Q_{3r-3}),\ldots,\pi(Q_{N-3})$, which we denote by $$\begin{aligned}
\pi(Q)=\big(\pi(Q_{3r-3}),\pi(Q_{3r-1}),\ldots,\pi(Q_{N-3})\big){\,}.\end{aligned}$$ Since we have lexicographically decreasing order of indices, the block diagonal structure of $E_{N,r}^{(j)}$ stated in Lemma \[lem:blockdia\] yields $$\begin{aligned}
\pi(Q)E_{N,r}^{(j)}&=\left(\pi(Q_{3r-3})E_{3r-3,r-1}^{(j)},\pi(Q_{3r-1})E_{3r-1,r-1}^{(j)},\ldots,\pi(Q_{N-3})E_{N-3,r-1}^{(j)}\right)\\
&=\left(\pi\left({\varphi^{(r-1)}_{j}}(Q_{3r-3})\right),\pi\left({\varphi^{(r-1)}_{j}}(Q_{3r-1})\right),\ldots,\pi\left({\varphi^{(r-1)}_{j}}(Q_{N-3})\right)\right)\\
&=\pi\left({\varphi^{(r)}_{j}}(Q)\right)\end{aligned}$$by linearity of ${\varphi^{(r)}_{j}}$ and the induction hypothesis. This shows the assertion.
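In depth 2 the lemma can be spot-checked directly (our sketch): ${\varphi^{(2)}_{2}}$ applied to a monomial $x_1^{m_1-1}x_2^{m_2-1}$ is computed by expanding the substitutions, and the resulting coefficient vectors are compared with the rows of $E_{12,2}$ obtained from the explicit formula:

```python
from math import comb
from collections import defaultdict

IDX = [(9, 3), (7, 5), (5, 7), (3, 9)]    # S_{12,2}, lexicographically decreasing

def phi2(a, b):
    """Restricted totally even part of
    x1^a x2^b + (x2-x1)^a x1^b - (x2-x1)^a x2^b, as a vector in the basis IDX."""
    poly = defaultdict(int)
    poly[(a, b)] += 1
    for k in range(a + 1):                  # expand (x2 - x1)^a
        c = comb(a, k) * (-1) ** (a - k)    # coefficient of x1^(a-k) x2^k
        poly[(a - k + b, k)] += c           # ... times x1^b
        poly[(a - k, k + b)] -= c           # ... minus times x2^b
    return [poly[(n1 - 1, n2 - 1)] for (n1, n2) in IDX]

def e2(m, n):
    (m1, m2), (n1, n2) = m, n
    d = 1 if m == n else 0
    return (d + (-1 if n1 % 2 else 1) * comb(m1 - 1, n1 - 1)
              + (-1 if (m1 - n2) % 2 else 1) * comb(m1 - 1, n2 - 1))

rows_phi = [phi2(m1 - 1, m2 - 1) for (m1, m2) in IDX]
rows_E = [[e2(m, n) for n in IDX] for m in IDX]
```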
\[cor:phiEiso\] For all $r\geq2$, $$\begin{aligned}
{\operatorname{Im}}{\prescript{t\!}{}{\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\cap\ker{\prescript{t\!}{}{E}}_{N,r}\cong\ker{\varphi^{(r)}_{r}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right){\,}.\end{aligned}$$
By the previous Lemma \[lem:phij\], the following diagram commutes:
(Diagram: the top row is $\mathbf V_{N,r}\xrightarrow{{\varphi^{(r)}_{2}}}\mathbf V_{N,r}\xrightarrow{{\varphi^{(r)}_{3}}}\cdots\xrightarrow{{\varphi^{(r)}_{r}}}\mathbf V_{N,r}$, the bottom row carries the corresponding right multiplications by $E_{N,r}^{(2)},\ldots,E_{N,r}^{(r)}$ on ${\mathsf{Vect}}_{N,r}$, and all vertical maps are the isomorphism $\pi$.)
From this, we have ${\operatorname{Im}}{\prescript{t\!}{}{\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\cong{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right)$ and $\ker{\prescript{t\!}{}{E}}_{N,r}\cong\ker{\varphi^{(r)}_{r}}$. Thereby, the claim is established.
\[lem:kerphi\]Let $j\leq r-1$. Then, $$\begin{aligned}
\ker{\varphi^{(r)}_{j}}\cong\bigoplus_{n<N}\mathbf V_{N-n,r-j}\otimes\ker{\prescript{t\!}{}{E}}_{n,j}{\,}.
\end{aligned}$$
Let $Q\in\mathbf V_{N,r}$. We may decompose $$\begin{aligned}
Q(x_1,\ldots,x_r)=\sum_{n<N}\ \sum_{(n_1,\ldots,n_{r-j})\in S_{N-n,r-j}}x_1^{n_1-1}\cdots x_{r-j}^{n_{r-j}-1}R_{n_1,\ldots,n_{r-j}}(x_{r-j+1},\ldots,x_r){\,},\end{aligned}$$ where $R_{n_1,\ldots,n_{r-j}}\in\mathbf V_{n,j}$ is a restricted totally even homogeneous polynomial. Note that we have $Q\in\ker{\varphi^{(r)}_{j}}$ if and only if ${\varphi^{(j)}_{j}}(R_n)=0$ holds for each $R_n$ in the above decomposition. By Lemma \[lem:phij\], ${\varphi^{(j)}_{j}}(R_n)=0$ if and only if $\pi(R_n)\in\ker{\prescript{t\!}{}{E}}_{n,j}$. Now, the assertion is immediate.
\[cor:phijres\] Let $2\leq j\leq r-2$. The restricted map $$\begin{aligned}
{\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\colon\mathbf W_{N,r}\longrightarrow\mathbf W_{N,r}
\end{aligned}$$ is well-defined and satisfies $$\begin{aligned}
\ker{\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,r-j}\otimes\ker E_{n,j}{\,}.
\end{aligned}$$
Since $j\leq r-2$, for each $Q\in\mathbf V_{N,r}$ the map $Q(x_1,\ldots,x_r)\mapsto{\varphi^{(r)}_{j}}(Q)$ does not involve $x_1$ or $x_2$ and hence preserves the defining property of $\mathbf W_{N,r}$. Therefore, ${\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}$ is well-defined. The second assertion follows just as in the previous Lemma \[lem:kerphi\].
\[lem:fnr\] Let $r\geq3$. For all $P\in\mathbf W_{N,r}$, $$\begin{aligned}
\pi(-P)E_{N,r}^{(r-1)}=\pi(P)F_{N,r}{\,}.\end{aligned}$$
Recall that by Lemma \[lem:phij\], $$\begin{aligned}
\pi(-P)E_{N,r}^{(r-1)}&=\pi\left( {\varphi^{(r)}_{r-1}}\big(-P(x_1,\ldots,x_r)\big)\right)\\
&=\begin{multlined}[t]
\pi\bigg(-P(x_1,\ldots,x_r)+\sum_{i=2}^{r-1}\Big(-P(x_1,x_{i+1}-x_i,x_2,\ldots, \hat x_{i+1},\ldots,x_r)\\
+P(x_1,x_{i+1}-x_i,x_2,\ldots, \hat x_i,\ldots,x_r)\Big)\bigg)
\end{multlined}\\
&=\begin{multlined}[t]
\pi\bigg(-P(x_1,\ldots,x_r)+\sum_{i=2}^{r-1}\Big(P(x_{i+1}-x_i,x_1,\ldots, \hat x_{i+1},\ldots,x_r)\\
-P(x_{i+1}-x_i,x_1,\ldots, \hat x_i,\ldots,x_r)\Big)\bigg){\,},
\end{multlined}\end{aligned}$$ since $-P$ is antisymmetric with respect to $x_1\leftrightarrow x_2$. In the same way we compute $$\begin{aligned}
\pi(P)F_{N,r}&=\pi(P)\left(E_{N,r}-{\operatorname{id}}_{{\mathsf{Vect}}_{N,r}}\right)=\pi\big({\varphi^{(r)}_{r}}(P(x_1,\ldots,x_r))\big)-\pi(P)\\
&=\begin{multlined}[t]
\pi\bigg(P(x_1,\ldots,x_r)+\sum_{i=1}^{r-1}\Big(P(x_{i+1}-x_i,x_1,\ldots, \hat x_{i+1},\ldots,x_r)\\
-P(x_{i+1}-x_i,x_1,\ldots, \hat x_i,\ldots,x_r)\Big)\bigg)-\pi(P){\,}.
\end{multlined}\end{aligned}$$ Now the desired result follows from $$\begin{aligned}
P(x_1,x_2,\ldots,x_r)+P(x_2-x_1,x_1,x_3,\ldots,x_r)-P(x_2-x_1,x_2,\ldots,x_r)=0{\,},\end{aligned}$$ since $P$ is in $\mathbf W_{N,r}$.
\[cor:kerimineq\] Assume that the map from Theorem \[thm:injection\] is injective. Then, for all $r\geq 3$, $$\begin{aligned}
\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{E}}_{N,r}^{(r-1)}\cap\ker{\prescript{t\!}{}{E}}_{N,r}\right)\geq\dim_{\mathbb Q}\mathbf W_{N,r}{\,}.\end{aligned}$$
This is immediate by the previous Lemma \[lem:fnr\].
\[lem:imphi\] For all $r\geq3$, $$\begin{aligned}
{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\subseteq\ker {\varphi^{(r)}_{r}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right){\,}.\end{aligned}$$
We may replace the right-hand side by just $\ker{\varphi^{(r)}_{r}}$. Note that by Corollary \[cor:phijres\] the composition of restricted ${\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}$ on the left-hand side is well-defined. Moreover, each $Q\in{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)$ can be represented as $Q={\varphi^{(r)}_{r-1}}(P)$ for some $P\in\mathbf W_{N,r}$ and thus $Q\in\ker{\varphi^{(r)}_{r}}$ according to Lemma \[lem:fnr\] and Theorem \[thm:injection\].
Similarly to Conjecture \[con:isomorphism\], we expect a stronger result to be true, which is stated in the following conjecture due to Claire Glanois:
\[con:imiso\] For all $r\geq 3$, $$\begin{aligned}
{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)=\ker {\varphi^{(r)}_{r}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right){\,}.\end{aligned}$$
Note that, since its right-hand side is intersected with ${\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right)$, Conjecture \[con:imiso\] does not need the injectivity from Conjecture \[con:isomorphism\]. However, we have not been able to derive Conjecture \[con:imiso\] from Conjecture \[con:isomorphism\], so it is not necessarily weaker.
Main Results {#sec:main results}
============
Throughout this section we will assume that the map from Theorem \[thm:injection\] is injective, i.e. the injectivity part of Conjecture \[con:isomorphism\] is true. This was also the precondition for Tasaka’s original proof of Theorem \[thm:case4\].
Proof of Theorem \[thm:case3\].
-------------------------------
By Corollary \[cor:enrjocnr\], Remark \[rem:isor2\] and the fact that $E_{N,2}=C_{N,2}$ we obtain $$\begin{aligned}
\label{eq:case3ineq1}
\sum_{N>0}\dim_{\mathbb Q}\ker E_{N,3}^{(2)}\cdot x^N=\mathbb O(x)\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,2}\cdot x^N=\mathbb O(x)\mathbb S(x){\,}.\end{aligned}$$ We use Corollary \[cor:kerimineq\] and Lemma \[lem:wnreq\] to obtain $$\begin{aligned}
\label{eq:case3ineq2}
\sum_{N>0}\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{E}}_{N,3}^{(2)} \cap \ker{\prescript{t\!}{}{E}}_{N,3}\right)\cdot x^N \geq \sum_{N>0}\dim_{\mathbb Q}\mathbf W_{N,3}\cdot x^N
= \mathbb O(x) \mathbb S(x){\,}.\end{aligned}$$ Now observe that since $C_{N,3}=E_{N,3}^{(2)}E_{N,3}$, we have $$\begin{aligned}
\dim_{\mathbb Q}\ker C_{N,3}=\dim_{\mathbb Q}\ker{\prescript{t\!}{}{E}}_{N,3}^{(2)}+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{E}}_{N,3}^{(2)} \cap \ker{\prescript{t\!}{}{E}}_{N,3}\right){\,}.\end{aligned}$$ By \[eq:case3ineq1\] and \[eq:case3ineq2\], the assertion is proven.
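The splitting of $\dim_{\mathbb Q}\ker C_{N,3}$ above rests on the general linear-algebra fact that for maps $f,g$ on a finite-dimensional space, $\dim\ker(g\circ f)=\dim\ker f+\dim\left({\operatorname{Im}}f\cap\ker g\right)$, applied with $f={\prescript{t\!}{}{E}}_{N,3}^{(2)}$ and $g={\prescript{t\!}{}{E}}_{N,3}$. The sketch below verifies this identity on a small example with exact rational arithmetic; the matrices used are arbitrary illustrations, not the actual matrices $E_{N,r}$.

```python
# Check: dim ker(g∘f) = dim ker f + dim(Im f ∩ ker g), over Q.
# The matrices f, g below are small arbitrary examples, NOT the E_{N,r}.
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    R = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for col in range(len(R[0])):
        piv = next((i for i in range(rk, len(R)) if R[i][col]), None)
        if piv is None:
            continue
        R[rk], R[piv] = R[piv], R[rk]
        R[rk] = [x / R[rk][col] for x in R[rk]]
        for i in range(len(R)):
            if i != rk and R[i][col]:
                c = R[i][col]
                R[i] = [x - c * y for x, y in zip(R[i], R[rk])]
        rk += 1
    return rk

def nullspace(M):
    """Basis of the (column-vector) kernel of M, read off from the RREF."""
    n = len(M[0])
    R = [[Fraction(x) for x in row] for row in M]
    pivots, rk = [], 0
    for col in range(n):
        piv = next((i for i in range(rk, len(R)) if R[i][col]), None)
        if piv is None:
            continue
        R[rk], R[piv] = R[piv], R[rk]
        R[rk] = [x / R[rk][col] for x in R[rk]]
        for i in range(len(R)):
            if i != rk and R[i][col]:
                c = R[i][col]
                R[i] = [x - c * y for x, y in zip(R[i], R[rk])]
        pivots.append(col)
        rk += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for r, pc in enumerate(pivots):
            v[pc] = -R[r][free]
        basis.append(v)
    return basis

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# f has rank 2, g has rank 1, both acting on column vectors in Q^3
f = [[1, 2, 0], [2, 4, 0], [0, 0, 1]]
g = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
n = 3

lhs = n - rank(matmul(g, f))           # dim ker(g∘f)
im_f = [list(col) for col in zip(*f)]  # the columns of f span Im f
ker_g = nullspace(g)
# dim(Im f ∩ ker g) = dim Im f + dim ker g - dim(Im f + ker g)
inter = rank(f) + len(ker_g) - rank(im_f + ker_g)
rhs = (n - rank(f)) + inter
assert lhs == rhs == 2
```

The intersection dimension is computed independently of the identity being tested, via $\dim(A\cap B)=\dim A+\dim B-\dim(A+B)$, so the final assertion is a genuine check rather than a tautology.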
Proof of Theorem \[thm:case4\].
-------------------------------
Since $C_{N,4}=E_{N,4}^{(2)}E_{N,4}^{(3)}E_{N,4}$, we may split $\dim_{\mathbb Q}\ker C_{N,4}$ into $$\begin{aligned}
\dim_{\mathbb Q}\ker C_{N,4}=\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}}+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}}
\cap \ker{\prescript{t\!}{}{E}}_{N,4}\right){\,}.\end{aligned}$$ The two summands on the right-hand side are treated separately. For the first one, by Corollary \[cor:enrjocnr\] and Theorem \[thm:case3\] one has $$\begin{aligned}
\label{eq:case4ineq1}
\sum_{N>0}\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}}\cdot x^N\geq2\mathbb O(x)^2\mathbb S(x){\,}.\end{aligned}$$ For the second one, we use Corollary \[cor:phiEiso\] and Lemma \[lem:imphi\] to obtain $$\begin{aligned}
\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,4}\right)&\geq\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(4)}_{3}}\circ{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}\right)\\
&=\dim_{\mathbb Q}{\operatorname{Im}}{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}{\,},\end{aligned}$$ since we assume ${\varphi^{(4)}_{3}}$ to be injective on $\mathbf W_{N,4}$ according to Conjecture \[con:isomorphism\]. According to Corollary \[cor:phijres\] and Theorem \[thm:Schneps\], $$\begin{aligned}
\ker{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\ker E_{n,2}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\mathbf W_{n,2}{\,}.\end{aligned}$$ Now, by $\dim_{\mathbb Q}{\operatorname{Im}}{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}=\dim_{\mathbb Q}\mathbf W_{N,4}-\dim_{\mathbb Q}\ker{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}$ we obtain $$\begin{aligned}
\label{eq:case4ineq2}
\sum_{N>0}\dim_{\mathbb Q}{\operatorname{Im}}{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}\cdot x^N=\mathbb O(x)^2\mathbb S(x)-\mathbb S(x)^2{\,}.\end{aligned}$$ Combining \[eq:case4ineq1\] and \[eq:case4ineq2\], the proof is finished.
The case $r=5$ assuming Conjecture \[con:isomorphism\].
-------------------------------------------------------
In addition to the injectivity of the map from Theorem \[thm:injection\], we now assume Conjecture \[con:isomorphism\] is true in the case $r=3$, i.e. $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker{\prescript{t\!}{}{E}}_{N,3}\cdot x^N=\mathbb O(x)\mathbb S(x)\label{eq:case5eq1}\end{aligned}$$ by Corollary \[cor:enrineq\]. Our goal is to prove the lower bound $$\begin{aligned}
\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,5}\cdot x^N\geq 4 \mathbb O(x)^3\mathbb S(x)-3 \mathbb O(x)\mathbb S(x)^2{\,},\label{eq:case5}\end{aligned}$$ which as an equality would be the exact value predicted by Conjecture \[con:Brown\]. Again we use the decomposition $C_{N,5}=E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}E_{N,5}$ to split $\dim_{\mathbb Q}\ker C_{N,5}$ into $$\begin{aligned}
\dim_{\mathbb Q}\ker C_{N,5}=\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)}}+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{ \left( E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,5}\right){\,}.\end{aligned}$$ Applying Corollary \[cor:enrjocnr\] and Theorem \[thm:case4\] to the first summand on the right-hand side, we obtain $$\begin{aligned}
\label{eq:case5ineq1}
\sum_{N>0}\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)}}\cdot x^N\geq3\mathbb O(x)^3\mathbb S(x)-\mathbb O(x)\mathbb S(x)^2{\,}.\end{aligned}$$ Again, for the second summand Corollary \[cor:phiEiso\] and Lemma \[lem:imphi\] yield $$\begin{aligned}
\begin{split}
\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,5}\right)&\geq\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{4}}\circ{\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right)\\
&=\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right){\,},
\end{split}\end{aligned}$$ since ${\varphi^{(5)}_{4}}$ is injective on $\mathbf W_{N,5}$ by our assumption. According to Corollary \[cor:phijres\] and Theorem \[thm:Schneps\], $$\begin{aligned}
\ker{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,3}\otimes\ker E_{n,2}\cong\bigoplus_{n<N}\mathbf W_{N-n,3}\otimes\mathbf W_{n,2}\end{aligned}$$ and by \[eq:case5eq1\], $$\begin{aligned}
\ker{\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\ker E_{n,3}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\mathbf W_{n,3}{\,}.\end{aligned}$$ Now, by $$\begin{aligned}
\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right)\geq\dim_{\mathbb Q}\mathbf W_{N,5}-\dim_{\mathbb Q}\ker{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}-\dim_{\mathbb Q}\ker{\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\end{aligned}$$ we arrive at $$\begin{aligned}
\label{eq:case5ineq2}
\sum_{N>0}\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right)\cdot x^N\geq\mathbb O(x)^3\mathbb S(x)-2\mathbb O(x)\mathbb S(x)^2{\,}.\end{aligned}$$ Combining \[eq:case5ineq1\] and \[eq:case5ineq2\] yields the desired result.
A recursive approach to the general case $r\geq2$
-------------------------------------------------
In this section, we show that one can recursively derive the exact value of $\dim_{\mathbb Q}\ker C_{N,r}$ from Conjecture \[con:imiso\]. Let us fix some notations:
For $r\geq2$, let us define the formal series $$\begin{aligned}
B_r(x)&\coloneqq \sum_{N>0}\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\cdot x^N\tag i\\
T_r(x)&\coloneqq \sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r}\cdot x^N\tag{ii}{\,}.\end{aligned}$$ We set $T_0(x),T_1(x)\coloneqq 0$.
The main observation is the following lemma:
\[lem:recursion\] Assume that Conjecture \[con:imiso\] is true and that the map from Theorem \[thm:injection\] is injective. Then, for $r\geq 3$ the following recursion holds: $$\begin{aligned}
B_r(x)=\mathbb O(x)^{r-2}\mathbb S(x)-\sum_{j=2}^{r-2}\mathbb O(x)^{r-j-2}\mathbb S(x)B_j(x){\,}.\end{aligned}$$
We have $$\begin{gathered}
\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\\
=\begin{aligned}[t]
\dim_{\mathbb Q}\mathbf W_{N,r}&-
\sum_{j=2}^{r-2}\dim_{\mathbb Q}\ker{\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{j-1}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\\
&-\dim_{\mathbb Q}\ker{\varphi^{(r)}_{r-1}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right){\,}.
\end{aligned}\end{gathered}$$ Since we assume ${\varphi^{(r)}_{r-1}}$ to be injective on $\mathbf W_{N,r}$, the last summand on the right-hand side vanishes. Let $2\leq j\leq r-2$. As the restriction to $\mathbf W_{N,r}$ only affects $x_1$ and $x_2$, whereas ${\varphi^{(r)}_{j}}$ acts on $x_{r-j+1},\ldots,x_r$, we obtain $$\begin{gathered}
\ker{\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{j-1}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\\
\begin{aligned}[t]
&=\bigoplus_{n<N}\mathbf W_{N-n,r-j}\otimes\left(\ker{\varphi^{(j)}_{j}}\cap{\operatorname{Im}}\left({\varphi^{(j)}_{j-1}}\circ\cdots\circ{\varphi^{(j)}_{2}}\right)\right)\\
&=\bigoplus_{n<N}\mathbf W_{N-n,r-j}\otimes{\operatorname{Im}}\left({\varphi^{(j)}_{j-1}}\circ{\varphi^{(j)}_{j-2}\big|_{\mathbf W_{n,j}}}\circ\cdots\circ{\varphi^{(j)}_{2}\big|_{\mathbf W_{n,j}}}\right){\,},
\end{aligned}\end{gathered}$$ where the last equality follows from Conjecture \[con:imiso\]. Hence, if we denote $$\begin{aligned}
a_{N,r}=\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right){\,},\end{aligned}$$we obtain the recursion $$\begin{aligned}
\label{eq:casercauchy}
a_{N,r}=\dim_{\mathbb Q}\mathbf W_{N,r}-\sum_{j=2}^{r-2}\sum_{n<N}\dim_{\mathbb Q}\mathbf W_{N-n,r-j}\cdot a_{n,j}{\,}.\end{aligned}$$ By Lemma \[lem:wnreq\] and the convolution formula for multiplying formal series, equation \[eq:casercauchy\] establishes the claim.
\[thm:recursion\] Assuming Conjecture \[con:imiso\] and the injectivity of the map from Theorem \[thm:injection\], the following recursion is satisfied for all $r\geq3$: $$\begin{aligned}
T_r(x)=\mathbb O(x)T_{r-1}(x)-\mathbb S(x)T_{r-2}(x)+\mathbb O(x)^{r-2}\mathbb S(x){\,}.\end{aligned}$$
As we assume Conjecture \[con:imiso\], we get from Definition \[def:cnr\] and Corollary \[cor:phiEiso\] $$\begin{aligned}
\dim_{\mathbb Q}\ker C_{N,r}&=
\begin{multlined}[t]
\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\\+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,r}\right)
\end{multlined}\\
&=\begin{multlined}[t]
\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\\+\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)
\end{multlined}\end{aligned}$$ and thus, by Corollary \[cor:enrjocnr\], $$\begin{aligned}
T_r(x)=\mathbb O(x)T_{r-1}(x)+B_r(x){\,}.\end{aligned}$$ Using Lemma \[lem:recursion\], we obtain $$\begin{aligned}
T_r(x)&=\mathbb O(x)T_{r-1}(x)+\mathbb O(x)^{r-2}\mathbb S(x)-\sum_{j=2}^{r-2}\mathbb O(x)^{r-j-2}\mathbb S(x)\big(T_j(x)-\mathbb O(x)T_{j-1}(x)\big)\\
&=\mathbb O(x)T_{r-1}(x)+\mathbb O(x)^{r-2}\mathbb S(x)-\mathbb S(x)T_{r-2}(x)+\mathbb O(x)^{r-3}\mathbb S(x)T_1(x)\\
&=\mathbb O(x)T_{r-1}(x)-\mathbb S(x)T_{r-2}(x)+\mathbb O(x)^{r-2}\mathbb S(x){\,},\end{aligned}$$where by definition $T_1(x)=0$. The conclusion follows.
Note that by our choice of $T_0(x)$ and $T_1(x)$, Theorem \[thm:recursion\] remains true for $r=2$ since we know from [@schneps] that $T_2(x)=\mathbb S(x)$. Under the assumption of Conjecture \[con:imiso\] and the injectivity of the map from Theorem \[thm:injection\], we are now ready to prove that the generating series of ${\operatorname{rank}}C_{N,r}$ equals the explicit series $\frac{1}{1-\mathbb O(x)y+\mathbb S(x)y^2}$, as was claimed in Conjecture \[con:Brown\]. Under the same assumptions, this proves the motivic version of Conjecture \[con:Brown\] (i.e. with $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ replaced by $\mathcal H_{N,r}^{{\operatorname{odd}}}$).
Let $R_r(x)=\mathbb O(x)^r-T_r(x)$ and note that by Theorem \[thm:recursion\] $$\begin{aligned}
R_r(x)=\mathbb O(x)R_{r-1}(x)-\mathbb S(x)R_{r-2}(x)\end{aligned}$$for all $r\geq2$. Hence, $$\begin{aligned}
\left(1-\mathbb O(x)y+\mathbb S(x)y^2\right)\sum_{r\geq0}R_r(x)y^r&=
\begin{aligned}[t]
&\sum_{r\geq2}\big(R_r(x)-\mathbb O(x)R_{r-1}(x)+\mathbb S(x)R_{r-2}(x)\big)y^r\\
&+R_0(x)+R_1(x)y-\mathbb O(x)R_0(x)y
\end{aligned}\\
&=R_0(x)+\mathbb O(x)y-\mathbb O(x)y\\
&=1\end{aligned}$$and thus $$\begin{aligned}
1+\sum_{N,r>0}{\operatorname{rank}}C_{N,r}\cdot x^Ny^r=\sum_{r\geq0}R_r(x)y^r=\frac1{1-\mathbb O(x)y+\mathbb S(x)y^2}{\,},\end{aligned}$$ which is the desired result.
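The recursion and the closed form above can be checked numerically on truncated power series. The sketch below assumes the standard choices $\mathbb O(x)=x^3/(1-x^2)$ and $\mathbb S(x)=x^{12}/\big((1-x^4)(1-x^6)\big)$ for the two generating series (these are not redefined in this section; if the definitions differ, the coefficients would change accordingly). It computes $T_r(x)$ for $r\leq5$ from the recursion of Theorem \[thm:recursion\] and verifies that $T_5(x)=4\mathbb O(x)^3\mathbb S(x)-3\mathbb O(x)\mathbb S(x)^2$, the value appearing in the $r=5$ case above.

```python
# Numerical check of T_r = O*T_{r-1} - S*T_{r-2} + O^(r-2)*S on power
# series truncated at x^40.  The series O(x) and S(x) are ASSUMED to be
# x^3/(1-x^2) and x^12/((1-x^4)(1-x^6)) respectively.
TRUNC = 41  # keep coefficients of x^0 .. x^40

def mul(a, b):
    """Truncated Cauchy product of two coefficient lists."""
    c = [0] * TRUNC
    for i, ai in enumerate(a):
        if ai:
            for j in range(TRUNC - i):
                c[i + j] += ai * b[j]
    return c

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

# O(x) = x^3 + x^5 + x^7 + ...  ;  [x^n] S(x) = #{(a,b) : 4a + 6b = n - 12}
O = [1 if n >= 3 and n % 2 == 1 else 0 for n in range(TRUNC)]
S = [sum(1 for a in range(n + 1) for b in range(n + 1)
         if 4 * a + 6 * b == n - 12) for n in range(TRUNC)]

# T_r via the recursion, with T_0 = T_1 = 0 and T_2 = S
T = {0: [0] * TRUNC, 1: [0] * TRUNC, 2: S[:]}
for r in range(3, 6):
    O_pow = [1] + [0] * (TRUNC - 1)  # the constant series 1
    for _ in range(r - 2):
        O_pow = mul(O_pow, O)        # O(x)^(r-2)
    T[r] = add(sub(mul(O, T[r - 1]), mul(S, T[r - 2])), mul(O_pow, S))

# r = 5 reproduces the bound 4 O^3 S - 3 O S^2 from the case above
O3S = mul(mul(mul(O, O), O), S)
OS2 = mul(O, mul(S, S))
assert T[5] == sub([4 * c for c in O3S], [3 * c for c in OS2])
```

The intermediate values also match the closed forms of the earlier subsections: $T_3=2\mathbb O\mathbb S$ and $T_4=3\mathbb O^2\mathbb S-\mathbb S^2$.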
Ancient toolmaking site discovered near Niagara Falls
Archaeologists have found arrowheads and drills, indicating that the camps were occupied for extended periods of time.
DIGGING FOR TOOLS: Students at work in 2006 excavating a feature at the site on Grand Island that was most likely a hearth. (Photo: L.M. Anselmi)
An ancient campsite where people were manufacturing tools has been discovered near Niagara Falls.
This find, combined with other archaeological discoveries in the area over the past few decades, suggests that such campsites lined the Niagara River as far back as 4,000 years ago.
So far, the team has unearthed more than 20,000 artifacts, mostly bits of rock broken off when people were creating stone tools, on the southeastern tip of Grand Island, New York, about 12 miles (20 km) upstream from Niagara Falls. The earliest artifacts at the site date back at least 4,000 years, opening a window on a time when people were living a nomadic lifestyle based on hunting, fishing and gathering plants.
"I would anticipate that there would have been, back in the day, these kinds of campsites all along the Niagara River on both sides and on both sides of the island," team leader Lisa Anselmi, of Buffalo State University of New York, told LiveScience.
The archaeologists found that people at the Grand Island site were making a wide variety of tools, including spear points, arrowheads and even a few stone drills. Anselmi said that the drills "would be sharp enough to go through a piece of leather... or go through shell or some bone to create a bead."
The team also found bits of yellow and red ochre at the site; in ancient times it was common, for religious reasons, to apply ochre to the skin of someone who was being buried. No evidence of burials has been found so far at the site.
Stretching across time
The south tip of Grand Island appears to have been occupied for an extended time.
Fragments of pottery dating between 2,900 and 1,500 years ago found by Anselmi and her colleagues suggest inhabitants experimented with ceramic production, using pots to collect nuts and plant remains.
The team also found spear points that date back around 500 years, to a period shortly before Europeans started arriving in the area. More recent artifacts included nails from houses built in the 19th century and bullets that appear to date to the 1930s or 40s.
Anselmi said that the site probably would have been used mainly between the spring and fall, when food would have been plentiful. "The island would have had the advantage of being close to the river (with) lots of freshwater fish and other kinds of resources from the river," she said. Also, "in all likelihood there would have been a very strong deer population on the island."
Crossing the Niagara River
To get to Grand Island people in antiquity would have had to cross the Niagara River. Today, the fast-flowing waterway moves at a rate of about 2-3 feet per second near the island.
Curiously, rather than making use of rock found on the island, the ancient people imported a type of Onondaga chert — a tough limestone that they would have had to carry across the river from the mainland.
Anselmi explained that they would have brought over small bits of this rock that could then be molded into tools. "It's not necessarily that they're filling a canoe up with boulders," she said.
By using Onondaga chert the people of Grand Island were continuing a toolmaking tradition that goes back to when people were first entering New York State.
For instance, at a site called Emanon Pond, located in western New York, people were using the material almost exclusively nearly 11,000 years ago.
"With the exception of a single projectile point made from glacially derived drusy quartz, all of the artifacts are manufactured using local Onondaga chert," write Peter Neal Peregrine and Melvin Ember in the North America edition of the "Encyclopedia of Prehistory," published in 2001.
The findings were presented in May at a meeting of the Toronto chapter of the Ontario Archaeological Society.
---
abstract: 'We describe a 325-MHz survey, undertaken with the Giant Metrewave Radio Telescope (GMRT), which covers a large part of the three equatorial fields at 9, 12 and 14.5 h of right ascension from the [ *Herschel*]{}-Astrophysical Terahertz Large Area Survey (H-ATLAS) in the area also covered by the Galaxy And Mass Assembly survey (GAMA). The full dataset, after some observed pointings were removed during the data reduction process, comprises 212 GMRT pointings covering $\sim90$ deg$^2$ of sky. We have imaged and catalogued the data using a pipeline that automates the process of flagging, calibration, self-calibration and source detection for each of the survey pointings. The resulting images have resolutions of between 14 and 24 arcsec and minimum rms noise (away from bright sources) of $\sim1$ mJy beam$^{-1}$, and the catalogue contains 5263 sources brighter than $5\sigma$. We investigate the spectral indices of GMRT sources which are also detected at 1.4 GHz and find them to agree broadly with previously published results; there is no evidence for any flattening of the radio spectral index below $S_{1.4}=10$ mJy. This work adds to the large amount of available optical and infrared data in the H-ATLAS equatorial fields and will facilitate further study of the low-frequency radio properties of star formation and AGN activity in galaxies out to $z \sim 1$.'
author:
- |
Tom Mauch$^{1,2,3}$[^1], Hans-Rainer Klöckner$^{1,4}$, Steve Rawlings$^1$, Matt Jarvis$^{2,5,1}$, Martin J. Hardcastle$^2$, Danail Obreschkow$^{1,6}$, D.J. Saikia$^{7,8}$ and Mark A. Thompson$^2$\
$^1$Oxford Astrophysics, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH\
$^2$Centre for Astrophysics Research, University of Hertfordshire, College Lane, Hatfield, Hertfordshire AL10 9AB\
$^3$SKA South Africa, Third Floor, The Park, Park Road, Pinelands, 7405 South Africa\
$^4$Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany\
$^5$ Physics Department, University of the Western Cape, Cape Town, 7535, South Africa\
$^6$ International Centre for Radio Astronomy Research, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia\
$^7$ National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune University Campus, Ganeshkind P.O., Pune 411007, India\
$^8$ Cotton College State University, Panbazar, Guwahati 781001, India\
bibliography:
- 'allrefs.bib'
- 'mn-jour.bib'
title: 'A 325-MHz GMRT survey of the [*Herschel*]{}-ATLAS/GAMA fields'
---
\[firstpage\]
surveys – catalogues – radio continuum: galaxies
Introduction
============
The *Herschel*-Astrophysical Terahertz Large Area Survey [H-ATLAS; @eales10] is the largest Open Time extragalactic survey being undertaken with the *Herschel Space Observatory* [@herschel10]. It is a blind survey and aims to provide a wide and unbiased view of the sub-millimetre Universe at a median redshift of $1$. H-ATLAS covers $\sim 570$ deg$^2$ of sky at $110$, $160$, $250$, $350$ and $500$ ${\mu}$m and is observed in parallel mode with [*Herschel*]{} using the Photodetector Array Camera [PACS; @pacs] at 110 and 160 ${\mu}$m and the Spectral and Photometric Imaging Receiver [SPIRE; @spire] at 250, 350 and 500 ${\mu}$m. The survey is made up of six fields chosen to have minimal foreground Galactic dust emission, one field in the northern hemisphere covering $150$ deg$^2$ (the NGP field), two in the southern hemisphere covering a total of $250$ deg$^2$ (the SGP fields) and three fields on the celestial equator each covering $\sim 35$ deg$^2$ and chosen to overlap with the Galaxy and Mass Assembly redshift survey [GAMA; @Driver+11] (the GAMA fields). The H-ATLAS survey is reaching 5-$\sigma$ sensitivities of (132, 121, 33.5, 37.7, 44.0) mJy at (110, 160, 250, 350, 500) $\mu$m and is expected to detect $\sim 200,000$ sources when complete [@Rigby+11].
A significant amount of multiwavelength data is available and planned over the H-ATLAS fields. In particular, the equatorial H-ATLAS/GAMA fields, which are the subject of this paper, have been imaged in the optical (to $r \sim 22.1$) as part of the Sloan Digital Sky Survey [SDSS; @sloan] and in the infrared (to $K \sim 20.1$) with the United Kingdom Infra-Red Telescope (UKIRT) through the UKIRT Infrared Deep Sky Survey [UKIDSS; @ukidss] Large Area Survey (LAS). In the not-too-distant future, the GAMA fields will be observed approximately two magnitudes deeper than the SDSS in 4 optical bands by the Kilo-Degree Survey (KIDS) to be carried out with the Very Large Telescope (VLT) Survey Telescope (VST), which was the original motivation for observing these fields. In addition, the GAMA fields are being observed to $K \sim 1.5-2$ mag. deeper than the level achieved by UKIDSS as part of the Visible and Infrared Survey Telescope for Astronomy (VISTA) Kilo-degree Infrared Galaxy (VIKING) survey, and with the Galaxy Evolution Explorer (GALEX) to a limiting AB magnitude of $\sim 23$.
In addition to this optical and near-infrared imaging there is also extensive spectroscopic coverage from many of the recent redshift surveys. The SDSS survey measured redshifts out to $z\sim0.3$ in the GAMA and NGP fields for almost all galaxies with $r<17.77$. The Two-degree Field (2dF) Galaxy Redshift Survey [2dFGRS; @2df] covers much of the GAMA fields for galaxies with $b_{J}<19.6$ and median redshift of $\sim0.1$. The H-ATLAS fields were chosen to overlap with the GAMA survey, which is ongoing and aims to measure redshifts for all galaxies with $r<19.8$ to $z\sim0.5$. Finally, the WiggleZ Dark Energy survey has measured redshifts of blue galaxies over nearly half of the H-ATLAS/GAMA fields to a median redshift of $z\sim0.6$ and detects a significant population of galaxies at $z\sim1$.
The wide and deep imaging from the far infrared to the ultraviolet and the extensive spectroscopic coverage make the H-ATLAS/GAMA fields unparalleled for detailed investigation of the star-forming and AGN radio source populations. However, the coverage of the H-ATLAS fields is not quite so extensive in the radio. All of the fields are covered down to a $5\sigma$ sensitivity of 2.5 mJy beam$^{-1}$ at 1.4 GHz by the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey [NVSS; @nvss]. This survey is limited by its $\sim45$-arcsec resolution, which makes unambiguous identification of radio sources with their host galaxy difficult, and by not being deep enough to find a significant population of star-forming galaxies, which only begin to dominate the radio-source population below 1 mJy [e.g. @Wilman08]. The Faint Images of the Radio Sky at Twenty-cm [FIRST; @first] survey covers the NGP and GAMA fields at a resolution of $\sim6$ arcsec down to $\sim0.5$ mJy at 1.4 GHz, is deep enough to probe the bright end of the star-forming galaxy population, and has good enough resolution to see the morphological structure of the larger radio-loud AGN, but it must be combined with the less sensitive NVSS data for sensitivity to extended structure. Catalogues based on FIRST and NVSS have already been used in combination with H-ATLAS data to investigate the radio-FIR correlation [@jarvis+10] and to search for evidence for differences between the star-formation properties of radio galaxies and their radio-quiet counterparts (@hardcastle+10 [@hardcastle+12; @virdee+13]).
To complement the already existing radio data in the H-ATLAS fields, and in particular to provide a second radio frequency, we have observed the GAMA fields (which have the most extensive multi-wavelength coverage) at 325 MHz with the Giant Metrewave Radio Telescope [GMRT; @gmrtref]. The most sensitive GMRT images reach a $1\sigma$ depth of $\sim 1$ mJy beam$^{-1}$ and the best resolution we obtain is $\sim
14$ arcsec, which is well matched to the sensitivity and resolution of the already existing FIRST data. The GMRT data overlaps with the three $\sim60$-deg$^2$ GAMA fields, and cover a total of $108$ deg$^2$ in $288$ 15-minute pointings (see Fig. \[noisemaps\]). These GMRT data, used in conjunction with the available multiwavelength data, will be valuable in many studies, including an investigation of the radio-infrared correlation as a function of redshift and as a function of radio spectral index, the link between star formation and accretion in radio-loud AGN and how this varies as a function of environment and dust temperature, and the three-dimensional clustering of radio-source populations. The data will also bridge the gap between the well-studied 1.4-GHz radio source populations probed by NVSS and FIRST and the radio source population below 250 MHz, which will be probed by the wide area surveys made with the Low Frequency Array [LOFAR; @reflofar] in the coming years.
This paper describes the 325-MHz survey of the H-ATLAS/GAMA regions. The structure of the paper is as follows. In Section 2 we describe the GMRT observations and the data. In Section 3 we describe the pipeline that we have used to reduce the data and in Section 4 we describe the images and catalogues produced. In Section 5 we discuss the data quality and in Section 6 we present the spectral index distribution for the detected sources between 1.4 GHz and 325 MHz. A summary and prospects for future work are given in Section 7.
GMRT Observations
=================
  Date          Start Time (IST)   Hours Observed      N$_{\rm{antennas}}$   Antennas Down         Comments
-------------- ------------------ ----------------- --------------------- --------------------- ------------------------------
2009, Jan 15 21:00 14.0 27 C01,S03,S04 C14,C05 stopped at 09:00
2009, Jan 16 21:00 15.5 27 C01,S02,S04 C04,C05 stopped at 09:00
2009, Jan 17 21:00 15.5 29 C01 C05 stopped at 06:00
2009, Jan 18 21:00 16.5 26 C04,E02,E03,E04 C05 stopped at 09:00
2009, Jan 19 22:00 16.5 29 C04
2009, Jan 20 21:00 13.5 29 C01 20min power failure at 06:30
2009, Jan 21 21:30 13.0 29 S03 Power failure after 06:00
2010, May 17 16:00 10.0 26 C12,W01,E06,E05
2010, May 18 17:00 10.0 25 C11,C12,S04,E05,W01 E05 stopped at 00:00
2010, May 19 18:45 10.5 25 C12,E05,C05,E03,E06 40min power failure at 22:10
2010, Jun 4 13:00 12.0 28 W03,W05
Survey Strategy
---------------
The H-ATLAS/GAMA regions that have been observed by the *Herschel Space Observatory* and are followed up in our GMRT survey are made up of three separate fields on the celestial equator. The three fields are centered at 9 h, 12 h, and 14.5 h right ascension (RA) and each spans approximately 12 deg in RA and 3 deg in declination to cover a total of 108 deg$^2$ (36 deg$^2$ per field). The Full Width at Half Maximum (FWHM) of the primary beam of the GMRT at 325 MHz is 84 arcmin. In order to cover each H-ATLAS/GAMA field as uniformly and efficiently as possible, we spaced the pointings in a hexagonal grid separated by 42 arcmin. An example of our adopted pointing pattern is shown in Fig. \[pointings\]; each field is covered by 96 pointings, with 288 pointings in the complete survey.
![The 96 hexagonal GMRT pointings for the 9-h H-ATLAS/GAMA fields. The pointing strategy for the 12- and 14.5-h fields is similar. The dark grey ellipses (circles on the sky) show the 42-arcmin region at the centre of each pointing; the light grey ellipses (circles) show the 84-arcmin primary beam.[]{data-label="pointings"}](gamma9hr.png){width="\linewidth"}
Observations
------------
{width="\textwidth"} {width="\textwidth"} {width="\textwidth"}
Observations were carried out in three runs in Jan 2009 (8 nights) and in May 2010 (3 nights) and in June 2010 (1 night). Table \[obssummary\] gives an overview of each night’s observing. On each night as many as 5 of the 30 GMRT antennas could be offline for various reasons, including being painted or problems with the hardware backend. On two separate occasions (Jan 20 and May 19) power outages at the telescope required us to stop observing, and on one further occasion on Jan 21 a power outage affected all the GMRT baselines outside the central square. Data taken during the Jan 21 power outage were later discarded.
Each night’s observing consisted of a continuous block of 10-14 h beginning in the late afternoon or early evening and running through the night. Night-time observations were chosen so as to minimise ionospheric variations. We used the GMRT with its default parameters at 325 MHz and its hardware backend (GMRT Hardware Backend; GHB): two 16-MHz sidebands (Upper Sideband, USB, and Lower Sideband, LSB) on either side of 325 MHz, each with 128 channels. The integration time was set to 16.7 s.
The flux calibrators 3C147 and 3C286 were observed for 10 minutes at the beginning and towards the end of each night’s observing. We assumed 325-MHz flux densities of 46.07 Jy for 3C147 and 24.53 Jy for 3C286, using the standard VLA (2010) model provided by the [AIPS]{} task [SETJY]{}. Typically the observing on each night was divided into 3 $\sim4-5$-h sections, concentrating on each of the 3 separate fields in order of increasing RA. The 9-h and 12-h fields were completely covered in the Jan 2009 run and we carried out as many observations of the 14.5-h field as possible during the remaining nights in May and June 2010. The resulting coverage of the sky, after data affected by power outages or other instrumental effects had been taken into account, is shown in Fig. \[noisemaps\], together with an indication of the relationship between our sky coverage and that of GAMA and H-ATLAS.
Each pointing was observed for a total of 15 minutes in two 7.5-min scans, with each scan producing $\sim 26$ records using the specified integration time. The two scans on each pointing were always separated by as close to 6 h in hour angle as possible so as to maximize the $uv$ coverage for each pointing. The $uv$ coverage and the dirty beam of a typical pointing, observed in two scans with an hour-angle separation of 3.5 h, is shown in Fig. \[uvcoverage\].
Phase Calibrators
-----------------
One phase calibrator near to each field was chosen and was verified to have stable phases and amplitudes on the first night’s observing. All subsequent observations used the same phase calibrator, and these calibrators were monitored continuously during the observing to ensure that their phases and amplitudes remained stable. The positions and flux densities of the phase calibrators for each field are listed in Table \[phasecals\]. Although there are no 325-MHz observations of the three phase calibrators in the literature, we estimated the 325-MHz flux densities listed in the table by extrapolating their measured flux densities from the 365-MHz Texas survey [@texassurvey] to 325 MHz, assuming a spectral index of $\alpha=-0.8$[^2].
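The extrapolation from the Texas 365-MHz flux densities is a simple power-law scaling (a minimal sketch; the Texas input values themselves are not reproduced here, so the function name and arguments are ours):

```python
def extrapolate_flux(s_ref, nu_ref, nu, alpha=-0.8):
    """Power-law extrapolation S(nu) = S_ref * (nu / nu_ref)**alpha.

    With alpha = -0.8, scaling from 365 MHz down to 325 MHz increases the
    flux density by a factor of (325/365)**-0.8, i.e. roughly 10 per cent.
    """
    return s_ref * (nu / nu_ref) ** alpha
```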
Each 7.5-minute scan on source was interleaved with a 2.5-minute scan on the phase calibrator in order to monitor phase and amplitude fluctuations of the telescope, which could vary significantly during an evening’s observing. During data reduction we discovered that the phase calibrator for the 14.5-h field (PHC00) was significantly resolved on scales of $\sim 10$ arcsec. It was therefore necessary to flag all of the data at $uv$ distance $>20$ k$\lambda$ from the 14.5-h field. This resulted in degraded resolution and sensitivity in the 14.5-h field, which will be discussed in later sections of this paper.
During observing the phases and amplitudes of the phase calibrator measured on each baseline were monitored. The amplitudes typically varied smoothly by $<30$ per cent in amplitude for the working long baselines and by $<10$ per cent for the working short baselines. We can attribute some of this effect to variations in the system temperature, but since the effects are larger on long baselines it may be that slight resolution of the calibrators is also involved. Phase variations on short to medium baselines were of the order of tens of degrees per hour, presumably due to ionospheric effects. On several occasions some baselines showed larger phase and amplitude variations, and these data were discarded during the data reduction.
------------ --------- --------------- --------------- ----------------------
Calibrator Field RA (J2000) Dec. (J2000) $S_{325\,{\rm MHz}}$
Name *hh mm ss.ss* *dd mm ss.ss* Jy
PHA00 9-hr 08 15 27.81 -03 08 26.51 9.3
PHB00 12-hr 11 41 08.24 +01 14 17.47 6.5
PHC00 14.5-hr 15 12 25.35 +01 21 08.64 6.7
------------ --------- --------------- --------------- ----------------------
: The phase calibrators for the three fields.[]{data-label="phasecals"}
The Data Reduction Pipeline
===========================
The data handling was carried out using an automated calibration and imaging pipeline. The pipeline is based on [python]{}, [aips]{} and [ParselTongue]{} (Greisen 1990; Kettenis 2006) and has been specially developed to handle GMRT data. The pipeline performs a full cycle of data calibration, including automatic flagging, delay corrections, absolute amplitude calibration, bandpass calibration, a multi-facet self-calibration process, cataloguing, and evaluating the final catalogue. A full description of the GMRT pipeline and the calibration will be provided elsewhere (Klöckner in prep.).
Flagging
--------
The quality of the GMRT data varies significantly over time; in particular, some scans had large variations in amplitude and/or phase over short time periods, presumably due either to instrumental problems or to strong ionospheric effects. The phases and amplitudes on each baseline were therefore initially inspected manually and any scans with obvious problems were excluded prior to running the automated flagging procedures. Non-working antennas listed in Table \[obssummary\] were also discarded at this stage. Finally, the first and last 10 channels of the data were removed as the data quality was usually poor at the beginning and end of the bandpass.
After the initial hand-flagging of the most seriously affected data, an automated flagging routine was run on the remaining data. The automatic flagging checked each scan on each baseline: a 2D polynomial was fitted to the spectrum and subtracted from it, and visibilities $>3\sigma$ from the mean of the background-subtracted data were flagged. Various kernels were then applied to the data, which was again clipped at $3\sigma$, and the spectra were gradient-filtered and flagged to exclude values $>3\sigma$ from the mean. In addition, all visibilities $>3\sigma$ from the gravitational centre of the real-imaginary plane were discarded. Finally, after all flags had been applied, any time or channel in the scan which had had $>40$ per cent of its visibilities flagged was completely removed.
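Two ingredients of the automatic flagger, the iterative $3\sigma$ clip and the final per-channel cut, can be sketched as follows (illustrative only; the real pipeline also fits and subtracts a 2D polynomial and applies smoothing kernels, and the array shapes and function names here are our assumptions):

```python
import numpy as np

def sigma_clip_flags(vis, sigma=3.0, n_iter=3):
    """Return a boolean mask of visibility amplitudes more than `sigma`
    standard deviations from the mean, clipping iteratively so that strong
    outliers do not inflate the statistics."""
    vis = np.asarray(vis, dtype=float)
    flags = np.zeros(vis.shape, dtype=bool)
    for _ in range(n_iter):
        good = ~flags
        mu, sd = vis[good].mean(), vis[good].std()
        flags |= np.abs(vis - mu) > sigma * sd
    return flags

def drop_bad_channels(flags, frac=0.4):
    """Completely flag any channel with more than `frac` of its visibilities
    already flagged (flags shaped as time x channel)."""
    bad = flags.mean(axis=0) > frac
    flags[:, bad] = True
    return flags
```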
On average, 60 per cent of a night’s data was retained after all hand and automated flagging had been performed. However at times particularly affected by Radio Frequency Interference (RFI) as little as 20 per cent of the data might be retained. A few scans ($\sim10$ per cent) were discarded completely due to excessive RFI during their observation.
Calibration and Imaging {#imagepipe}
-----------------------
After automated flagging, delay corrections were determined via the [aips]{} task [FRING]{} and the automated flagging was repeated on the delay-corrected data. Absolute amplitude calibration was then performed on the flagged and delay-corrected dataset, using the [aips]{} task [SETJY]{}. The [aips]{} calibration routine [CALIB]{} was then run on channel 30, which was found to be stable across all the different nights’ observing, to determine solutions for the phase calibrator. The [aips]{} task [GETJY]{} was used to estimate the flux density of the phase-calibrator source (which was later checked to be consistent with other catalogued flux densities for this source, as shown in Table \[phasecals\]). The bandpass calibration was then determined with [BPASS]{} using the cross-correlation of the phase calibrator. Next, all calibration and bandpass solutions were applied to the data for the phase calibrator and the amplitude and phase versus $uv$-distance plots were checked to ensure the calibration had succeeded.
The calibration solutions of the phase-calibrator source were then applied to the target pointing, and a multi-facet imaging and phase self-calibration process was carried out in order to increase the image sensitivity. To account for the contributions of the $w$-term in the imaging and self-calibration process the field of view was divided into sub-images; the task [SETFC]{} was used to produce the facets. The corrections in phase were determined using a sequence of decreasing solution intervals starting at 15 min and ending at 3 min (15, 7, 5, 3). At each self-calibration step a local sky model was determined by selecting clean components above $5\sigma$ and performing a model fit of a single Gaussian in the image plane using [SAD]{}. The number of clean components used in the first self-calibration step was 50, and with each self-calibration step the number of clean components was increased by 100.
After applying the solutions from the self-calibration process, the task [IMAGR]{} was used to produce the final sub-images. These images were then merged into the final image via the task [FLATN]{}, which combines all facets and performs a primary beam correction. The parameters used in [FLATN]{} to account for the contribution of the primary beam (the scaled coefficients of a polynomial in the off-axis distance) were: -3.397, 47.192, -30.931, 7.803.
Cataloguing {#catadesc}
-----------
The LSB and USB images that were produced by the automated imaging pipeline were subsequently run through a cataloguing routine. As well as producing source catalogues for the survey, the cataloguing routine also compared the positions and flux densities measured in each image with published values from the NVSS and FIRST surveys as a figure-of-merit for the output of the imaging pipeline. This allowed the output of the imaging pipeline to be quickly assessed; the calibration and imaging could subsequently be re-run with tweaked parameters if necessary.
The cataloguing procedure first determined a global rms noise ($\sigma_{\rm global}$) in the input image by running [IMEAN]{} to fit the noise part of the pixel histogram in the central 50 per cent of the (non-primary-beam-corrected) image. In order to minimise any contribution from source pixels to the calculation of the image rms, [IMEAN]{} was run iteratively using the mean and rms measured from the previous iteration until the measured noise mean changed by less than 1 per cent.
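The iterative noise estimate can be sketched as follows (an illustrative stand-in for the repeated [IMEAN]{} fits; we use a convergence test of 1 per cent of the rms rather than of the mean, since the clipped mean itself can be arbitrarily close to zero):

```python
import numpy as np

def iterate_rms(pixels, clip=3.0, tol=0.01, max_iter=20):
    """Iteratively estimate the noise mean and rms of an image.

    Pixels further than `clip` sigma from the current mean are excluded and
    the statistics recomputed; iteration stops when the mean changes by less
    than tol * rms (our variant of the paper's 1 per cent criterion).
    """
    pixels = np.asarray(pixels, dtype=float)
    mu, sd = pixels.mean(), pixels.std()
    for _ in range(max_iter):
        good = np.abs(pixels - mu) < clip * sd
        new_mu, new_sd = pixels[good].mean(), pixels[good].std()
        if abs(new_mu - mu) < tol * new_sd:
            return new_mu, new_sd
        mu, sd = new_mu, new_sd
    return mu, sd
```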
The limited dynamic range of the GMRT images and errors in calibration can cause noise peaks close to bright sources to be fitted in a basic flux-limited cataloguing procedure. We therefore modelled the background noise variation in the image as follows:
1. Isolated point sources brighter than $100\sigma_{\rm global}$ were found using [SAD]{}. An increase in local source density around these bright sources is caused by noise peaks and artefacts close to them. Therefore, to determine the area around each bright source that has increased noise and artefacts, the source density of $3\sigma_{\rm global}$ sources as a function of radius from the bright source position was determined. The radius at which the local source density is equal to the global source density of all $3\sigma_{\rm global}$ sources in the image was then taken as the radius of increased noise around bright sources.
2. To model the increased noise around bright sources a *local* dynamic range was found by determining the ratio of the flux density of each $100\sigma_{\rm global}$ bright source to the brightest $3\sigma_{\rm global}$ source within the radius determined in step (i). The median value of the local dynamic range for all $100\sigma_{\rm global}$ sources in the image was taken to be the local dynamic range. This median local dynamic range determination prevents moderately bright sources close to the $100\sigma$ source from being rejected, which would happen if *all* sources within the computed radius close to bright sources were rejected.
3. A local rms ($\sigma_{\rm local}$) map was made from the input image using the task [RMSD]{}. This calculates the rms of pixels in a box of $5$ times the major axis width of the restoring beam and was computed for each pixel in the input image. [RMSD]{} iterates its rms determination 30 times and the computed histogram is clipped at $3\sigma$ on each iteration to remove the contribution of source data to the local rms determination.
4. We then added to this local rms map a Gaussian at the position of each $100\sigma_{\rm global}$ source, with width determined from the radius of the local increased source density from step (i) and peak determined from the median local dynamic range from step (ii).
5. A local mean map was constructed in a manner similar to that described in step (iii).
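Step (iv) can be illustrated as follows (a sketch under the assumption that the radius of increased source density is treated as the FWHM of the added Gaussian; the pipeline's exact convention is not specified, and the function name and argument layout are ours):

```python
import numpy as np

def add_bright_source_pedestals(rms_map, sources, dyn_range, pixscale=2.0):
    """Add a Gaussian noise pedestal to the local rms map around each
    >100-sigma source (step iv).

    `sources` is a list of (x, y, peak_flux, radius_arcsec); the pedestal
    peak is peak_flux / dyn_range (the median local dynamic range from
    step ii) and its FWHM is the radius of increased source density from
    step i.
    """
    ny, nx = rms_map.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    out = rms_map.copy()
    for x0, y0, peak, radius_arcsec in sources:
        sigma_pix = radius_arcsec / pixscale / 2.355  # FWHM -> Gaussian sigma
        r2 = (xx - x0) ** 2 + (yy - y0) ** 2
        out += (peak / dyn_range) * np.exp(-r2 / (2.0 * sigma_pix ** 2))
    return out
```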
Once local rms and mean maps had been produced, the input map was mean-subtracted and divided by the rms model. This image was then run through the [SAD]{} task to find the positions and sizes of all $5\sigma_{\rm local}$ peaks. Elliptical Gaussians were fitted to the source positions using [JMFIT]{} (with peak flux density as the only free parameter) on the original input image to determine the peak and total flux density of each source. Errors in the final fitted parameters were determined using the equations in @c97 (with $\sigma_{\rm local}$ as the rms), adding an estimated $5$ per cent GMRT calibration uncertainty in quadrature.
Once a final $5\sigma$ catalogue had been produced from the input image, the sources were compared to positions and flux densities from known surveys that overlap with the GMRT pointing (i.e., FIRST and NVSS) as a test of the image quality and the success of the calibration. Any possible systematic position offset in the catalogue was computed by comparing the positions of $>15\sigma$ point sources to their counterparts in the FIRST survey (these are known to be accurate to better than 0.1 arcsec [@first]). For this comparison, a point source was defined as being one whose fitted size is smaller than the restoring beam plus 2.33 times the error in the fitted size (98 per cent confidence), as was done in the NVSS and SUMSS surveys [@nvss; @sumss].
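The point-source criterion used in the position comparison reduces to a simple test on both fitted axes (sketch; argument names are ours):

```python
def is_point_source(fit_maj, fit_min, err_maj, err_min, beam_maj, beam_min):
    """NVSS/SUMSS-style point-source test: a source counts as unresolved if
    its fitted size is smaller than the restoring beam plus 2.33 times the
    fitted-size uncertainty (98 per cent confidence) on both axes."""
    return (fit_maj < beam_maj + 2.33 * err_maj and
            fit_min < beam_min + 2.33 * err_min)
```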
The flux densities of all catalogue sources were compared to the flux densities of sources from the NVSS survey. At the position of each NVSS source in the image area, the measured flux densities of each GMRT source within the NVSS source area were summed and then converted from 325 MHz to 1.4 GHz assuming a spectral index of $\alpha=-0.7$. We chose $\alpha=-0.7$ because it is the median spectral index of radio sources between 843 MHz and 1.4 GHz found by comparing the SUMSS and NVSS surveys [@sumss]; it should therefore serve to indicate whether any large and systematic offsets can be seen in the distribution of measured flux densities of the GMRT sources.
Mosaicing
---------
The images from the upper and lower sidebands of the GMRT that had been output from the imaging pipeline described in Section \[imagepipe\] were then coadded to produce uniform mosaics. In order to remove the effects of the increased noise at the edges of each pointing due to the primary beam and produce a survey as uniform as possible in sensitivity and resolution across each field, all neighbouring pointings within 80 arcmin of each pointing were co-added to produce a mosaic image of $100\times100$ arcmin. This section describes the mosaicing process in detail, including the combination of the data from the two sidebands.
![The offsets in RA and declination between $15\sigma$ point sources in the GMRT survey that are detected in the FIRST survey. Each point in the plot is the median offset for all sources in an entire field. The error bars in the bottom right of the Figure show the rms in RA and declination from Fig. \[finaloffs\].[]{data-label="posnoffsets"}](medposnoffs.pdf){width="\linewidth"}
### Combining USB+LSB data
We were unable to achieve improved signal-to-noise in images produced by coadding the data from the two GMRT sidebands in the $uv$ plane, so we instead chose to image the USB and LSB data separately and then subsequently co-add the data in the image plane, which always produced output images with improved sensitivity. During the process of co-adding the USB and LSB images, we regridded all of them to a 2 arcsec pixel scale using the [aips]{} task [regrid]{}, shifted the individual images to remove any systematic position offsets, and smoothed the images to a uniform beam shape across each of the three survey fields.
Fig. \[posnoffsets\] shows the distribution of the median offsets between the GMRT and FIRST positions of all $>15\sigma$ point sources in each USB pointing output from the pipeline. The offsets measured for the LSB were always within 0.5 arcsec of the corresponding USB pointing. These offsets were calculated for each pointing using the method described in Section \[catadesc\] as part of the standard pipeline cataloguing routine. As the Figure shows, there was a significant distribution of non-zero positional offsets between our images and the FIRST data, which was usually larger than the scatter in the offsets measured per pointing (shown as an error bar on the bottom right of the figure). It is likely that these offsets are caused by ionospheric phase errors, which will largely be refractive at 325 MHz for the GMRT. Neighbouring images in the survey can have significantly different FIRST-GMRT position offsets, and coadding these during the mosaicing process may result in spurious radio-source structures and flux densities in the final mosaics. Because of this, the measured offsets were all removed using the [aips]{} task [shift]{} before producing the final coadded USB+LSB images.
![The distribution of the raw clean beam major axis FWHM in each of the three H-ATLAS/GAMA fields from the USB+LSB images output from the imaging pipeline. The dotted line shows the width of the convolving beam used before the mosaicing process. Images with raw clean beam larger than our adopted cutoffs have been discarded from the final dataset.[]{data-label="beamsizes"}](BMAJ.pdf){width="\linewidth"}
Next, the USB+LSB images were convolved to the same resolution before they were co-added; the convolution minimises artefacts resulting from different source structures at different resolutions, and in any case is required to allow flux densities to be measured from the resulting co-added maps. Fig. \[beamsizes\] shows the distribution of restoring-beam major axes in the images output from the GMRT pipeline. The beam minor axis was always better than 12 arcsec in the three surveyed fields. In the 9-h and 12-h fields, the majority of images had better than 10-arcsec resolution. However, roughly 10 per cent of them are significantly worse; this can happen for various reasons but is mainly caused by the poor $uv$ coverage produced by the $2\times7.5$-minute scans on each pointing. Often, due to scheduling constraints, the two scans were observed immediately after one another rather than separated by 6 h, which limits the distribution of visibilities in the $uv$ plane. In addition, when even a few of the longer baselines are flagged due to interference or have problems during their calibration, the resulting image resolution can be degraded.
The distribution of restoring beam major axes in the 14.5-h field is much broader. This is because of the problems with the phase calibrator outlined in Section \[phasecals\]. All visibilities in excess of $20$ k$\lambda$ were removed during calibration of the 14.5-h field and this resulted in degraded image resolution.
The dotted lines in Fig. \[beamsizes\] show the width of the beam used to convolve the images for each of the fields before coadding USB+LSB images. We have used a resolution of 14 arcsec for the 9-h field, 15 arcsec for the 12-h field and 23.5 arcsec for the 14.5-h field. Images with lower resolution than these were discarded from the final data at this stage. Individual USB and LSB images output from the self-calibration step of the pipeline were smoothed to a circular beam using the [aips]{} task [convl]{}.
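For circular beams, the width of the required convolving kernel follows from the fact that Gaussian widths add in quadrature (a circular-beam sketch; [convl]{} handles the general elliptical case):

```python
import math

def convolving_kernel_fwhm(target_fwhm, raw_fwhm):
    """FWHM of the circular Gaussian kernel needed to smooth an image with a
    raw (circular) beam of `raw_fwhm` to the target resolution: the raw beam
    and the kernel add in quadrature to give the target beam."""
    if raw_fwhm > target_fwhm:
        raise ValueError("cannot smooth to a higher resolution")
    return math.sqrt(target_fwhm ** 2 - raw_fwhm ** 2)
```

For example, a 10-arcsec image in the 9-h field would be smoothed to the 14-arcsec survey resolution with a $\sqrt{14^2-10^2}\approx9.8$-arcsec kernel.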
After smoothing, regridding and shifting the USB+LSB images, they were combined after being weighted by their individual variances, which were computed from the square of the local rms image measured during the cataloguing process. The combined USB+LSB images have all pixels within 30 arcsec of their edge blanked in order to remove any residual edge effects from the regridding, position-shifting and smoothing process.
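The variance weighting amounts to an inverse-variance co-addition in the image plane (an illustrative sketch; the function name is ours, and the same scheme applies later in the mosaicing step):

```python
import numpy as np

def variance_weighted_coadd(images, rms_maps):
    """Co-add images pixel-by-pixel with inverse-variance weights 1/rms**2.

    Each input image is weighted by the inverse square of its local rms map,
    so that noisier data contribute less to the combined image."""
    weights = [1.0 / np.asarray(r, dtype=float) ** 2 for r in rms_maps]
    wsum = np.sum(weights, axis=0)
    num = np.sum([np.asarray(im) * w for im, w in zip(images, weights)], axis=0)
    return num / wsum
```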
### Producing the final mosaics
The combined USB+LSB images were then combined with all neighbouring coadded USB+LSB images within 80 arcmin of their pointing center. This removes the effects at the edges of the individual pointings caused by the primary beam correction and improves image sensitivity in the overlap regions. The final data product consists of one combined mosaic for each original GMRT pointing, and therefore the user should note that there is significant overlap between each mosaic image.
Each combined mosaic image has a width of $100\times100$ arcmin and 2 arcsec pixels. They were produced from all neighbouring images with pointing centers within 80 arcmin. Each of these individual images was then regridded onto the pixels of the output mosaic. The [aips]{} task [rmsd]{} was run in the same way as described during the cataloguing (i.e. with a box size of 5 times the major axis of the smoothed beam) on the regridded images to produce local rms noise maps. The noise maps were smoothed with a Gaussian with a FWHM of 3 arcmin to remove any small-scale variation in them. These smoothed noise maps were then used to create variance weight maps (from the sum of the squares of the individual noise maps) which were then in turn multiplied by each regridded input image. Finally, the weighted input images were added together.
The final source catalogue for each pointing was produced as described above from the fully weighted and mosaiced images.
Data Products
=============
The primary data products from the GMRT survey are a set of FITS images (one for each GMRT pointing that has not been discarded during the pipeline reduction process) overlapping the H-ATLAS/GAMA fields, the $5\sigma$ source catalogues and a list of the image central positions.[^3] This section briefly describes the imaging data and the format of the full catalogues.
Images
------
{width="\textwidth"}
An example of a uniform mosaic image output from the full pipeline is shown in Fig. \[eximage\].
In each field some of the 96 originally observed pointings had to be discarded for various reasons that have been outlined in the previous sections. The full released data set comprises 80 pointings in the 9-h field, 61 pointings in the 12-h field and 71 pointings in the 14.5-h field. In total 76 out of the 288 original pointings were rejected. In roughly 50 per cent of cases they were rejected because of the cutoff in beam size shown in Fig. \[beamsizes\], while in the other 50 per cent of cases the $2\times7.5$-minute scans of the pointing were completely flagged due to interference or other problems with the GMRT during observing. The full imaging dataset from the survey comprises a set of mosaics like the one pictured in Fig. \[eximage\], one for each of the non-rejected pointings.
Catalogue
---------
----------- ---------------- ---------------- ------------- ------------ -------------- ------- ------------- ------- ------------- -------- ------ ------ ------------- ------------- ------ ---------------- ----------
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15) (16) (17) (18)
RA Dec. RA Dec. $\Delta$RA $\Delta$Dec. $A$ ${\Delta}A$ $S$ ${\Delta}S$ Maj Min PA $\Delta$Maj $\Delta$Min PA Local $\sigma$ Pointing
$hh$ $mm$ $ss$ $dd$ $mm$ $ss$ $^\circ$ $^\circ$ mJy/bm
130.87617 -00.22886 08 43 30.28 -00 13 43.9 2.5 1.3 6.2 1.1 15.0 3.8 —- —- —– —- —- – 1.1 PNA02
130.87746 +02.11494 08 43 30.59 +02 06 53.8 2.3 1.6 12.1 1.3 41.2 5.6 —- —- —– —- —- – 1.4 PNA67
130.88025 +00.48630 08 43 31.26 +00 29 10.7 0.5 0.3 129.7 4.7 153.6 6.7 15.9 14.6 13.8 0.5 0.4 9 2.6 PNA51
130.88525 +00.49582 08 43 32.46 +00 29 45.0 0.5 0.4 122.2 4.6 152.1 7.0 —- —- —– —- —- – 2.6 PNA51
130.88671 -00.24776 08 43 32.81 -00 14 51.9 0.5 0.3 106.1 3.3 171.3 5.3 21.1 15.0 80.8 0.5 0.4 1 1.0 PNA02
130.88817 -00.89953 08 43 33.16 -00 53 58.3 0.6 0.3 59.6 2.2 118.8 5.0 26.4 14.8 87.5 0.9 0.6 1 1.2 PNA03
130.89171 -00.24660 08 43 34.01 -00 14 47.8 0.5 0.4 34.6 1.5 38.5 2.2 —- —- —– —- —- – 1.0 PNA02
130.89279 -00.12352 08 43 34.27 -00 07 24.7 1.2 1.1 4.6 0.8 4.7 1.5 —- —- —– —- —- – 0.8 PNA35
130.89971 -00.91813 08 43 35.93 -00 55 05.3 2.4 1.0 7.9 1.5 13.9 3.9 —- —- —– —- —- – 1.4 PNA03
130.90150 -00.01532 08 43 36.36 -00 00 55.1 0.9 0.8 6.6 0.8 6.6 1.4 —- —- —– —- —- – 0.8 PNA03
----------- ---------------- ---------------- ------------- ------------ -------------- ------- ------------- ------- ------------- -------- ------ ------ ------------- ------------- ------ ---------------- ----------
Final catalogues were produced from the mosaiced images using the catalogue procedure described in Section \[catadesc\]. The catalogues from each mosaic image were then combined into 3 full catalogues covering each of the 9-h, 12-h, and 14.5-h fields. The mosaic images overlap by about 60 per cent in both RA and declination, so duplicate sources in the full list were removed by finding all matches within 15 arcsec of each other and selecting the duplicate source with the lowest local rms ($\sigma_{\rm local}$) from the full catalogue; this ensures that the catalogue is based on the best available image of each source. Removing duplicates reduced the total size of the full catalogue by about 75 per cent due to the amount of overlap between the final mosaics.
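The duplicate-removal step can be sketched as a greedy pass over the catalogue in order of increasing local rms (illustrative only; the catalogue field names and function name are ours):

```python
import math

def remove_duplicates(sources, radius_arcsec=15.0):
    """Remove duplicate detections from overlapping mosaics.

    `sources` are dicts with 'ra', 'dec' (degrees) and 'rms' (local rms).
    Sources are visited in order of increasing rms, so among any group of
    matches within `radius_arcsec` the detection from the best (lowest-rms)
    image is the one kept.
    """
    kept = []
    for src in sorted(sources, key=lambda s: s["rms"]):
        dup = False
        for k in kept:
            dra = (src["ra"] - k["ra"]) * math.cos(math.radians(src["dec"]))
            ddec = src["dec"] - k["dec"]
            if math.hypot(dra, ddec) * 3600.0 < radius_arcsec:
                dup = True
                break
        if not dup:
            kept.append(src)
    return kept
```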
The resulting full catalogues contain 5263 sources brighter than the local $5\sigma$ limit. 2628 of these are in the 9-h field, 1620 in the 12-h field and 1015 in the 14.5-h field. Table \[catexample\] shows 10 random lines of the output catalogue sorted by RA. A short description of each of the columns of the catalogue follows:
Columns (1) and (2): The J2000 RA and declination of the source in decimal degrees (the examples given in Table \[catexample\] have reduced precision for layout reasons).
Columns (3) and (4): The J2000 RA and declination of the source in sexagesimal coordinates.
Columns (5) and (6): The errors in the quoted RA and declination in arcsec. This is calculated from the quadratic sum of the calibration uncertainty, described in Section \[positioncal\], and the fitting uncertainty, calculated using the equations given by @c97.
Columns (7) and (8): The fitted peak brightness in units of mJy beam$^{-1}$ and its associated uncertainty, calculated from the quadratic sum of the fitting uncertainty from the equations given by @c97 and the estimated 5 per cent flux calibration uncertainty of the GMRT. The raw brightness measured from the image has been increased by 0.9 mJy beam$^{-1}$ to account for the effects of clean bias (see Section \[sec:flux\]).
Columns (9) and (10): The total flux density of the source in mJy and its uncertainty calculated from equations given by @c97. This equals the fitted peak brightness if the source is unresolved.
Columns (11), (12) and (13): The major axis FWHM (in arcsec), minor axis FWHM (in arcsec) and position angle (in degrees east of north) of the fitted elliptical Gaussian. The position angle is only meaningful for sources that are resolved (i.e. when the fitted Gaussian is larger than the restoring beam for the relevant field). As discussed in Section \[sec:sizes\], fitted sizes are only quoted for sources that are moderately resolved in their minor axis.
Columns (14), (15) and (16): The fitting uncertainties in the size parameters of the fitted elliptical Gaussian calculated using equations from @c97.
Column (17): The *local* rms noise ($\sigma_{\rm local}$) in mJy beam$^{-1}$ at the source position calculated as described in Section \[catadesc\]. The *local* rms is used to determine the source signal-to-noise ratio, which is used to determine fitting uncertainties.
Column (18): The name of the GMRT mosaic image containing the source. These names consist of the letters PN, a letter A, B or C indicating the 9-, 12- or 14.5-h fields respectively, and a number between 01 and 96 which gives the pointing number within that field (see Fig. \[pointings\]).
Data Quality
============
The quality of the data over the three fields varies considerably, due in part to the different phase and flux calibration sources used for each field and in part to the variable observing conditions over the different nights. In particular, on each night the data taken in the first half of the night were much more stable than those taken in the second half and early morning. Power outages at the telescope contributed to this, as did ionospheric variation, particularly around sunrise. Furthermore, as described in Section \[mosaicing\], the poor phase calibrator in the 14.5-h field has resulted in degraded resolution and sensitivity.
Image noise {#sec:noise}
-----------
![The rms noise measured in the central 1000 pixels of each image plotted against the square root of the number of visibilities. Outliers from the locus are produced by the increased noise in images around sources brighter than 1 Jy.[]{data-label="rmsnvis"}](rmsnvis.pdf){width="\linewidth"}
Fig. \[rmsnvis\] shows the distribution of the rms noise measured within a radius of 1000 pixels in the individual GMRT images immediately after the self-calibration stage of the pipeline, plotted against the number of visibilities that have contributed to the final image (which can be seen as a proxy for the effective integration time after flagging). The rms in the individual fields varies from $\sim 1$ mJy beam$^{-1}$ in those images with the most visibilities to $\sim 7$ mJy beam$^{-1}$ in the worst case, with the expected trend toward higher rms noise with decreasing number of visibilities. The scatter to higher rms from the locus is caused by residual problems in the calibration and by the presence of bright sources in the primary beam of the reduced images, which can increase the image noise in their vicinity due to the limited dynamic range of the GMRT observations ($\sim1000:1$). A bright 7-Jy source in the 12-h field and a 5-Jy source in the 14.5-h field have both contributed to the increased rms noise measured from some images. On average, the largest number of visibilities was flagged from the 14.5-h field because of the restriction we imposed on the $uv$ range of the data, which has also resulted in higher average noise in that field.
Fig. \[noisemaps\] shows the rms noise maps covering all of the 3 fields. These have been made by averaging the background rms images produced during the cataloguing of the final mosaiced images and smoothing the result with a Gaussian with a FWHM of 3 arcmin to remove edge effects between the individual background images. The rms in the final survey is significantly lower than that measured from the individual images output from the pipeline self-calibration process, which is a consequence of the large amount of overlap between the individual GMRT pointings in our survey strategy (see Fig. \[pointings\]). The background rms is $\sim0.6-0.8$ mJy beam$^{-1}$ in the 9-h field, $\sim0.8-1.0$ mJy beam$^{-1}$ in the 12-h field and $\sim1.5-2.0$ mJy beam$^{-1}$ in the 14.5-h field. Gaps in the coverage are caused by having discarded some pointings in the survey due to power outages at the GMRT, due to discarding scans during flagging as described in Section \[flagging\], and as a result of pointings whose restoring beam was larger than the smoothing width during the mosaicing process (Section \[mosaicing\]).
Flux Densities {#sec:flux}
--------------
The $2\times7.5$-min observations of the GMRT survey sample the $uv$ plane sparsely (see Fig. \[uvcoverage\]), with long radial arms which cause the dirty beam to have large radial sidelobes. These radial sidelobes can be difficult to clean properly during imaging, and clean components subtracted at sidelobe positions when cleaning close to the noise level can cause the average flux density of all point sources in the restored image to be systematically reduced. This “clean bias” is common in “snapshot” radio surveys and was found, for example, in the FIRST and NVSS surveys [@first; @nvss].
We have checked for the presence of clean bias in the GMRT data by inserting 500 point sources into the calibrated $uv$ data at random positions and then re-imaging the modified data with the same parameters as the original pipeline. We find an average difference between the imaged and input peak flux densities of $\Delta S_{\rm peak}=-0.9$ mJy beam$^{-1}$, with no significant difference between the 9-h, 12-h and 14.5-h fields. A constant offset of $0.9$ mJy beam$^{-1}$ has therefore been added to the peak flux densities of all sources in the published catalogues.
As a consistency check for the flux density scale of the survey we can compare the measured flux densities of the phase calibrator sources with those listed in Table \[phasecals\]. Each phase calibrator is imaged using the standard imaging pipeline and its flux density is measured using [sad]{} in [aips]{}. The scatter in the measurements of each phase calibrator over the observing period gives a measure of the accuracy of the flux calibration in the survey. In the 9-h field, the average measured flux density of the phase calibrator PHA00 is 9.5 Jy with rms scatter 0.5 Jy; in the 12-h field, the average measured flux density of PHB00 is 6.8 Jy with rms 0.4 Jy; and in the 14.5-h field the average measured flux density of PHC00 is 6.3 Jy with rms 0.5 Jy. This implies that the flux density scale of the survey is accurate to within $\sim5$ per cent; there is no evidence for any systematic offset in the flux scales.
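The arithmetic behind this check is simply the fractional scatter of the repeated calibrator measurements. A minimal sketch using the numbers quoted above (the dictionary keys are just labels; note that the scatter also contains measurement noise, so these percentages are upper limits on the calibration error):

```python
# Fractional flux-scale accuracy implied by the phase-calibrator monitoring:
# (mean flux density, rms scatter) in Jy, as quoted in the text.
calibrators = {
    "PHA00 (9-h)":    (9.5, 0.5),
    "PHB00 (12-h)":   (6.8, 0.4),
    "PHC00 (14.5-h)": (6.3, 0.5),
}

# Per-field fractional accuracy (per cent) = 100 * rms / mean.
accuracy = {name: 100.0 * rms / mean for name, (mean, rms) in calibrators.items()}
for name, pct in accuracy.items():
    print(f"{name}: flux scale repeatable to {pct:.1f} per cent")
```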
As there are no other 325-MHz data available for the region covered by the GMRT survey, it is difficult to provide any reliable external measure of the absolute quality of the flux calibration. An additional check is provided by a comparison of the spectral index distribution of sources detected in both our survey and the 1.4-GHz NVSS survey. We discuss this comparison further in Section \[sindexsec\].
Positions {#positioncal}
---------
  -------- --------- -------- --------- --------
  Field    RA (arcsec)        Dec. (arcsec)
           median    rms      median    rms
  -------- --------- -------- --------- --------
  9-h      $-0.04$   $0.52$   $0.01$    $0.31$
  12-h     $-0.06$   $0.54$   $0.01$    $0.39$
  14.5-h   $0.30$    $0.72$   $0.26$    $0.54$
  -------- --------- -------- --------- --------
: Median and rms of position offsets between the GMRT and FIRST catalogues.[]{data-label="offsetdata"}
![The offsets in RA and declination between $>15\sigma$ point sources from the GMRT survey that are detected in the FIRST survey. The mean offsets in each pointing shown in Fig. \[posnoffsets\] have been removed. Different point styles are used to denote the three different H-ATLAS/GAMA fields to show the effect of the variation in the resolution of the GMRT data.[]{data-label="finaloffs"}](finaloffs.pdf){width="\linewidth"}
In order to measure the positional accuracy of the survey, we have compared the positions of $>15\sigma$ GMRT point sources with sources from the FIRST survey. Bright point sources in FIRST are known to have positional accuracy better than 0.1 arcsec in RA and declination [@first]. We select point sources using the method outlined in Section \[catadesc\]. Positions are taken from the final GMRT source catalogue, which have had the shifts described in Section \[mosaicing\] removed; the scatter in the measured shifted positions is our means of estimating the calibration accuracy of the positions.
Fig. \[finaloffs\] shows the offsets in RA and declination between the GMRT catalogue and the FIRST survey and Table \[offsetdata\] summarizes the mean offsets and their scatter in the three separate fields. As expected, the mean offset is close to zero in each case, which indicates that the initial image shifts have been correctly applied and that no additional position offsets have appeared in the final mosaicing and cataloguing process. The scatter in the offsets is smallest in the 9-h field and largest in the 14.5-h field, which is due to the increasing size of the restoring beam. The rms of the offsets listed in Table \[offsetdata\] give a measure of the positional calibration uncertainty of the GMRT data; these have been added in quadrature to the fitting error to produce the errors listed in the final catalogues.
Source Sizes {#sec:sizes}
------------
The strong sidelobes in the dirty beam shown in Fig. \[uvcoverage\] extend radially at position angles (PAs) of $40^{\circ}$, $70^{\circ}$ and $140^{\circ}$ and can be as high as 15 per cent of the central peak up to 1 arcmin from it. Improper cleaning of these sidelobes can leave residual radial patterns with a similar structure to the dirty beam in the resulting images. Residual peaks in the dirty beam pattern can also be cleaned (see the discussion of “clean bias” in Section \[sec:flux\]) and this has the effect of enhancing positive and negative peaks in the dirty beam sidelobes, and leaving an imprint of the dirty beam structure in the cleaned images. This effect, coupled with the alternating pattern of positive and negative peaks in the dirty beam structure (see Fig. \[uvcoverage\]), causes sources to appear on ridges of positive flux squeezed between two negative valleys. Therefore, when fitting elliptical Gaussians to even moderately strong sources in the survey these can appear spuriously extended in the direction of the ridge and narrow in the direction of the valleys.
These effects are noticeable in our GMRT images (see, for example, Fig. \[eximage\]) and in the distribution of fitted position angles of sources from the catalogue that appear unresolved in their minor axes (i.e. $\phi_{\rm min}-\theta_{\rm min} < \sigma_{\rm min}$, where $\phi_{\rm min}$ is the fitted minor-axis size, $\theta_{\rm min}$ is the beam minor-axis size and $\sigma_{\rm min}$ is the rms fitting error in the fitted minor-axis size) but moderately resolved in their major axes (i.e. $\phi_{\rm maj}-\theta_{\rm maj} > 2\sigma_{\rm maj}$; defined by analogy with the above). These PAs cluster on average at $65^\circ$ in the 9-hr field, $140^\circ$ in the 12-hr field and at $130^\circ$ in the 14.5-hr field, coincident with the PAs of the radial sidelobes in the dirty beam shown in Fig. \[uvcoverage\]. The fitted PAs of sources that show some resolution in their minor axes (i.e. $\phi_{\rm min}-\theta_{\rm min} > \sigma_{\rm min}$) are randomly distributed between $0^\circ$ and $180^\circ$, as expected of the radio source population. We therefore only quote fitted source sizes and position angles for sources with $\phi_{\rm min}-\theta_{\rm min} > \sigma_{\rm min}$ in the published catalogue.
325-MHz Source Counts {#sec:scounts}
=====================
We have made the widest and deepest survey yet carried out at 325 MHz. It is therefore interesting to see if the behaviour of the source counts at this frequency and flux-density limit differs from extrapolations from other frequencies. We measure the source counts from our GMRT observations using both the catalogues and the rms noise map described in Section \[sec:noise\], such that the area available to a source of a given flux density and signal-to-noise ratio is calculated on an individual basis. We did not attempt to merge individual, separate components of double or multiple sources into single sources in generating the source counts. However, we note that such sources are expected to contribute very little to the overall source counts. Fig. \[fig:scounts\] shows the source counts from our GMRT survey compared to the source count prediction from the Square Kilometre Array Design Study (SKADS) Semi-Empirical Extragalactic (SEX) Simulated Sky [@Wilman08; @Wilman10] and the deep 325 MHz survey of the ELAIS-N1 field by [@Sirothia2009]. Our source counts agree, within the uncertainties, with those measured by [@Sirothia2009], given the expected uncertainties associated with cosmic variance over their relatively small field ($\sim 3$ degree$^{2}$), particularly at the bright end of the source counts.
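The per-source effective-area weighting described above can be sketched as follows (an illustrative numpy implementation with made-up function and variable names, not the survey code): each source is weighted by the inverse of the survey area over which its flux density would exceed the signal-to-noise cut, given the rms noise map.

```python
import numpy as np

def euclidean_counts(flux_jy, rms_map_jy, pix_area_sr, bins_jy, snr_cut=5.0):
    """Euclidean-normalised source counts S^2.5 dN/dS, weighting each source
    by 1/(area over which it is detectable above snr_cut in the rms map)."""
    counts = np.zeros(len(bins_jy) - 1)
    for S in flux_jy:
        # Survey area (sr) where this source would exceed the SNR cut.
        area_sr = np.sum(rms_map_jy < S / snr_cut) * pix_area_sr
        if area_sr == 0:
            continue
        k = np.searchsorted(bins_jy, S) - 1   # flux-density bin index
        if 0 <= k < len(counts):
            counts[k] += 1.0 / area_sr        # sources per steradian
    widths = np.diff(bins_jy)                 # dS per bin
    centres = np.sqrt(bins_jy[:-1] * bins_jy[1:])   # geometric bin centres
    return centres, centres**2.5 * counts / widths  # S^2.5 dN/dS [Jy^1.5 sr^-1]
```

A toy call with a uniform 1 mJy beam$^{-1}$ noise map and a single 10 mJy source illustrates the bookkeeping; real inputs would be the survey catalogue and the noise map of Fig. \[noisemaps\].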
The simulation provides flux densities down to nJy levels at frequencies of 151 MHz, 610 MHz, 1400 MHz, 4860 MHz and 18 GHz. In order to generate the 325-MHz source counts from this simulation we therefore calculate the power-law spectral index between 151 MHz and 610 MHz and thus determine the 325-MHz flux density. We see that the observed source counts agree very well with the simulated source counts from SKADS, although the observed source counts tend to lie slightly above the simulated curve over the 10-200 mJy flux-density range. This could be a sign that the spectral curvature prescription implemented in the simulation may be reducing the flux density at low radio frequencies in moderate redshift sources, where there are very few constraints. In particular, the SKADS simulations do not contain any steep-spectrum ($\alpha_{325}^{1400}<-0.8$) sources, but there is clear evidence for such sources in the current sample (see the following subsection). A full investigation of this is beyond the scope of the current paper, but future observations with LOFAR should be able to confirm or rebut this explanation: we might expect the SKADS source count predictions for LOFAR to be slightly underestimated.
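The two-point power-law interpolation used to place the simulated sources at 325 MHz can be sketched as follows (the example source and its flux densities are invented for illustration):

```python
import numpy as np

def powerlaw_flux(nu_out, nu1, s1, nu2, s2):
    """Flux density at nu_out for a power law S \\propto nu^alpha fitted
    through two measured points (here: 151 and 610 MHz -> 325 MHz)."""
    alpha = np.log(s2 / s1) / np.log(nu2 / nu1)
    return s1 * (nu_out / nu1) ** alpha, alpha

# A made-up simulated source with S_151 = 20 mJy and S_610 = 8 mJy:
s325, alpha = powerlaw_flux(325.0, 151.0, 20e-3, 610.0, 8e-3)
```

By construction the interpolated spectrum passes exactly through both input points, so the same function evaluated at 610 MHz returns the input flux density.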
![The 325-MHz source counts measured from our GMRT survey (filled squares) and from the survey of the ELAIS-N1 field by [@Sirothia2009] (open circles). The solid line shows the predicted source counts from the SKADS simulation [@Wilman08; @Wilman10].[]{data-label="fig:scounts"}](325_scounts.pdf){width="\linewidth"}
Spectral index distribution {#sindexsec}
===========================
In this section we discuss the spectral index distribution of sources in the survey by comparison with the 1.4-GHz NVSS. We do this both as a check of the flux density scale of our GMRT survey (the flux density scale of the NVSS is known to be accurate to better than 2 per cent: @nvss) and as an initial investigation into the properties of the faint 325-MHz radio source population.
In all three fields the GMRT data have a smaller beam than the 45 arcsec resolution of the NVSS. We therefore crossmatched the two surveys by taking all NVSS sources in the three H-ATLAS/GAMA fields and summing the flux densities of the catalogued GMRT radio sources that have positions within the area of the catalogued NVSS source (fitted NVSS source sizes are provided in the ‘fitted’ version of the catalogue [@nvss]). 3951 NVSS radio sources in the fields had at least one GMRT identification; of these, 3349 (85 per cent) had a single GMRT match, and the remainder had multiple GMRT matches. Of the 5263 GMRT radio sources in the survey, 4746 (90 per cent) are identified with NVSS radio sources. (Some of the remainder may be spurious sources, but we expect there to be a population of genuine steep-spectrum objects which are seen in our survey but not in NVSS, particularly in the most sensitive areas of the survey, where the catalogue flux limit approaches 3 mJy.)
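The matching procedure can be sketched as follows (a simplified flat-sky version that approximates the fitted NVSS extent by a circular matching radius; all names and numbers are illustrative, not the actual crossmatching code):

```python
import numpy as np

def match_nvss_gmrt(nvss_pos, nvss_radius, gmrt_pos, gmrt_flux):
    """For each low-resolution NVSS source, sum the flux densities of all
    higher-resolution GMRT components falling within its fitted extent
    (approximated as a circle).  Positions and radii in degrees; flat-sky
    approximation, adequate for small separations."""
    summed = np.zeros(len(nvss_pos))
    nmatch = np.zeros(len(nvss_pos), dtype=int)
    for i, (pos, r) in enumerate(zip(nvss_pos, nvss_radius)):
        sep = np.hypot(gmrt_pos[:, 0] - pos[0], gmrt_pos[:, 1] - pos[1])
        inside = sep < r
        nmatch[i] = inside.sum()           # 1 = single match, >1 = multiple
        summed[i] = gmrt_flux[inside].sum()
    return summed, nmatch
```

Sources with `nmatch == 1` correspond to the single-match case (85 per cent in the actual catalogue), and `nmatch > 1` to NVSS sources resolved into several GMRT components.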
![The spectral index distribution between 1.4-GHz sources from the NVSS and 325-MHz GMRT sources.[]{data-label="sindex"}](si.pdf){width="\linewidth"}
Fig. \[sindex\] shows the measured spectral index distribution ($\alpha$ between 325 MHz and 1.4 GHz) of radio sources from the GMRT survey that are also detected in the NVSS. The distribution has median $\alpha=-0.71$ with an rms scatter of 0.38, which is in good agreement with previously published values of the spectral index at frequencies below 1.4 GHz [@sumss; @debreuck2000; @randall12]. ([@Sirothia2009] find a steeper 325-MHz/1.4-GHz spectral index, with a mean value of 0.83, in their survey of the ELAIS-N1 field, but their low-frequency flux limit is much deeper than ours, so that they probe a different source population, and it is also possible that their use of FIRST rather than NVSS biases their results towards steeper spectral indices.) The rms of the spectral index distribution we obtain increases with decreasing 325-MHz flux density; it increases from 0.36 at $S_{325}>50$ mJy to 0.4 at $S_{325}<15$ mJy. This reflects the increasing uncertainty in flux density for fainter radio sources in both the GMRT and NVSS data.
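With the convention $S \propto \nu^\alpha$ used throughout the paper, the two-point spectral index follows directly from the matched flux densities. A minimal sketch (the example flux densities are invented; a source with this flux ratio lands close to the survey median of $-0.71$):

```python
import numpy as np

def alpha_325_1400(s325, s1400):
    """Two-point spectral index between 325 MHz and 1.4 GHz,
    with the convention S \\propto nu^alpha."""
    return np.log(np.asarray(s1400) / np.asarray(s325)) / np.log(1400.0 / 325.0)

# An illustrative source with S_325 = 20 mJy and S_1400 = 7 mJy:
a = alpha_325_1400(20e-3, 7e-3)
```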
![The distribution of the spectral index measured between 325 MHz and 1.4 GHz as a function of 1.4-GHz flux density. The solid line indicates the spectral index traced by the nominal 5 mJy limit of the 325-MHz data.[]{data-label="sindexflux"}](allsi_14.pdf){width="\linewidth"}
There has been some discussion about the spectral index distribution of low-frequency radio sources, with some authors detecting a flattening of the spectral index distribution below $S_{1.4}=10$ mJy [@prandoni06; @prandoni08; @om08] and others not [@randall12; @ibar09]. It is well established that the 1.4-GHz radio source population mix changes at around 1 mJy, with classical radio-loud AGN dominating above this flux density and star-forming galaxies and fainter radio-AGN dominating below it [@condon+84; @Windhorst+85]. In particular, the AGN population below 10 mJy is known to be more flat-spectrum-core dominated [e.g. @nagar00] and it is therefore expected that some change in the spectral-index distribution should be evident. Fig. \[sindexflux\] shows the variation in 325-MHz to 1.4-GHz spectral index as a function of 1.4-GHz flux density. Our data show little to no variation in median spectral index below 10 mJy, in agreement with the results of [@randall12]. The distribution shows significant populations of steep ($\alpha < -1.3$) and flat ($\alpha > 0$) spectrum radio sources over the entire flux density range, which are potentially interesting populations of radio sources for further study (e.g. in searches for high-$z$ radio galaxies [@hzrg] or flat-spectrum quasars).
Summary
=======
In this paper we have described a 325-MHz radio survey made with the GMRT covering the three equatorial fields centred at 9, 12 and 14.5 h which form part of the sky coverage of [*Herschel*]{}-ATLAS. The data were taken over the period Jan 2009 – Jul 2010 and we have described the pipeline process by which they were flagged, calibrated and imaged.
The final data products comprise 212 images and a source catalogue containing 5263 325-MHz radio sources. These data will be made available via the H-ATLAS (http://www.h-atlas.org/) and GAMA (http://www.gama-survey.org/) online databases. The basic data products are also available at http://gmrt-gama.extragalactic.info/.
The quality of the data varies significantly over the three surveyed fields. The 9-h field data has 14 arcsec resolution and reaches a depth of better than 1 mJy beam$^{-1}$ over most of the survey area, the 12-h field data has 15 arcsec resolution and reaches a depth of $\sim 1$ mJy beam$^{-1}$ and the 14.5-h data has 23.5 arcsec resolution and reaches a depth of $\sim 1.5$ mJy beam$^{-1}$. Positions in the survey are usually better than 0.75 arcsec for brighter point sources, and the flux scale is believed to be better than 5 per cent.
We show that the source counts are in good agreement with the prediction from the SKADS Simulated Skies [@Wilman08; @Wilman10] although there is a tendency for the observed source counts to slightly exceed the predicted counts between 10–100 mJy. This could be a result of excessive curvature in the spectra of radio sources implemented within the SKADS simulation.
We have investigated the spectral index distribution of the 325-MHz radio sources by comparison with the 1.4-GHz NVSS survey. We find that the measured spectral index distribution is in broad agreement with previous determinations at frequencies below 1.4 GHz and find no variation of median spectral index as a function of 1.4-GHz flux density.
The data presented in this paper will complement the already extant multi-wavelength data over the H-ATLAS/GAMA regions and will be made publicly available. These data will thus facilitate detailed study of the properties of sub-mm galaxies detected at sub-GHz radio frequencies in preparation for surveys by LOFAR and, in future, the SKA.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the staff of the GMRT, which made these observations possible. We also thank the referee Jim Condon, whose comments have helped to improve the final version of this paper. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The [*Herschel*]{}-ATLAS is a project with [*Herschel*]{}, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The H-ATLAS website is http://www.h-atlas.org/. This work has made use of the University of Hertfordshire Science and Technology Research Institute high-performance computing facility (http://stri-cluster.herts.ac.uk/).
\[lastpage\]
[^1]: E-mail: txmauch@gmail.com
[^2]: $S \propto \nu^\alpha$
[^3]: Data products are available on line at http://gmrt-gama.extragalactic.info .
Fidel Antonio Vargas
Fidel Antonio Vargas (born 28 July 1992) is a Cuban canoeist who won a silver medal in the K-2 200 m event at the 2015 Pan American Games, together with Reiner Torres. He competed in the individual 200 m at the 2016 Summer Olympics, but failed to reach the final.
References
Category:1992 births
Category:Living people
Category:Cuban male canoeists
Category:Olympic canoeists of Cuba
Category:Canoeists at the 2016 Summer Olympics
Category:Place of birth missing (living people)
Category:Pan American Games medalists in canoeing
Category:Pan American Games silver medalists for Cuba
Category:Canoeists at the 2015 Pan American Games
---
abstract: 'Franson’s Bell experiment with energy-time entanglement \[Phys. Rev. Lett. [**62**]{}, 2205 (1989)\] does not rule out all local hidden variable models. This defect can be exploited to compromise the security of Bell inequality-based quantum cryptography. We introduce a novel Bell experiment using genuine energy-time entanglement, based on a new interferometer, which rules out all local hidden variable models. The scheme is feasible with current technology.'
author:
- Adán Cabello
- Alessandro Rossi
- Giuseppe Vallone
- Francesco De Martini
- Paolo Mataloni
title: 'Proposed Bell Experiment with Genuine Energy-Time Entanglement'
---
Two particles exhibit “energy-time entanglement” when they are emitted at the same time in an energy-conserving process and the essential uncertainty in the time of emission makes two alternative paths that the particles can take indistinguishable. Franson [@Franson89] proposed an experiment to demonstrate the violation of local realism [@Bell64] using energy-time entanglement, based on a formal violation of the Bell Clauser-Horne-Shimony-Holt (CHSH) inequality [@CHSH69]. However, Aerts [*et al.*]{} [@AKLZ99] showed that, even in the ideal case of perfect preparation and perfect detection efficiency, there is a local hidden variable (LHV) model that simulates the results predicted by quantum mechanics for the experiment proposed by Franson [@Franson89]. This model proves that “the Franson experiment does not and cannot violate local realism” and that “\[t\]he reported violations of local realism from Franson experiments [@KVHNC90] have to be reexamined” [@AKLZ99].
Despite this fundamental deficiency, and despite the fact that this defect can be exploited to mount a Trojan-horse attack on Bell inequality-based quantum cryptography [@Larsson02], Franson-type experiments have been extensively used for Bell tests and Bell inequality-based quantum cryptography [@TBZG00], have become standard in quantum optics [@Paul04; @GC08], and a widespread belief is that “the results of experiments with the Franson experiment violate Bell’s inequalities” [@GC08]. This is particularly surprising, given that recent research has emphasized the fundamental role of a (loophole-free) violation of the Bell inequalities in proving the device-independent security of key distribution protocols [@Ekert91], and in detecting entanglement [@HGBL05].
Polarization entanglement can be transformed into energy-time entanglement [@Kwiat95]. However, to our knowledge, there is no single experiment showing a violation of the Bell-CHSH inequality using genuine energy-time entanglement (or “time-bin entanglement” [@BGTZ99]) that cannot be simulated by a LHV model. By “genuine” we mean not obtained by transforming a previous form of entanglement, but created because the essential uncertainty in the time of emission makes two alternative paths indistinguishable.
Because of the above reasons, a single experiment using energy-time entanglement able to rule out all possible LHV models is of particular interest. The aim of this Letter is to describe such an experiment by means of a novel interferometric scheme. The main purpose of the new scheme is not to compete with existing interferometers used for quantum communication in terms of practical usability, but to fix a fundamental defect common to all of them.
We will first describe the Franson Bell-CHSH experiment. Then, we will introduce a LHV model reproducing any conceivable violation of the Bell-CHSH inequality. The model underlines why a Franson-type experiment does not and cannot be used to violate local realism. Then, we will introduce a new two-photon energy-time Bell-CHSH experiment that avoids these problems and can be used for a conclusive Bell test.
[*The Franson Bell-CHSH experiment.—*]{}The setup of a Franson Bell-CHSH experiment is in Fig. \[Fig1\]. The source emits two photons, photon $1$ to the left and photon $2$ to the right. Each of them is fed into an unbalanced interferometer. $BS_i$ are beam splitters and $M_i$ are perfect mirrors. There are two distant observers, Alice on the left and Bob on the right. Alice randomly chooses the phase of the phase shifter $\phi_A$ between $A_0$ and $A_1$, and records the counts in each of her detectors (labeled $a=+1$ and $a=-1$), the detection times, and the phase settings at $t_D-t_I$, where $t_D$ is the detection time and $t_I$ is the time the photon takes to reach the detector from the location of the phase shifter $\phi_A$. Similarly, Bob chooses $\phi_B$ between $B_0$ and $B_1$, and records the counts in each of his detectors (labeled $b=+1$ and $b=-1$), the detection times, and the phase settings. The setup must satisfy four requirements: (I) To have two-photon interference, the emission of the two photons must be simultaneous, the moment of emission unpredictable, and both interferometers identical. If the detections of the two photons are coincident, there is no information about whether both photons took the short paths $S$ or both took the long paths $L$. A simultaneous random emission is achieved in actual experiments by two methods, both based on spontaneous parametric down conversion. In energy-time experiments, a non-linear crystal is pumped continuously by a monochromatic laser so the moment of emission is unpredictable in a temporal window equal to the coherence time of the pump laser. In time-bin experiments, a non-linear crystal is pumped by pulses previously passing through an unbalanced interferometer, so it is the uncertainty of which pulse, the earlier or the later, has caused the emission what provokes the uncertainty in the emission time. In both cases, the simultaneity of the emission is guaranteed by the conservation of energy. 
(II) To prevent single-photon interference, the difference between paths $L$ and $S$, i.e., twice the distance between $BS1$ and $M1$, $\Delta {\cal L}=2 d(BS1,M1)$ (see Fig. \[Fig1\]), must satisfy $\Delta {\cal L} > c t_{\rm coh}$, where $c$ is the speed of light and $t_{\rm coh}$ is the coherence time of the photons. (III) To make distinguishable those events where one photon takes $S$ and the other takes $L$, $\Delta {\cal L}$ must satisfy $\Delta {\cal L} > c \Delta t_{\rm coinc}$, where $\Delta t_{\rm coinc}$ is the duration of the coincidence window. (IV) To prevent the local phase setting on one side from affecting the outcome on the other side, the local phase settings must randomly switch ($\phi_A$ between $A_0$ and $A_1$, and $\phi_B$ between $B_0$ and $B_1$) with a frequency of the order of $c/D$, where $D=d({\rm Source},BS1)$.
The observers record all their data locally and then compare them. If the detectors are perfect they find that
$$\begin{aligned}
P(A_i=+1)=P(A_i=-1)=\frac{1}{2}, \label{Amarginal} \\
P(B_j=+1)=P(B_j=-1)=\frac{1}{2}, \label{Bmarginal}\end{aligned}$$
for $i,j \in \{0,1\}$. $P(A_0=+1)$ is the probability of detecting a photon in the detector $a=+1$ if the setting of $\phi_A$ was $A_0$. They also find $25\%$ of two-photon events in which photon $1$ is detected a time $\Delta {\cal L} /c$ before photon $2$, and $25\%$ of events in which photon $1$ is detected $\Delta {\cal L}/c$ after photon $2$. The observers reject this $50\%$ of events and keep the $50\%$ that are coincident. For these selected events, quantum mechanics predicts that $$P(A_i=a, B_j=b)=\frac{1}{4}\left[1+ab
\cos(\phi_{A_i}+\phi_{B_j})\right], \label{joint}$$ where $a,b \in \{-1,+1\}$ and $\phi_{A_i}$ ($\phi_{B_j}$) is the phase setting corresponding to $A_i$ ($B_j$).
The Bell-CHSH inequality is $$-2 \le \beta_{\rm CHSH} \le 2, \label{CHSH}$$ where $$\beta_{\rm CHSH} = \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle
+ \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle.$$ According to quantum mechanics, the maximal violation of the Bell-CHSH inequality is $\beta_{\rm CHSH} = 2 \sqrt{2}$ [@Tsirelson80], and is obtained, e.g., with $\phi_{A_0}=0$, $\phi_{A_1}=\frac{\pi}{2}$, $\phi_{B_0}=-\frac{\pi}{4}$, $\phi_{B_1}=\frac{\pi}{4}$.
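With perfect detectors, Eq. (\[joint\]) implies the correlation $E(\phi_A,\phi_B)=\sum_{a,b} ab\, P(a,b)=\cos(\phi_A+\phi_B)$. A quick check (an illustrative calculation, not from the paper) that the quoted phase settings saturate the Tsirelson bound:

```python
import math

def E(phi_a, phi_b):
    """Two-photon correlation implied by Eq. (3): E = cos(phi_A + phi_B)."""
    return math.cos(phi_a + phi_b)

# Phase settings quoted in the text for maximal violation.
A0, A1 = 0.0, math.pi / 2
B0, B1 = -math.pi / 4, math.pi / 4

# CHSH combination: reaches the Tsirelson bound 2*sqrt(2).
beta = E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1)
```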
  -------- -------- -------- -------- --------------------------- --------------------------- --------------------------- ---------------------------
  $A_0$    $A_1$    $B_0$    $B_1$    $\langle A_0 B_0 \rangle$   $\langle A_0 B_1 \rangle$   $\langle A_1 B_0 \rangle$   $\langle A_1 B_1 \rangle$
  -------- -------- -------- -------- --------------------------- --------------------------- --------------------------- ---------------------------
  $S+$     $S+$     $S+$     $L\pm$   $+1$                        rejected                    $+1$                        rejected
  $L+$     $L+$     $L+$     $S\pm$   $+1$                        rejected                    $+1$                        rejected
  $S+$     $S-$     $L\pm$   $S+$     rejected                    $+1$                        rejected                    $-1$
  $L+$     $L-$     $S\pm$   $L+$     rejected                    $+1$                        rejected                    $-1$
  $S+$     $L\pm$   $S+$     $S+$     $+1$                        $+1$                        rejected                    rejected
  $L+$     $S\pm$   $L+$     $L+$     $+1$                        $+1$                        rejected                    rejected
  $L\pm$   $S+$     $S+$     $S-$     rejected                    rejected                    $+1$                        $-1$
  $S\pm$   $L+$     $L+$     $L-$     rejected                    rejected                    $+1$                        $-1$
  -------- -------- -------- -------- --------------------------- --------------------------- --------------------------- ---------------------------

  : \[TableI\]$32$ sets of instructions (out of $64$) of the LHV model (the other $32$ are in Table \[TableII\]). Each row represents $4$ sets of local instructions (first $4$ entries) and their corresponding contributions to the calculation of $\beta_{\rm CHSH}$ after applying the postselection procedure of the Franson experiment (last $4$ entries). For each row, two sets (corresponding to $\pm$ signs) are explicitly written, while the other two can be obtained by changing all signs.
  -------- -------- -------- -------- --------------------------- --------------------------- --------------------------- ---------------------------
  $A_0$    $A_1$    $B_0$    $B_1$    $\langle A_0 B_0 \rangle$   $\langle A_0 B_1 \rangle$   $\langle A_1 B_0 \rangle$   $\langle A_1 B_1 \rangle$
  -------- -------- -------- -------- --------------------------- --------------------------- --------------------------- ---------------------------
  $S+$     $S+$     $S-$     $L\pm$   $-1$                        rejected                    $-1$                        rejected
  $L+$     $L+$     $L-$     $S\pm$   $-1$                        rejected                    $-1$                        rejected
  $S+$     $S-$     $L\pm$   $S-$     rejected                    $-1$                        rejected                    $+1$
  $L+$     $L-$     $S\pm$   $L-$     rejected                    $-1$                        rejected                    $+1$
  $S-$     $L\pm$   $S+$     $S+$     $-1$                        $-1$                        rejected                    rejected
  $L-$     $S\pm$   $L+$     $L+$     $-1$                        $-1$                        rejected                    rejected
  $L\pm$   $S-$     $S+$     $S-$     rejected                    rejected                    $-1$                        $+1$
  $S\pm$   $L-$     $L+$     $L-$     rejected                    rejected                    $-1$                        $+1$
  -------- -------- -------- -------- --------------------------- --------------------------- --------------------------- ---------------------------

  : \[TableII\]$32$ sets of instructions of the LHV model.
[*LHV models for the Franson experiment.—*]{}A LHV theory for the Franson experiment must describe how each of the photons makes two decisions. The $+1/-1$ decision: the decision of a detection to occur at detector $+1$ or at detector $-1$, and the $S/L$ decision: the decision of a detection to occur at time $t_D=t$ or at time $t_D=t+\frac{\Delta {\cal L}}{c}$. Both decisions may be made as late as the detection time $t_D$, and may be based on events in the backward light cones of the detections. In a Franson-type setup both decisions may be based on the corresponding local phase setting at $t_D-t_I$. For a conclusive Bell test, there is no problem if photons make the $+1/-1$ decision based on the local phase setting. The problem is that the $50\%$ postselection procedure should be independent of the phase settings; otherwise the Bell-CHSH inequality (\[CHSH\]) is not valid. In the Franson experiment the phase setting at $t_D-t_I$ can causally affect the decision of a detection of the corresponding photon to occur at time $t_D=t$ or at time $t_D=t+\frac{\Delta {\cal L}}{c}$. If the $S/L$ decision can depend on the phase settings, then, after the $50\%$ postselection procedure, one can formally obtain not only the violations predicted by quantum mechanics, as proven in [@AKLZ99], but any value of $\beta_{\rm CHSH}$, even those forbidden by quantum mechanics. This is proven by constructing a family of explicit LHV models.
Consider the $64$ sets of local instructions in Tables \[TableI\] and \[TableII\]. For instance, if the pair of photons follows the first set of local instructions in Table \[TableI\], $(A_0=)S+$, $(A_1=)S+$, $(B_0=)S-$, $(B_1=)L+$, then, if the setting of $\phi_A$ is $A_0$ or $A_1$, photon $1$ will be detected by the detector $a=+1$ at time $t$ (corresponding to the path $S$), and if the setting of $\phi_B$ is $B_0$, photon $2$ will be detected by $b=-1$ at time $t$, but if the setting of $\phi_B$ is $B_1$, photon $2$ will be detected by $b=+1$ at time $t+\frac{\Delta {\cal L}}{c}$ (corresponding to the path $L$). If each of the $32$ sets of instructions in Table \[TableI\] occurs with probability $p/32$, and each of the $32$ sets of instructions in Table \[TableII\] with probability $(1-p)/32$, then it is easy to see that, for any value of $0 \le p \le 1$, the model gives $25\%$ of $SL$ events, $25\%$ of $LS$ events, $50\%$ of $SS$ or $LL$ events, and satisfies (\[Amarginal\]) and (\[Bmarginal\]). If $p=0$, the model gives $\beta_{\rm CHSH}=-4$. If $p=1$, the model gives $\beta_{\rm CHSH}=4$. If $0 < p < 1$, the model gives any value between $-4 < \beta_{\rm CHSH} < 4$. Specifically, a maximal quantum violation $\beta_{\rm CHSH} = 2 \sqrt{2}$, satisfying (\[joint\]), is obtained when $p=(2+\sqrt{2})/4$.
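Since the postselected correlations of the model are linear in $p$, the two endpoints quoted above ($\beta_{\rm CHSH}=-4$ at $p=0$ and $+4$ at $p=1$) fix $\beta_{\rm CHSH}=8p-4$. A quick numerical check of the quoted value of $p$ (an illustrative calculation based on those endpoints, not code from the paper):

```python
import math

def beta_lhv(p):
    """CHSH value of the LHV model: a fraction p of pairs follow the
    Table-I instruction sets (driving beta to +4) and a fraction 1-p
    follow Table II (driving it to -4); the mixture is linear in p."""
    return 4.0 * p - 4.0 * (1.0 - p)       # = 8p - 4

p_quantum = (2.0 + math.sqrt(2.0)) / 4.0   # value quoted in the text
beta = beta_lhv(p_quantum)                 # reproduces the quantum 2*sqrt(2)
```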
The reason why this LHV model is possible is that the $50\%$ postselection procedure in Franson’s experiment allows the subensemble of selected events to depend on the phase settings. For instance, the first $8$ sets of instructions in Table \[TableI\] are rejected only when $\phi_B=B_1$. The main aim of this Letter is to introduce a similar experiment which does not have this problem.
There is a previously proposed solution consisting of replacing the beam splitters $BS_1$ and $BS_2$ in Fig. \[Fig1\] by switchers synchronized with the source [@BGTZ99]. However, these active switchers are replaced in actual experiments by passive beam splitters [@TBZG00; @BGTZ99] that force a Franson-type postselection with the same problem described above.
One way to avoid the problem is to make an extra assumption, namely that the decision of being detected at time $t_D=t$ or at time $t_D=t+\frac{\Delta {\cal L}}{c}$ is actually made at the first beam splitter, before having information about the local phase settings [@AKLZ99; @Franson99]. This assumption is similar to the fair sampling assumption, namely that the probability of rejection does not depend on the measurement settings. As we have seen, there are local models that do not satisfy this assumption. The experiment we propose does not require this extra assumption.
[*Proposed energy-time entanglement Bell experiment.—*]{}The setup of the new Bell experiment is illustrated in Fig. \[Fig2\]. The source emits two photons, photon $1$ to the left and photon $2$ to the right. The $S$ path of photon $1$ (photon $2$) ends on the detectors $a$ on the left ($b$ on the right). The difference with Fig. \[Fig1\] is that now the $L$ path of photon $1$ (photon $2$) ends on the detectors $b$ ($a$). In this setup, the two photons end in different sides only when both are detected in coincidence. If one photon takes $S$ and the other photon takes $L$, both will end on detectors of the same side. An interferometer with this last property is described in [@RVDM08].
The data that the observers must record is the same as in Franson’s experiment. The setup must satisfy the following requirements: (I’) To have two-photon interference, the emission of the two photons must be simultaneous, the moment of emission unpredictable, and both arms of the setup identical. The phase stabilization of the entire setup of Fig. \[Fig2\] is more difficult than in Franson’s experiment. (II’) Single-photon interference is not possible in the setup of Fig. \[Fig2\]. (III’) To temporally distinguish two photons arriving at the same detector at times $t$ and $t+\frac{\Delta {\cal L}'}{c}$, where $\Delta {\cal L}'=2 [d({\rm Source},BS2)+d(BS2,M1)]$ (see Fig. \[Fig2\]), the dead time of the detectors must be smaller than $\frac{\Delta {\cal L}'}{c}$. For detectors with a dead time of $1$ ns, ${\Delta {\cal L}'} > 30$ cm. (IV’) The probability of two two-photon events in $\frac{\Delta {\cal L}'}{c}$ must be negligible. This naturally occurs when using standard non-linear crystals pumped continuously. (V’) To prevent the local phase setting at one side from affecting the outcome at the other side, the local phase settings must randomly switch ($\phi_A$ between $A_0$ and $A_1$, and $\phi_B$ between $B_0$ and $B_1$) with a frequency of the order $c/D'$, where $D'=d({\rm Source},\phi_A)\gg \Delta {\cal L}'$.
There is a trade-off between the phase stabilization of the apparatus (which requires a short interferometer) and the prevention of reciprocal influences between the two local phase settings (which requires a long interferometer). For a random phase modulation frequency of 300 kHz, an interferometer about 1 km long would be needed. Current technology allows us to stabilize interferometers up to 4 km long (for instance, one of the interferometers of the LIGO experiment is 4 km long). With these stable interferometers, the experiment would be feasible.
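The 1 km figure follows from the requirement (V’) that the switching frequency be of the order $c/D'$; a back-of-the-envelope check (the 300 kHz modulation frequency is the value assumed above):

```python
c = 3.0e8          # speed of light, m/s (approximate)
f_switch = 300e3   # assumed random phase-modulation frequency, Hz

# Distance light travels in one switching period: the interferometer must
# be at least this long so the remote setting cannot reach the local outcome.
length_m = c / f_switch
print(length_m)  # 1000.0, i.e. about 1 km
```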
The predictions of quantum mechanics for the setup of Fig. \[Fig2\] are similar to those in Franson’s proposal: Eqs. (\[Amarginal\]) and (\[Bmarginal\]) hold, there is $25\%$ of events in which both photons are detected on the left at times $t$ and $t+\frac{\Delta {\cal L}'}{c}$, $25\%$ of events in which both photons are detected on the right, and $50\%$ of coincident events for which (\[joint\]) holds. The observers must keep the coincident events and reject those giving two detections on detectors of the same side. The main advantages of this setup are: (i) The rejection of events is local and does not require communication between the observers. (ii) The selection and rejection of events is independent of the local phase settings. This is the crucial difference with Franson’s experiment and deserves a detailed examination. First consider a selected event: both photons have been detected at time $t_D$, one in a detector $a$ on the left, and the other in a detector $b$ on the right. $t_I$ is the time a photon takes from $\phi_A$ ($\phi_B$) to a detector $a$ ($b$). The phase setting of $\phi_A$ ($\phi_B$) at $t_D-t_I$ is in the backward light cone of the photon detected in $a$ ($b$), but the point is, could a different value of one or both of the phase settings have turned this selected event into a rejected event in which both photons are detected on the same side? The answer is no. This would require a mechanism that makes one detection “wait” until the information about the setting on the other side arrives. However, by the time this information has arrived, the phase settings (both of them) have changed, so it is useless as a basis for a decision.
Now consider a rejected event. For instance, one in which both photons are detected in the detectors $a$ on the left, one at time $t_D=t$, and the other at $t_D=t+\frac{\Delta {\cal L}'}{c}$. Then, the phase settings of $\phi_B$ at times $t_D-t_I$ are outside the backward light cones of the detected photons. The photons cannot have based their decisions on the phase settings of $\phi_B$. A different value of $\phi_A$ cannot have turned this rejected event into a selected event. This would require a mechanism that makes one detection wait until the information about the setting reaches the other side, and by the time this information has arrived, the phase setting of $\phi_A$ has changed, so it is useless.
For the proposed setup, there is no physical mechanism preserving locality which can turn a selected (rejected) event into a rejected (selected) event. The selected events are independent of the local phase settings. For the selected events, only the $+1/-1$ decision can depend on the phase settings. This is exactly the assumption under which the Bell-CHSH inequality (\[CHSH\]) is valid. Therefore, an experimental violation of (\[CHSH\]) using the setup of Fig. \[Fig2\] and the postselection procedure described before provides a conclusive (assuming perfect detectors) test of local realism using energy-time (or time-bin) entanglement. Indeed, the proposed setup opens up the possibility of using genuine energy-time or time-bin entanglement for many other quantum information experiments.
The authors thank J.D. Franson, J.-Å. Larsson, T. Rudolph, and M. Żukowski for their comments. This work was supported by Junta de Andalucía Excellence Project No. P06-FQM-02243 and by Finanziamento Ateneo 07 Sapienza Università di Roma.
[14]{}
J.D. Franson, Phys. Rev. Lett. [**62**]{}, 2205 (1989).
J.S. Bell, Physics (Long Island City, N.Y.) [**1**]{}, 195 (1964).
J.F. Clauser, M.A. Horne, A. Shimony, and R.A. Holt, Phys. Rev. Lett. [**23**]{}, 880 (1969).
S. Aerts, P.G. Kwiat, J.-Å. Larsson, and M. Żukowski, Phys. Rev. Lett. [**83**]{}, 2872 (1999); [**86**]{}, 1909 (2001).
P.G. Kwiat [*et al.*]{}, Phys. Rev. A [**41**]{}, 2910 (1990); Z.Y. Ou, X.Y. Zou, L.J. Wang, and L. Mandel, Phys. Rev. Lett. [**65**]{}, 321 (1990); J. Brendel, E. Mohler, and W. Martienssen, [*ibid.*]{} [**66**]{}, 1142 (1991); P.G. Kwiat, A.M. Steinberg, and R.Y. Chiao, Phys. Rev. A [**47**]{}, R2472 (1993); P.R. Tapster, J.G. Rarity, and P.C.M. Owens, Phys. Rev. Lett. [**73**]{}, 1923 (1994); W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Phys. Rev. Lett. [**81**]{}, 3563 (1998).
J.-Å. Larsson, Quantum Inf. Comput. [**2**]{}, 434 (2002).
W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Phys. Rev. Lett. [**84**]{}, 4737 (2000); G. Ribordy [*et al.*]{}, Phys. Rev. A [**63**]{}, 012309 (2000); R.T. Thew, A. Acín, H. Zbinden, and N. Gisin, Phys. Rev. Lett. [**93**]{}, 010503 (2004); I. Marcikic [*et al.*]{}, [*ibid.*]{} [**93**]{}, 180502 (2004); D. Salart [*et al.*]{}, [*ibid.*]{} [**100**]{}, 220404 (2008).
H. Paul, [*Introduction to Quantum Optics*]{} (Cambridge University Press, Cambridge, England, 2004).
J.C. Garrison and R.Y. Chiao, [*Quantum Optics*]{} (Oxford University Press, Oxford, 2008).
A.K. Ekert, Phys. Rev. Lett. [**67**]{}, 661 (1991); A. Acín, N. Gisin, and L. Masanes, [*ibid.*]{} [**97**]{}, 120405 (2006).
P. Hyllus, O. G[ü]{}hne, D. Bruß, and M. Lewenstein, Phys. Rev. A [**72**]{}, 012321 (2005).
P.G. Kwiat, Phys. Rev. A [**52**]{}, 3380 (1995); D.V. Strekalov [*et al.*]{}, [*ibid.*]{} [**54**]{}, R1 (1996).
J. Brendel, N. Gisin, W. Tittel, and H. Zbinden, Phys. Rev. Lett. [**82**]{}, 2594 (1999).
B.S. Tsirelson, Lett. Math. Phys. [**4**]{}, 93 (1980).
J.D. Franson (private communication). See also, J.D. Franson, Phys. Rev. A [**61**]{}, 012105 (1999).
A. Rossi, G. Vallone, F. De Martini, and P. Mataloni, Phys. Rev. A [**78**]{}, 012345 (2008).
---
abstract: 'Wide-area data and algorithms in large power systems are creating new opportunities for implementation of measurement-based dynamic load modeling techniques. These techniques improve the accuracy of dynamic load models, which are an integral part of transient stability analysis. Measurement-based load modeling techniques commonly assume response error is correlated to system or model accuracy. Response error is the difference between simulation output and phasor measurement units (PMUs) samples. This paper investigates similarity measures, output types, simulation time spans, and disturbance types used to generate response error and the correlation of the response error to system accuracy. This paper aims to address two hypotheses: 1) can response error determine the total system accuracy? and 2) can response error indicate if a dynamic load model being used at a bus is sufficiently accurate? The results of the study show only specific combinations of metrics yield statistically significant correlations, and there is a lack of pattern of combinations of metrics that deliver significant correlations. Less than 20% of all simulated tests in this study resulted in statistically significant correlations. These outcomes highlight concerns with common measurement-based load modeling techniques, raising awareness to the importance of careful selection and validation of similarity measures and response output metrics. Naive or untested selection of metrics can deliver inaccurate and misleading results.'
author:
- 'Phylicia Cicilio, and Eduardo Cotilla-Sanchez, [^1][^2]'
bibliography:
- 'ref.bib'
title: 'Evaluating Measurement-Based Dynamic Load Modeling Techniques and Metrics'
---
Introduction
============
The introduction of phasor measurement units (PMUs) and advanced metering infrastructure (AMI) has ushered in the era of big data to electrical utilities. The ability to capture high-resolution data from the electrical grid during disturbances enables the more widespread use of measurement-based estimation techniques for validation of dynamic models such as loads. Transient stability studies use dynamic load models. These studies are key for ensuring electrical grid reliability and are leveraged for planning and operation purposes [@Shetye]. It is imperative that dynamic load models be as representative of the load behavior as possible to ensure that transient stability study results are accurate and useful. However, developing dynamic load models is challenging, as they attempt to represent uncertain and changing physical and human systems in an aggregate model.
Several methods exist for determining load model parameters, such as measurement-based techniques using power systems sensor data [@Kim; @Kontis; @Zhang; @Renmu; @Choi_2; @Ma_2], and methods that use parameter sensitivities and trajectory sensitivities [@Choi; @Kim; @Son; @Ma; @Zhang]. A common practice in measurement-based techniques is to take system response outputs, such as bus voltage magnitude, from PMU data and simulation data and compare them with a similarity measure, such as Euclidean distance. The error between PMU data and simulation output is referred to as response error in this paper. As investigated in [@siming_thesis], the underlying assumption that reducing response error yields a more accurate model and system does not always hold.
This paper examines the relationship between response error and system and model accuracy to highlight concerns with common measurement-based technique practices. The methods used in the study examine whether the selection of a load model is accurate at a given bus. Measurement-based techniques typically perform dynamic load model parameter tuning to improve accuracy. In parameter tuning, significant inter-dependencies and sensitivities exist between many dynamic load model parameters [@Choi; @Kim; @Son; @Ma; @Zhang], which is one of the reasons why dynamic load model parameter tuning is challenging. Instead of parameter tuning, this study compares the selection of two load models: the dynamic composite load model (CLM) and the static ZIP model. The static ZIP model is the default load model chosen by power system simulators and represents loads with constant impedance, current, and power. The CLM has become an industry standard, particularly for the western United States, and represents aggregate loads including induction machine motor models, the ZIP model, and power electronics [@Kosterev; @Renmu]. The load model is changed, rather than tuned, in order to compare the known response differences between the motor dynamics of the CLM and the static behavior of the ZIP model. By comparing load model selection, the presence of a correlation between response error and system accuracy is assessed.
This study performs two experiments to address two main hypotheses. The first experiment is a system level experiment to test hypothesis 1: can response error determine the total system accuracy, i.e., how many load models at buses in the system are accurate? The second experiment is a bus level experiment to test hypothesis 2: can response error indicate whether the load model used at a bus is accurate? The results from these experiments demonstrate that it cannot be assumed that response error and system accuracy are correlated. The main contribution of this paper is to identify the need for validation of techniques and metrics used in dynamic load modeling, as frequently used metrics can deliver inaccurate and meaningless results.
The remainder of this paper is organized as follows. Section II discusses the use of dynamic load models in industry and those used in this paper. In Section III, similarity measures are discussed in relevance to power systems time series data. Section IV details the methodology used to evaluate the system level experiment of hypothesis 1. Section V provides and discusses the results from the system level experiment. Section VI details the methodology used to evaluate the bus level experiment of hypothesis 2. These results are provided and discussed in Section VII. Finally, Section VIII discusses the implications of the results found in this study and calls attention to the importance of careful selection and validation of measurement-based technique metrics.
Similarity Measures {#similarity_section}
===================
A similarity measure compares how similar data objects, such as time series vectors, are to each other. A key component of measurement-based techniques is to use a similarity measure to calculate the response error. Then typically, an optimization or machine learning algorithm reduces this response error to improve the models or parameters in the system. Several measurement-based dynamic load model estimation studies employ Euclidean distance as a similarity measure [@Renmu; @Visconti; @Kong]. However, there are characteristics of power systems time series data, such as noise, which should be ignored or de-emphasized but which Euclidean distance instead captures. Power system time series data characteristics include noise, initialization differences, and oscillations at different frequencies. These characteristics result in shifts and stretches in output amplitude and time, as detailed in Table \[similarity\_measures\].
  ------------- ---------------------------------------------- ---------------------------------------
                **Amplitude**                                  **Time**
  **Shift**     initialization differences, discontinuities    different/unknown initialization time
  **Stretch**   noise                                          oscillations at different frequencies
  ------------- ---------------------------------------------- ---------------------------------------

  : Examples of amplitude and time shifting and stretching [@siming_thesis][]{data-label="similarity_measures"}
The characteristics listed in Table \[similarity\_measures\] are the effect of specific phenomena in the system. For example, differences in control parameters in motor models and potentially also playback between motor models can cause oscillations at different frequencies. Certain changes in output are important to capture as they have reliability consequences to utilities. An increase in the initial voltage swing after a disturbance can trip protection equipment. An increase in the time it takes for the frequency to cross or return to 60 Hz in the United States has regulatory consequences resulting in fines. Response error produced by similarity measures should capture these important changes. Other changes to output, such as noise, should be ignored.
Different situations when comparing simulation data to simulation data versus comparing simulation data to PMU data cause some characteristics listed in Table \[similarity\_measures\]. Comparing simulation data to simulation data occurs in theoretical studies, and comparing simulation data to PMU data would be the application for utilities. Initialization differences and differences in initialization time can occur when comparing simulation data to PMU data due to the difficulty in perfectly matching steady-state values. However, when comparing simulation data to simulation data, initialization differences and differences in initialization time likely highlight errors in the simulation models, parameters, or values.
Similarity measures have the capability to be invariant to time shift and stretch or amplitude shift and stretch. Table \[similarity\_measures2\] lists the similarity measures examined in this study with their corresponding capabilities. These similarity measures are chosen to test the sensitivities to all four quadrants of Table \[similarity\_measures\].
  ----------------------------- --------------------- ----------------------- ---------------- ------------------
                                **Amplitude Shift**   **Amplitude Stretch**   **Time Shift**   **Time Stretch**
  **Euclidean Distance**                                                                       
  **Manhattan Distance**                                                                       
  **Dynamic Time Warping**                                                    $\bullet$        $\bullet$
  **Cosine Distance**                                  $\bullet$                               
  **Correlation Coefficient**   $\bullet$             $\bullet$                                
  ----------------------------- --------------------- ----------------------- ---------------- ------------------

  : Invariance capabilities of the examined similarity measures[]{data-label="similarity_measures2"}
Euclidean distance and Manhattan distance are norm-based measures which are variant to time and amplitude shifting and stretching. Euclidean distance is one of the most commonly used similarity measures in measurement-based techniques. These norm-based distances can range from 0 to $\infty$. The cosine similarity takes the cosine of the angle between the two vectors to determine the similarity. By using only the angle between the vectors, this similarity is invariant to amplitude stretching [@siming_thesis]. This similarity can range from -1 to 1.
The Pearson correlation coefficient is invariant to amplitude shifting and stretching and also ranges from -1 to 1 [@siming_thesis].
Dynamic time warping (DTW) identifies the path between two vectors of the lowest cumulative Euclidean distance by shifting the time axis. DTW is invariant to local and global time shifting and stretching [@Kong]. The DTW algorithm used in this study is only invariant to time shifting. DTW can range from 0 to $\infty$.
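For concreteness, the five measures can be sketched for one-dimensional time series as follows. This is a minimal illustration, not the implementation used in the study; the DTW variant shown is the standard unconstrained dynamic program with an absolute-difference cost:

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def cosine_distance(x, y):
    # 1 - cos(angle between x and y); invariant to amplitude stretch
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

def pearson(x, y):
    # Pearson correlation; invariant to amplitude shift and stretch
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def dtw(x, y):
    # Cumulative cost of the best time-warped alignment of x onto y
    n, m = len(x), len(y)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

With these helpers, the invariances can be checked directly: `pearson(x, [2*v + 3 for v in x])` equals 1 for any amplitude shift and stretch, `cosine_distance(x, [2*v for v in x])` equals 0 for a pure amplitude stretch, and `dtw` returns 0 for a time-shifted step response.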
Figures \[example\_plots\] and \[example\_comparison\] show how amplitude and time shifting and stretching affect the error produced by similarity measures. The time series plots in Figure \[example\_plots\] show a sine wave with a corresponding amplitude or time shift or stretch. The similarity measures calculate the difference between each of the time series subplots. The error generated by each similarity measure is normalized for comparison: the normalization is done separately for each similarity measure, so that its errors across the amplitude and time shift and stretch scenarios sum to one. Figure \[example\_comparison\] compares the error results from each of the subplot scenarios.
[0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Amplitude_Stretch.eps "fig:"){height="1.3in"}
[0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Amplitude_Shift.eps "fig:"){height="1.3in"}
[0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Time_Stretch.eps "fig:"){height="1.3in"}
[0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Time_Shift.eps "fig:"){height="1.3in"}
![Comparison of similarity measures[]{data-label="example_comparison"}](Images/error_comparsion_bar.eps){width="1\linewidth"}
The results in Figure \[example\_comparison\] demonstrate the abilities of each similarity measure. The similarity measures are denoted as: Euclidean distance (ED), Manhattan distance (MH), dynamic time warping (DTW), cosine distance (COS), and correlation coefficient (COR). Correlation coefficient has negligible error produced with both amplitude shift and stretch. Cosine distance has negligible error with amplitude stretch. Dynamic time warping has negligible error with time shift. These results provide an example of what can be expected when they are used with simulation or PMU time series data.
System Level Experiment Methodology
===================================
The system level experiment is set up to determine whether system response error can determine the total system accuracy. This addresses the question: is it possible to determine whether any models, or the approximate percentage of models, in the system are inaccurate and need to be updated, without needing to test at each individual bus? This is determined by calculating the correlation between system accuracy, as defined in Equation \[system\_accuracy\], and the system response error described below.
This experiment is performed within the RTS96 test system [@RTS96; @Lassetter], using Siemens PSS/E software. Fourteen CLMs are randomly placed on loads in the system, enhancing the RTS96 case to create a load model benchmark system. The remaining 37 loads are modeled with the static ZIP load model. Test systems are generated by replacing some of the benchmark system’s ZIP load models with CLMs and some of its CLMs with ZIP load models. Switching load models creates “inaccurate” and “accurate” load models as a method to change the accuracy of the system. The “inaccurate” load models are those in the test system that are different from the benchmark system. The buses with the same load models in the test system and benchmark system are “accurate” load models. Switching these load models will also create different responses, as described in Section I.
One hundred benchmark and test system pairs are created using randomized placement of CLMs, based on a uniform random distribution, to reduce the sensitivity of the results to the location of the CLMs in the system. The percentage of buses in the test system with accurate load models is called the system accuracy. System accuracy is defined in Equation \[system\_accuracy\] and is also used in the bus level experiment.
$$\label{system_accuracy}
\centering
\textnormal{accuracy}_{\textnormal{system}} = \frac{\textnormal{number of buses with accurate load models}}{\textnormal{total number of load buses}} \times 100\%$$
An example benchmark and test system pair at 50% system accuracy will have half of the CLMs removed from the benchmark system. The removed CLMs will be replaced with ZIP load models. System accuracy quantifies how many dynamic load models in the system are accurate. Accurate dynamic load models in the test systems are those models which are the same as those in the benchmark system.
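System accuracy as defined in Equation \[system\_accuracy\] amounts to a bus-by-bus comparison of load model assignments; a minimal sketch (the model labels and the four-bus example are illustrative, not the RTS96 data):

```python
def system_accuracy(benchmark_models, test_models):
    """Percentage of buses whose test-system load model matches the benchmark."""
    matches = sum(1 for b, t in zip(benchmark_models, test_models) if b == t)
    return 100.0 * matches / len(benchmark_models)

benchmark = ["CLM", "ZIP", "CLM", "ZIP"]
test      = ["CLM", "ZIP", "ZIP", "ZIP"]  # one CLM swapped for a ZIP model
print(system_accuracy(benchmark, test))  # 75.0
```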
A bus fault is used to create a dynamic response in the system. Over a hundred simulations are performed in which the location of the fault is randomized, to reduce sensitivity to the fault location relative to the CLM locations. The bus fault is performed by applying a three-phase to ground fault with a duration of 0.1 s. During this fault, there is an impedance change at the faulted bus, causing the voltage to drop at the bus and a change in power flows throughout the system. The fault is cleared 0.1 seconds after it is created, and the power flows return to steady-state.
The outputs captured from the simulations are voltage magnitude, voltage angle, and frequency from all of the load buses, and line flow active power and reactive power. The output from the benchmark system is compared to the test systems using the similarity measures outlined in Section \[similarity\_section\]. The response error generated by DTW, cosine distance, and correlation coefficient is a single measure for the entire time span of each output at each bus. The response error from Manhattan and Euclidean distance is generated at every time step in the time span. The error at each time step is then summed across the time span to create a single response error comparable to the other similarity measures. The generation of response error for Manhattan and Euclidean distance is shown by Equation \[response\_accuracy\].
$$\label{response_accuracy}
\centering
\textnormal{error}_{\textnormal{response}} = \sum_{t=1}^{T} s[t]$$
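Following Equation \[response\_accuracy\], the per-time-step error for the norm-based measures collapses into one scalar per output and bus; a minimal sketch using a Manhattan-style per-step error (the sample voltage values are illustrative):

```python
def scalar_response_error(benchmark_ts, test_ts):
    # s[t] = |benchmark - test| at each time step, summed over the span
    return sum(abs(b - t) for b, t in zip(benchmark_ts, test_ts))

# Example: per-unit voltage magnitude traces from benchmark and test systems
error = scalar_response_error([1.00, 1.02, 0.98], [1.00, 1.01, 1.00])
print(error)  # 0.01 + 0.02 = 0.03 (up to floating-point rounding)
```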
Similar to response error, system response error is calculated from the difference between the output of buses between the benchmark and test systems. However, system response error is a single metric which is the sum of all the response errors from each bus.
Three time spans are tested: 3 seconds, 10 seconds, and 30 seconds. The disturbance occurs at 0.1 seconds and is cleared at 0.2 seconds for all scenarios. These time spans are chosen to test the sensitivity to the transient event occurring in the first 3 seconds, and the sensitivity to the dynamic responses out to 30 seconds.
The Pearson correlation coefficient between system accuracy and system response error is calculated, with its significance assessed using Student’s t-test, to determine the relationship between the two. Student’s t-test is a statistical test of whether the means of two groups being compared are statistically different. The output of the Pearson correlation analysis is the r-value and the p-value. The r-value denotes the direction and strength of the relationship. R-values range from -1 to 1, where -1 to -0.5 signifies a strong negative relationship and 0.5 to 1 signifies a strong positive relationship between the groups. For this experiment, a strong negative relationship implies that as the system accuracy increases, the system response error decreases. This is the relationship typically assumed by those performing measurement-based techniques. The p-value determines whether the two results are statistically different. A p-value of less than 0.05 signifies a statistically significant difference between the two groups of results being compared. Therefore, a p-value less than 0.05 signifies a statistically significant relationship quantified by the r-value.
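The r-value and the t-statistic used for the significance test can be computed directly; a minimal sketch (in practice, `scipy.stats.pearsonr` returns both the r-value and the p-value, and the sample data below is illustrative):

```python
import math

def pearson_r_and_t(x, y):
    """Pearson r plus the Student t-statistic testing r against zero."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    r = num / den
    # t follows a t-distribution with n - 2 degrees of freedom under
    # the null hypothesis of no correlation
    t = r * math.sqrt((n - 2) / (1 - r ** 2))
    return r, t

r, t = pearson_r_and_t([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
print(r, t)  # ~0.775, ~2.12
```

A $|t|$ that is large relative to the t-distribution with $n-2$ degrees of freedom corresponds to a p-value below 0.05.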
System Level Experiment Results
===============================
In this section, the correlation between response error and system accuracy is calculated to evaluate the ability of various time spans, output types, and similarity measures to predict system accuracy as used in measurement-based techniques.
Example outputs from these results are visualized in Figures \[high\_accuracy\] and \[low\_accuracy\]. The plots compare the reactive power time series data from a bus in the benchmark system and test systems at two levels of system accuracy in a system undergoing a bus fault at the same bus. Figure \[low\_accuracy\] shows the benchmark and test system responses with low system accuracy, 8%. Figure \[high\_accuracy\] shows the responses with high system accuracy, 92%.
![Reactive Power Time Series Plot of Low System Accuracy and High Response Error with Generator Outage[]{data-label="low_accuracy"}](Images/Q_timeseries_low.eps){width=".9\linewidth"}

![Reactive Power Time Series Plot of High System Accuracy and Low Response Error with Generator Outage[]{data-label="high_accuracy"}](Images/Q_timeseries_high_clean.eps){width=".9\linewidth"}
The response from the high system accuracy test system has a better curve fit to the benchmark system than the low system accuracy test system. This visual comparison confirms that with an appropriate similarity measure the response error should decrease as system accuracy increases.
The r-values from all simulations determining the correlation between system accuracy and response error, grouped by the metrics used, are shown in Figure \[R\_LO\_3\]. R-values of less than -0.5 are highlighted in orange to show they represent a strong relationship. R-values greater than -0.5, which do not represent a strong relationship, are in white. All resulting p-values are found to be lower than 0.05, meaning all r-value relationships are statistically significant. The similarity measures listed in the plots use the same abbreviations as in Figure \[example\_comparison\]. The output types listed in the plots are abbreviated as: voltage angle (ANG), voltage magnitude (V), frequency (F), line active power flow (P), and line reactive power flow (Q).
[0.5]{} ![Bus fault R-values for system level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="R_LO_3"}](Images/BF_3s_new.eps "fig:"){height="1.6in"}
[0.5]{} ![Bus fault R-values for system level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="R_LO_3"}](Images/BF_10s_new.eps "fig:"){height="1.6in"}
[0.5]{} ![Bus fault R-values for system level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="R_LO_3"}](Images/BF_30s_new.eps "fig:"){height="1.6in"}
Out of the 75 combinations of metrics tested in this experiment, only 12% yielded strong correlation relationships. Considering the visual verification from Figures \[high\_accuracy\] and \[low\_accuracy\] that response error should indeed decrease as system accuracy increases, the lack of strong negative correlations seen in Figure \[R\_LO\_3\] is concerning. Only the three- and ten-second time span simulations have strong correlation relationships; none of the thirty-second scenarios do. During a thirty-second simulation, the last ten to thirty seconds of the output response flatten to a steady-state value. Therefore, in a thirty-second simulation there are many error data points that contain flat steady-state responses, limiting curve-fitting opportunities and weakening the correlation relationship. This can explain why none of the thirty-second scenarios have strong relationships.
The distribution of the r-values from the overall strongest correlation relationship, with an r-value of -0.5199, is examined to further investigate the correlation results. Figure \[distribution\] visualizes the distribution of the response error for this r-value at the tested levels of system accuracy.
![R-value distribution[]{data-label="distribution"}](Images/Distribution_sample.eps){width="1\linewidth"}
The response error in Figure \[distribution\] is normalized for a clearer comparison. A general negative correlation is seen, with lower response error at higher system accuracy. However, there are several outliers in the data preventing a stronger overall correlation, particularly between system accuracy levels 0% and 70%. This suggests that at lower system accuracy levels the correlation is weaker than in the overall distribution. To test this, the correlation within several system accuracy ranges is calculated to highlight where the weakest correlation regions exist. Table \[correlation\_ranges\] outlines the correlation at these system accuracy ranges.
  ----------------------------------------------------------------
  System accuracy range   0%-30%    38%-54%   62%-77%   84%-100%
  R-value                 -0.0632   -0.2964   0.0322    -0.4505
  ----------------------------------------------------------------
As seen in Table \[correlation\_ranges\], the correlation is greatly degraded at the lower system accuracy ranges, even reversing the r-value from negative to positive between levels 62% and 77%. An ideal scenario would have a consistently strong negative correlation across all system accuracy levels. This highlights the potentially low effectiveness of measurement-based techniques under these testing conditions at low system accuracy levels. Overall, the results from this experiment highlight the lack of correlation between response error and system accuracy across all metrics.
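As an illustration of how such a per-range check can be performed, the sketch below computes the Pearson r-value within each system accuracy range using NumPy. The data here is synthetic stand-in data with an assumed negative error-accuracy trend, not the simulation output of this study.

```python
import numpy as np

def range_correlations(accuracy, error, ranges):
    """Pearson r between system accuracy and response error,
    computed separately within each accuracy range."""
    accuracy = np.asarray(accuracy, dtype=float)
    error = np.asarray(error, dtype=float)
    out = {}
    for lo, hi in ranges:
        mask = (accuracy >= lo) & (accuracy <= hi)
        # np.corrcoef returns the 2x2 correlation matrix
        out[(lo, hi)] = float(np.corrcoef(accuracy[mask], error[mask])[0, 1])
    return out

# Synthetic stand-in data: error falls with accuracy, plus noise
rng = np.random.default_rng(0)
acc = rng.uniform(0, 100, 500)
err = 1.0 - acc / 100.0 + rng.normal(0.0, 0.3, acc.size)
r_by_range = range_correlations(acc, err, [(0, 30), (38, 54), (62, 77), (84, 100)])
r_overall = float(np.corrcoef(acc, err)[0, 1])
```

With noisy data, the per-range r-values are typically much weaker than the overall r-value, mirroring the degradation reported in Table \[correlation\_ranges\].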
The application of the system level experiment is to use any of the metrics combinations that showed strong negative relationships in a measurement-based optimization program. Such an optimization program could change the dynamic load models in the system to reduce system response error in order to improve system accuracy. However, in order for such an optimization program to successfully improve system accuracy, there needs to be a strong negative correlation between system accuracy and system response error. Additionally, even with an overall strong negative correlation, Table \[correlation\_ranges\] shows that such an optimization program may determine a local minimum at a lower accuracy level to be the global minimum due to the lower correlation relationship strength found at lower accuracy levels.
These results identify the need for measurement-based techniques, and potentially other power systems time series data curve fitting techniques, to evaluate the assumption that the system response error is correlated to the system accuracy. It cannot be assumed that measurement-based techniques using similarity measures yield meaningful results. Any optimization or other estimation technique based on reducing system response error will not yield accurate results or findings without a strong correlation between system response error and system accuracy.
Bus Level Experiment Methodology
================================
The bus level experiment is set up to determine whether response error from an individual load bus can indicate whether the load model being used at the bus is accurate. In comparison to the system level experiment, which looked at system-wide model accuracy, this experiment looks at model accuracy at the bus level. The results of this experiment are the p-values from the Student's t-test, indicating whether there is a statistical difference between the response error from buses with accurate and inaccurate load models. A p-value of less than 0.05 signifies a statistically significant difference between the two groups of results being compared.
The same system and system setup are used in this experiment as in the system level experiment. This experiment excludes the line flow active power and reactive power outputs and retains the previously used outputs of bus frequency, voltage angle, and voltage magnitude. In this experiment the simulations are performed at various levels of system accuracy to reduce the sensitivity of the results to the system accuracy. By reducing the sensitivity of the results to fault placement and system accuracy, the results isolate the correlation between response error and load model accuracy. All other metrics remain the same as in the system level experiment.
The response error from all the simulations is compared by output type, time span, and similarity measure, and binned into groups of buses with accurate load models and buses with inaccurate load models. A t-test is performed on the binned response error to determine whether there is a statistically significant difference between the error from buses with accurate load models and buses with inaccurate load models. The results of this experiment are the p-values from the response error, separated by disturbance scenario, output type, time span, and similarity measure.
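The binning-and-t-test step can be sketched as follows. This is illustrative only: it uses a hand-rolled Welch t statistic with a normal-approximation p-value as a stand-in for a full Student's t-test routine, and the two error populations are synthetic.

```python
import numpy as np
from math import erfc, sqrt

def welch_t(x, y):
    """Welch two-sample t statistic with a two-sided p-value from the
    normal approximation (adequate at large sample sizes)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    se = sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)
    t = (x.mean() - y.mean()) / se
    p = erfc(abs(t) / sqrt(2.0))  # two-sided tail probability
    return t, p

rng = np.random.default_rng(1)
err_accurate = rng.normal(0.10, 0.05, 400)    # binned error, accurate-model buses
err_inaccurate = rng.normal(0.25, 0.05, 400)  # binned error, inaccurate-model buses
t_stat, p_value = welch_t(err_accurate, err_inaccurate)
significant = p_value < 0.05
```

In the experiment, one such test is run per combination of disturbance scenario, output type, time span, and similarity measure.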
Bus Level Experiment Results
============================
The bus level experiment tests whether the response error at individual buses differs statistically with the accuracy of the load models at those buses. The p-values are calculated using response error from the output types, time spans, and similarity measures. Figure \[Bus\_level\_BF\] shows these p-values.
[0.5]{} ![Bus fault P-values for bus level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="Bus_level_BF"}](Images/Map_BF_3s.eps "fig:"){height="1.6in"}
[0.5]{} ![Bus fault P-values for bus level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="Bus_level_BF"}](Images/Map_BF_10s.eps "fig:"){height="1.6in"}
[0.5]{} ![Bus fault P-values for bus level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="Bus_level_BF"}](Images/Map_BF_30s.eps "fig:"){height="1.6in"}
Less than 15% of the combinations of time span, output type, and similarity measure have significant p-values. It is noted that the combinations of metrics best suited to this experimental setup are different from those in the system level experiment. This experiment highlights a serious concern for other experiments using measurement-based techniques. Only select combinations of metrics in this experiment yielded significant differences, and this same result is likely present in other measurement-based experiments, whether they involve changing load models, changing load model parameters, or changing other dynamic models.
The direct application of this experiment is to use any of the disturbance type, output type, time span, and similarity measure combinations that showed significant p-values in a measurement-based machine learning technique to identify whether a bus in the system needs its load model updated or replaced. There needs to be a significant difference between response errors from buses with poorly fitting or inaccurate load models and those with accurate ones for such a machine learning algorithm to give meaningful results, whether from simulation or PMU outputs. If the machine learning algorithm used a combination of metrics without a proven significant difference between response error from buses with inaccurate and accurate load models, the algorithm would be unable to tell the difference between the groups, causing the results to be inaccurate.
The results from this experiment confirm the same conclusion from the system level experiment that there needs to be verification testing showing that the chosen measurement-based metrics used to calculate error will capture true differences between incorrect models and correct models. It cannot be assumed that any combination of metrics used in measurement-based techniques will yield meaningful results.
Conclusion
==========
This paper investigates common metrics used in measurement-based dynamic load modeling techniques to generate response error. These metrics include similarity measures, output types, and simulation time spans. The correlation between response error and accuracy is evaluated by comparing system accuracy to system response error in the system level experiment and load model accuracy to bus response error in the bus level experiment. Both experiments demonstrate that few combinations of metrics deliver significant findings. It is noted that the combinations of metrics best suited to the bus level experiment are different from those in the system level experiment. This same result is likely to be found in other measurement-based experiments, whether they involve changing load models, changing load model parameters, or changing other dynamic models. These experiments expose a significant concern for measurement-based technique validity. This study raises awareness of the importance of careful selection and validation of the similarity measures and response output metrics used, noting that naive or untested selection of metrics can deliver inaccurate and meaningless results.
These results imply that optimization or machine learning algorithms that use measurement-based techniques without validating their metrics, to ensure correlation between error and accuracy, may not generate accurate or meaningful results. The methods used here to determine the effectiveness of these common metrics are specific to these model accuracy experiments. Future work can expand these methods to dynamic model parameter tuning experiments.
[Phylicia Cicilio]{} (S’15) received the B.S. degree in chemical engineering in 2013 from the University of New Hampshire, Durham, NH, USA. She received the M.S. degree in electrical and computer engineering in 2017 from Oregon State University, Corvallis, OR, USA, where she is currently working toward the Ph.D. degree in electrical and computer engineering.
She is currently a Graduate Fellow at Idaho National Laboratory, Idaho Falls, ID, USA. Her research interests include power system reliability, dynamic modeling, and rural electrification.
[Eduardo Cotilla-Sanchez]{} (S’08-M’12-SM’19) received the M.S. and Ph.D. degrees in electrical engineering from the University of Vermont, Burlington, VT, USA, in 2009 and 2012, respectively.
He is currently an Associate Professor in the School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, USA. His primary field of research is electrical infrastructure resilience and protection, in particular, the study of cascading outages.
Prof. Cotilla-Sanchez is the Vice-Chair of the IEEE Working Group on Cascading Failures and President of the Society of Hispanic Professional Engineers Oregon Chapter.
[^1]: This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1314109-DGE.
[^2]: The authors are with the School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97331 USA e-mail: (ciciliop@oregonstate.edu; ecs@oregonstate.edu).
---
author:
- 'Daisuke Kadoh,'
- Katsumasa Nakayama
bibliography:
- 'Refs.bib'
title: Direct computational approach to lattice supersymmetric quantum mechanics
---
We would like to thank Yoshinobu Kuramashi, Yoshifumi Nakamura, Shinji Takeda, Yuya Shimizu, Yusuke Yoshimura, Hikaru Kawauchi, and Ryo Sakai for valuable comments on TNR formulations, which are closely related to this study. D.K. also thanks Naoya Ukita for encouraging our study. This work is supported by JSPS KAKENHI Grant Number JP16K05328 and the MEXT-Supported Program for the Strategic Research Foundation at Private Universities Topological Science (Grant No. S1511006).
---
abstract: 'Mesh is an important and powerful type of data for 3D shapes and widely studied in the field of computer vision and computer graphics. Regarding the task of 3D shape representation, there have been extensive research efforts concentrating on how to represent 3D shapes well using volumetric grid, multi-view and point cloud. However, there is little effort on using mesh data in recent years, due to the complexity and irregularity of mesh data. In this paper, we propose a mesh neural network, named MeshNet, to learn 3D shape representation from mesh data. In this method, face-unit and feature splitting are introduced, and a general architecture with available and effective blocks is proposed. In this way, MeshNet is able to solve the complexity and irregularity problem of mesh and conduct 3D shape representation well. We have applied the proposed MeshNet method in the applications of 3D shape classification and retrieval. Experimental results and comparisons with the state-of-the-art methods demonstrate that the proposed MeshNet can achieve satisfying 3D shape classification and retrieval performance, which indicates the effectiveness of the proposed method on 3D shape representation.'
author:
- |
Yutong Feng,^1^ Yifan Feng,^2^ Haoxuan You,^1^ Xibin Zhao^1^[^1], Yue Gao^1^\
^1^BNRist, KLISS, School of Software, Tsinghua University, China.\
^2^School of Information Science and Engineering, Xiamen University\
{feng-yt15, zxb, gaoyue}@tsinghua.edu.cn, {evanfeng97, haoxuanyou}@gmail.com\
title: 'MeshNet: Mesh Neural Network for 3D Shape Representation'
---
Introduction
============
Three-dimensional (3D) shape representation is one of the most fundamental topics in the field of computer vision and computer graphics. In recent years, with the increasing applications of 3D shapes, extensive efforts [@wu20153d; @chang2015shapenet] have been concentrated on 3D shape representation, and proposed methods are successfully applied to different tasks, such as classification and retrieval.
For 3D shapes, there are several popular types of data, including volumetric grid, multi-view, point cloud and mesh. With the success of deep learning methods in computer vision, many neural network methods have been introduced to conduct 3D shape representation using volumetric grid [@wu20153d; @maturana2015voxnet], multi-view [@su2015multi] and point cloud [@qi2017pointnet]. PointNet [@qi2017pointnet] proposes to learn on point cloud directly and solves the disorder problem with per-point Multi-Layer-Perceptron (MLP) and a symmetry function. As shown in Figure 1, although there have been recent successful methods using the types of volumetric grid, multi-view and point cloud, for the mesh data, there are only early methods using handcraft features directly, such as the Spherical Harmonic descriptor (SPH) [@kazhdan2003rotation], which limits the applications of mesh data.
![**The developing history of 3D shape representation using different types of data.** The X-axis indicates the proposed time of each method, and the Y-axis indicates the classification accuracy.[]{data-label="fig:intro"}](history2.pdf){width="3.6in"}
{width="6.5in"}
Mesh data of 3D shapes is a collection of vertices, edges and faces, which is dominantly used in computer graphics for rendering and storing 3D models. Mesh data has the properties of complexity and irregularity. The complexity problem is that mesh consists of multiple elements, and different types of connections may be defined among them. The irregularity is another challenge for mesh data processing, which indicates that the number of elements in mesh may vary dramatically among 3D shapes, and permutations of them are arbitrary. In spite of these problems, mesh has stronger ability for 3D shape description than other types of data. Under such circumstances, how to effectively represent 3D shapes using mesh data is an urgent and challenging task.
In this paper, we present a mesh neural network, named MeshNet, that learns on mesh data directly for 3D shape representation. To deal with the challenges in mesh data processing, the faces are regarded as the unit and connections between faces sharing common edges are defined, which enables us to solve the complexity and irregularity problem with per-face processes and a symmetry function. Moreover, the feature of faces is split into spatial and structural features. Based on these ideas, we design the network architecture, with two blocks named spatial and structural descriptors for learning the initial features, and a mesh convolution block for aggregating neighboring features. In this way, the proposed method is able to solve the complexity and irregularity problem of mesh and represent 3D shapes well.
We apply our MeshNet method in the tasks of 3D shape classification and retrieval on the ModelNet40 [@wu20153d] dataset. The experimental results show that MeshNet achieves significant improvement in 3D shape classification and retrieval using mesh data and comparable performance with recent methods using other types of 3D data.
The key contributions of our work are as follows:
- We propose a neural network using mesh for 3D shape representation and design blocks for capturing and aggregating features of polygon faces in 3D shapes.
- We conduct extensive experiments to evaluate the performance of the proposed method, and the experimental results show that the proposed method performs well on the 3D shape classification and retrieval task.
Related Work
============
Mesh Feature Extraction
-----------------------
There are plenty of handcraft descriptors that extract features from mesh. @lien1984symbolic calculate moments of each tetrahedron in mesh [@lien1984symbolic], and @zhang2001efficient develop more functions applied to each triangle and add all the resulting values as features [@zhang2001efficient]. @hubeli2001multiresolution extend the features of surfaces to a multiresolution setting to solve the unstructured problem of mesh data [@hubeli2001multiresolution]. In SPH [@kazhdan2003rotation], a rotation invariant representation is presented with existing orientation dependent descriptors. Mesh difference of Gaussians (DOG) [@zaharescu2009surface] introduces Gaussian filtering to shape functions. The intrinsic shape context (ISC) descriptor [@kokkinos2012intrinsic] develops a generalization to surfaces and solves the problem of orientational ambiguity.
Deep Learning Methods for 3D Shape Representation
-------------------------------------------------
With the construction of large-scale 3D model datasets, numerous deep descriptors of 3D shapes are proposed. Based on different types of data, these methods can be categorized into four types.
*Voxel-based method.* 3DShapeNets [@wu20153d] and VoxNet [@maturana2015voxnet] propose to learn on volumetric grids, which partition the space into regular cubes. However, they introduce extra computation cost due to the sparsity of data, which restricts them from being applied to more complex data. Field probing neural networks (FPNN) [@li2016fpnn], Vote3D [@wang2015voting] and Octree-based convolutional neural network (OCNN) [@wang2017cnn] address the sparsity problem, while they are still restricted as input gets larger.
*View-based method.* Using 2D images of 3D shapes to represent them is proposed by Multi-view convolutional neural networks (MVCNN) [@su2015multi], which aggregates 2D views from a loop around the object and applies a 2D deep learning framework to them. Group-view convolutional neural networks (GVCNN) [@feng2018gvcnn] proposes a hierarchical framework, which divides views into different groups with different weights to generate a more discriminative descriptor for a 3D shape. This type of method also adds considerable computation cost and is hard to apply to tasks in larger scenes.
*Point-based method.* Due to the irregularity of data, point cloud is not suitable for previous frameworks. PointNet [@qi2017pointnet] solves this problem with per-point processes and a symmetry function, while it ignores the local information of points. PointNet++ [@qi2017pointnet++] adds aggregation with neighbors to solve this problem. Self-organizing network (SO-Net) [@li2018so], kernel correlation network (KCNet) [@shen2018mining] and PointSIFT [@jiang2018pointsift] develop more detailed approaches for capturing local structures with nearest neighbors. Kd-Net [@klokov2017escape] proposes another approach to solve the irregularity problem using the k-d tree.
*Fusion method.* These methods learn on multiple types of data and fuse their features together. FusionNet [@hegde2016fusionnet] uses the volumetric grid and multi-view for classification. Point-view network (PVNet) [@you2018pvnet] proposes the embedding attention fusion to exploit both point cloud data and multi-view data.
Method
======
In this section, we present the design of MeshNet. Firstly, we analyze the properties of mesh, propose the methods for designing the network and reorganize the input data. We then introduce the overall architecture of MeshNet and some blocks for capturing features of faces and aggregating them with neighbor information, which are then discussed in detail.
Overall Design of MeshNet
-------------------------
We first introduce the mesh data and analyze its properties. Mesh data of 3D shapes is a collection of vertices, edges and faces, in which vertices are connected with edges and closed sets of edges form faces. In this paper, we only consider triangular faces. Mesh data is dominantly used for storing and rendering 3D models in computer graphics, because it provides an approximation of the smooth surfaces of objects and simplifies the rendering process. Numerous studies on 3D shapes in the fields of computer graphics and geometric modeling are based on mesh.
![**Initial values of each face.** There are four types of initial values, divided into two parts: center, corner and normal are the face information, and neighbor index is the neighbor information.[]{data-label="fig:input"}](input_data3.pdf){width="3.2in"}
Mesh data shows a stronger ability to describe 3D shapes compared with other popular types of data. Volumetric grid and multi-view are data types defined to avoid the irregularity of native data such as mesh and point cloud, while they lose some natural information of the original object. For point cloud, there may be ambiguity caused by random sampling, and the ambiguity is more obvious with a smaller number of points. In contrast, mesh is clearer and loses less natural information. Besides, when capturing local structures, most methods based on point cloud collect the nearest neighbors to approximately construct an adjacency matrix for further processing, while in mesh there are explicit connection relationships to show the local structure clearly. However, mesh data is also more irregular and complex due to its multiple compositions and varying numbers of elements.
To get full use of the advantages of mesh and solve the problem of its irregularity and complexity, we propose two key ideas of design:
- **Regard face as the unit.** Mesh data consists of multiple elements, and connections may be defined among them. To simplify the data organization, we regard face as the only unit and define a connection between two faces if they share a common edge. There are several advantages of this simplification. First, one triangular face can connect with no more than three faces, which makes the connection relationship regular and easy to use. More importantly, we can solve the disorder problem with per-face processes and a symmetry function, similar to PointNet [@qi2017pointnet]. And intuitively, face also contains more information than vertex and edge.
- **Split feature of face.** Though the above simplification enables us to consume mesh data similar to point-based methods, there are still some differences between point-unit and face-unit because face contains more information than point. We only need to know “where you are" for a point, while we also want to know “what you look like" for a face. Correspondingly, we split the feature of faces into **spatial feature** and **structural feature**, which helps us to capture features more explicitly.
Following the above ideas, we transform the mesh data into a list of faces. For each face, we define the initial values of it, which are divided into two parts (illustrated in Fig \[fig:input\]):
- Face Information:
- **Center**: coordinate of the center point
- **Corner**: vectors from the center point to three vertices
- **Normal**: the unit normal vector
- Neighbor Information:
- **Neighbor Index**: indexes of the connected faces (filled with the index of itself if the face connects with less than three faces)
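The face information above can be computed directly from a vertex/face array representation of a triangular mesh. The NumPy sketch below is illustrative only; it omits the neighbor index, which additionally requires matching faces that share an edge.

```python
import numpy as np

def face_initial_values(vertices, faces):
    """Per-face inputs for a triangular mesh: center, three corner
    vectors (center -> vertex), and the unit normal.
    `vertices` is (V, 3) coordinates; `faces` is (F, 3) vertex indices."""
    tri = vertices[faces]                      # (F, 3, 3): vertices per face
    center = tri.mean(axis=1)                  # (F, 3)
    corner = tri - center[:, None, :]          # (F, 3, 3): center -> vertices
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normal = n / np.linalg.norm(n, axis=1, keepdims=True)
    return center, corner, normal

# A single unit right triangle in the z = 0 plane
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
c, v, nrm = face_initial_values(verts, faces)
```

By construction, the three corner vectors of each face sum to zero, since they are measured from the face centroid.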
At the end of this section, we present the overall architecture of MeshNet, illustrated in Fig \[fig:pipeline\]. A list of faces with initial values is fed into two blocks, named **spatial descriptor** and **structural descriptor**, to generate the initial spatial and structural features of faces. The features are then passed through some **mesh convolution** blocks to aggregate neighboring information; these blocks take the two types of features as input and output new features of both. It is noted that all the processes above work on each face respectively and share the same parameters. After these processes, a pooling function is applied to the features of all faces to generate the global feature, which is used for further tasks. The above blocks will be discussed in the following sections.
Spatial and Structural Descriptors
----------------------------------
We split feature of faces into spatial feature and structural feature. The spatial feature is expected to be relevant to the spatial position of faces, and the structural feature is relevant to the shape information and local structures. In this section, we present the design of two blocks, named spatial and structural descriptors, for generating initial spatial and structural features.
#### Spatial descriptor
The only input value relevant to spatial position is the center value. In this block, we simply apply a shared MLP to each face’s center, similar to the methods based on point cloud, and output initial spatial feature.
#### Structural descriptor: face rotate convolution
We propose two types of structural descriptor. The first one is named face rotate convolution, which captures the “inner” structure of faces and focuses on the shape information of faces. The input of this block is the corner value.
The operation of this block is illustrated in Fig \[fig:rc\]. Suppose the corner vectors of a face are $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$, we define the output value of this block as follows: $$g(\frac{1}{3}(f(\mathbf{v}_1, \mathbf{v}_2) + f(\mathbf{v}_2, \mathbf{v}_3) + f(\mathbf{v}_3, \mathbf{v}_1))),$$ where $f(\cdot, \cdot) : \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow \mathbb{R}^{K_1}$ and $g(\cdot) : \mathbb{R}^{K_1} \rightarrow \mathbb{R}^{K_2}$ are any valid functions.
This process is similar to a convolution operation, with two vectors as the kernel size, one vector as the stride and $K_1$ as the number of kernels, except that translation of kernels is replaced by rotation. The kernels, represented by $f(\cdot, \cdot)$, rotates through the face and works on two vectors each time.
With the above process, we eliminate the influence caused by the order of processing corners, avoid individually considering each corner and also leave full space for mining features inside faces. After the rotate convolution, we apply an average pooling and a shared MLP as $g(\cdot)$ to each face, and output features with the length of $K_2$.
![**The face rotate convolution block.** Kernels rotate through the face and are applied to pairs of corner vectors for the convolution operation.[]{data-label="fig:rc"}](rc.pdf){width="3.0in"}
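A minimal NumPy sketch of the face rotate convolution follows. Here $f$ and $g$ are each stood in by a single linear layer with ReLU (the paper implements them as shared MLPs), and `rolled` checks the key property: a cyclic reordering of the corners leaves the output unchanged, since the same three pairs are averaged.

```python
import numpy as np

def face_rotate_conv(corners, W_f, W_g):
    """Face rotate convolution sketch: a shared map f is applied to the
    pairs (v1,v2), (v2,v3), (v3,v1), the results are averaged, then g
    maps to the output size.
    corners: (F, 3, 3); W_f: (6, K1); W_g: (K1, K2)."""
    v1, v2, v3 = corners[:, 0], corners[:, 1], corners[:, 2]
    pairs = [np.concatenate(p, axis=1)             # each (F, 6)
             for p in [(v1, v2), (v2, v3), (v3, v1)]]
    f_out = np.stack([np.maximum(p @ W_f, 0) for p in pairs])  # f = linear+ReLU
    pooled = f_out.mean(axis=0)                    # average over the 3 pairs
    return np.maximum(pooled @ W_g, 0)             # g = linear+ReLU

rng = np.random.default_rng(2)
corners = rng.normal(size=(5, 3, 3))               # 5 faces
W_f, W_g = rng.normal(size=(6, 32)), rng.normal(size=(32, 64))
out = face_rotate_conv(corners, W_f, W_g)
# Cyclic rotation of the corner order should not change the output
rolled = face_rotate_conv(corners[:, [1, 2, 0]], W_f, W_g)
```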
#### Structural descriptor: face kernel correlation
Another structural descriptor we design is the face kernel correlation, aiming to capture the “outer” structure of faces and explore the environments where faces locate. The method is inspired by KCNet [@shen2018mining], which uses kernel correlation (KC) [@tsin2004correlation] for mining local structures in point clouds. KCNet learns kernels representing different spatial distributions of point sets, and measures the geometric affinities between kernels and neighboring points for each point to indicate the local structure. However, this method is restricted by the ambiguity of point cloud and may achieve better performance on mesh.
In our face kernel correlation, we select the normal values of each face and its neighbors as the source, and learnable sets of vectors as the reference kernels. Since all the normals we use are unit vectors, we model vectors of kernels with parameters in the spherical coordinate system, and parameters $(\theta, \phi)$ represent the unit vector $(x, y, z)$ in the Euclidean coordinate system: $$\left\{
\begin{array}{lr}
x=\sin\theta\cos\phi \\
y=\sin\theta\sin\phi \\
z=\cos\theta
\end{array},
\right.$$ where $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi)$.
We define the kernel correlation between the i-th face and the k-th kernel as follows: $$KC(i, k) = \frac{1}{|\mathcal{N}_i||\mathcal{M}_k|}\sum\limits_{\mathbf{n} \in \mathcal{N}_i}\sum\limits_{\mathbf{m} \in \mathcal{M}_k}K_{\sigma}(\mathbf{n}, \mathbf{m}),$$ where $\mathcal{N}_i$ is the set of normals of the i-th face and its neighbor faces, $\mathcal{M}_k$ is the set of normals in the k-th kernel, and $K_{\sigma}(\cdot,\cdot)\ : \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow \mathbb{R}$ is the kernel function indicating the affinity between two vectors. In this paper, we generally choose the Gaussian kernel: $$K_{\sigma}(\mathbf{n}, \mathbf{m}) = \exp(-\frac{\left\| \mathbf{n} - \mathbf{m}\right \|^2}{2\sigma^2}),$$ where $\left\|\cdot\right\|$ is the length of a vector in the Euclidean space, and $\sigma$ is the hyper-parameter that controls the kernels’ resolving ability or tolerance to the varying of sources.
With the above definition, we calculate the kernel correlation between each face and kernel, and more similar pairs will get higher values. Since the parameters of kernels are learnable, they will turn to some common distributions on the surfaces of 3D shapes and be able to describe the local structures of faces. We set the value of $KC(i,k)$ as the k-th feature of the i-th face. Therefore, with $M$ learnable kernels, we generate features with the length of $M$ for each face.
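The computation above can be sketched in NumPy as follows. The kernel parameters are random here rather than learned, and the shapes are kept tiny for illustration; the toy faces are given normal sets identical to the first kernel, so their correlation with that kernel is high by construction.

```python
import numpy as np

def sph_to_unit(theta, phi):
    """(theta, phi) kernel parameters -> unit vectors, as in eq. (2)."""
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def face_kernel_correlation(normals, kernels, sigma=0.2):
    """KC(i, k): mean Gaussian affinity between the normal set of face i
    (own + neighbors, shape (F, N, 3)) and the unit-vector set of
    kernel k (shape (M, L, 3)), as in eqs. (3)-(4)."""
    # pairwise squared distances, shape (F, N, M, L)
    d2 = ((normals[:, :, None, None, :] -
           kernels[None, None, :, :, :]) ** 2).sum(-1)
    aff = np.exp(-d2 / (2.0 * sigma ** 2))
    return aff.mean(axis=(1, 3))          # average over source and kernel sets

rng = np.random.default_rng(3)
theta = rng.uniform(0, np.pi, size=(4, 2))       # M=4 kernels, L=2 vectors each
phi = rng.uniform(0, 2 * np.pi, size=(4, 2))
kernels = sph_to_unit(theta, phi)
normals = kernels[0][None].repeat(3, axis=0)     # 3 faces matching kernel 0
kc = face_kernel_correlation(normals, kernels)
```

In training, `theta` and `phi` would be the learnable parameters updated by backpropagation.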
Mesh Convolution
----------------
The mesh convolution block is designed to expand the receptive field of faces, which denotes the number of faces perceived by each face, by aggregating information of neighboring faces. In this process, features related to spatial positions should not be included directly, because we focus on faces in a local area and should not be influenced by where the area locates. In 2D convolutional neural networks, neither the convolution nor the pooling operation introduces any positional information directly while aggregating with neighboring pixels’ features. Since we have taken out the structural feature, which is irrelevant to positions, we only aggregate structural features in this block.
Aggregation of structural feature enables us to capture structures of wider field around each face. Furthermore, to get more comprehensive feature, we also combine the spatial and structural feature together in this block. The mesh convolution block contains two parts: **combination of spatial and structural features** and **aggregation of structural feature**, which respectively output the new spatial and structural features. Fig \[fig:meshconv\] illustrates the design of this block.
![**The mesh convolution.** “Combination” donates the combination of spatial and structural feature. “Aggregation” denotes the aggregation of structural feature, in which “Gathering” denotes the process of getting neighbors’ features and “Aggregation for one face” denotes different methods of aggregating features.[]{data-label="fig:meshconv"}](meshconv2.pdf){width="3.5in"}
#### Combination of Spatial and Structural Features
We use one of the most common methods of combining two types of features, which concatenates them together and applies a MLP. As we have mentioned, the combination result, as the new spatial feature, is actually more comprehensive and contains both spatial and structural information. Therefore, in the pipeline of our network, we concatenate all the combination results for generating global feature.
#### Aggregation of Structural Feature
With the input structural feature and neighbor index, we aggregate the feature of a face with features of its connected faces. Several aggregation methods are listed and discussed as follows:
- **Average pooling:** The average pooling may be the most common aggregation method, which simply calculates the average value of features in each channel. However, this method sometimes weakens the strongly-activated areas of the original features and reduce the distinction of them.
- **Max pooling:** Another pooling method is the max pooling, which calculates the max value of features in each channel. Max pooling is widely used in 2D and 3D deep learning frameworks for its advantage of maintaining the strong activation of neurons.
- **Concatenation:** We define a concatenation aggregation, which concatenates the feature of a face with the feature of each of its neighbors in turn, passes these pairs through a shared MLP, and applies max pooling to the results. This method both keeps the original activation and leaves room for the network to combine neighboring features.

We adopt the concatenation method in this paper. After aggregation, another MLP is applied to further fuse the neighboring features and output the new structural feature.
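The concatenation aggregation above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper's implementation: the shared MLP is reduced to a single linear layer with ReLU, and the function name and shapes are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def concat_aggregate(feats, neighbors, W):
    """Concatenation aggregation for one layer (illustrative sketch).

    feats:     (n, c)  per-face structural features
    neighbors: (n, 3)  indices of the three connected faces
    W:         (2c, c_out) weights of a shared single-layer "MLP"
    """
    # Pair each face with each of its neighbors: (n, 3, 2c)
    pairs = np.concatenate(
        [np.repeat(feats[:, None, :], 3, axis=1), feats[neighbors]], axis=-1
    )
    # Shared MLP (here a single linear layer + ReLU), then max over neighbors
    hidden = np.maximum(pairs @ W, 0.0)   # (n, 3, c_out)
    return hidden.max(axis=1)             # (n, c_out)

# Tiny example: 4 mutually adjacent faces with 8-channel features
feats = rng.standard_normal((4, 8))
neighbors = np.array([[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]])
W = rng.standard_normal((16, 32))
out = concat_aggregate(feats, neighbors, W)
print(out.shape)  # (4, 32)
```

Because the MLP is shared across all face–neighbor pairs and the final max is permutation-invariant, the result does not depend on the order in which neighbors are listed.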
Implementation Details
----------------------
Now we present the details of implementing MeshNet, illustrated in Fig 2, including the settings of hyper-parameters and some details of the overall architecture.
The spatial descriptor contains fully-connected layers (64, 64) and outputs an initial spatial feature of length 64. Parameters inside parentheses indicate the dimensions of layers other than the input layer. In the face rotate convolution, we set $K_1=32$ and $K_2 = 64$; correspondingly, the functions $f(\cdot, \cdot)$ and $g(\cdot)$ are implemented by fully-connected layers (32, 32) and (64, 64). In the face kernel correlation, we set $M = 64$ (64 kernels) and $\sigma = 0.2$.

We parameterize the mesh convolution block with a four-tuple $(in_1, in_2, out_1, out_2)$, where $in_1$ and $out_1$ indicate the input and output channels of the spatial feature, and $in_2$ and $out_2$ indicate the same for the structural feature. The two mesh convolution blocks used in the pipeline of MeshNet are configured as $(64, 131, 256, 256)$ and $(256, 256, 512, 512)$.
Experiments
===========
In the experiments, we first apply our network for 3D shape classification and retrieval. Then we conduct detailed ablation experiments to analyze the effectiveness of blocks in the architecture. We also investigate the robustness to the number of faces and the time and space complexity of our network. Finally, we visualize the structural features from the two structural descriptors.
3D Shape Classification and Retrieval
-------------------------------------
We apply our network to the ModelNet40 dataset [@wu20153d] for classification and retrieval tasks. The dataset contains 12,311 mesh models from 40 categories, of which 9,843 are for training and 2,468 for testing. For each model, we simplify the mesh to at most 1,024 faces, translate it to its geometric center, and normalize it into a unit sphere. We also compute the normal vector and the indexes of connected faces for each face. During training, we augment the data by jittering the vertex positions with Gaussian noise of zero mean and 0.01 standard deviation. Since the number of faces varies among models, we randomly pad the face list to a length of 1,024 with existing faces to enable batch training.
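The preprocessing steps above can be sketched as follows. This is a minimal NumPy version under our own assumptions (the geometric center is taken as the vertex mean, and the function name is illustrative); it is not the authors' released pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(vertices, faces, max_faces=1024, jitter_std=0.01, train=True):
    """Center, normalize to the unit sphere, jitter during training,
    and pad the face list to a fixed length by repeating existing faces."""
    v = vertices - vertices.mean(axis=0)              # translate to center
    v = v / np.linalg.norm(v, axis=1).max()           # fit into the unit sphere
    if train:
        v = v + rng.normal(0.0, jitter_std, v.shape)  # Gaussian jitter
    if len(faces) < max_faces:
        extra = rng.choice(len(faces), max_faces - len(faces))
        faces = np.concatenate([faces, faces[extra]])  # repeat existing faces
    return v, faces

verts = rng.standard_normal((100, 3)) * 5.0
faces = rng.integers(0, 100, (200, 3))
v, f = preprocess(verts, faces, train=False)
print(len(f), round(float(np.linalg.norm(v, axis=1).max()), 6))  # 1024 1.0
```

Padding with repeated faces keeps every batch entry at the same length without introducing geometry that is absent from the model.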
| Method                         | Modality | Acc (%) | mAP (%) |
|--------------------------------|----------|---------|---------|
| 3DShapeNets [@wu20153d]        | volume   | 77.3    | 49.2    |
| VoxNet [@maturana2015voxnet]   | volume   | 83.0    | -       |
| FPNN [@li2016fpnn]             | volume   | 88.4    | -       |
| LFD [@chen2003visual]          | view     | 75.5    | 40.9    |
| MVCNN [@su2015multi]           | view     | 90.1    | 79.5    |
| Pairwise [@johns2016pairwise]  | view     | 90.7    | -       |
| PointNet [@qi2017pointnet]     | point    | 89.2    | -       |
| PointNet++ [@qi2017pointnet++] | point    | 90.7    | -       |
| Kd-Net [@klokov2017escape]     | point    | 91.8    | -       |
| KCNet [@shen2018mining]        | point    | 91.0    | -       |
| SPH [@kazhdan2003rotation]     | mesh     | 68.2    | 33.3    |
| MeshNet                        | mesh     | 91.9    | 81.9    |
: Classification and retrieval results on ModelNet40.[]{data-label="tab:application"}
For classification, we apply fully-connected layers (512, 256, 40) to the global feature as the classifier, and add dropout layers with drop probability 0.5 before the last two fully-connected layers. For retrieval, we calculate the L2 distances between global features as similarities and evaluate the result with mean average precision (mAP). We use the SGD optimizer for training, with initial learning rate 0.01, momentum 0.9, weight decay 0.0005 and batch size 64.
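The retrieval protocol described above — ranking by L2 distance between global features and scoring with mAP — can be sketched as follows. The function names and the toy features are illustrative, not from the paper.

```python
import numpy as np

def retrieval_ranking(features, query_idx):
    """Rank all gallery entries by L2 distance to the query feature."""
    d = np.linalg.norm(features - features[query_idx], axis=1)
    order = np.argsort(d)
    return order[order != query_idx]  # exclude the query itself

def average_precision(ranked_labels, query_label):
    """AP for one query; mAP is the mean of AP over all queries."""
    hits, precisions = 0, []
    for rank, lbl in enumerate(ranked_labels, start=1):
        if lbl == query_label:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

# Toy gallery: two well-separated classes in feature space
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
order = retrieval_ranking(feats, 0)
print(average_precision(labels[order], labels[0]))  # 1.0
```

Since the global features are compared directly, no re-training is needed for retrieval; the classifier and the retrieval head share the same representation.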
---------------- -------------- -------------- -------------- -------------- -------------- --------------
Spatial $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$
Structural-FRC $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$
Structural-FKC $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$
Mesh Conv $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$
Accuracy (%) 83.5 88.2 87.0 89.9 90.4 91.9
---------------- -------------- -------------- -------------- -------------- -------------- --------------
: Classification results of ablation experiments on ModelNet40.[]{data-label="tab:ablation"}
Aggregation Method Accuracy (%)
-------------------- --------------
Average Pooling 90.7
Max Pooling 91.1
Concatenation 91.9
: Classification results of different aggregation methods on ModelNet40.[]{data-label="tab:aggregation"}
Table \[tab:application\] shows the classification and retrieval results on ModelNet40, comparing our work with representative methods. As a mesh-based representation, MeshNet achieves satisfying performance and a significant improvement over traditional mesh-based methods. It is also comparable with recent deep learning methods based on other types of data.

Our performance is attributable to the following reasons. With face units and per-face processing, MeshNet overcomes the complexity and irregularity of mesh data and makes it suitable for deep learning. Rather than simply applying deep learning, we design blocks to make full use of the rich information in meshes. Splitting features into spatial and structural features lets us consider the spatial distribution and the local structure of shapes separately, and the mesh convolution blocks widen the receptive field of faces. The proposed method is therefore able to capture detailed per-face features and represent 3D shapes well.
On the Effectiveness of Blocks
------------------------------
To analyze the design of the blocks in our architecture and demonstrate their effectiveness, we conduct several ablation experiments, comparing classification results while varying the architecture settings or removing individual blocks.

For the spatial descriptor, labeled “Spatial” in Table \[tab:ablation\], we remove it together with all use of the spatial feature in the network, and keep only the aggregation of the structural feature in the mesh convolution.

For the structural descriptor, we first remove it entirely and use max pooling to aggregate the spatial feature in the mesh convolution. We then remove only the face rotate convolution, labeled “Structural-FRC” in Table \[tab:ablation\], or only the face kernel correlation, labeled “Structural-FKC”, keeping the rest of the pipeline, to demonstrate the effectiveness of each structural descriptor.

For the mesh convolution, labeled “Mesh Conv” in Table \[tab:ablation\], we remove it and use the initial features to generate the global feature directly. We also explore the effectiveness of different aggregation methods in this block and compare them in Table \[tab:aggregation\]. The results show that the adopted concatenation method performs best for aggregating neighboring features.
On the Number of Faces
----------------------
The number of faces in ModelNet40 varies dramatically among models. To explore the robustness of MeshNet to the number of faces, we regroup the test data by face count with an interval of 200. In Table \[tab:facenum\], we list the proportion of models in each group together with the classification results. The accuracy is largely insensitive to the number of faces and shows no downward trend as the count decreases, demonstrating the robustness of MeshNet in this respect. Notably, on the 9 models with fewer than 50 faces (the minimum is 10), our network achieves 100% classification accuracy, showing its ability to represent models with extremely few faces.
Number of Faces Proportion (%) Accuracy (%)
----------------- ---------------- --------------
$[ 1000, 1024)$ 69.48 92.00
$[800, 1000)$ 6.90 92.35
$[600, 800)$ 4.70 93.10
$[400, 600)$ 6.90 91.76
$[200, 400)$ 6.17 90.79
$[0, 200)$ 5.84 90.97
: Classification results of groups with different number of faces on ModelNet40.[]{data-label="tab:facenum"}
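The regrouping used for Table \[tab:facenum\] amounts to binning per-model results by face count. A small sketch of that bookkeeping, with an illustrative function name and toy data:

```python
def accuracy_by_face_count(n_faces, correct, interval=200):
    """Group per-model classification results by face count and
    report accuracy per [lo, lo + interval) bin."""
    bins = {}
    for n, ok in zip(n_faces, correct):
        lo = (n // interval) * interval
        bins.setdefault(lo, []).append(ok)
    return {(lo, lo + interval): sum(v) / len(v) for lo, v in sorted(bins.items())}

# Toy data: face count and whether each model was classified correctly
n_faces = [10, 150, 300, 350, 900, 1020]
correct = [1, 1, 1, 0, 1, 1]
print(accuracy_by_face_count(n_faces, correct))
# {(0, 200): 1.0, (200, 400): 0.5, (800, 1000): 1.0, (1000, 1200): 1.0}
```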
On the Time and Space Complexity
--------------------------------
Table \[tab:timespace\] compares the time and space complexity of our network with several representative methods based on other types of data for the classification task. The column labeled “\#params” shows the total number of parameters in the network and the column labeled “FLOPs/sample” shows the number of floating-point operations per input sample; these represent the space and time complexity, respectively.

Methods based on volumetric grids and multiple views are known to introduce extra computation cost, while methods based on point clouds are more efficient. Our method works with per-face processing and thus has complexity linear in the number of faces. As Table \[tab:timespace\] shows, MeshNet is comparable to point-cloud-based methods in both time and space complexity, leaving room for further development.
| Method                        | \#params (M) | FLOPs/sample (M) |
|-------------------------------|--------------|------------------|
| PointNet [@qi2017pointnet]    | 3.5          | 440              |
| Subvolume [@qi2016volumetric] | 16.6         | 3633             |
| MVCNN [@su2015multi]          | 60.0         | 62057            |
| MeshNet                       | 4.25         | 509              |
: Time and space complexity for classification.[]{data-label="tab:timespace"}
![**Feature visualization of structural feature.** Models from the same column are colored with their values of the same channel in features. **Left**: Features from the face rotate convolution. **Right**: Features from the face kernel correlation.[]{data-label="fig:vis"}](vis1.pdf){width="3.3in"}
Feature Visualization
---------------------
To verify that the structural descriptors capture the features of faces as expected, we visualize the two types of structural features from the face rotate convolution and the face kernel correlation. We randomly choose several channels of these features, and for each channel we color faces with an intensity corresponding to their value in that channel.

The left of Fig \[fig:vis\] visualizes features from the face rotate convolution, which is expected to capture the “inner” features of faces, i.e., their shapes. Faces with a similar appearance are clearly colored similarly, and different channels may be activated by different types of triangular faces.

The visualization of features from the face kernel correlation is on the right of Fig \[fig:vis\]. As mentioned, this descriptor captures the “outer” features of each face and relates to the overall appearance of the area in which the face lies. In the visualization, faces in similar types of areas, such as flat surfaces and steep slopes, tend to have similar features, regardless of their own shapes and sizes.
Conclusions
===========
In this paper, we propose a mesh neural network, named MeshNet, which learns directly on mesh data for 3D shape representation. The proposed method solves the complexity and irregularity problems of mesh data and represents 3D shapes well. In this method, the polygon faces are treated as the unit, and their features are split into spatial and structural features. We also design blocks for capturing and aggregating face features. We conduct experiments on 3D shape classification and retrieval and compare our method with state-of-the-art methods. The experimental results and comparisons demonstrate the effectiveness of the proposed method for 3D shape representation. In the future, the network can be further developed for more computer vision tasks.
Acknowledgments
===============
This work was supported by National Key R&D Program of China (Grant No. 2017YFC0113000), National Natural Science Funds of China (U1701262, 61671267), National Science and Technology Major Project (No. 2016ZX01038101), MIIT IT funds (Research and application of TCN key technologies) of China, and The National Key Technology R&D Program (No. 2015BAG14B01-02).
Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. 2015. . .
Chen, D.-Y.; Tian, X.-P.; Shen, Y.-T.; and Ouhyoung, M. 2003. . In [*Computer Graphics Forum*]{}, volume 22, 223–232. Wiley Online Library.
Feng, Y.; Zhang, Z.; Zhao, X.; Ji, R.; and Gao, Y. 2018. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 264–272.
Hegde, V., and Zadeh, R. 2016. . .
Hubeli, A., and Gross, M. 2001. . In [*Proceedings of the Conference on Visualization*]{}, 287–294. IEEE Computer Society.
Jiang, M.; Wu, Y.; and Lu, C. 2018. . .
Johns, E.; Leutenegger, S.; and Davison, A. J. 2016. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 3813–3822.
Kazhdan, M.; Funkhouser, T.; and Rusinkiewicz, S. 2003. . In [*Symposium on Geometry Processing*]{}, volume 6, 156–164.
Klokov, R., and Lempitsky, V. 2017. . In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, 863–872. IEEE.
Kokkinos, I.; Bronstein, M. M.; Litman, R.; and Bronstein, A. M. 2012. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 159–166. IEEE.
Li, Y.; Pirk, S.; Su, H.; Qi, C. R.; and Guibas, L. J. 2016. . In [*Advances in Neural Information Processing Systems*]{}, 307–315.
Li, J.; Chen, B. M.; and Lee, G. H. 2018. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 9397–9406.
Lien, S.-l., and Kajiya, J. T. 1984. . 4(10):35–42.
Maturana, D., and Scherer, S. 2015. . In [*2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*]{}, 922–928. IEEE.
Qi, C. R.; Su, H.; Nie[ß]{}ner, M.; Dai, A.; Yan, M.; and Guibas, L. J. 2016. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 5648–5656.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 77–85.
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. . In [*Advances in Neural Information Processing Systems*]{}, 5099–5108.
Shen, Y.; Feng, C.; Yang, Y.; and Tian, D. 2018. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, volume 4.
Su, H.; Maji, S.; Kalogerakis, E.; and Learned-Miller, E. 2015. . In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, 945–953.
Tsin, Y., and Kanade, T. 2004. . In [*European Conference on Computer Vision*]{}, 558–569. Springer.
Wang, D. Z., and Posner, I. 2015. In [*Robotics: Science and Systems*]{}, volume 1.
Wang, P.-S.; Liu, Y.; Guo, Y.-X.; Sun, C.-Y.; and Tong, X. 2017. . 36(4):72.
Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 1912–1920.
You, H.; Feng, Y.; Ji, R.; and Gao, Y. 2018. . In [*Proceedings of the 26th ACM International Conference on Multimedia*]{}, 1310–1318. ACM.
Zaharescu, A.; Boyer, E.; Varanasi, K.; and Horaud, R. 2009. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 373–380. IEEE.
Zhang, C., and Chen, T. 2001. . In [*Proceedings of the IEEE International Conference on Image Processing*]{}, volume 3, 935–938. IEEE.
[^1]: Corresponding authors
---
abstract: |
From formal and practical analysis, we identify new challenges that self-adaptive systems pose to the process of quality assurance. When tackling these, the effort spent on various tasks in the process of software engineering is naturally re-distributed. We claim that all steps related to testing need to become self-adaptive to match the capabilities of the self-adaptive system-under-test. Otherwise, the adaptive system’s behavior might elude traditional variants of quality assurance. We thus propose the paradigm of scenario coevolution, which describes a pool of test cases and other constraints on system behavior that evolves in parallel to the (in part autonomous) development of behavior in the system-under-test. Scenario coevolution offers a simple structure for the organization of adaptive testing that allows for both human-controlled and autonomous intervention, supporting software engineering for adaptive systems on a procedural as well as technical level.
keywords: self-adaptive system, software engineering, quality assurance, software evolution
author:
- Thomas Gabor$^1$
- Marie Kiermeier$^1$
- Andreas Sedlmeier$^1$
- Bernhard Kempter$^2$
- Cornel Klein$^2$
- Horst Sauer$^2$
- Reiner Schmid$^2$
- Jan Wieghardt$^2$
bibliography:
- 'references.bib'
title: |
Adapting Quality Assurance\
to Adaptive Systems:\
The Scenario Coevolution Paradigm
---
Introduction
============
Until recently, the discipline of software engineering has mainly tackled the process through which humans develop software systems. In the last few years, breakthroughs in the fields of artificial intelligence and machine learning have enabled possibilities that were previously considered infeasible or simply too complex to tap into with “manual” coding: complex image recognition, natural language processing, and decision making as used in complex games are prime examples. The resulting applications are pushing towards a broad audience of users. However, as of now, they are mostly focused on non-critical areas of use, at least when implemented without further human supervision. Software artifacts generated via machine learning are hard to analyze, causing a lack of trustworthiness in many important application areas.

We claim that in order to reach levels of trustworthiness comparable to well-known classical approaches, we should not simply reproduce the principles of classical software testing but need to develop a new approach to it. We suggest developing a system and its test suite in a competitive setting in which each sub-system tries to outwit the other. We call this approach *scenario coevolution* and attempt to show the necessity of such an approach. We hope that trust in this dynamic (or similar ones) can help build a new process for quality assurance, even for hardly predictable systems.
Following a top-down approach to the issue, we start in Section \[sec:formal\] by introducing a formal framework for the description of systems. We augment it to also include the process of software and system development. Section \[sec:related-work\] provides a short overview on related work. From literature review and practical experience, we introduce four core concepts for the engineering of adaptive systems in Section \[sec:concepts\]. In order to integrate these with our formal framework, Section \[sec:scenarios\] contains an introduction of our notion of scenarios and their application to an incremental software testing process. In Section \[sec:applications\] we discuss which effect scenario coevolution has on a selection of practical software engineering tasks and how it helps implement the core concepts. Finally, Section \[sec:conclusion\] provides a short conclusion.
Formal Framework {#sec:formal}
================
In this section we introduce a formal framework as a basis for our analysis. We first build upon the framework described in [@holzl2011towards] to define adaptive systems and then proceed to reason about the influence of their inherent structure on software architecture.
Describing Adaptive Systems
---------------------------
We roughly adopt the formal definitions of our vocabulary related to the description of systems from [@holzl2011towards]: We describe a system as an arbitrary relation over a set of variables.
\[def:system\] Let $I$ be a (finite or infinite) set, and let $\mathcal{V} = (V_i)_{i \in I}$ be a family of sets. A *system* of type $\mathcal{V}$ is a relation $S$ of type $\mathcal{V}$.
Given a system $S$, an element $s \in S$ is called the state of the system. For practical purposes, we usually want to discern various parts of a system’s state space. For this reason, parts of the system relation of type $\mathcal{V}$ given by an index set $J \subseteq I$, i.e., $(V_j)_{j \in J}$, may be considered *inputs* and other parts given by a different index set may be considered *outputs* [@holzl2011towards]. Formally, this makes no difference to the system. Semantically, we usually compute the output parts of the system using the input parts.
We introduce two more designated sub-spaces of the system relation: *situation* and *behavior*. These notions correspond roughly to the intended meaning of inputs and outputs mentioned before. The situation is the part of the system state space that fully encapsulates all information the system has about its state. This may include parts that the system does have full control over (which we would consider counter-intuitive when using the notion of “input”). The behavior encapsulates the parts of the system that can only be computed by applying the system relation. Likewise, this does *not* imply that the system has full control over the values. Furthermore, a system may have an *internal state*, which is parts of the state space that are neither included in the situation nor in the behavior. When we are not interested in the internal space, we can regard a system as a mapping from situations to behavior, written $S = X \stackrel{Z}{\leadsto} Y$ for situations $X$ and behaviors $Y$, where $Z$ is the internal state of the system $S$. Using these notions, we can more aptly define some properties on systems. Further following the line of thought presented in [@holzl2011towards], we want to build systems out of other systems. At the core of software engineering, there is the principle of re-use of components, which we want to mirror in our formalism.
Let $S_1$ and $S_2$ be systems of types $\mathcal{V}_1 = (V_{1,i})_{i \in I_1}$ and $\mathcal{V}_2 = (V_{2,i})_{i \in I_2}$, respectively. Let $\mathcal{R}(\mathcal{V})$ be the domain of all relations over $\mathcal{V}$. A *combination operator* $\otimes$ is a function such that $S_1 \otimes S_2 \in \mathcal{R}(\mathcal{V})$ for some family of sets $\mathcal{V}$ with $V_{1,1}, ..., V_{1,m}, V_{2,1}, ..., V_{2,n} \in \mathcal{V}$.[^1] The application of a combination operator is called *composition*. The arguments to a combination operator are called *components*.
Composition is not only important to model software architecture within our formalism, but it also defines the formal framework for interaction: Two systems interact when they are combined using a combination operator $\otimes$ that ensures that the behavior of (at least) one system is recognized within the situation of (at least) another system.
Let $S = S_1 \otimes S_2$ be a composition of type $\mathcal{V}$ of systems $S_1$ and $S_2$ of type $\mathcal{V}_1$ and $\mathcal{V}_2$, respectively, using a combination operator $\otimes$. If there exist a $V_1 \in \mathcal{V}_1$ and a $V_2 \in \mathcal{V}_2$ and a relation $R \in V_1 \times V_2$ so that for all states $s \in S$, $(proj(s, V_1), proj(s, V_2)) \in R$, then the components $S_1$ and $S_2$ interact with respect to $R$.
We can model an open system $S$ as a combination $S = C \otimes E$ of a core system $C$ and its environment $E$, both being modeled as systems again.
Hiding some of the complexity described in [@holzl2011towards], we assume we have a logic $\mathfrak{L}$ in which we can express a system goal $\gamma$. We can always decide if $\gamma$ holds for a given system, in which case we write $S \models \gamma$ for $\gamma(S) = \top$. Based on [@holzl2011towards], we can use this concept to define an adaptation domain:
\[def:adaptation-domain\] Let $S$ be a system. Let $\mathcal{E}$ be a set of environments that can be combined with $S$ using a combination operator $\otimes$. Let $\Gamma$ be a set of goals. An *adaptation domain* $\mathcal{A}$ is a set $\mathcal{A} \subseteq \mathcal{E} \times \Gamma$. $S$ can adapt to $\mathcal{A}$, written $S \Vdash \mathcal{A}$ iff for all $(E, \gamma) \in \mathcal{A}$ it holds that $S \otimes E \models \gamma$.
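Definition \[def:adaptation-domain\] can be checked mechanically once systems, environments, the combination operator, and goals are made concrete. The toy Python sketch below is our own illustration (all names and the numeric example are assumptions, not from the paper): goals are predicates over the combined system, and $S \Vdash \mathcal{A}$ holds iff every pair in the domain is satisfied.

```python
def can_adapt(system, domain, combine):
    """S ⊩ A: the combined system S ⊗ E satisfies every goal γ
    for each (E, γ) pair in the adaptation domain A."""
    return all(goal(combine(system, env)) for env, goal in domain)

# Toy instance: a "system" is a gain factor, an environment an input
# magnitude, combination multiplies them, goals constrain the output.
system = 3.0
domain = [
    (2.0, lambda out: out >= 5.0),   # in environment 2.0, output must reach 5
    (4.0, lambda out: out >= 10.0),  # in environment 4.0, output must reach 10
]
combine = lambda s, e: s * e
print(can_adapt(system, domain, combine))  # True
```

A weaker system, e.g. a gain of 1.0, fails the first goal and hence does not adapt to this domain, which is exactly how the preorder of Definition \[def:adaptation\] separates systems.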
\[def:adaptation-space\] Let $\mathcal{E}$ be a set of environments that can be combined with $S$ using a combination operator $\otimes$. Let $\Gamma$ be a set of goals. An *adaptation space* $\mathfrak{A}$ is a set $\mathfrak{A} \subseteq \mathfrak{P}(\mathcal{E}, \Gamma)$.
We can now use the notion of an adaptation space to define a preorder on the adaptivity of any two systems.
\[def:adaptation\] Given two systems $S$ and $S'$, $S'$ is at least as adaptive as $S$, written $S \sqsubseteq S'$ iff for all adaptation spaces $\mathcal{A} \in \mathfrak{A}$ it holds that $S \Vdash \mathcal{A} \Longrightarrow S' \Vdash \mathcal{A}$.
Both Definitions \[def:adaptation-domain\] and \[def:adaptation-space\] can be augmented to include soft constraints or optimization goals. This means that in addition to checking against boolean goal satisfaction, we can also assign each system $S$ interacting with an environment $E$ a *fitness* $\phi(S \otimes E) \in F$, where $F$ is the type of fitness values. We assume that there exists a preorder $\preceq$ on $F$, which we can use to compare two fitness values. We can then generalize Definition \[def:adaptation-domain\] and \[def:adaptation-space\] to respect these optimization goals.
\[def:adaptation-domain-opt\] Let $S$ be a system. Let $\mathcal{E}$ be a set of environments that can be combined with $S$ using a combination operator $\otimes$. Let $\Gamma$ be a set of Boolean goals. Let $F$ be a set of fitness values and $\preceq$ be a preorder on $F$. Let $\Phi$ be a set of fitness functions with codomain $F$. An *adaptation domain* $\mathcal{A}$ is a set $\mathcal{A} \subseteq \mathcal{E} \times \Gamma \times \Phi$. $S$ can adapt to $\mathcal{A}$, written $S \Vdash \mathcal{A}$ iff for all $(E, \gamma, \phi) \in \mathcal{A}$ it holds that $S \otimes E \models \gamma$.
Note that in Definition \[def:adaptation-domain-opt\] we only augmented the data structure for adaptation domains but did not actually alter the condition to check for the fulfillment of an adaptation domain. This means that for an adaptation domain $\mathcal{A}$, a system needs to fulfill all goals in $\mathcal{A}$ but is not actually tested on the fitness defined by $\phi$. We could define a fitness threshold $f$ we require a system $S$ to surpass in order to adapt to $\mathcal{A}$ in the formalism. But such a check, written $f \preceq \phi(S \otimes E)$, could already be included in the Boolean goals if we use a logic that is expressive enough.
Instead, we want to use the fitness function as soft constraints: We expect the system to perform as well as possible on this metric, but we do not (always) require a minimum level of performance. However, we can use fitness to define a fitness preorder on systems:
\[def:optimization\] Given two systems $S$ and $S'$ as well as an adaptation space $\mathcal{A}$, $S'$ is at least as optimal as $S$, written $S \preceq_\mathcal{A} S'$, iff for all $(E, \gamma, \phi) \in \mathcal{A}$ it holds that $\phi(S \otimes E) \preceq \phi(S' \otimes E)$.
\[def:adaptation-opt\] Given two systems $S$ and $S'$, $S'$ is at least as adaptive as $S$ with respect to optimization, written $S \sqsubseteq^* S'$ iff for all adaptation domains $\mathcal{A} \in \mathfrak{A}$ it holds that $S \Vdash \mathcal{A} \Longrightarrow S' \Vdash \mathcal{A}$ and $S \preceq_\mathcal{A} S'$.
Note that so far our notions of adaptivity and optimization are purely extensional, which originates from the black box perspective on adaptation assumed in [@holzl2011towards].
Constructing Adaptive Systems
-----------------------------
We now shift the focus of our analysis a bit away from the question “When is a system adaptive?” towards the question “How is a system adaptive?”. This refers to both questions of software architecture (i.e., which components should we use to make an adaptive system?) and questions of software engineering (i.e., which development processes should we use to develop an adaptive system?). We will see that with the increasing usage of methods of artificial intelligence, design-time engineering and run-time adaptation increasingly overlap [@wirsing2015software].
\[def:adaptation-sequence\] A series of $|I|$ systems $\mathcal{S} = (S_i)_{i\in I}$ with index set $I$ with a preorder $\leq$ on the elements of $I$ is called an *adaptation sequence* iff for all $i, j \in I$ it holds that $i \leq j \Longrightarrow S_i \sqsubseteq^* S_j$
Note that we used adaptation with optimization in Definition \[def:adaptation-sequence\] so that a sequence of systems $(S_i)_{i\in I}$ that each fulfill the same hard constraints ($\gamma$ within a singleton adaptation space $\mathfrak{A} = \{\{(E, \gamma, \phi)\}\}$) can form an adaptation sequence iff for all $i, j \in I$ it holds that $i \leq j \Longrightarrow \phi(S_i \otimes E) \preceq \phi(S_j \otimes E)$. This is the purest formulation of an optimization process within our formal framework.[^2]
Such an adaptation sequence can be generated by continuously improving a starting system $S_0$ and adding each improvement to the sequence. Such a task can both be performed by a team of human developers or standard optimization algorithms as they are used in artificial intelligence. Only in the latter case, we want to consider that improvement happening within our system boundaries. Unlike the previously performed black-box analysis of systems, the presence of an optimization algorithm within the system itself does have implications for the system’s internal structure. We will thus switch to a more “grey box” analysis in the spirit of [@bruni2012conceptual].
\[def:self-adaptation\] A system $S_0$ is called *self-adaptive* iff the sequence $(S_i)_{i \in \mathbb{N}, i < n}$ for some $n \in \mathbb{N}$ with $S_i = S_0 \otimes S_{i-1}$ for $0 < i < n$ and some combination operator $\otimes$ is an adaptation sequence.
Note that we could define the property of self-adaptation more generally by again constructing an index set on the sequence $(S_i)$ instead of using $\mathbb{N}$, but chose not to do so to not further clutter the notation. For most practical purposes, the adaptation is going to happen in discrete time steps anyway. It is also important to be reminded that despite its notation, the combination operator $\otimes$ does not need to be symmetric and likely will not be in this case, because when constructing $S_0 \otimes S_{i-1}$ we usually want to pass the previous instance $S_{i-1}$ to the general optimization algorithm encoded in $S_0$.[^3] Furthermore, it is important to note that the constant sequence $(S)_{i \in \mathbb{N}}$ is an adaptation sequence according to our previous definition and thus every system is self-adaptive with respect to a combination operator $X \otimes Y =_\text{def} X$. However, we can construct non-trivial adaptation sequence using partial orders $\sqsubset$ and $\prec$ instead of $\sqsubseteq$ and $\preceq$. As these can easily be constructed, we do not further discuss their definitions in this paper. In [@holzl2011towards] a corresponding definition was already introduced for $\sqsubset$.
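The construction $S_i = S_0 \otimes S_{i-1}$ of Definition \[def:self-adaptation\] can be made concrete with a toy optimization process. In the sketch below (our own illustration; names and the hill-climbing step are assumptions), the combination operator is realized as a step function that receives the previous instance and returns an improved one, and the resulting sequence is an adaptation sequence because fitness is non-decreasing along it.

```python
def adaptation_sequence(s0, step, n=5):
    """Build (S_i) with S_i = S_0 ⊗ S_{i-1}, here realized as a step
    function encoding the general adaptation mechanism of S_0."""
    seq = [s0]
    for _ in range(n - 1):
        seq.append(step(seq[-1]))
    return seq

# Toy self-adaptation: hill-climb x toward the optimum of f(x) = -(x - 3)^2
fitness = lambda x: -((x - 3.0) ** 2)
step = lambda x: x + 0.5 if fitness(x + 0.5) >= fitness(x) else x
seq = adaptation_sequence(0.0, step)
print(seq)  # [0.0, 0.5, 1.0, 1.5, 2.0]
assert all(fitness(a) <= fitness(b) for a, b in zip(seq, seq[1:]))
```

The guard inside `step` mirrors the trivial case discussed above: once no improving move exists, the sequence becomes constant, which is still an adaptation sequence under $\sqsubseteq^*$ but not under a strict order.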
The formulation of the adaptation sequence used to prove self-adaptivity naturally implies some kind of temporal structure. So basing said structure around $\mathbb{N}$ implies a very simple, linear and discrete model of time. More complex temporal evolution of systems is also already touched upon in [@holzl2011towards]. As noted, there may be several ways to define such a temporal structure on systems. We refer to related and future work for a more intricate discussion on this matter.
So, non-trivial self-adaptation does imply some structure for any self-adaptive system $S$ of type $\mathcal{V} = (V_i)_{i \in I}$: Mainly, there needs to be a subset of the type $\mathcal{V}' \subseteq \mathcal{V}$ that is used to encode the whole relation behind $S$ so that the already improved instances can sufficiently be passed on to the general adaptation mechanism.
For a general adaptation mechanism (as we previously assumed to be part of a system) to be able to improve a system’s adaptivity, it needs to be able to access some representation of its goals and its fitness function. This provides a grey-box view of the system. We remember that we assumed we could split a system $S$ into situation $X$, internal state $Z$ and behavior $Y$, written $S = X \stackrel{Z}{\leadsto} Y$. If $S$ is self-adaptive, it can form a non-trivial adaptation sequence by improving on its goals or its fitness. In the former case, we can now assume that there exists some relation $G \subseteq X \cup Z$ so that $S \models \gamma \iff G \models \gamma$ for a fixed $\gamma$ in a singleton-space adaptation sequence. In the latter case, we can assume that there exists some relation $F \subseteq X \cup Z$ so that $\phi(S) = \phi(F)$ for a fixed $\phi$ in a singleton-space adaptation sequence.
Obviously, when we want to construct larger self-adaptive systems using self-adaptive components, the combination operator needs to be able to combine said sub-systems $G$ and/or $F$ as well. In the case where the components’ goals and fitnesses match completely, the combination operator can just use the same sub-system twice. However, including the global goals or fitnesses within each local component of a system does not align with common principles in software architecture (such as encapsulation) and does not seem to be practical for large or open systems (where no process may ensure such a unification). Thus, constructing a component-based self-adaptive system requires a combination operator that can handle potentially conflicting goals and fitnesses. We again define such a system for a singleton adaptation space $\mathfrak{A} = \{\{(E, \gamma, \phi)\}\}$ and leave the generalization to all adaptation spaces out of the scope of this paper.
\[def:mas\] Given a system $S = S_1 \otimes ... \otimes S_n$ that adapts to $\mathcal{A} = \{(E, \gamma, \phi)\}$. Iff for each $1 \leq i \leq n$ with $i, n \in \mathbb{N}, n > 1$ there is an adaptation domain $\mathcal{A}_i = \{(E_i, \gamma_i, \phi_i)\}$ so that (1) $E_i = E \otimes S_1 \otimes ... \otimes S_{i-1} \otimes S_{i+1} \otimes ... \otimes S_n$ and (2) $\gamma_i \neq \gamma$ or $\phi_i \neq \phi$ and (3) $S_i$ adapts to $\mathcal{A}_i$, then $S$ is a *multi-agent system* with agents $S_1, ..., S_n$.
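As a purely illustrative sketch (all names and the list encoding are assumptions), the two structural conditions of this definition can be mirrored in a few lines of Python: each agent's environment $E_i$ is the outer environment combined with all other agents, and every local goal must differ at least slightly from the global one:

```python
# Illustrative sketch of the multi-agent system definition: for each agent
# S_i, the environment E_i combines the outer environment E with all other
# agents (condition (1)); every local goal must differ from the global goal
# gamma (condition (2)). Names and the list encoding are assumptions.

def agent_environments(E, agents):
    # E_i = E (x) S_1 (x) ... (x) S_{i-1} (x) S_{i+1} (x) ... (x) S_n
    return [[E] + agents[:i] + agents[i + 1:] for i in range(len(agents))]

def satisfies_condition_two(global_goal, local_goals):
    # Each agent needs an (at least slightly) differing adaptation property.
    return len(local_goals) > 1 and all(g != global_goal for g in local_goals)

envs = agent_environments("E", ["S1", "S2", "S3"])
mas_ok = satisfies_condition_two("deliver all", ["deliver zone A", "deliver zone B"])
```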
For practical purposes, we usually want to use the notion of multi-agent systems in a transitive way, i.e., we can call a system a multi-agent system as soon as any part of it is a multi-agent system according to Definition \[def:mas\]. Formally, $S$ is a multi-agent system if there are system components $S', R$ so that $S = S' \otimes R$ and $S'$ is a multi-agent system. We argue that this transitivity is not only justified but a crucial point for the development of adaptive systems: Agents tend to utilize their environment to fulfill their own goals and can thus “leak” their goals into other system components. Note that Condition (2) of Definition \[def:mas\] ensures that not every system constructed by composition is regarded as a multi-agent system; it is necessary to feature agents with (at least slightly) differing adaptation properties.
For the remainder of this paper, we will apply Definition \[def:mas\] “backwards”: Whenever we look at a self-adaptive system $S$, whose goals or fitnesses can be split into several sub-goals or sub-fitnesses we can regard $S$ as a multi-agent system. Using this knowledge, we can apply design patterns from multi-agent systems to all self-adaptive systems without loss of generality. Furthermore, we need to be aware that especially if we do not explicitly design multi-agent coordination between different sub-goals, such a coordination will be done implicitly. Essentially, there is no way around generalizing software engineering approaches for self-adaptive systems to potentially adversarial components.
Related Work {#sec:related-work}
============
Many researchers and practitioners in recent years have already been concerned with the changes necessary to allow for solid and reliable software engineering processes for (self-)adaptive systems. Central challenges were collected in [@salehie2009self], where issues of quality assurance are already mentioned but the focus lies more on bringing about complex adaptive behavior in the first place. The later research roadmap of [@de2013software] puts a strong focus on interaction patterns of already adaptive systems (both between each other and with human developers) and already dedicates a section to verification and validation issues, which is close in spirit to the perspective of this work. We fall in line with the roadmap further specified in [@bures2015software; @belzner2016software; @bures2017software].
While this work largely builds upon [@holzl2011towards], there have been other approaches to formalize the notion of adaptivity: [@oreizy1999architecture] discusses high-level architectural patterns that form multiple inter-connected adaptation loops. In [@arcaini2015modeling] such feedback loops are based on the MAPE-K model [@kephart2003vision]. While these approaches largely focus on the formal construction of adaptive systems, there have also been approaches that assume a (more human-centric or at least tool-centric) software engineering perspective [@elkhodary2010fusion; @andersson2013software; @gabor2016simulation; @weyns2017software]. We want to discuss two of those in greater detail:
In the results of the *ASCENS* (Autonomous Service Component ENSembles) project [@wirsing2015software], the interplay between human developers and autonomous adaptation has been formalized in a life-cycle model featuring separate states for the development progress of each respective feedback cycle. Classical software development tasks and self-adaptation (as well as self-monitoring and self-awareness) are regarded as equally powerful contributing mechanisms for the production of software. Both can be employed in conjunction to steer the development process. In addition, ASCENS built upon a partly similar formal notion of adaptivity [@bruni2012conceptual; @nicola2014formal] and sketched a connection between adaptivity in complex distributed systems and multi-goal multi-agent learning [@holzl2015reasoning].
*ADELFE* (Atelier de Développement de Logiciels à Fonctionnalité Emergente) is a toolkit designed to augment current development processes to account for complex adaptive systems [@bernon2003tools; @bernon2005engineering]. For this purpose, ADELFE is based on the Rational Unified Process (RUP) [@kruchten2004rational] and comes with tools for various tasks of software design. From a more scientific point of view, ADELFE is also based on the theory of adaptive multi-agent systems. For ADELFE, multi-agent systems are used to derive a set of stereotypes for components, which ease modeling for according types of systems. It thus imposes stronger restrictions on system design than our approach intends to.
Besides the field of software engineering, the field of artificial intelligence research is currently (re-)discovering a lot of the same issues the discipline of engineering for complex adaptive systems faced: The highly complex and opaque nature of machine learning algorithms and the resulting data structures often forces black-box testing and makes possible guarantees weak. When online learning is employed, the algorithm’s behavior is subject to great variance and testing usually needs to work online as well. The seminal paper [@amodei2016concrete] provides a good overview of the issues. When applying artificial intelligence to a large variety of products, rigorous engineering for this kind of software seems to be one of the major necessities lacking at the moment.
Core Concepts of Future Software Engineering {#sec:concepts}
============================================
Literature makes it clear that one of the main issues of the development of self-adapting systems lies with *trustworthiness*. Established models for checking systems (i.e., verification and validation) do not really fit the notion of a constantly changing system. However, these established models represent all the reason we have at the moment to trust the systems we develop. Allowing the system more degrees of freedom thus hinders the developers’ ability to estimate the degree of maturity of the system they design, which poses a severe difficulty for the engineering process when the desired premises or the expected effects of classical engineering tasks on the system-under-development are hard to formulate.
To help us control the development/adaptation progress of the system, we define a set of *principles*, which are essentially patterns for process models. They describe the changes to be made in the engineering process for complex, adaptive systems in relation to more classical models for software and systems engineering.
\[con:parallelism\] The system and its test suite should develop in parallel from the start with controlled moments of interchange of information. Eventually, the test system is to be deployed alongside the main system so that even during runtime, on-going online tests are possible [@calinescu2012self]. This argument has been made for more classical systems as well, and thus classical software testing, too, is no longer restricted to a specific phase of software development. However, in the case of self-learning systems, it is important to focus on the evolution of test cases: The capabilities of the system might not grow in the way experienced test designers expect from systems entirely realized by human engineering effort. Thus, it is important to conceive and formalize how tests in various phases relate to each other.
\[con:antagonism\] Any adaptive system must be subject to an equally adaptive test. Overfitting is a known issue for many machine learning techniques. In software development for complex adaptive systems, it can happen on a larger scale: Any limited test suite (we expect our applications to be too complex to run a complete, exhaustive test) might induce certain unwanted biases. Ideally, once we know about the cases our system has a hard time with, we can train it specifically for these situations. For the so-hardened system, the search mechanism that gave us the hard test cases needs to come up with even harder ones to still beat the system-under-test. Employing autonomous adaptation at this stage is expected to make that arms race more immediate and faster than is usually achieved with human developers and testers alone.
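The following hypothetical sketch illustrates such an arms race on a toy problem (the threshold model, the sampling strategy, and all names are illustrative assumptions): a test generator searches for inputs the system-under-test currently misclassifies, and the system then re-fits itself against all hard cases found so far:

```python
import random

random.seed(0)

def true_label(x):
    # Ground truth the system should learn: inputs >= 0.5 are positive.
    return x >= 0.5

def coevolve(rounds):
    threshold = 0.0  # system's current decision boundary
    hard_cases = []
    for _ in range(rounds):
        # Tester: sample candidates and keep those the system misclassifies.
        candidates = [random.random() for _ in range(100)]
        fails = [x for x in candidates if (x >= threshold) != true_label(x)]
        hard_cases.extend(fails)
        # System: re-fit the boundary against all hard cases found so far.
        positives = [x for x in hard_cases if true_label(x)]
        negatives = [x for x in hard_cases if not true_label(x)]
        if positives or negatives:
            lo = max(negatives, default=0.0)
            hi = min(positives, default=1.0)
            threshold = (lo + hi) / 2
    return threshold

final_threshold = coevolve(5)
```

Each round, the generator can only "win" by finding cases closer to the true decision boundary, so the coevolution narrows in on it.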
\[con:automated\] Since the realization of tasks concerning adaptive components usually means the application of a standard machine learning process, a lot of the development effort regarding certain tasks tends to shift to an earlier phase in the process model. Most developer time when applying machine learning techniques, for example, tends to be spent on gathering information about the problem to solve and the right setup of parameters to use; the training of the learning agent then usually follows one of a few standard procedures and can run rather automatically. However, preparing and testing the component’s adaptive abilities might take a lot of effort, which might occur in the design and test phase instead of the deployment phase of the system life-cycle.
\[con:general\] To provide room for and exploit the system’s ability to self-adapt, many artifacts produced by the engineering process tend to become more general in nature, i.e., they tend to feature more open parameters or degrees of freedom in their description. In effect, in the place of single artifacts in a classical development process, we tend to find families of artifacts or processes generating artifacts when developing a complex adaptive system. As we assume that the previously static artifact is still included in the set of artifacts now available in its place, we call this shift the “generalization” of artifacts. Following this change, many of the activities performed during development shift their targets from concrete implementations to more general artifacts, e.g., building a test suite no longer yields a series of runnable test cases but instead produces a test case generator. When this principle is broadly applied, the development activities shift towards “meta development”: The developers are concerned with setting up a process able to find good solutions autonomously instead of finding the good solutions directly.
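As a small illustration of this generalization (all names and parameters are assumptions), a test artifact may be shipped as a generator with open parameters rather than as a fixed list of cases:

```python
import random

# Illustrative sketch: instead of a fixed test suite, development produces a
# parameterized test case generator (names and parameters are assumptions).

random.seed(1)

def make_test_generator(low, high, n):
    # A family of test suites: each call instantiates one concrete suite.
    def generate():
        return [random.randint(low, high) for _ in range(n)]
    return generate

gen = make_test_generator(0, 100, 5)
suite_a = gen()  # one concrete instantiation...
suite_b = gen()  # ...and another from the same family
```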
Scenarios {#sec:scenarios}
=========
We now want to include the issue of testing adaptive systems in our formal framework. We recognize that any development process for systems following the principles described in Section \[sec:formal\] produces two central types of artifacts: The first one is a system $S = X \stackrel{Z}{\leadsto} Y$ with a specific desired behavior $Y$ so that it manages to adapt to a given adaptation space. The second is a set of situations, test cases, constraints, and checked properties that this system’s behavior has been validated against. We call artifacts of the second type by the group name of *scenarios*.
\[def:scenario\] Let $S = X \stackrel{Z}{\leadsto} Y$ be a system and $\mathcal{A} = \{(E, \gamma, \phi)\}$ a singleton adaptation domain. A tuple $c = (X, Y, g, f), g \in \{\top, \bot \}, f \in \text{cod}(\phi)$ with $g = \top \iff S \otimes E \models \gamma$ and $f = \phi(S \otimes E)$ is called *scenario*.[^4]
Semantically, scenarios represent the experience gained about the system’s behavior during development, including both successful ($S \models \gamma$) and unsuccessful ($S \not\models \gamma$) test runs. As stated above, since we expect to operate in test spaces we cannot cover exhaustively, the knowledge about the areas we did cover is an important asset and likewise a result of the systems engineering process.
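A possible (purely illustrative) encoding of Definition \[def:scenario\] as a data structure, where `check_goal` and `measure_fitness` stand in for the checks $S \otimes E \models \gamma$ and $\phi(S \otimes E)$:

```python
from dataclasses import dataclass

# Illustrative encoding of a scenario (X, Y, g, f). `check_goal` and
# `measure_fitness` are assumptions standing in for the goal check and
# the fitness function of the adaptation domain.

@dataclass(frozen=True)
class Scenario:
    situation: tuple   # X
    behavior: tuple    # Y
    g: bool            # goal satisfied?
    f: float           # fitness achieved

def record_scenario(situation, behavior, check_goal, measure_fitness):
    return Scenario(situation, behavior,
                    check_goal(situation, behavior),
                    measure_fitness(situation, behavior))

# Toy goal: each output must equal twice its input.
s = record_scenario((1, 2, 3), (2, 4, 6),
                    lambda x, y: all(b == 2 * a for a, b in zip(x, y)),
                    lambda x, y: float(sum(y)))
```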
Effectively, as we construct and evolve a system $S$ we want to construct and augment a set of scenarios $C = \{c_1, ..., c_n\}$ alongside with it. $C$ is also called a *scenario suite* and can be seen as a toolbox to test $S$’s adaptation abilities with respect to a fixed adaptation domain $\mathcal{A}$.
While formally abiding to Definition \[def:scenario\], scenarios can be encoded in various ways in practical software development, such as:
#### Sets of data points of expected or observed behavior.
Given a system $S' = X' \leadsto Y'$ whose behavior is desirable (for example a trained predecessor of our system or a watchdog component), we can create scenarios $(X', Y', g', f')$ with $g' = \top \iff S' \otimes E_i \models \gamma_i$ and $f' = \phi_i(S' \otimes E_i)$ for an arbitrary amount of elements $(E_i, \gamma_i, \phi_i)$ of an adaptation domain $\mathcal{A} = \{(E_1, \gamma_1, \phi_1), ..., (E_n, \gamma_n, \phi_n)\}$.
#### Test cases the system mastered.
In some cases, adaptive systems may produce innovative behavior before we actively seek it out. In these cases, it is helpful to formalize the produced results once they have been found so that we can ensure that the system’s gained abilities are not lost during further development or adaptation. Formally, this case matches the case of “observed behavior” described above. However, here the test case $(X, Y, g, f)$ already existed as a scenario, so we just need to update $g$ and $f$ (with the new and better values) and possibly $Y$ (if we want to fix the observed behavior).
#### Logical formulae and constraints.
Commonly, constraints can be directly expressed in the adaptation domain. Suppose we build a system against an adaptation domain $\mathcal{A} = \{(E_1, \gamma_1, \phi_1), ..., (E_n, \gamma_n, \phi_n)\}$. We can impose a hard constraint $\zeta$ on the system in this domain by constructing a constrained adaptation domain $\mathcal{A'} = \{(E_1, \gamma_1 \land \zeta, \phi_1), ..., (E_n, \gamma_n \land \zeta, \phi_n)\}$, given that the logic of $\gamma_1, ..., \gamma_n, \zeta$ meaningfully supports an operation like the logical “and” $\land$. Likewise, a soft constraint $\psi$ can be imposed via $\mathcal{A'} = \{(E_1, \gamma_1, \max(\phi_1, \psi)), ..., \allowbreak(E_n, \gamma_n, \max(\phi_n, \psi))\}$, given the definition of the operator $\max$ that trivially follows from using the relation $\preceq$ on fitness values. Scenarios $(X', Y', g', f')$ can then be generated against the new adaptation domain $\mathcal{A'}$ by taking pre-existing scenarios $(X, Y, g, f)$ and setting $X' = X, Y' = Y, g' = \top, f' = \psi((X \leadsto Y) \otimes E)$.
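A hypothetical sketch of both constructions, representing each element of the adaptation domain as a tuple of environment, goal predicate, and fitness function (this encoding and all names are assumptions for illustration):

```python
# Illustrative sketch: elements of an adaptation domain as (E, gamma, phi)
# tuples with goal predicates and fitness functions over a toy state s.

def harden(domain, zeta):
    # gamma' = gamma and zeta (hard constraint)
    return [(E, lambda s, g=g: g(s) and zeta(s), phi) for (E, g, phi) in domain]

def soften(domain, psi):
    # phi' = max(phi, psi) (soft constraint)
    return [(E, g, lambda s, phi=phi: max(phi(s), psi(s))) for (E, g, phi) in domain]

domain = [("env", lambda s: s > 0, lambda s: float(s))]
hard = harden(domain, lambda s: s < 10)
soft = soften(domain, lambda s: 5.0)
```

Note the `g=g` and `phi=phi` default arguments, which bind each element's own predicate inside the comprehension rather than the last one.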
#### Requirements and use case descriptions (including the system’s degree of fulfilling them).
If properly formalized, a requirement or use case description contains all the information necessary to construct an adaptation domain and can thus be treated as the logical formulae in the paragraph above. However, use cases are in practical development more prone to be incomplete views on the adaptation domain. We thus may want to stress the point that we do not need to update all elements of an adaptation domain when applying a constraint, i.e., when including a use case. We can also just add the additional hard constraint $\zeta$ or soft constraint $\psi$ to some elements of $\mathcal{A}$.
#### Predictive models of system properties.
For the most general case, assume that we have a prediction function $p$ so that $p(X) \approx Y$, i.e., the function can roughly return the behavior $S = X \leadsto Y$ will or should show given $X$. We can thus construct the predicted system $S' = X \leadsto p(X)$ and construct a scenario $(X, p(X), g, f)$ with $g = \top \iff S' \otimes E \models \gamma$ and $f = \phi(S' \otimes E)$.
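This construction can be sketched as follows (the predictor and all names are illustrative assumptions):

```python
# Illustrative sketch: build a scenario from a predictive model p, using the
# predicted behavior p(X) in place of observed behavior Y.

def scenario_from_predictor(situation, p, check_goal, measure_fitness):
    predicted = tuple(p(x) for x in situation)  # S' = X ~> p(X)
    return (situation, predicted,
            check_goal(situation, predicted),
            measure_fitness(situation, predicted))

# Toy predictor: the system is expected to negate each input.
sc = scenario_from_predictor(
    (1, -2, 3),
    lambda x: -x,
    lambda xs, ys: all(y == -x for x, y in zip(xs, ys)),
    lambda xs, ys: float(len(ys)),
)
```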
####
All of these types of artifacts will be subsumed under the notion of *scenarios*. We can use them to further train and improve the system and to estimate its likely behavior, as well as to perform tests (and ultimately verification and validation activities).
*Scenario coevolution* describes the process of developing a set of scenarios to test a system during the system-under-test’s development. Consequently, it needs to be designed and controlled as carefully as the evolution of system behavior [@arcuri2007coevolving; @fraser2013whole].
Let $c_1 = (X_1, Y_1, g_1, f_1)$ and $c_2 = (X_2, Y_2,\allowbreak g_2, f_2)$ be scenarios for a system $S$ and an adaptation domain $\mathcal{A}$. Scenario $c_2$ is *at least as hard* as $c_1$, written $c_1 \leq c_2$, iff $g_1 = \top \implies g_2 = \top$ and $f_1 \preceq f_2$.
Let $C = \{c_1, ..., c_m\}$ and $C' = \{c_1', ..., c_n'\}$ be sets of scenarios, also called scenarios suites. Scenario suite $C'$ is *at least as hard* as $C$, written $C \sqsubseteq C'$, iff for all scenarios $c \in C$ there exists a scenario $c'\in C'$ so that $c \leq c'$.
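Both hardness relations translate directly into code; the following sketch encodes scenarios as $(X, Y, g, f)$ tuples (an illustrative assumption):

```python
# Illustrative encoding of the two hardness orders: c1 <= c2 on single
# scenarios, and the suite order on scenario suites.

def scenario_leq(c1, c2):
    (_, _, g1, f1), (_, _, g2, f2) = c1, c2
    # g1 = T implies g2 = T, and f1 is no better than f2.
    return ((not g1) or g2) and f1 <= f2

def suite_leq(C, C_prime):
    # Every scenario in C is dominated by some scenario in C'.
    return all(any(scenario_leq(c, cp) for cp in C_prime) for c in C)

easy = ("x", "y", False, 1.0)
hard = ("x", "y", True, 2.0)
```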
\[def:scenario-sequence\] Let $\mathcal{S} = (S_i)_{i\in I}, I = \{1, ..., n\}$ be an adaptation sequence for a singleton adaptation space $\mathfrak{A} = \{\mathcal{A}\}$. A series of sets $\mathcal{C} = (C_i)_{i \in I}$ is called a scenario sequence iff for all $i \in I, i < n$ it holds that $C_i$ is a scenario suite for $S_i$ and $\mathcal{A}$ and $C_i \sqsubseteq C_{i+1}$.
We expect each phase of development to further alter the set of scenarios just as it alters the system behavior. The scenarios produced and used at a certain phase in development must match the current state of progress. Valid scenarios from previous phases should be kept and checked against the further specialized system. When we do not delete any scenarios entirely, the continued addition of scenarios will ideally narrow down allowed system behavior to the desired possibilities. Eventually, we expect all activities of system test to be expressible as the generation or evaluation of scenarios. New scenarios may simply be thought up by system developers or be generated automatically. Finding the right scenarios to generate is another optimization problem to be solved during the development of any complex adaptive system. Scenario evolution represents a cross-cutting concern for all phases of system development. Treating scenarios as first-class citizens among the artifacts produced by system development thus yields changes in tasks throughout the whole process model.
Applications of Scenario Coevolution {#sec:applications}
====================================
Having both introduced a formal framework for adaptation and the testing of adaptive systems using scenarios, we show in this section how these frameworks can be applied to aid the trustworthiness of complex adaptive systems for practical use.
Criticality Focus
-----------------
It is very important to start the scenario evolution process alongside the system evolution, so that at each stage there exists a set of scenarios available to test the system’s functionality and degree of progress (see Concept \[con:parallelism\]). This approach mimics the concept of agile development, where between each sprint there exists a fully functional (however incomplete) version of the system. The concept of scenario evolution integrates seamlessly with agile process models.
In the early phases of development, the common artifacts of requirements engineering, i.e., formalized requirements, serve as the basis for the scenario evolution process. As long as the adaptation space $\mathfrak{A}$ remains constant (and with it the system goals), system development should form an adaptation sequence. Consequently, scenario evolution should then form a scenario sequence for that adaptation sequence. This means (according to Definition \[def:scenario-sequence\]), the scenario suite is augmented with newly generated scenarios (for new system goals or just more specialized subgoals) or with scenarios with increased requirements on fitness.[^5] Ideally, the scenario evolution process should lead the learning components on the right path towards the desired solution. The ability to re-assign fitness priorities allows for an arms race between adaptive system and scenario suite (see Concept \[con:antagonism\]).
#### Augmenting Requirements.
Beyond requirements engineering, it is necessary to include knowledge that will be generated during training and learning by the adaptive components. Mainly, recognized scenarios that work well with early versions of the adaptive system should be used as checks and tests when the system becomes more complex. This approach imitates the optimization technique of importance sampling on a systems engineering level. There are two central issues that need to be answered in this early phase of the development process:
- Behavior Observation: How can system behavior be generated in a realistic manner? Are the formal specifications powerful enough? Can we employ human-labeled experience?
- Behavior Assessment: How can the quality of observed behavior be adequately assessed? Can we define a model for the users’ intent? Can we employ human-labeled review?
#### Breaking Down Requirements.
A central task of successful requirements engineering is to split up the use cases into atomic units that ideally describe singular features. In the dynamic world, we want to leave more room for adaptive system behavior. Thus, the requirements we formulate tend to be more general in notion. It is therefore even more important to split them up in meaningful ways in order to derive new sets of scenarios. The following design axes (without any claim to completeness) may be found useful to break down requirements of adaptive systems:
- Scope and Locality: Can the goal be applied/checked locally or does it involve multiple components? Which components fall into the scope of the goal? Is emergent system behavior desirable or considered harmful?
- Decomposition and Smoothness: Can internal (possibly more specific) requirements be developed? Can the overall goal be composed from a clear set of subgoals? Can the goal function be smoothened, for example by providing intermediate goals? Can subgoal decomposition change dynamically via adaptation or is it structurally static?
- Uncertainty and Interaction: Are all goals given with full certainty? Is it possible to reason about the relative importance of goal fulfillment for specific goals a priori? Which dynamic goals have an interface with human users or other systems?
Adaptation Cooldown
-------------------
We call the problem domain available to us during system design the *off-site domain*. It contains all scenarios we think the system might end up in and may thus even contain contradicting scenarios. In all but the rarest cases, the situations one single instance of our system will face in its operating time will be just a fraction of the size of the covered areas of the off-site domain. Nonetheless, it is also common for the system’s real-world experience to include scenarios not occurring in the off-site domain at all; this mainly happens when we were wrong about some detail of the real world. Thus, the implementation of an adaptation technique faces a problem not unlike the *exploration/exploitation dilemma* [@vcrepinvsek2013exploration], but on a larger scale: We need to decide if we opt for a system fully adapted to the exact off-site domain or for a less specialized system that leaves more room for later adaptation at the customer’s site. The point at which we stop adaptation happening on off-site scenarios is called the off-site adaptation border and is a key artifact of the development process for adaptive systems.
In many cases, we may want the system we build to be able to evolve beyond the exact use cases we knew about during design time. The system thus needs to have components capable of *run-time* or *online adaptation*. In the wording of this work, we also talk about *on-site adaptation*, stressing that in this case we focus on adaptation processes that take place at the customer’s location in a comparatively specific domain instead of the broader setting of a system development lab. Usually, we expect the training and optimization performed on-site (if any) to be not as drastic as the training done during development. (Otherwise, we would probably not have specified our problem domain in an appropriate way.) As the system becomes more efficient in its behavior, we want to gradually reduce the amount of change we allow. In the long run, adaptation should usually work at a level that prohibits sudden, unexpected changes but still manages to handle any changes in the environment within a certain margin. The recognized need for more drastic change should usually trigger human supervision first.
\[def:adaptation-sequence-spaces\] Let $S$ be a system. A series of $|I|$ adaptation spaces $\mathbb{A} = (\mathfrak{A}_i)_{i\in I}$ with index set $I$ with a preorder $\leq$ on the elements of $I$ is called an *adaptation domain sequence* iff for all $i, j \in I, i \leq j$ it holds that: $S$ adapts to $\mathfrak{A}_j$ implies that $S$ adapts to $\mathfrak{A}_i$.
System development constructs an adaptation domain sequence (c.f. Definition \[def:adaptation-sequence-spaces\], Concept \[con:general\]), i.e., a sequence of increasingly specific adaptation domains. Each of those can be used to run an adaptation sequence (c.f. Definition \[def:adaptation-sequence\]) and a scenario sequence (c.f. Definition \[def:scenario-sequence\], Concept \[con:antagonism\]) to test it.
For the gradual reduction of the allowed amount of adaptation for the system we use the metaphor of a “cool-down” process: The adaptation performed on-site should allow for less change than off-site adaptation. And the adaptation allowed during run-time should be less than what we allowed during deployment. This ensures that decisions that have once been deemed right by the developers are hard to change later by accident or by the autonomous adaptation process.
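A minimal sketch of such a cool-down (the phase names, budget values, and escalation rule are illustrative assumptions): each phase grants a shrinking budget for the magnitude of change, and requests beyond the budget are escalated rather than applied:

```python
# Illustrative sketch of an adaptation cool-down: each phase grants a
# shrinking budget for the magnitude of change; larger changes are
# escalated to human supervision instead of being applied autonomously.

PHASES = {"off-site": 1.0, "deployment": 0.3, "run-time": 0.05}

def apply_adaptation(param, delta, phase):
    budget = PHASES[phase]
    if abs(delta) > budget:
        return param, "escalate to human supervision"
    return param + delta, "applied"

p, status1 = apply_adaptation(0.0, 0.5, "off-site")  # large change, early phase
q, status2 = apply_adaptation(p, 0.5, "run-time")    # same change, too late
```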
Eternal Deployment
------------------
For high trustworthiness, development of the test cases used for the final system test should be as decoupled from the on-going scenario evolution as possible, i.e., the data used in both processes should overlap as little as possible. Of course, following this guideline completely results in the duplication of a lot of processes and artifacts. Still, it is important to accurately keep track of the influences on the respective sets of scenarios. A clear definition of the off-site adaptation border provides a starting point for when to branch off a scenario evolution process that is independent of possible scenario-specific adaptations on the system-under-test’s side. Running multiple independent system tests (cf. ensemble methods [@dietterich2000ensemble; @hart2017constructing]) is advisable as well. However, the space of available independently generated data is usually very limited. For the deployment phase, it is thus of key importance to carry over as much information as possible about the genesis of the system we deploy into the run-time, where it can be used to look up the traces of observed decisions. The reason to do this now is that we usually expect the responsibility for the system to change at this point: Whereas previously, any system behavior was overseen by the developers, who could potentially backtrack any phenomenon to all previous steps in the system development process, now we expect on-site maintenance to be able to handle any potential problem with the system in the real world, requiring more intricate preparation for maintenance tasks (c.f. Concept \[con:automated\]). We thus need to endow these new people with the ability to properly understand what the system does and why.
Our approach follows the vision of *eternal system design* [@nierstrasz2008change], which is a fundamental change in the way to treat deployment: We no longer ship a single artifact as the result of a complex development process, but we ship an image of the process itself (cf. Concept \[con:general\]). As a natural consequence, we can only ever add to an eternal system but hardly remove changes and any trace of them entirely. Using an adequate combination operator, this meta-design pattern is already implemented in the way we construct adaptation sequences (c.f. Definition \[def:adaptation-sequence\]): For example, given a system $S_i$ we could construct $S_{i+1} = X \stackrel{Z}{\leadsto} Y$ in a way so that $S_i$ is included in $S_{i+1}$’s internal state $Z$.
As of now, however, the design of eternal systems still raises many unanswered questions in system design. We thus resort to the notion of scenarios only as a sufficient system description to provide explanatory power at run-time and recommend applying standard “destructive updates” to all other system artifacts.
Conclusion {#sec:conclusion}
==========
We have introduced a new formal model for adaptation and test processes using our notion of scenarios. We connected this model to concrete challenges and arising concepts in software engineering to show that our approach of scenario coevolution is fit to tackle (a first few of) the problems arising in quality assurance for complex adaptive systems.
As already noted throughout the text, a few challenges still persist. Perhaps most importantly, we require an adequate data structure both for the encoding of systems and for the encoding of test suites, and we need to prove the practical feasibility of an optimization process governing the software development life-cycle. For performance reasons, we expect that some restrictions on the general formal framework will be necessary. In this work, we also deliberately left out the issue of meta-processes: The software development life-cycle can itself be regarded as a system according to Definition \[def:system\]. While this may complicate things at first, we also see potential in not only developing a process of establishing quality and trustworthiness but also a generator for such processes (akin to Concept \[con:general\]).
Systems with a high degree of adaptivity and, among those, systems employing techniques of artificial intelligence and machine learning will become ubiquitous. If we want to trust them as we trust engineered systems today, the methods of quality assurance need to rise to the challenge: Quality assurance needs to adapt to adaptive systems!
[^1]: In [@holzl2011towards], there is a more strict definition on how the combination operator needs to handle the designated inputs and outputs of its given systems. Here, we opt for a more general definition.
[^2]: Strictly speaking, an optimization *process* would further assume there exists an optimization relation $o$ from systems to systems so that for all $i, j \in I$ it holds that $i \leq j \Longrightarrow o(S_i, S_j)$. But for simplicity, we consider the sequence of outputs of the optimization process a sufficient representation of the whole process.
[^3]: Constructing a sequence $S_i := S_{i-1} \otimes S_{i-1}$ might be a viable formulation as well, but it is not further explored in this work.
[^4]: If we are only interested in the system’s performance and not *how* it was achieved, we can redefine a scenario to leave out $Y$.
[^5]: Note that every change in $\mathfrak{A}$ starts new sequences.
|
{
"pile_set_name": "arxiv"
}
|
1942 Iowa Pre-Flight Seahawks football team
The 1942 Iowa Pre-Flight Seahawks football team represented the United States Navy pre-flight aviation training school at the University of Iowa as an independent during the 1942 college football season. The team compiled a 7–3 record and outscored opponents by a total of 211 to 121. The 1942 team was known for its difficult schedule, including Notre Dame, Michigan, Ohio State, Minnesota, Indiana, Nebraska, and Missouri. The team was ranked No. 2 among the service teams in a poll of 91 sports writers conducted by the Associated Press.
The Navy's pre-flight aviation training school opened on April 15, 1942, with a 27-minute ceremony during which Iowa Governor George A. Wilson turned over certain facilities at the University of Iowa to be used for the training of naval aviators. At the time, Wilson said, "We are glad it is possible to place the facilities of this university and all the force and power of the state of Iowa in a service that is today most vital to safeguarding our liberties." The first group of 600 air cadets was scheduled to arrive on May 28.
Bernie Bierman, then holding the rank of major, was placed in charge of the physical conditioning program at the school. Bierman had been the head coach of Minnesota from 1932 to 1941 and served as the head coach of the Iowa Pre-Flight team in 1942. Larry Snyder, previously the track coach at Ohio State, was assigned as Bierman's assistant. Don Heap, Dallas Ward, Babe LeVoir, and Trevor Reese were assigned as assistant coaches for the football team.
In June 1942, Bierman addressed the "misconception" that the Iowa pre-flight school was "merely a place for varsity athletics." He said: "Our purpose here is to turn out the toughest bunch of flyers the world has ever seen and not first class athletes."
Two Seahawks were named to the 1942 All-Navy All-America football team: George Svendsen at center and Dick Fisher at left halfback. In addition, Bill Kolens (right tackle), Judd Ringer (right end), George Benson (quarterback), and Bill Schatzer (left halfback) were named to the 1942 All-Navy Preflight Cadet All-America team.
Schedule
Roster
References
Category:Iowa Pre-Flight Seahawks football seasons
|
{
"pile_set_name": "wikipedia_en"
}
|
3208 Perdot Avenue, Rosamond, CA 93560 (MLS# SR16727560) is a Single Family property with 4 bedrooms, 2 full bathrooms and 1 partial bathroom. 3208 Perdot Avenue is currently listed for $294,990 and was received on October 17, 2016.
Want to learn more about 3208 Perdot Avenue? Do you have questions about finding other Single Family real estate for sale in Rosamond? You can browse all Rosamond real estate or contact a Coldwell Banker agent to request more information.
|
{
"pile_set_name": "pile-cc"
}
|
---
abstract: |
The paper proposes a method for measuring available bandwidth based on probing the network with packets of various sizes (the Variable Packet Size method, VPS). The boundaries of applicability of the model, which depend on the accuracy of packet delay measurements, have been found, and a formula for the upper measurable limit of bandwidth has been derived. Computer simulation has been performed, and the relationship between the measurement error of available bandwidth and the number of measurements has been found. Experimental verification with the RIPE Test Box measuring system has shown that the suggested method has advantages over existing measurement techniques. The *pathload* utility has been chosen as an alternative measurement technique, and to ensure reliable results statistics have been collected by an SNMP agent directly from the router.\
author:
-
-
title: Simulation technique for available bandwidth estimation
---
Available bandwidth, RIPE Test Box, packet size, end-to-end delay, variable delay component.
Introduction
============
Various real-time applications in the Internet, especially the transmission of audio and video information, become more and more popular. The major factors defining the quality of the service are the quality of the equipment (the codec and the video server) and the available bandwidth in the Internet link. ISPs should provide the required bandwidth for voice and video applications to guarantee the delivery of the demanded services in the global network.
In this paper, network path is defined as a sequence of links (hops), which forward packets from the sender to the receiver. There are various definitions for the throughput metrics, but we will use the approaches accepted in [@s5; @s9; @s10].
Two bandwidth metrics that are commonly associated with a path are the capacity $C$ and the available bandwidth $B_{av}$ (see Fig \[f1\]). The [*capacity C*]{} is the maximum IP-layer throughput that the path can provide to a flow, when there is no competing traffic load (cross traffic). The [*available bandwidth*]{} $B_{av}$, on the other hand, is the maximum IP-layer throughput that the path can provide to a flow, given the path’s current cross traffic load. The link with the minimum transmission rate determines the capacity of the path, while the link with the minimum unused capacity limits the available bandwidth. Moreover, measuring available bandwidth is important to provide information to network applications on how to control their incoming and outgoing traffic and fairly share the network bandwidth.
![Illustration of throughput metrics[]{data-label="f1"}](il1){height="2.525cm"}
Another related throughput metric is the [*Bulk-Transfer-Capacity*]{} (BTC). BTC of a path in a certain time period is the throughput of a bulk TCP transfer, when the transfer is only limited by the network resources and not by limitations at the end-systems. The intuitive definition of BTC is the expected long-term average data rate (bits per second) of a single ideal TCP implementation over the path in question.
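Both path-wide metrics reduce to minima over the links of the path; a small illustrative computation with made-up link values:

```python
# Illustrative only: three hypothetical links as (capacity, cross traffic) in bit/s.
links = [(100e6, 40e6), (40e6, 10e6), (1000e6, 150e6)]

# The narrow link (minimum transmission rate) bounds the capacity C of the path,
capacity = min(c for c, _ in links)
# while the tight link (minimum unused capacity) bounds B_av.
available = min(c - x for c, x in links)

print(capacity / 1e6, "Mbps")    # → 40.0
print(available / 1e6, "Mbps")   # → 30.0
```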
In order to construct a perfect picture of the global network (monitoring and bottleneck troubleshooting) and develop the standards describing new applications, a modern measuring infrastructure should be installed. In this paper we describe the usage of the RIPE Test Box measurement system, which is widely deployed [@s7].
According to [@s7] this system doesn’t measure the available bandwidth, but it collects the numerical values, which characterize key network parameters such as packet delay, jitter, routing path, etc.
In this paper we attempt to provide a universal and simple model that allows us to estimate available bandwidth based on the data received from the RIPE Test Box measurement infrastructure. The method is based on the [*Variable Packet Size*]{} (VPS) method and was used in [@s6]. This method allows us to estimate the network capacity of hop $i$ by using the connection between the Round-Trip Time (RTT) and the packet size $W$.
The model and its applicability
===============================
The well-known expression for throughput metric describing the relation between a network delay and the packet size is Little’s Law [@s13]: $$B_{av}=W/D,
\label{e1}$$ where $B_{av}$ is the available bandwidth, $W$ is the size of the transmitted packet and $D$ is the network packet delay (One Way Delay). This formula is ideal for calculating the bandwidth between two points in the network that are connected without any routing devices. In the general case, the delay is caused by constant network factors such as propagation delay, transmission delay, per-packet router processing time, etc. [@s9].
According to [@s1], Little’s Law could be modified with $D^{fixed}$: $$B_{av}=W/(D-D^{fixed}),
\label{e2}$$ where $D^{fixed}$ is the [*minimum fixed delay*]{} for the packet size $W$. The difference between the delays $D$ and $D^{fixed}$ is the [*variable delay component*]{} $d_{var}$. In paper [@s3] it was shown that the variable delay is exponentially distributed.
Choi [@s2] and Hohn [@s12] showed that the minimum fixed delay component $D^{fixed}(W)$ for the packet size $W$ is a linear (or affine) function of its size: $$D^{fixed}(W)=W\sum_{i=1}^h 1/C_i + \sum_{i=1}^h \delta_i,
\label{e3}$$ where $C_i$ is the capacity of the corresponding link and $\delta_i$ is its propagation delay. To prove this assumption, the authors experimentally found the minimum fixed delays for packets of the same size on three different routes and constructed the dependence of delay on the packet size $W$.
In order to eliminate the minimum fixed delay $D^{fixed}(W)$ from Eqn. (\[e2\]), we suggest testing the network link with packets of different sizes [@s1], with the packet sizes varied as much as possible without causing router fragmentation. Then Eqn. (\[e2\]) can be modified to a form suitable for measuring available bandwidth: $$B_{av}=\frac{W_2-W_1}{D_2-D_1}
\label{e4}$$
This method allows us to find a way to eliminate the measurement limitations caused by the variable delay component $d_{var}$. The variable delay component is the cause of the rather large measurement errors of other methods, as will be described in the last section of this paper.
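Eqn. (\[e4\]) is straightforward to apply in code. The following sketch (ours, not part of the RIPE tooling) averages the measured one-way delays for two probe sizes and returns the estimate in bits per second:

```python
def available_bandwidth(w1_bytes, w2_bytes, delays1, delays2):
    """Estimate available bandwidth (bit/s) via the VPS relation
    B = (W2 - W1) / (D2 - D1), using mean one-way delays in seconds."""
    d1 = sum(delays1) / len(delays1)
    d2 = sum(delays2) / len(delays2)
    return 8 * (w2_bytes - w1_bytes) / (d2 - d1)

# 100-byte vs 1100-byte probes with a mean delay difference of 0.8 ms:
b = available_bandwidth(100, 1100, [0.0090], [0.0098])
print(round(b / 1e6, 1), "Mbps")   # → 10.0
```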
The proposed model is quite simple, but it is still difficult to find an accurate measuring infrastructure. The first problem concerns the applicability of the model, i.e., what range of throughput metrics can be measured with this method. The second issue is the number of measurements (groups of packets) needed to achieve a given accuracy.
The first problem can be addressed by estimating the measurement error (based on the accuracy of the delay measurements): $$\eta=\frac{\Delta B}{B}=\frac{2\Delta D}{D_2-D_1},
\label{e5}$$ where $\eta$ is the relative error of the available bandwidth measurement, $\Delta B$ is the absolute error of the available bandwidth measurement and $\Delta D$ is the precision of measuring the packet delay.
With this expression we can easily find an upper bound $\bar{B}$ for available bandwidth: $$\bar{B}=\frac{W_2-W_1}{2\Delta D}\eta
\label{e6}$$
Thus, with the RIPE Test Box, which allows finding the delay with a precision of 2 microseconds ($\Delta D=2\cdot10^{-6}$ s), we can measure available bandwidth up to the upper bound $\ensuremath{\bar{B}}=300$ [*Mbps*]{} with relative error $\eta=10\%$. Moreover, if we were using the standard *ping* utility, with a relative error $\eta=25\%$ and a precision of 1 millisecond ($\Delta D=10^{-3}$ s), we could only measure available bandwidth up to $1.5$ [*Mbps*]{}.
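Both figures can be reproduced from Eqn. (\[e6\]); the probe-size difference $W_2-W_1=1500$ bytes is our assumption, chosen to match the quoted numbers:

```python
def upper_bound_bps(delta_w_bytes, delta_d_seconds, eta):
    """Upper measurable bandwidth: B = (W2 - W1) * eta / (2 * dD), in bit/s."""
    return 8 * delta_w_bytes * eta / (2 * delta_d_seconds)

# RIPE Test Box: 2 us delay precision, 10% relative error
print(upper_bound_bps(1500, 2e-6, 0.10) / 1e6, "Mbps")   # → 300.0
# ping: 1 ms precision, 25% relative error
print(upper_bound_bps(1500, 1e-3, 0.25) / 1e6, "Mbps")   # → 1.5
```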
Experimental comparison of different methods
============================================
In this part of the paper we compare different methods of measuring available bandwidth, using the results of our experiments.
The experiment was divided into three stages. In the first stage we used the RIPE Test Box measurement system with two different packet sizes. The number of measurement systems in the global measurement infrastructure reaches 80; these points cover the world's major Internet centers, reaching the highest density in Europe. The measurement error of packet delay is 2-12 $\mu s$ [@s14]. In order to prepare the experiments, three Test Boxes were installed in Moscow, Samara and Rostov on Don in Russia during 2006-2008 with the support of RFBR grant 06-07-89074. For further analysis we collected several data sets containing up to 3000 results in different directions, including Samara - Amsterdam (tt01.ripe.net - tt143.ripe.net). Based on these data, we calculated the available bandwidth and the dependence of the measurement error on the number of measurements (see Fig \[f3\]).
The second stage compared the data obtained with our method with the results of traditional throughput measurement methods. *Pathload* was selected as the tool that implements a traditional measurement method [@s10]. This software is considered one of the best tools for assessing available bandwidth. *Pathload* uses Self-Loading Periodic Streams (SLoPS). It is based on a client-server architecture, which is a disadvantage, since the utility has to be installed on both hosts. An advantage of *pathload* is that it does not require root privileges, as the utility sends only UDP packets.
The *pathload* results are displayed as a range of values rather than a single value. The middle of the range corresponds to the average throughput, while its width estimates the variation of the available bandwidth during the measurements.
The third stage involves the comparison of the data obtained in the first and second stages with data taken directly from the SSAU router, which serves the narrowest part of the network.
![Scheme of the experiment[]{data-label="f2"}]{height="6cm"}
The experiment between points tt143.ripe.net (Samara State Aerospace University) and tt146.ripe.net (FREENet, Moscow) consists of three parts:
1. Measuring the available bandwidth by testing pairs of packets of different sizes using the measuring system RIPE Test Box (packet size of 100 and 1100 bytes);
2. Measurements of available bandwidth using the utility *pathload*;
3. Measuring the available bandwidth by MRTG on the router SSAU which serves the narrowest hop of routing (see Fig \[f2\]).
It is worth noting that all three inspections should be conducted simultaneously in order to maximize the reliability of the statistics. The structure of the RIPE Test Box measuring system meets all the requirements of our method: it allows changing the size of the probe packet and finding the delay with high precision.
By default, the test packet size is 100 bytes. There are special settings that allow adding test packets of up to 1500 bytes with the desired frequency. In our case it is reasonable to add packets of 1100 bytes. It should be noted that testing with these packets does not begin until the day after a special request is sent.
In order to gain access to the test results, it is necessary to apply for remote access (*telnet*) to the RIPE Test Box on port 9142. The data include the delays of packets of different sizes. In order to extract the data, it is necessary to identify each packet on the receiving and transmitting sides.
First, the sender's side should be explored:
------------------------------------------------------- ----------------
SNDP 9 1263374005 -h tt01.ripe.net -p 6000 -n 1024 -s 1353080538
SNDP 9 1263374005 -h tt146.ripe.net -p 6000 -n 100 -s **1353080554**
SNDP 9 1263374005 -h tt103.ripe.net -p 6000 -n 100 -s 1353080590
------------------------------------------------------- ----------------
: The data of sending box[]{data-label="t1"}
The last value in the line is the serial number of the packet. It should be remembered in order to find the same packet on the receiving side of the channel. Below is a sample record from the receiving side.
  ---------------------------------------------------------------------------
  RCDP 12 2 89.186.245.200 55730 193.233.1.69 6000 1263374005.779364
  **0.009001** 0X2107 0X2107 **1353080554** 0.000001 0.000001

  RCDP 12 2 200.19.119.120 57513 193.233.1.69 6000 1263374005.905792
  0.160090 0X2107 0X2107 1353080554 0.000003 0.000001
  ---------------------------------------------------------------------------
For a given packet number it is easy to find the packet delay. In this case it is 0.009001 sec. The following packet, 1353091581, has a size of 1100 bytes and a delay of 0.027033 seconds. Thus, the difference is 0.018032 seconds. The other packet delay values are processed similarly.
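Matching records like these by sequence number is easy to automate. The sketch below is ours; the exact field layout (delay in the 9th whitespace-separated field, sequence number in the 12th) is an assumption read off the sample records above:

```python
def delays_by_sequence(rcdp_lines):
    """Map packet sequence number -> one-way delay (seconds).

    Assumes the whitespace-separated RCDP record layout of the samples
    above: the delay is the 9th field, the sequence number the 12th.
    """
    out = {}
    for line in rcdp_lines:
        fields = line.split()
        if fields and fields[0] == "RCDP":
            out[int(fields[11])] = float(fields[8])
    return out

sample = ["RCDP 12 2 89.186.245.200 55730 193.233.1.69 6000 "
          "1263374005.779364 0.009001 0X2107 0X2107 1353080554 "
          "0.000001 0.000001"]
print(delays_by_sequence(sample))   # → {1353080554: 0.009001}
```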
The mean value of $D_2-D_1$ should be used in Eqn. (\[e4\]), so it is necessary to average several consecutive values. In the present experiment, the averaged difference $D_{av}(1100)-D_{av}(100)$ amounted to 0.000815 seconds in the direction $tt143\rightarrow tt146$. Then the available bandwidth can be calculated as:
$$B_{av}(tt143\rightarrow tt146)=\frac{8\times 1000}{0.000815}=9.8 \textit{ Mbps}$$ The average difference in the direction $tt146\rightarrow tt143$ was 0.001869 seconds. Then the available bandwidth will be:
$$B_{av}(tt146\rightarrow tt143)=\frac{8\times 1000}{0.001869}=4.28 \textit{ Mbps}$$
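Both estimates are a direct application of Eqn. (\[e4\]) with $\Delta W = 1000$ bytes; a quick check of the arithmetic:

```python
def bandwidth_mbps(delta_w_bytes, mean_delay_diff_s):
    """B = 8 * dW / dD, reported in Mbps."""
    return 8 * delta_w_bytes / mean_delay_diff_s / 1e6

print(round(bandwidth_mbps(1000, 0.000815), 1))  # tt143 -> tt146: 9.8
print(round(bandwidth_mbps(1000, 0.001869), 2))  # tt146 -> tt143: 4.28
```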
Measurements with the *pathload* utility ran into periodic trouble, even though all necessary ports had been opened. In the direction $tt146\rightarrow tt143$ the program did not get any results despite all our attempts: it idled and filled the channel with packet chains. The *pathload* results give a large spread of values, clearly beyond the capacity of the investigated channel. Further measurements with the *pathChirp* and *IGI* utilities were also unsuccessful: the programs gave errors and refused to measure the available bandwidth.
Therefore, it was decided to compare the results obtained by the different methods with data taken directly from the router. The *traceroute* utility determined the “bottleneck” of the route between SSAU and the Institute of Organic Chemistry of the Russian Academy of Sciences. It was the external SSAU router, whose bandwidth was limited to $30$ [*Mbps*]{}. An SNMP agent collected statistics from the SSAU border router.
All data are presented in Table \[t3\] indicating the time of the experiment.
  --- ------------ -------------------------- --------------------------- -------------------------- --------------------------
  N   Date         Direction                  Available bandwidth         Available bandwidth        Data from router
                                              (data of RIPE Test Box)     (data of *pathload*)
  1   13.01.2010   $tt143\rightarrow tt146$   $10.0\pm2.2$ [*Mbps*]{}     $21.9\pm14.2$ [*Mbps*]{}   $12.1\pm2.5$ [*Mbps*]{}
  2   13.01.2010   $tt146\rightarrow tt143$   $4.4\pm1.2$ [*Mbps*]{}                                 $7.8\pm3.8$ [*Mbps*]{}
  3   23.01.2010   $tt143\rightarrow tt146$   $20.3\pm5.1$ [*Mbps*]{}     $41.2\pm14.0$ [*Mbps*]{}   $18.7\pm1.1$ [*Mbps*]{}
  4   23.01.2010   $tt146\rightarrow tt143$   $9.3\pm2.7$ [*Mbps*]{}                                 $11.3\pm2.6$ [*Mbps*]{}
  5   06.02.2010   $tt143\rightarrow tt146$   $9.2\pm1.4$ [*Mbps*]{}      $67\pm14$ [*Mbps*]{}       $12.0\pm2.0$ [*Mbps*]{}
  6   06.02.2010   $tt146\rightarrow tt143$   $3.5\pm1.2$ [*Mbps*]{}                                 $4.5\pm2.0$ [*Mbps*]{}
  --- ------------ -------------------------- --------------------------- -------------------------- --------------------------

  : Results of available bandwidth measurements[]{data-label="t3"}
Table \[t3\] shows that the results obtained by our method and the router data are in good agreement, while the *pathload* measurements differ. The study of the statistical type of delay [@s3] provides an answer to the question why this happens. The dispersion of the measurement results indicates the presence of a variable delay part $d_{var}$. This utility uses Self-Loading Periodic Streams (SLoPS), like most others. The method consists in generating a chain of packets with excessive frequency, so that the packet delivery time increases significantly due to long queues at the routers. In this case, the transmitter starts to reduce the frequency of packet generation until the queue disappears. The process is then repeated until the average frequency of packet generation approaches the available bandwidth. The main disadvantage of this technique is unreliable measurements, because the influence of the variable delay part is not taken into account. This is the reason for the fantastic $90$ [*Mbps*]{} *pathload* result for a channel with a $30$ [*Mbps*]{} capacity.
The required number of measurements
===================================
The main disadvantage of most modern tools is the large spread in the measured values of available bandwidth. The measurement mechanisms of throughput utilities do not take into account the effect of the variable delay part. Unfortunately, none of the developed utilities provides a compensation mechanism for the random delay component.
Any method that gives accurate results should contain a mechanism for smoothing the impact of $d_{var}$. In order to understand the effect of the variable part on the measurement results, we turn to the following experiment. A series of measurements was made between the RIPE Test Boxes tt01.ripe.net (Amsterdam, Holland) and tt143.ripe.net (Samara State Aerospace University, Russian Federation). About 3000 delay values were received for packets of 100 and 1024 bytes in both directions. Using the presented method, the available bandwidth was calculated for the cases where averaging is performed over 20, 50 and 100 pairs of values. Fig. \[f3\] shows the available bandwidth calculated under these different averaging conditions.
![Available bandwidth calculated with averaging over 20, 50 and 100 values[]{data-label="f3"}]{height="8cm"}
As is apparent from the graph, the fluctuations of the calculated available bandwidth remain critical when 20 values are averaged. With 50 values they are less noticeable, and with 100 values the curve is almost smooth. There is a clear correlation between the number of measurements and the variation of the calculated available bandwidth. The fluctuations are caused by the variable part of the delay; its role diminishes as the number of measurements grows.
In this section the necessary number of measurements is calculated using two methods: from experimental data of the RIPE Test Box and by simulation knowing the distribution type for network delay.
Based on the data obtained from the tt01 and tt143 Boxes, we computed the standard deviations (SD) $\sigma_n(B)$ of the available bandwidth.
Data are presented in Table \[t4\] and graphically depicted in Fig. \[f4\].
  ------------------------------------------------ ------ ------ ------ ----- ----- ----- ----- ----- ----- -----
  Number of measurements, $n$                       5      10     20     30    40    50    70    100   200   300
  Standard deviation, $\sigma_n(B)$ ([*Mbps*]{})    22.2   14.9   10.2   8.3   7.3   6.7   5.7   4.9   2.9   2.3
  ------------------------------------------------ ------ ------ ------ ----- ----- ----- ----- ----- ----- -----
![Dependence of the standard deviation $\sigma_n(B)$ on the number of measurements[]{data-label="f4"}]{height="7.5cm"}
Fig. \[f4\] shows that it is necessary to take at least 50 measurements (the delay difference for 50 pairs of packets). In this case, the calculated value exceeds the standard deviation at least twofold, i.e. $B\geq 2\sigma_n(B)$.
A more accurate result can be obtained using a generating function describing the packet delays. In paper [@s3] it is shown that the delay distribution is described by an exponential law, and the following generating function can be used for delay emulation: $$D=D_{min}+W/B-(1/\lambda)ln(1-F(D,W)),
\label{e7}$$ where $\lambda=1/(D_{av}-D_{min})$. The function $F(W,D)$ is a standard random number generator in the interval $[0;1)$.
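The generating function of Eqn. (\[e7\]) makes it easy to study the averaging behavior in simulation. The sketch below is ours, not the authors' code; it uses the test values $\Delta W^T = 1000$ bytes, $B^T = 10$ [*Mbps*]{}, $\lambda^T = 1000\ s^{-1}$ and reproduces the qualitative trend of Table \[t5\]:

```python
import math
import random
import statistics

def delay(w_bytes, b_bps, d_min=0.0, lam=1000.0):
    """Draw one delay from Eqn. (7): D = D_min + W/B - (1/lam) * ln(1 - F)."""
    return d_min + 8 * w_bytes / b_bps - math.log(1 - random.random()) / lam

def sd_of_mean_diff(n, trials=2000):
    """SD of the delay difference D2 - D1 averaged over n packet pairs."""
    means = []
    for _ in range(trials):
        diffs = [delay(1100, 10e6) - delay(100, 10e6) for _ in range(n)]
        means.append(sum(diffs) / n)
    return statistics.pstdev(means)

random.seed(1)
# The spread shrinks roughly as 1/sqrt(n), matching the trend of the table
assert sd_of_mean_diff(100) < sd_of_mean_diff(20) < sd_of_mean_diff(5)
```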
Knowledge of the generating function allows calculating the tabulated values of $\eta^{T}_{n}$ from Eqn. \[e5\]. First, the standard deviation $\sigma^{T}_{n}(D_2-D_1)$ of the delay difference is found, taking $\lambda^T=1000 s^{-1}$. The calculation is carried out for the following values: $\Delta W^T=W_2-W_1=1000$ [*bytes*]{}, $B^T=10$ [*Mbps*]{}, which corresponds to $D^{T}_{2}-D^{T}_{1}=8\cdot 10^{-4}$[*s*]{}.
  ---------------------------------------------------------- ------- ------- ------- ------- ------- ------- -------
  Number of measurements, $n$                                 5       10      20      30      50      100     200
  Standard deviation, $\sigma^{T}_{n}(D_2-D_1)$ ([*ms*]{})    0.661   0.489   0.354   0.284   0.195   0.111   0.075
  ---------------------------------------------------------- ------- ------- ------- ------- ------- ------- -------
For the $\sigma^{T}_{n}(D_2-D_1)$ values from Table \[t5\], the corresponding values of $\eta^{T}_{n}$ can be found (see Table \[t6\]).
  --------------------------------------- ------ ------ ------ ------ ------ ------ ------
  Number of measurements, $n$             5      10     20     30     50     100    200
  Measurement error, $\eta^{T}_{n}$ (%)   82.6   61.1   44.2   35.5   24.4   13.9   9.4
  --------------------------------------- ------ ------ ------ ------ ------ ------ ------

  : Dependence of error on the number of measurements[]{data-label="t6"}
During a real experiment the measured quantities $\lambda^{exp}$, $D^{exp}_{2}-D^{exp}_{1}$, and $B^{exp}$ take arbitrary values, but with correction factors one can easily calculate the required number of measurements:
$$\eta^{T}_{n}=k(D_2-D_1)\cdot k(\lambda)\cdot \eta^{exp}_{n},
\label{e8}$$
where $k(\lambda)=\lambda^{exp}/\lambda^T$, and $k(D_2-D_1)=(D^{exp}_{2}-D^{exp}_{1})/(D^{T}_{2}-D^{T}_{1})$.
Substituting into Eqn. \[e8\] the values of the coefficients $k(D_2-D_1)$, $k(\lambda)$ and the desired measurement accuracy $\eta^{exp}$, we compare the obtained values with the tabulated $\eta^{T}_{n}$ and find the number of measurements $n$ required to achieve the given error.
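The lookup implied by Eqn. \[e8\] can be sketched as follows; the tabulated $\eta^{T}_{n}$ values are taken from Table \[t6\], while the function name and the interpolation-free lookup are ours:

```python
# Tabulated error (%) vs. number of measurements, from Table 6
ETA_TABLE = {5: 82.6, 10: 61.1, 20: 44.2, 30: 35.5, 50: 24.4, 100: 13.9, 200: 9.4}

def required_measurements(eta_exp_percent, lam_exp, dd_exp,
                          lam_t=1000.0, dd_t=8e-4):
    """Smallest tabulated n whose scaled error meets the target:
    eta^T = k(D2-D1) * k(lambda) * eta^exp, Eqn. (8)."""
    eta_t = (dd_exp / dd_t) * (lam_exp / lam_t) * eta_exp_percent
    for n in sorted(ETA_TABLE):
        if ETA_TABLE[n] <= eta_t:
            return n
    return None  # more than 200 measurements needed

# Under the tabulated conditions themselves, a 25% error target needs n = 50:
print(required_measurements(25.0, 1000.0, 8e-4))   # → 50
```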
Conclusion
==========
In this paper we found a way to measure available bandwidth using the delay data collected by the RIPE Test Box. The method consists in comparing the average end-to-end delays of packets of different sizes, from which the available bandwidth can be calculated.
We carried out a further study of the model and found the limits of its applicability, which depend on the accuracy of the delay measurements. Experimental results were obtained with our method and with an alternative one: the *pathload* utility was selected as the benchmark tool. The paper shows that the accuracy of the available bandwidth calculation depends on the variable delay component, $d_{var}$.
Experiments and computer simulation using the generating function of the delay were conducted. They have shown that achieving a given error requires averaging a large number of measurements. We found the relationship between accuracy and the number of measurements that ensures the required level of accuracy.
In the future we plan to implement this method in the measurement infrastructure of the RIPE Test Box.
Acknowledgment {#acknowledgment .unnumbered}
==============
We would like to express special thanks to Prasad Calyam and Gregg Trueb from the University of Ohio; Professor Richard Sethmann, Stephan Gitz and Till Schleicher from Hochschule Bremen University of Applied Sciences; and Dmitry Sidelnikov from the Institute of Organic Chemistry at the Russian Academy of Sciences for their invaluable assistance in the measurements. We would also like to thank all the staff of RIPE Test Box technical support, especially Ruben fon Staveren and Roman Kaliakin, for constant help with questions concerning the measurement infrastructure.
[14]{}
Platonov A.P., Sidelnikov D.I., Strizhov M.V., Sukhov A.M., Estimation of available bandwidth and measurement infrastructure for Russian segment of Internet, Telecommunications, 2009, 1, pp.11-16.
Choi, B.-Y., Moon, S., Zhang, Z.-L., Papagiannaki, K. and Diot, C.: Analysis of Point-To-Point Packet Delay In an Operational Network. In: Infocom 2004, Hong Kong, pp. 1797-1807 (2004).
A.M. Sukhov, N Kuznetsova, What type of distribution for packet delay in a global network should be used in the control theory? 2009; arXiv: 0907.4468.
Padhye, J., Firoiu, V., Towsley, D., Kurose, J.: Modeling TCP Throughput: A Simple Model and its Empirical Validation. In: Proc. SIGCOMM Symp. Communications Architectures and Protocols, pp. 304-314 (1998).
Dovrolis C., Ramanathan P., and Moore D., Packet-Dispersion Techniques and a Capacity-Estimation Methodology, IEEE/ACM Transactions on Networking, v.12, n. 6, December 2004, p. 963-977.
Downey A.B., Using Pathchar to estimate internet link characteristics, in Proc. ACM SICCOMM, Sept. 1999, pp. 222-223.
Ripe Test Box, http://ripe.net/projects/ttm/
Jacobson, V. Congestion avoidance and control. In Proceedings of SIGCOMM 88 (Stanford, CA, Aug. 1988), ACM.
Prasad R.S., Dovrolis C., and B. A. Mah B.A., The effect of layer-2 store-and-forward devices on per-hop capacity estimation, in Proc. IEEE INFOCOM, Mar. 2003, pp. 2090-2100.
Jain, M., Dovrolis, K.: End-to-end Estimation of the Available Bandwidth Variation Range. In: SIGMETRICS’05, Ban, Alberta, Canada (2005).
Crovella, M.E. and Carter, R.L.: Dynamic Server Selection in the Internet. In: Proc. of the Third IEEE Workshop on the Architecture and Implementation of High Performance Communication Subsystems (1995).
N. Hohn, D. Veitch, K. Papagiannaki and C. Diot, Bridging Router Performance And Queuing Theory, Proc. ACM SIGMETRICS, New York, USA, Jun 2004.
Kleinrock, L. Queueing Systems, vol. II. John Wiley & Sons, 1976.
Georgatos, F., Gruber, F., Karrenberg, D., Santcroos, M., Susanj, A., Uijterwaal, H. and Wilhelm R., Providing active measurements as a regular service for ISP’s. In: PAM2001.
|
{
"pile_set_name": "arxiv"
}
|
Topic: reinvent midnight madness
Amazon announced a new service at the AWS re:Invent Midnight Madness event. Amazon Sumerian is a solution that aims to make it easier for developers to build virtual reality, augmented reality, and 3D applications. It features a user friendly editor, which can be used to drag and drop 3D objects and characters into scenes. Amazon …
|
{
"pile_set_name": "pile-cc"
}
|
Mind The Gap
America’s British population has taken to the web to voice its displeasure at news that U.S. candy giant Hershey has successfully blocked our much loved U.K.-produced chocolate from being exported to the land of the free.
So the Oscar nominations were announced this morning, and, as expected, the great British hope, Atonement, was nominated for Best Picture. However, its two stars, James McAvoy and Keira Knightley, were omitted for the top acting …
The British dominance of Hollywood has been a big story throughout award season. But one could be forgiven for mistaking last night’s Oscars for a World Cup match, and, predictably, Britain got beaten by Mexico.
|
{
"pile_set_name": "pile-cc"
}
|
---
abstract: 'We propose a construction of string cohomology spaces for Calabi-Yau hypersurfaces that arise in Batyrev’s mirror symmetry construction. The spaces are defined explicitly in terms of the corresponding reflexive polyhedra in a mirror-symmetric manner. We draw connections with other approaches to the string cohomology, in particular with the work of Chen and Ruan.'
address:
- 'Department of Mathematics, Columbia University, New York, NY 10027, USA'
- 'Max-Planck-Institut für Mathematik, Bonn, D-53111, Germany'
author:
- 'Lev A. Borisov'
- 'Anvar R. Mavlyutov'
title: 'String cohomology of Calabi-Yau hypersurfaces via Mirror Symmetry'
---
Introduction {#section.intro}
============
The notion of orbifold cohomology has appeared in physics as a result of studying the string theory on orbifold global quotients, (see [@dhvw]). In addition to the usual cohomology of the quotient, this space was supposed to include the so-called twisted sectors, whose existence was predicted by the modular invariance condition on the partition function of the theory. Since then, there have been several attempts to give a rigorous mathematical formulation of this cohomology theory. The first two, due to [@bd] and [@Batyrev.cangor], tried to define the topological invariants of certain algebraic varieties (including orbifold global quotients) that should correspond to the dimensions of the Hodge components of a conjectural string cohomology space. These invariants should have the property arising naturally from physics: they are preserved by partial crepant resolutions; moreover, they coincide with the usual Hodge numbers for smooth varieties. Also, these invariants must be the same as those defined by physicists for orbifold global quotients. In [@Batyrev.cangor; @Batyrev.nai], Batyrev has successfully solved this problem for a large class of singular algebraic varieties. The first mathematical definition of the orbifold cohomology [*space*]{} was given in [@cr] for arbitrary orbifolds. Moreover, this orbifold cohomology possesses a product structure arising as a limit of a natural quantum product. It is still not entirely clear if the dimensions of the Chen-Ruan cohomology coincide with the prescription of Batyrev whenever both are defined, but they do give the same result for reduced global orbifolds.
In this paper, we propose a construction of string cohomology spaces for Calabi-Yau hypersurfaces that arise in the Batyrev mirror symmetry construction (see [@b2]), with the spaces defined rather explicitly in terms of the corresponding reflexive polyhedra. A peculiar feature of our construction is that instead of a single string cohomology space we construct a finite-dimensional family of such spaces, which is consistent with the physicists’ picture (see [@Greene]). We verify that this construction is consistent with the previous definitions in [@bd], [@Batyrev.cangor] and [@cr], in the following sense. The (bigraded) dimension of our space coincides with the definitions of [@bd] and [@Batyrev.cangor]. In the case of hypersurfaces that have only orbifold singularities, we recover Chen-Ruan’s orbifold cohomology as one special element of this family of string cohomology spaces. We also conjecture a partial natural ring structure on our string cohomology space, which is in correspondence with the cohomology ring of crepant resolutions. This may be used as a real test of the Chen-Ruan orbifold cohomology ring. We go further, and conjecture the B-model chiral ring on the string cohomology space. This is again consistent with the description of the B-model chiral ring of smooth Calabi-Yau hypersurfaces in [@m2].
Our construction of the string cohomology space for Calabi-Yau hypersurfaces is motivated by Mirror Symmetry. Namely, the description in [@m3] of the cohomology of semiample hypersurfaces in toric varieties applies to the smooth Calabi-Yau hypersurfaces in [@b2]. Analysis of Mirror Symmetry on this cohomology leads to a natural construction of the string cohomology space for all semiample Calabi-Yau hypersurfaces. As already mentioned, our string cohomology space depends not only on the complex structure (the defining polynomial $f$), but also on some extra parameter we call $\omega$. For special values of this parameter of an orbifold Calabi-Yau hypersurface, we get the orbifold Dolbeault cohomology of [@cr]. However, for non-orbifold Calabi-Yau hypersurfaces, there is no natural special choice of $\omega$, which means that the general definition of the string cohomology space should depend on some mysterious extra parameter. In the situation of Calabi-Yau hypersurfaces, the parameter $\omega$ corresponds to the defining polynomial of the mirror Calabi-Yau hypersurface. In general, we expect that this parameter should be related to the “stringy complexified Kähler class”, which is yet to be defined.
In an attempt to extend our definitions beyond the Calabi-Yau hypersurface case, we give a conjectural definition of string cohomology vector spaces for stratified varieties with $\QQ$-Gorenstein toroidal singularities that satisfy certain restrictions on the types of singular strata. This definition involves intersection cohomology of the closures of strata, and we check that it produces spaces of correct bigraded dimension. It also reproduces orbifold cohomology of a $\QQ$-Gorenstein toric variety as a special case.
Here is an outline of our paper. In Section \[s:sth\], we examine the connection between the original definition of the [*string-theoretic*]{} Hodge numbers in [@bd] and the [*stringy*]{} Hodge numbers in [@Batyrev.cangor]. We point out that these do not always give the same result and argue that the latter definition is the more useful one. In Section \[section.mirr\], we briefly review the mirror symmetry construction of Batyrev, mainly to fix our notations and to describe the properties we will use in the derivation of the string cohomology. Section \[section.anvar\] describes the cohomology of semiample hypersurfaces in toric varieties and explains how mirror symmetry provides a conjectural definition of the string cohomology of Calabi-Yau hypersurfaces. It culminates in Conjecture \[semiampleconj\], where we define the stringy Hodge spaces of semiample Calabi-Yau hypersurfaces in complete toric varieties. We spend most of the remainder of the paper establishing the expected properties of the string cohomology space. Sections \[s:hd\] and \[section.brel\] calculate the dimensions of the building blocks of our cohomology spaces. In Section \[section.brel\], we develop a theory of deformed semigroup rings which may be of independent interest. This allows us to show in Section \[section.bbo\] that Conjecture \[semiampleconj\] is compatible with the definition of the stringy Hodge numbers from [@Batyrev.cangor]. In the non-simplicial case, this requires the use of $G$-polynomials of Eulerian posets, whose relevant properties are collected in the Appendix. Having established that the dimension is correct, we try to extend our construction to the non-hypersurface case. Section \[section.general\] gives another conjectural definition of the string cohomology vector space in a somewhat more general situation. It hints that the intersection cohomology and the perverse sheaves should play a prominent role in future definitions of string cohomology. 
In Section \[s:vs\], we connect our work with that of Chen-Ruan [@cr] and Poddar [@p]. Finally, in Section \[section.vertex\], we provide yet another description of the string cohomology of Calabi-Yau hypersurfaces, which was inspired by the vertex algebra approach to Mirror Symmetry.
[*Acknowledgments.*]{} We thank Victor Batyrev, Robert Friedman, Mainak Poddar, Yongbin Ruan and Richard Stanley for helpful conversations and useful references. The second author also thanks the Max-Planck Institut für Mathematik in Bonn for its hospitality and support.
String-theoretic and stringy Hodge numbers {#s:sth}
==========================================
The [*string-theoretic*]{} Hodge numbers were first defined in the paper of Batyrev and Dais (see [@bd]) for varieties with Gorenstein toroidal or quotient singularities. In subsequent papers [@Batyrev.cangor; @Batyrev.nai] Batyrev defined [*stringy*]{} Hodge numbers for arbitrary varieties with log-terminal singularities. To our knowledge, the relationship between these two concepts has never been clarified in the literature. The goal of this section is to show that the string-theoretic Hodge numbers coincide with the stringy ones under some conditions on the singular strata.
We begin with the definition of the string-theoretic Hodge numbers.
\[d:bd\] [@bd] Let $X = \bigcup_{i \in I} X_i$ be a stratified algebraic variety over $\CC$ with at most Gorenstein toroidal singularities such that for each $i \in I$ the singularities of $X$ along the stratum $X_i$ of codimension $k_i$ are defined by a $k_i$-dimensional finite rational polyhedral cone $\sigma_i$; i.e., $X$ is locally isomorphic to $${\CC}^{k-k_i} \times U_{\sigma_i}$$ at each point $x \in X_i$, where $U_{\sigma_i}$ is a $k_i$-dimensional affine toric variety which is associated with the cone $\sigma_i$ (see [@d]), and $k=\dim X$. Then the polynomial $$E^{\rm BD}_{\rm
st}(X;u,v) := \sum_{i \in I} E(X_i;u,v) \cdot S(\sigma_i,uv)$$ is called the [*string-theoretic E-polynomial*]{} of $X$. Here, $$S(\sigma_i,t):=(1-t)^{\dim \sigma_i}\sum_{n\in \sigma_i} t^{\deg
n}= (t-1)^{\dim \sigma_i}\sum_{n\in {\rm int} \sigma_i} t^{-\deg
n}$$ where $\deg$ is the linear function on $\sigma_i$ that takes value $1$ on the minimal integral generators of the one-dimensional faces of $\sigma_i$, and ${\rm int}\,\sigma_i$ is the relative interior of $\sigma_i$. If we write $E^{\rm BD}_{\rm st}(X; u,v)$ in the form $$E^{\rm
BD}_{\rm st}(X;u,v) = \sum_{p,q} a_{p,q} u^{p}v^{q},$$ then the numbers $h^{p,q{\rm (BD)}}_{\rm st}(X) := (-1)^{p+q}a_{p,q}$ are called the [*string-theoretic Hodge numbers*]{} of $X$.
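As an illustration of this definition (the computation is ours, included for concreteness), consider the two-dimensional cone $\sigma$ generated by $(1,0)$ and $(1,n)$ in $N={\Bbb Z}^2$, for which $U_\sigma\cong\CC^2/{\Bbb Z}_n$ is the $A_{n-1}$ surface singularity. Here $\deg$ is the first coordinate, and the lattice points of $\sigma$ of degree $k$ are $(k,j)$ with $0\le j\le kn$, so $$S(\sigma,t)=(1-t)^2\sum_{k\ge0}(kn+1)t^k=(1-t)^2\Bigl(\frac{1}{1-t}+\frac{nt}{(1-t)^2}\Bigr)=1+(n-1)t.$$ The coefficient $n-1$ matches the number of exceptional curves of the crepant resolution of $U_\sigma$, each of which contributes $uv$ to the string-theoretic $E$-polynomial.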
\[r:epol\] The E-polynomial in the above definition is defined for an arbitrary algebraic variety $X$ as $$E(X;u,v)=\sum_{p,q}e^{p,q}u^p v^q,$$ where $e^{p,q}=\sum_{k\ge0}(-1)^k h^{p,q}(H^k_c(X))$.
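For orientation, we record the $E$-polynomials of some standard varieties (these are classical facts, not computed in this paper): $$E(\CC^k;u,v)=(uv)^k,\qquad E(\CC^*;u,v)=uv-1,\qquad E({{\PP}}^k;u,v)=1+uv+\dots+(uv)^k.$$ The $E$-polynomial is additive with respect to decompositions into Zariski locally closed strata, e.g. $E({{\PP}}^1;u,v)=E(\CC;u,v)+E({\rm pt};u,v)=uv+1$.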
Stringy Hodge numbers of $X$ are defined in terms of the resolutions of its singularities. In general, one can only define the $E$-function in this case, which may or may not be a polynomial. We refer to [@Kollar] for the definitions of log-terminal singularities and related issues.
\[d:bcangor\][@Batyrev.cangor] Let $X$ be a normal irreducible algebraic variety with at worst log-terminal singularities, and let $\rho\, : \, Y \rightarrow
X$ be a resolution of singularities whose exceptional locus is a divisor with simple normal crossings, with irreducible components $D_1, \ldots, D_r$. Let $a_j>-1$ be the discrepancy of $D_j$, see [@Kollar]. Set $I: = \{1, \ldots, r\}$. For any subset $J \subset I$ we consider $$D_J := \left\{ \begin{array}{ll}
\bigcap_{ j \in J} D_j & \mbox{\rm if $J \neq \emptyset$}
\\
Y & \mbox{\rm if $J = \emptyset$} \end{array} \right. \;\;\;\;
\;\;\;\; \,\mbox{\rm and} \;\;\;\; \;\;\;\; D_J^{\circ} := D_J
\setminus \bigcup_{ j \in\, I \setminus J} D_j.$$ We define an algebraic function $E_{\rm st}(X; u,v)$ in two variables $u$ and $v$ as follows: $$E_{\rm st}(X; u,v) := \sum_{J \subset I}
E(D_J^{\circ}; u,v) \prod_{j \in J} \frac{uv-1}{(uv)^{a_j +1} -1}$$ (the empty product, for $J =
\emptyset$, is taken to be $1$). We call $E_{\rm st}(X; u,v)$ [*the stringy $E$-function of*]{} $X$. If $E_{\rm st}(X; u,v)$ is a polynomial, the stringy Hodge numbers are defined from it in the same way as in Definition \[d:bd\].
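To illustrate the definition on the simplest example (supplied by us for concreteness), take $X=\CC^2/\{\pm1\}$, the $A_1$ surface singularity. Blowing up the origin gives a resolution $\rho:Y\to X$ with a single exceptional divisor $D\cong{{\PP}}^1$ of discrepancy $a_1=0$, so the factor $(uv-1)/((uv)^{a_1+1}-1)$ equals $1$, and $$E_{\rm st}(X;u,v)=E(Y\setminus D;u,v)+E(D;u,v)=\bigl((uv)^2-1\bigr)+(1+uv)=(uv)^2+uv.$$ Definition \[d:bd\] gives the same answer for the stratification of $X$ by $X\setminus\{0\}$ and $\{0\}$, since $S(\sigma,uv)=1+uv$ for the cone $\sigma$ defining the $A_1$ singularity.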
It is not at all obvious that the above definition is independent of the choice of the resolution. The original proof of Batyrev uses motivic integration over spaces of arcs to relate the $E$-functions obtained via different resolutions. Since the work of D. Abramovich, K. Karu, K. Matsuki and J. Włodarczyk [@AKMW], it is now possible to check independence of the resolution by looking at the case of a single blowup with a smooth center compatible with the normal crossing condition.
\[strataE\] Let $X$ be a disjoint union of strata $X_i$, which are locally closed in Zariski topology, and let $\rho$ be a resolution as in Definition \[d:bcangor\]. For each $X_i$ consider $$E_{\rm st}(X_i\subseteq X; u,v) := \sum_{J \subset I}
E(D_J^{\circ}\cap \rho^{-1}(X_i); u,v)
\prod_{j \in J}
\frac{uv-1}{(uv)^{a_j +1} -1}.$$ Then this $E$-function is independent of the choice of the resolution $Y$. The $E$-function of $X$ decomposes as $$E_{\rm st}(X;u,v) = \sum_i E_{\rm st}(X_i\subseteq X; u,v).$$
[*Proof.*]{} Each resolution of $X$ induces a resolution of the complement of $\bar{X_i}$. This shows that for each $X_i$ the sum $$\sum_{j,X_j\subseteq \bar{X_i}} E_{\rm st}(X_j\subseteq X; u,v)$$ is independent of the choice of the resolution and is thus well-defined. One then concludes by induction on the dimension of $X_i$. The last statement is clear.
It is a delicate question what data are really necessary to calculate $E_{\rm st}(X_i\subseteq X; u,v)$. It is clear that the knowledge of a Zariski open set of $X$ containing $X_i$ is enough. However, it is not clear whether it is enough to know an analytic neighborhood of $X_i$.
We will use the above lemma to show that the string-theoretic Hodge numbers and the stringy Hodge numbers coincide in a wide class of examples.
\[BDvsB\] Let $X=\bigcup_i X_i$ be a stratified algebraic variety with at worst Gorenstein toroidal singularities as in Definition \[d:bd\]. Assume in addition that for each $i$ there is a desingularization $Y$ of $X$ so that its restriction to the preimage of $X_i$ is a locally trivial fibration in Zariski topology. Moreover, for a point $x\in X_i$ the preimage in $Y$ of an analytic neighborhood of $x$ is complex-analytically isomorphic to a preimage of a neighborhood of $\{0\}$ in $U_{\sigma_i}$ under some resolution of singularities of $U_{\sigma_i}$, times a complex disc, so that the isomorphism is compatible with the resolution morphisms. Then $$E_{\rm st}^{\rm BD}(X;u,v) = E_{\rm st}(X;u,v).$$
[*Proof.*]{} Since $E$-polynomials are multiplicative for Zariski locally trivial fibrations (see [@dk]), the above assumptions on the singularities show that $$E_{\rm st}(X_i\subseteq X; u,v) =
E(X_i;u,v)E_{\rm st}(\{0\}\subseteq U_{\sigma_i};u,v).$$ We have also used here the fact that, since the fibers are projective, the analytic isomorphism implies the algebraic one, by GAGA. By the second statement of Lemma \[strataE\], it is enough to show that $$E_{\rm st}(\{0\}\subseteq U_{\sigma_i};u,v) = S(\sigma_i,uv).$$ This follows from the proof of [@Batyrev.cangor Theorem 4.3], where the products $$\prod_{j \in J}
\frac{uv-1}{(uv)^{a_j +1} -1}$$ are interpreted as geometric series and then as sums of $t^{\deg(n)}$ over lattice points $n$ of $\sigma_i$.
String-theoretic and stringy Hodge numbers coincide for nondegenerate hypersurfaces (complete intersections) in Gorenstein toric varieties.
Indeed, in this case, the toric desingularizations of the ambient toric variety induce the desingularizations with the required properties.
We will keep this corollary in mind and from now on will silently transfer all the results on string-theoretic Hodge numbers of hypersurfaces and complete intersections in toric varieties in [@bb], [@bd] to their stringy counterparts.
An example of a variety whose string-theoretic and stringy Hodge numbers [*differ*]{} is provided by the quotient of $\CC^2\times E$ by the finite group of order six generated by $$r_1:(x,y;z)\mapsto (x\ee^{2\pi\ii/3},y\ee^{-2\pi\ii/3};z),~
r_2:(x,y;z)\mapsto(y,x;z+p)$$ where $(x,y)$ are coordinates on $\CC^2$, $z$ is the uniformizing coordinate on the elliptic curve $E$ and $p$ is a point of order two on $E$. In its natural stratification, the quotient has a stratum of $A_2$ singularities, so that going around a loop in the stratum results in the non-trivial automorphism of the singularity.
We expect that the stringy Hodge numbers of algebraic varieties with abelian quotient singularities coincide with the dimensions of their orbifold cohomology [@cr]. This is not going to be true for the string-theoretic Hodge numbers. Moreover, the latter numbers are not preserved by partial crepant resolutions, as required by physics; see the above example. As a result, we believe that the stringy Hodge numbers are the truly interesting invariant, and that the string-theoretic numbers are a now obsolete first attempt to define them.
Mirror symmetry construction of Batyrev {#section.mirr}
=======================================
In this section, we review the mirror symmetry construction from [@b2]. We can describe it starting with a semiample nondegenerate (transversal to the torus orbits) anticanonical hypersurface $X$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$. Such a hypersurface is Calabi-Yau. The semiampleness property produces a contraction map, whose characteristic properties are described by the following statement.
\[p:sem\] [@m1] Let $\PP_\Sigma$ be a complete toric variety with a big and nef divisor class $[X]\in A_{d-1}({{{\PP}_{\Sigma}}})$. Then, there exists a unique complete toric variety ${{\PP}_{\Sigma_X}}$ with a toric birational map $\pi:{{{\PP}_{\Sigma}}}\to{{\PP}_{\Sigma_X}}$, such that $\Sigma$ is a subdivision of $\Sigma_X$, $\pi_*[X]$ is ample and $\pi^*\pi_*[X]=[X]$. Moreover, if $X=\sum_{\rho}a_\rho D_\rho$ is torus-invariant, then $\Sigma_X$ is the normal fan of the associated polytope $$\Delta_X=\{m\in M:\langle m,e_\rho\rangle\geq-a_\rho
\text{ for all } \rho\}\subset M_{\Bbb R}.$$
Our notation is a standard one taken from [@bc; @c2]: $M$ is a lattice of rank $d$; $N=\text{Hom}(M,{\Bbb Z})$ is the dual lattice; $M_{\Bbb R}$ and $N_{\Bbb R}$ are the $\Bbb R$-scalar extensions of $M$ and $N$; $\Sigma$ is a finite rational polyhedral fan in $N_{\Bbb R}$; ${\PP}_{\Sigma}$ is a $d$-dimensional toric variety associated with $\Sigma$; $\Sigma(k)$ is the set of all $k$-dimensional cones in $\Sigma$; $e_\rho$ is the minimal integral generator of the $1$-dimensional cone $\rho\in\Sigma$ corresponding to a torus invariant irreducible divisor $D_\rho$.
Applying Proposition \[p:sem\] to the semiample Calabi-Yau hypersurface, we get that the push-forward $\pi_*[X]$ is anticanonical and ample, whence, by Lemma 3.5.2 in [@ck], the toric variety ${{\PP}_{\Sigma_X}}$ is Fano, associated with the polytope $\Delta\subset M_{\Bbb R}$ of the anticanonical divisor $\sum_{\rho} D_{\rho}$ on ${{{\PP}_{\Sigma}}}$. Then, [@m1 Proposition 2.4] shows that the image $Y:=\pi(X)$ is an ample nondegenerate hypersurface in ${{\PP}_{\Sigma_X}}={{\PP}}_\Delta$. The fact that ${{\PP}}_\Delta$ is Fano means by Proposition 3.5.5 in [@ck] that the polytope $\Delta$ is reflexive, i.e., its dual $$\Delta^*=\{n\in N_{\Bbb R}: \langle m,n\rangle\ge-1\text{ for } m\in\Delta\}$$ has all its vertices at lattice points in $N$, and the only lattice point in the interior of $\Delta^*$ is the origin $0$. Now, consider the toric variety ${{\PP}}_{\Delta^*}$ associated to the polytope $\Delta^*$ (the minimal integral generators of its fan are precisely the vertices of $\Delta$). Theorem 4.1.9 in [@b2] says that an anticanonical nondegenerate hypersurface $Y^*\subset{{\PP}}_{\Delta^*}$ is a Calabi-Yau variety with canonical singularities. The Calabi-Yau hypersurface $Y^*$ is expected to be a mirror of $Y$. In particular, they pass the topological mirror symmetry test for the stringy Hodge numbers: $$h^{p,q}_{\rm st}(Y)=h_{\rm st}^{d-1-p,q}(Y^*), 0\le p,q\le d-1,$$ by [@bb Theorem 4.15]. Moreover, all crepant partial resolutions $X$ of $Y$ have the same stringy Hodge numbers: $$h^{p,q}_{\rm st}(X)=h^{p,q}_{\rm st}(Y).$$ Physicists predict that such resolutions of Calabi-Yau varieties have indistinguishable physical theories. Hence, all crepant partial resolutions of $Y$ may be called the mirrors of crepant partial resolutions of $Y^*$. 
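The simplest illustration of this duality is a standard example (not discussed further in this paper): for the anticanonical hypersurfaces in ${{\PP}}^2$ one has $\Delta={\rm conv}\{(-1,-1),(2,-1),(-1,2)\}$, whose dual is the reflexive triangle $\Delta^*={\rm conv}\{(1,0),(0,1),(-1,-1)\}$. The hypersurfaces $Y\subset{{\PP}}_\Delta={{\PP}}^2$ are plane cubics (elliptic curves), while the mirrors $Y^*$ live in ${{\PP}}_{\Delta^*}\cong{{\PP}}^2/({\Bbb Z}/3{\Bbb Z})$.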
To connect this to the classical formulation of mirror symmetry, one needs to note that if there exist crepant smooth resolutions $X$ and $X^*$ of $Y$ and $Y^*$, respectively, then $$h^{p,q}(X)=h^{d-1-p,q}(X^*), 0\le p,q\le d-1,$$ since the stringy Hodge numbers coincide with the usual ones for smooth Calabi-Yau varieties. The equality of Hodge numbers is expected to extend to an isomorphism ([*mirror map*]{}) of the corresponding Hodge spaces, which is compatible with the chiral ring products of A and B models (see [@ck] for more details).
String cohomology construction for Calabi-Yau hypersurfaces {#section.anvar}
===========================================================
In this section, we show how the description of cohomology of semiample hypersurfaces in [@m3] leads to a construction of the string cohomology space of Calabi-Yau hypersurfaces. We first review the building blocks participating in the description of the cohomology in [@m3], and then explain how these building blocks should interchange under mirror symmetry for a pair of smooth Calabi-Yau hypersurfaces in Batyrev’s mirror symmetry construction. Mirror symmetry and the fact that the dimension of the string cohomology is the same for all partial crepant resolutions of ample Calabi-Yau hypersurfaces leads us to a conjectural description of string cohomology for all semiample Calabi-Yau hypersurfaces. In the next three sections, we will prove that this space has the dimension prescribed by [@bd].
The cohomology of a semiample nondegenerate hypersurface $X$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$ splits into the [*toric*]{} and [*residue*]{} parts: $$H^*(X)=H^*_{\rm toric}(X)\oplus H^*_{\rm res}(X),$$ where the first part is the image of the cohomology of the ambient space, while the second is the residue map image of the cohomology of the complement to the hypersurface. By [@m2 Theorem 5.1], $$\label{e:ann}
H^*_{\rm toric}(X)\cong H^*({{{\PP}_{\Sigma}}})/Ann(X)$$ where $Ann(X)$ is the annihilator of the class $[X]\in H^2({{{\PP}_{\Sigma}}})$. The cohomology of ${{{\PP}_{\Sigma}}}$ is isomorphic to $${\Bbb C}[D_\rho:\rho\in\Sigma(1)]/(P(\Sigma)+SR(\Sigma)),$$ where $$P(\Sigma)=\biggl\langle \sum_{\rho\in\Sigma(1)}\langle m,e_\rho\rangle D_\rho:
m\in M\biggr\rangle$$ is the ideal of linear relations among the divisors, and $$SR(\Sigma)=\bigl\langle D_{\rho_1}\cdots D_{\rho_k}:\{e_{\rho_1},\dots,e_{\rho_k}\}
\not\subset\sigma
\text{ for all }\sigma\in\Sigma\bigr\rangle$$ is the Stanley-Reisner ideal. Hence, $H^{*}_{\rm toric}(X)$ is isomorphic to the bigraded ring $$T(X)_{*,*}:={\Bbb C}[D_\rho:\rho\in\Sigma(1)]/I,$$ where $I=(P(\Sigma)+SR(\Sigma)):[X]$ is the ideal quotient, and each $D_\rho$ has degree $(1,1)$.
The following modules over the ring $T(X)$ have appeared in the description of cohomology of semiample hypersurfaces:
Given a big and nef class $[X]\in A_{d-1}({{{\PP}_{\Sigma}}})$ and $\sigma\in\Sigma_X$, let $$U^\sigma(X)=\biggl\langle \prod_{\rho\subset\gamma\in\Sigma}D_\rho:
{{\rm int}}\gamma\subset{{\rm int}}\sigma\biggr\rangle$$ be the bigraded ideal in ${\Bbb C}[D_\rho:\rho\in\Sigma(1)]$, where $D_\rho$ have the degree (1,1). Define the bigraded space $$T^\sigma(X)_{*,*}=U^\sigma(X)_{*,*}/I^\sigma,$$ where $$I^\sigma=\{u\in U^\sigma(X)_{*,*}:\,uvX^{d-\dim\sigma}
\in(P(\Sigma)+SR(\Sigma))\text{ for }v\in U^\sigma(X)_{\dim\sigma-*,\dim\sigma-*}\}.$$
Next, recall from [@c] that [*any*]{} toric variety ${{{\PP}_{\Sigma}}}$ has a homogeneous coordinate ring $$S({{{\PP}_{\Sigma}}})={\Bbb C}[x_\rho:\rho\in\Sigma(1)]$$ with variables $x_\rho$ corresponding to the irreducible torus invariant divisors $D_\rho$. This ring is graded by the Chow group $A_{d-1}({{{\PP}_{\Sigma}}})$ by setting $\deg(\prod_{\rho} x_\rho^{a_\rho})=[\sum_{\rho} a_\rho D_\rho]$. For a Weil divisor $D$ on ${{{\PP}_{\Sigma}}}$, there is an isomorphism $H^0({{{\PP}_{\Sigma}}}, O_{{{\PP}_{\Sigma}}}(D))\cong S({{{\PP}_{\Sigma}}})_\alpha$, where $\alpha=[D]\in A_{d-1}({{{\PP}_{\Sigma}}})$. If $D$ is torus invariant, the monomials in $S({{{\PP}_{\Sigma}}})_\alpha$ correspond to the lattice points of the associated polyhedron $\Delta_D$.
In [@bc], the following rings have been used to describe the residue part of cohomology of ample hypersurfaces in complete simplicial toric varieties:
\[d:r1\] [@bc] Given $f\in S({{{\PP}_{\Sigma}}})_\beta$, set $J_0(f):=\langle x_\rho\partial f/\partial x_\rho:\rho\in\Sigma(1)\rangle$ and $J_1(f):=J_0(f):x_1\cdots x_n$. Then define the rings $R_0(f)=S({{{\PP}_{\Sigma}}})/J_0(f)$ and $R_1(f)=S({{{\PP}_{\Sigma}}})/J_1(f)$, which are graded by the Chow group $A_{d-1}({{{\PP}_{\Sigma}}})$.
In [@m3 Definition 6.5], similar rings were introduced to describe the residue part of cohomology of semiample hypersurfaces:
\[d:rs1\] [@m3] Given $f\in S({{{\PP}_{\Sigma}}})_\beta$ of big and nef degree $\beta=[D]\in A_{d-1}({{{\PP}_{\Sigma}}})$ and $\sigma\in\Sigma_D$, let $J^\sigma_0(f)$ be the ideal in $S({{{\PP}_{\Sigma}}})$ generated by $x_\rho\partial f/\partial x_\rho$, $\rho\in\Sigma(1)$ and all $x_{\rho'}$ such that $\rho'\subset\sigma$, and let $J^\sigma_1(f)$ be the ideal quotient $J^\sigma_0(f):(\prod_{\rho\not\subset\sigma}x_\rho)$. Then we get the quotient rings $R_0^\sigma(f)=S({{{\PP}_{\Sigma}}})/J_0^\sigma(f)$ and $R_1^\sigma(f)=S({{{\PP}_{\Sigma}}})/J_1^\sigma(f)$ graded by the Chow group $A_{d-1}({{{\PP}_{\Sigma}}})$.
As a special case of [@m3 Theorem 2.11], we have:
\[t:main\] Let $X$ be an anticanonical semiample nondegenerate hypersurface defined by $f\in S_\beta$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$. Then there is a natural isomorphism $$\bigoplus_{p,q}H^{p,q}(X)\cong\bigoplus_{p,q} T(X)_{p,q}\oplus\biggl(\bigoplus_{\sigma\in\Sigma_X}
T^\sigma(X)_{s,s}
\otimes R^\sigma_1(f)_{(q-s)\beta+\beta_1^\sigma}\biggr),$$ where $s=(p+q-d+\dim\sigma+1)/2$ and $\beta_1^\sigma=
\deg(\prod_{\rho_k\subset\sigma}x_k)$.
By the next statement, we can immediately see that all the building blocks $R^\sigma_1(f)_{(q-s)\beta+\beta_1^\sigma}$ of the cohomology of partial resolutions in Theorem \[t:main\] are independent of the resolution and intrinsic to an ample Calabi-Yau hypersurface:
\[p:iso\] [@m3] Let $X$ be a big and nef nondegenerate hypersurface defined by $f\in S_\beta$ in a complete toric variety ${{{\PP}_{\Sigma}}}$ with the associated contraction map $\pi:{{{\PP}_{\Sigma}}}\to{{\PP}_{\Sigma_X}}$. If $f_\sigma\in S(V(\sigma))_{\beta^\sigma}$ denotes the polynomial defining the hypersurface $\pi(X)\cap V(\sigma)$ in the toric variety $V(\sigma)\subset{{\PP}_{\Sigma_X}}$ corresponding to $\sigma\in\Sigma_X$, then there is a natural isomorphism induced by the pull-back: $$H^{d(\sigma)-*,*-1}H^{d(\sigma)-1}(\pi(X)\cap{{\TT}_\sigma})\cong
R_1(f_{\sigma})_{*\beta^{\sigma}-\beta_0^{\sigma}}{\cong}
R^\sigma_1(f)_{*\beta-\beta_0+\beta_1^\sigma},$$ where $d(\sigma)=d-\dim\sigma$, ${{\TT}_\sigma}\subset V(\sigma)$ is the maximal torus, and $\beta_0$ and $\beta_0^{\sigma}$ denote the anticanonical degrees on ${{{\PP}_{\Sigma}}}$ and $V(\sigma)$, respectively.
Given a mirror pair $(X,X^*)$ of smooth Calabi-Yau hypersurfaces in Batyrev’s construction, let $\sigma$ and $\sigma^*$ be the cones over a pair of dual faces of the reflexive polytopes $\Delta^*$ and $\Delta$ (note that $\dim\sigma^*=d-\dim\sigma+1$). We expect that the mirror map (the isomorphism which maps the quantum cohomology of one Calabi-Yau hypersurface to the B-model chiral ring of the other one) interchanges $T^{\sigma}(X)_{s,s}$, with $s=(p+q-d+\dim\sigma+1)/2$, in $H^{p,q}(X)$ with $R^{\sigma^*}_1(g)_{(p+q-\dim\sigma^*)\beta^*/2+\beta_1^{\sigma^*}}$ in $H^{d-1-p,q}(X^*)$, where $g\in S({{\PP}}_{\Sigma^*})_{\beta^*}$ determines $X^*$. For the 0-dimensional cones $\sigma$ and $\sigma^*$, the interchange goes between the [*polynomial part*]{} $R_1(g)_{*\beta^*}$ of one smooth Calabi-Yau hypersurface and the toric part of the cohomology of the other one. This correspondence was already confirmed by the construction of the generalized monomial-divisor mirror map in [@m3]. On the other hand, one can deduce that the dimensions of these spaces coincide for a pair of 3-dimensional smooth Calabi-Yau hypersurfaces, by using Remark 5.3 in [@m1]. The correspondence between the toric and polynomial parts was discussed in [@ck].
Now, let us turn our attention to a mirror pair of semiample singular Calabi-Yau hypersurfaces $Y$ and $Y^*$. We know that their string cohomology should have the same dimension as the usual cohomology of possible crepant smooth resolutions $X$ and $X^*$, respectively. Moreover, the A-model and B-model chiral rings on the string cohomology should be isomorphic for $X$ and $X^*$, respectively. We also know that the polynomial $g$ represents the complex structure of the hypersurface $Y^*$ and its resolution $X^*$, and, by mirror symmetry, $g$ should correspond to the complexified Kähler class of the mirror Calabi-Yau hypersurface. Therefore, based on the mirror correspondence of smooth Calabi-Yau hypersurfaces, we make the following prediction for the small quantum ring presentation on the string cohomology space: $$\label{e:conj}
QH_{\rm st}^{p,q}(Y)\cong
\hspace{-0.05in}
\bigoplus_{(\sigma,\sigma^*)}
R_1(\omega_{\sigma^*})_{(p+q-\dim\sigma^*+2)\beta^{\sigma^*}/2-\beta_0^{\sigma^*}}\otimes
R_1(f_{\sigma})_{(q-p+d-\dim\sigma+1)\beta^\sigma/2-\beta_0^{\sigma}},$$ where the sum is over all pairs of cones $\sigma$ and $\sigma^*$ (including 0-dimensional cones) over the dual faces of the reflexive polytopes, and where $\omega_{\sigma^*}\in S(V(\sigma^*))_{\beta^{\sigma^*}}$ is a formal restriction of $\omega\in S({{\PP}}_{\Delta^*})_{\beta^*}$, which should be related to the complexified Kähler class of the mirror (we will discuss this in Section \[s:vs\]). This construction can be rewritten in simpler terms, which will help us to give a conjectural description of the usual string cohomology space for all semiample Calabi-Yau hypersurfaces.
First, recall Batyrev’s presentation of the toric variety ${{\PP}}_\Delta$ for an [*arbitrary*]{} polytope $\Delta$ in $M$ (see [@b1], [@c2]). Consider the [*Gorenstein*]{} cone $K$ over $\Delta\times\{1\}\subset M\oplus{\Bbb Z}$. Let $S_\Delta$ be the subring of ${\Bbb C}[t_0,t_1^{\pm1},\dots,t_d^{\pm1}]$ spanned over $\Bbb C$ by all monomials of the form $t_0^k t^m=t_0^kt_1^{m_1}\cdots t_d^{m_d}$ where $k\ge0$ and $m\in k\Delta$. This ring is graded by the assignment $\deg(t_0^k t^m)=k$. Since the vector $(m,k)\in K$ if and only if $m\in k\Delta$, the ring $S_\Delta$ is isomorphic to the semigroup algebra ${\Bbb C}[K]$. The toric variety ${{\PP}}_\Delta$ can be represented as $${\rm Proj}(S_\Delta)={\rm Proj}({\Bbb C}[K]).$$ The ring $S_\Delta$ has a nice connection to the homogeneous coordinate ring $S({{\PP}}_\Delta)={\Bbb C}[x_\rho:\rho\in\Sigma_\Delta(1)]$ of the toric variety ${{\PP}}_\Delta$, corresponding to a fan $\Sigma_\Delta$. If $\beta\in A_{d-1}({{\PP}}_\Delta)$ is the class of the ample divisor $\sum_{\rho\in\Sigma_\Delta(1)} b_\rho D_\rho$ giving rise to the polytope $\Delta$, then there is a natural isomorphism of graded rings $$\label{e:isom}
{\Bbb C}[K]\cong S_\Delta\cong\bigoplus_{k=0}^\infty S({{\PP}}_\Delta)_{k\beta},$$ sending $(m,k)\in\CC[K]_k$ to $t_0^k t^m$ and $\prod_\rho x_\rho^{k b_\rho+\langle m,e_\rho\rangle}$, where $e_\rho$ is the minimal integral generator of the ray $\rho$. Now, given $f\in S({{\PP}}_\Delta)_{\beta}$, we get the ring $R_1(f)$. The polynomial $f=\sum_{m\in\Delta}f(m)\prod_\rho
x_\rho^{b_\rho+\langle m,e_\rho\rangle}$, where $f(m)$ are the coefficients, corresponds by the isomorphisms (\[e:isom\]) to $\sum_{m\in\Delta}f(m)t_0t^m\in (S_\Delta)_1$ and $\sum_{m\in\Delta}f(m)[m,1]\in{\Bbb C}[K]_1$ (the brackets $[\ \,]$ are used to distinguish the lattice points from the vectors over $\CC$), which we also denote by $f$. By the proof of [@bc Theorem 11.5], we have that $$(S({{\PP}}_\Delta)/J_0(f))_{k\beta}\cong
(S_\Delta/\langle t_i\partial f/\partial t_i:\, i=0,\dots,d\rangle)_k
\cong R_0(f,K)_k,$$ where $R_0(f,K)$ is the quotient of ${\Bbb C}[K]$ by the ideal generated by all “logarithmic derivatives” of $f$: $$\sum_{m\in\Delta}((m,1)\cdot n) f(m)[m,1]$$ for $n\in N\oplus{\Bbb Z}$. The isomorphisms (\[e:isom\]) induce the bijections $$S({{\PP}}_\Delta)_{k\beta-\beta_0}\xrightarrow{\,\cdot\prod_\rho x_\rho\,}\langle
\prod_\rho x_\rho \rangle_{k\beta}\cong
(I_\Delta^{(1)})_k \cong {\Bbb C}[K^\circ]_k$$ ($\beta_0=\deg(\prod_\rho x_\rho)$), where $I_\Delta^{(1)}\subset S_\Delta$ is the ideal spanned by all monomials $t_0^k t^m$ such that $m$ is in the interior of $k\Delta$, and ${\Bbb C}[K^\circ]\subset{\Bbb C}[K]$ is the ideal spanned by all lattice points in the relative interior of $K$. Since the space $R_1(f)_{k\beta-\beta_0}$ is isomorphic to the image of $\langle\prod_\rho x_\rho \rangle_{k\beta}$ in $(S({{\PP}}_\Delta)/J_0(f))_{k\beta}$, $$R_1(f)_{k\beta-\beta_0}\cong R_1(f,K)_k,$$ where $R_1(f,K)$ is the image of ${\Bbb C}[K^\circ]$ in the graded ring $R_0(f,K)$.
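As a toy example of these identifications (ours, for illustration only), let $\Delta=[0,1]\subset M_{\Bbb R}={\Bbb R}$, so that ${{\PP}}_\Delta={{\PP}}^1$ and $K$ is the cone in ${\Bbb R}^2$ generated by $(0,1)$ and $(1,1)$. With $\beta$ the class of a point, and $b_{\rho_1}=0$, $b_{\rho_2}=1$ for the rays generated by $e_{\rho_1}=1$ and $e_{\rho_2}=-1$, the isomorphisms (\[e:isom\]) read $$\CC[K]_k\ni[(m,k)]\;\longmapsto\;t_0^kt^m\;\longmapsto\;x_1^{m}x_2^{k-m}\in S({{\PP}}^1)_{k\beta},\qquad 0\le m\le k,$$ identifying $\CC[K]$ with the homogeneous coordinate ring $\CC[x_1,x_2]$ of ${{\PP}}^1$ with its usual grading.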
The above discussion applies well to all faces $\Gamma$ in $\Delta$. In particular, if the toric variety $V(\sigma)\subset{{\PP}}_\Delta$ corresponds to $\Gamma$, and $\beta^\sigma\in A_{d-\dim\sigma-1}(V(\sigma))$ is the restriction of the ample class $\beta$, then $$S(V(\sigma))_{*\beta^\sigma}\cong {\Bbb C}[C],$$ where $C$ is the Gorenstein cone over the polytope $\Gamma\times\{1\}$. This induces an isomorphism $$R_1(f_\sigma)_{*\beta^\sigma-\beta_0^\sigma}\cong R_1(f_C,C),$$ where $f_C=\sum_{m\in\Gamma} f(m)[m,1]$ in ${\Bbb C}[C]_1$ is the projection of $f$ to the cone $C$.
Now, we can restate our conjecture (\[e:conj\]) in terms of Gorenstein cones: $$\bigoplus_{p,q}QH_{\rm st}^{p,q}(Y)\cong\bigoplus_{\substack{p,q\\ (C,C^*)}}
R_1(\omega_{C^*},C^*)_{(p+q-d+\dim C^*+1)/2}\otimes
R_1(f_{C},C)_{(q-p+\dim C)/2},$$ where the sum is over all pairs $(C,C^*)$ of dual faces of the reflexive Gorenstein cones $K$ and $K^*$. This formula is already supported by Theorem 8.2 in [@bd], which for ample Calabi-Yau hypersurfaces in weighted projective spaces gives a corresponding decomposition of the stringy Hodge numbers (see Remark \[r:corrw\] in the next section). A generalization of [@bd Theorem 8.2] will be proved in Section \[section.bbo\], justifying the above conjecture in the case of ample Calabi-Yau hypersurfaces in Fano toric varieties.
For smooth Calabi-Yau hypersurfaces, the string cohomology, which should be the limit of the quantum cohomology ring, is expected to coincide with the usual cohomology. The quantum cohomology spaces of the ample Calabi-Yau hypersurface $Y$ and of its crepant resolution $X$ should also be isomorphic. It therefore makes sense to compare the above description of $QH_{\rm st}^{p,q}(Y)$ with the description of the cohomology of semiample Calabi-Yau hypersurfaces $X$ in Theorem \[t:main\]. The right components of the tensor products coincide, by Proposition \[p:iso\] and the definition of $R_1(f_{C},C)$. On the other hand, the left components in $QH_{\rm st}^{p,q}(Y)$ for the ample hypersurface $Y$ do not depend on a resolution, while the left components $T^\sigma(X)$ in $H^{p,q}(X)$ for the resolution $X$ depend on the Stanley-Reisner ideal $SR(\Sigma)$. This suggests the following definitions:
\[d:rings\] Let $C$ be a Gorenstein cone in a lattice $L$, subdivided by a fan $\Sigma$, and let ${\Bbb C}[C]$ and ${\Bbb C}[C^\circ]$, where $C^\circ$ is the relative interior of $C$, be the semigroup rings. Define “deformed” ring structures $\CC[C]^\Sigma$ and $\CC[C^\circ]^\Sigma$ on ${\Bbb C}[C]$ and ${\Bbb C}[C^\circ]$, respectively, by the rule: $[m_1][m_2]=[m_1+m_2]$ if $m_1,m_2\in\sigma$ for some $\sigma\in\Sigma$, and $[m_1][m_2]=0$ otherwise.
Given $g=\sum_{m\in C,\deg m=1} g(m)[m]$, where $g(m)$ are the coefficients, let $$R_0(g,C)^\Sigma=\CC[C]^\Sigma/Z\cdot\CC[C]^\Sigma$$ be the graded ring, acting on the graded module $$R_0(g,C^\circ)^\Sigma=\CC[C^\circ]^\Sigma/Z\cdot\CC[C^\circ]^\Sigma,$$ where $Z=\{\sum_{m\in C,\deg m=1} (m\cdot n)
g(m)[m]:\,n\in {\rm Hom}(L,{\Bbb Z})\}$. Then define $R_1(g,C)^\Sigma$ as the image of the natural homomorphism $R_0(g,C^\circ)^\Sigma@>>>R_0(g,C)^\Sigma$.
In the above definition, note that if $\Sigma$ is the trivial subdivision (consisting of the faces of $C$), we recover the spaces $R_0(g,C)$ and $R_1(g,C)$ introduced earlier. Let us also mention that the Stanley-Reisner ring of the fan $\Sigma$ embeds naturally into the “deformed” ring $\CC[C]^\Sigma$, and this embedding is an isomorphism when the fan $\Sigma$ is smooth.
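To illustrate Definition \[d:rings\] in a minimal case (the cone and subdivision below are chosen purely for illustration):

```latex
Let $C\subset\ZZ^2$ be the Gorenstein cone generated by $(0,1)$ and $(2,1)$,
i.e. the cone over the segment $[0,2]\times\{1\}$ with $\deg(a,b)=b$, and let
$\Sigma$ subdivide $C$ along the ray through $(1,1)$ into
$C'=\langle(0,1),(1,1)\rangle$ and $C''=\langle(1,1),(2,1)\rangle$.
In the ordinary semigroup ring $\CC[C]$ one has
$$[(0,1)]\cdot[(2,1)]=[(2,2)]=[(1,1)]^2,$$
while in the deformed ring $\CC[C]^\Sigma$
$$[(0,1)]\cdot[(2,1)]=0,\qquad [(0,1)]\cdot[(1,1)]=[(1,2)],$$
since no cone of $\Sigma$ contains both $(0,1)$ and $(2,1)$, whereas $C'$
contains both $(0,1)$ and $(1,1)$.
```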
Here is our conjecture about the string cohomology space of semiample Calabi-Yau hypersurfaces in a complete toric variety.
\[semiampleconj\] Let $X\subset{{{\PP}_{\Sigma}}}$ be a semiample anticanonical nondegenerate hypersurface defined by $f\in H^0({{{\PP}_{\Sigma}}},{\cal O}_{{{\PP}_{\Sigma}}}(X))\cong\CC[K]_1$, and let $\omega$ be a generic element in $\CC[K^*]_1$, where $K^*$ is the reflexive Gorenstein cone dual to the cone $K$ over the reflexive polytope $\Delta$ associated to $X$. Then there is a natural isomorphism: $$H^{p,q}_{\st}(X)\cong
\bigoplus_{C\subseteq K} R_1(\omega_{C^*},C^*)^\Sigma_{(p+q-d+\dim C^*+1)/2}
\otimes R_1(f_C,C)_{(q-p+\dim C)/2},$$ where $C^*\subseteq K^*$ is a face dual to $C$, and where $f_C$, $\omega_{C^*}$ denote the projections of $f$ and $\omega$ to the respective cones $C$ and $C^*$. (Here, the superscript $\Sigma$ denotes the subdivision of $K^*$ induced by the fan $\Sigma$.)
Since the dimension of the string cohomology for all crepant partial resolutions should remain the same and should coincide with the dimension of the quantum string cohomology space, we expect that $$\label{e:expe}
{\rm gr.dim.}R_1(\omega_{C^*},C^*)^\Sigma=
{\rm gr.dim.}R_1(\omega_{C^*},C^*),$$ which will be shown in Section \[section.brel\] for a projective subdivision $\Sigma$. Conjecture \[semiampleconj\] will be confirmed by the corresponding decomposition of the stringy Hodge numbers in Section \[section.bbo\]. Moreover, in Section \[s:vs\], we will derive the Chen-Ruan orbifold cohomology as a special case of Conjecture \[semiampleconj\] for ample Calabi-Yau hypersurfaces in complete simplicial toric varieties.
Hodge-Deligne numbers of affine hypersurfaces {#s:hd}
=============================================
Here, we compute the dimensions of the spaces $R_1(g,C)_{\_}$ from the previous section. It follows from Proposition \[p:iso\] that these dimensions are exactly the Hodge-Deligne numbers of the minimal weight space on the middle cohomology of a hypersurface in a torus. An explicit formula in [@dk] and [@bd] for the $E$-polynomial of a nondegenerate affine hypersurface whose Newton polyhedron is a simplex leads us to the answer for the graded dimension of $R_1(g,C)$ when $C$ is a simplicial Gorenstein cone. However, it was very difficult to compute the Hodge-Deligne numbers of an arbitrary nondegenerate affine hypersurface. This was a major technical problem in the proof of mirror symmetry of the stringy Hodge numbers for Calabi-Yau complete intersections in [@bb]. Below, we present a simple formula for the Hodge-Deligne numbers of a nondegenerate affine hypersurface.
Before we start computing ${\rm gr.dim.}R_1(g,C)$, let us note that for a nondegenerate $g\in\CC[C]_1$ (i.e., the corresponding hypersurface in ${\rm Proj}(\CC[C])$ is nondegenerate): $${\rm gr.dim.}R_0(g,C)=S(C,t),$$ where the polynomial $S$ is the same as in Definition \[d:bd\] of the stringy Hodge numbers. This was shown in [@b1 Theorem 4.8 and 2.11] (see also [@Bor.locstring]).
When the cone $C$ is simplicial, we already know the formula for the graded dimension of $R_1(g,C)$:
\[p:simp\] Let $C$ be a simplicial Gorenstein cone, and let $g\in\CC[C]_1$ be nondegenerate. Then $${\rm gr.dim.}R_1(g,C)=\tilde S(C,t),$$ where $\tilde S(C,t)=\sum_{C_1\subseteq C} S(C_1,t)
(-1)^{\dim C-\dim C_1}$.
The polynomial $\tilde S(C,t)$ was introduced with a slightly different notation in [@bd Definition 8.1] for a lattice simplex. One can check that $\tilde S(C,t)$ in this proposition is equivalent to the one in [@bd Corollary 6.6]. From the previous section and [@b1 Proposition 9.2], we know that $$R_1(g,C)\cong{{\rm Gr}}_F W_{\dim Z_g}H^{\dim Z_g}(Z_g),$$ where $Z_g$ is the nondegenerate affine hypersurface determined by $g$ in the maximal torus of ${\rm Proj}(\CC[C])$. By [@bd Proposition 8.3], $$E(Z_g;u,v)
=\frac{(uv-1)^{\dim C-1}+(-1)^{\dim C}}{uv}+(-1)^{\dim C}
\sum_{\begin{Sb}
C_1\subseteq C\\
\dim C_1>1\end{Sb}}\frac{u^{\dim C_1}}{uv}\tilde S(C_1,u^{-1}v).$$ Now, note that the coefficients $e^{p,q}(Z_g)$ at the monomials $u^p v^q$ with $p+q=\dim Z_g$ are related to the Hodge-Deligne numbers by the calculations in [@dk]: $$e^{p,q}(Z_g)=(-1)^{\dim C}h^{p,q}(H^{\dim Z_g}(Z_g))+(-1)^{p}\delta_{pq}
C_{\dim C-1}^p,$$ where $\delta_{pq}$ is the Kronecker symbol and $C_{\dim C-1}^p$ is the binomial coefficient. Comparing this with the above formula for $E(Z_g;u,v)$, we deduce the result.
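As a small check of the proposition (in an example of our own choosing), take the two-dimensional cone over a segment:

```latex
Let $C\subset\ZZ^2$ be the cone over the segment $[0,\ell]\times\{1\}$, with
$\deg(a,b)=b$. The number of lattice points of degree $k$ in $C$ is
$k\ell+1$, so
$$S(C,t)=(1-t)^2\sum_{k\ge0}(k\ell+1)t^k=1+(\ell-1)t,$$
and summing over the four faces of $C$ (the cone itself, two rays, and the
zero cone) gives
$$\tilde S(C,t)=\bigl(1+(\ell-1)t\bigr)-1-1+1=(\ell-1)t.$$
Indeed, a nondegenerate $g\in\CC[C]_1$ corresponds to a polynomial of
degree $\ell$ with distinct nonzero roots, so $Z_g\subset\CC^*$ consists of
$\ell$ points, and the reduced cohomology of $Z_g$ has rank $\ell-1$, in
agreement with ${\rm gr.dim.}R_1(g,C)=(\ell-1)t$.
```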
\[r:corrw\] By the above proposition, we can see that [@bd Theorem 8.2] gives a decomposition of the stringy Hodge numbers of ample Calabi-Yau hypersurfaces in weighted projective spaces in correspondence with Conjecture \[semiampleconj\].
Next, we generalize the polynomials $\tilde S(C,t)$ from Proposition \[p:simp\] to nonsimplicial Gorenstein cones in such a way that they would count the graded dimension of $R_1(g,C)$.
\[d:spol\]
Let $C$ be a Gorenstein cone in a lattice $L$. Then set $$\tilde
S(C,t) := \sum_{C_1\subseteq C} S(C_1,t) (-1)^{\dim C-\dim C_1}
G([C_1,C],t),$$ where $G$ is the $G$-polynomial (from Definition \[Gpoly\] in the Appendix) of the partially ordered set $[C_1,C]$ of the faces of $C$ that contain $C_1$.
\[tildepoincare\] It is not hard to show, using the duality properties of $S$ and the definition of $G$-polynomials, that the polynomial $\tilde S(C,t)$ satisfies the duality $$\tilde S(C,t) = t^{\dim C} \tilde S(C,t^{-1}).$$ Alternatively, this fact follows from the next result together with Proposition \[p:iso\].
\[p:nonsimp\] Let $C$ be a Gorenstein cone, and let $g\in\CC[C]_1$ be nondegenerate. Then $${\rm gr.dim.}R_1(g,C)=\tilde S(C,t).$$
As in the proof of Proposition \[p:simp\], we consider a nondegenerate affine hypersurface $Z_g$ determined by $g$ in the maximal torus of ${\rm Proj}(\CC[C])$. Then [@bb Theorem 3.18] together with the definition of $S$ gives $$E(Z_g;u,v)
= \frac{(uv-1)^{\dim C-1}}{uv} + \frac{(-1)^{\dim C}}{uv}
\sum_{C_2\subseteq C}
B([C_2,C]^*; u,v)S(C_2,vu^{-1})u^{\dim C_2},$$ where the polynomials $B$ are from Definition \[Q\]. We use Lemma \[BfromG\] and Definition \[d:spol\] to rewrite this as $$\begin{gathered}
E(Z_g;u,v)
= \frac{(uv-1)^{\dim C-1}}{uv}
+\frac{(-1)^{\dim C}}{uv}\times
\\
\times
\sum_{C_2\subseteq C_1\subseteq C} u^{\dim C_2}
S(C_2,u^{-1}v)
G([C_2,C_1],u^{-1}v)(-u)^{\dim C_1-\dim C_2}
G([C_1,C]^*,uv)
\\
=\frac{(uv-1)^{\dim C-1}}{uv}+\frac{(-1)^{\dim C}}{uv}
\sum_{C_1\subseteq C}u^{\dim C_1}
\tilde S(C_1,u^{-1}v)
G([C_1,C]^*,uv).\end{gathered}$$
The definition of $G$-polynomials assures that the degree of $u^{\dim C_1}G([C_1,C]^*,uv)$ is at most $\dim C$ with the equality only when $C_1=C$. Therefore, the graded dimension of $R_1(g,C)$ can be read off the same way as in the proof of Proposition \[p:simp\] from the coefficients at total degree $\dim C-2$ in the above sum.
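The simplest nonsimplicial case may serve as a check of Definition \[d:spol\] and Proposition \[p:nonsimp\] (the example is ours):

```latex
Let $C\subset\ZZ^3$ be the cone over the unit square $[0,1]^2\times\{1\}$.
Counting lattice points gives
$$S(C,t)=(1-t)^3\sum_{k\ge0}(k+1)^2t^k=1+t,$$
while $S(C_1,t)=1$ for every proper face $C_1$. The intervals $[C_1,C]$ for
the four facets and the four rays are Boolean, with $G=1$, and $[0,C]$ is
the face lattice of the square, with $G([0,C],t)=1+t$. Hence
$$\tilde S(C,t)=(1+t)-4+4-(1+t)=0.$$
This agrees with Proposition \[p:nonsimp\]: a nondegenerate $g\in\CC[C]_1$
defines a rational curve $Z_g\subset(\CC^*)^2$ with Newton polytope the
unit square, so $W_1H^1(Z_g)=0$.
```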
“Deformed” rings and modules {#section.brel}
============================
While this section may serve as an invitation to a new theory of “deformed” rings and modules, the goal here is to prove the equality (\[e:expe\]), by showing that the graded dimension formula of Proposition \[p:nonsimp\] holds for the spaces $R_1(g,C)^\Sigma$ from Definition \[d:rings\]. To prove the formula we use the recent work of Bressler and Lunts (see [@bl], and also [@bbfk]). This requires us to first study Cohen-Macaulay modules over the deformed semigroup rings $\CC[C]^\Sigma$.
First, we want to generalize the nondegeneracy notion:
\[d:nond\] Let $C$ be a Gorenstein cone in a lattice $L$, subdivided by a fan $\Sigma$. Given $g=\sum_{m\in C,\deg m=1} g(m)[m]$, define $$g_j = \sum_{m\in C,\deg m=1}(m\cdot n_j) g(m)[m],\quad\text{ for }
j=1,\dots,\dim C$$ where $\{n_1,\dots,n_{\dim C}\}\subset{\rm Hom}(L,{\Bbb Z})$ descends to a basis of ${\rm Hom}(L,{\Bbb Z})/C^\perp$. The element $g$ is called [*$\Sigma$-regular (nondegenerate)*]{} if $\{g_1,\ldots,g_{\dim C}\}$ forms a regular sequence in the deformed semigroup ring $\CC[C]^\Sigma$.
\[r:nond\] When $\Sigma$ is a trivial subdivision, [@b1 Theorem 4.8] shows that the above definition is consistent with the previous notion of nondegeneracy corresponding to the transversality of a hypersurface to torus orbits.
\[t:cm\] [(i)]{} The ring $\CC[C]^\Sigma$ and its module $\CC[C^\circ]^\Sigma$ are Cohen-Macaulay.\
[(ii)]{} A generic element $g\in\CC[C]_1$ is $\Sigma$-regular. Moreover, for a generic $g$ the sequence $\{g_1,\ldots,g_{\dim C}\}$ from Definition \[d:nond\] is $\CC[C^\circ]^\Sigma$-regular.\
[(iii)]{} If $g\in\CC[C]_1$ is $\Sigma$-regular, then the sequence $\{g_1,\ldots,g_{\dim C}\}$ is $\CC[C^\circ]^\Sigma$-regular.
Part [(ii)]{} follows from the proofs of Propositions 3.1 and 3.2 in [@Bor.locstring]. The reader should notice that the proofs use degenerations defined by projective simplicial subdivisions, and any fan admits such a subdivision.
Then, part [(ii)]{} implies [(i)]{}, by the definition of Cohen-Macaulay, while part [(iii)]{} follows from [(i)]{} and Proposition 21.9 in [@e].
As a corollary of Theorem \[t:cm\], we get the following simple description of $\Sigma$-regular elements:
\[l:iff\] An element $g\in\CC[C]_1$ is $\Sigma$-regular, if and only if its restriction to all maximum-dimensional cones $C'\in\Sigma(\dim C)$ is nondegenerate in $\CC[C']$.
Since $\CC[C]^\Sigma$ is Cohen-Macaulay, the regularity of a sequence is equivalent to the quotient by the sequence being finite-dimensional, by [@ma Theorem 17.4].
One can check that $\CC[C]^\Sigma$ is filtered by the modules $R_k$ defined as the span of all $[m]$ such that the minimal cone of $\Sigma$ containing $m$ has dimension at least $k$. The $k$-th graded quotient of this filtration is the direct sum of $\CC[C_1^\circ]$ over all $k$-dimensional cones $C_1$ of $\Sigma$. If $g$ is nondegenerate for every cone of maximum dimension, then its projection to any cone $C_1$ is nondegenerate, and Theorem \[t:cm\] shows that the corresponding sequence is regular on each $\CC[C_1^\circ]$. Then by decreasing induction on $k$ one shows that $R_k/\{g_1,\ldots,g_{\dim C}\}R_k$ is finite-dimensional.
In the other direction, it is easy to see that for every $C'\in \Sigma$ the $\CC[C]^\Sigma$-module $\CC[C']$ is a quotient of $\CC[C]^\Sigma$, which gives a surjection $$\CC[C]^\Sigma/\{g_1,\ldots,g_{\dim C}\}\CC[C]^\Sigma
@>>>\CC[C']/\{g_1|_{C'},\dots,g_{\dim C}|_{C'}\}\CC[C']@>>>0.$$
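As an explicit instance of Lemma \[l:iff\] (in an example of our own choosing), $\Sigma$-regularity becomes a simple nonvanishing condition on the coefficients:

```latex
Let $C=\langle(0,1),(2,1)\rangle\subset\ZZ^2$ be subdivided by $\Sigma$ at
the ray through $(1,1)$, and let
$$g=a\,[(0,1)]+b\,[(1,1)]+c\,[(2,1)]\in\CC[C]_1.$$
The restrictions of $g$ to the two maximal cones of $\Sigma$ are
$a[(0,1)]+b[(1,1)]$ and $b[(1,1)]+c[(2,1)]$, and each is nondegenerate
exactly when both of its coefficients are nonzero. By Lemma \[l:iff\], $g$
is $\Sigma$-regular if and only if $a,b,c\ne0$, which is a generic
condition, as predicted by Theorem \[t:cm\]. By contrast, for the trivial
subdivision, nondegeneracy in $\CC[C]$ is the condition that $a+bx+cx^2$
have distinct nonzero roots, i.e. $a,c\ne0$ and $b^2\ne4ac$.
```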
The above lemma implies that the property of $\Sigma$-regularity is preserved by the restrictions:
Let $C$ be a Gorenstein cone in a lattice $L$, subdivided by a fan $\Sigma$. If $g\in\CC[C]_1$ is $\Sigma$-regular, then the restriction $g_{C_1}\in\CC[C_1]_1$ is $\Sigma$-regular for all faces $C_1\subseteq C$ (with respect to the induced subdivision of $C_1$).
Let $g\in\CC[C]_1$ be $\Sigma$-regular. By Lemma \[l:iff\], the restriction $g_{C'}$ is nondegenerate in $\CC[C']$ for all $C'\in\Sigma(\dim C)$. Since the property of nondegeneracy associated with a hypersurface is preserved by the restrictions, $g_{C_1'}$ is nondegenerate in $\CC[C_1']$ for all $C_1'\in\Sigma(\dim C_1)$ contained in $C_1$. Applying Lemma \[l:iff\] again, we deduce the result.
The next result generalizes [@b1 Proposition 9.4] and [@Bor.locstring Proposition 3.6].
\[Zreg\] Let $g\in\CC[C]_1$ be $\Sigma$-regular. Then $R_0(g,C)^\Sigma$ and $R_0(g,C^\circ)^\Sigma$ have graded dimensions $S(C,t)$ and $t^{\dim C}S(C,t^{-1})$, respectively, and there exists a nondegenerate pairing $$\langle\_,\_\rangle:
R_0(g,C)_k^\Sigma\times R_0(g,C^\circ)_{\dim C-k}^\Sigma\to R_0(g,C^\circ)_{\dim C}^\Sigma\cong\CC,$$ induced by the multiplicative $R_0(g,C)^\Sigma$-module structure.
It is easy to see that the above statement is equivalent to saying that $\CC[C^\circ]^\Sigma$ is the canonical module for $\CC[C]^\Sigma$. When $\Sigma$ consists of the faces of $C$ only, this is well-known (cf. [@d]). To deal with the general case, we will heavily use the results of [@e], Chapter 21.
We denote $A=\CC[C]^\Sigma$. For every cone $C_1$ of $\Sigma$ the vector spaces $\CC[C_1]$ and $\CC[C_1^\circ]$ are equipped with the natural $A$-module structures. By Proposition 21.10 of [@e], modified for the graded case, we get $$\Ext^i_A(\CC[C_1],w_A) \iso \Bigl\{
\begin{array}{ll}
\CC[C_1^\circ],&i=\codim(C_1)\\
0,&i\neq\codim(C_1)
\end{array}
\Bigr.$$ where $w_A$ is the canonical module of $A$.
Consider now the complex $\cal F$ of $A$-modules $$0@>>>F^0@>>>F^1@>>>\cdots @>>>F^d@>>>0$$ where $$F^n =
\bigoplus_{C_1\in\Sigma,\codim(C_1)=n} \CC[C_1]$$ and the differential is a sum of the restriction maps with signs according to the orientations. The nontrivial cohomology of $\cal F$ is located at $F^0$ and equals $\CC[C^\circ]^\Sigma$. Indeed, by looking at each graded piece separately, we see that the cohomology occurs only at $F^0$, and then the kernel of the map to $F^1$ is easy to describe. We can now use the complex $\cal F$ and the description of $\Ext^i_A(\CC[C_1],w_A)$ to try to calculate $\Hom_A(\CC[C^\circ]^\Sigma,w_A)$. The resulting spectral sequence degenerates immediately, and we conclude that $\Hom_A(\CC[C^\circ]^\Sigma,w_A)$ has a filtration such that the associated graded module is naturally isomorphic to $$\bigoplus_{C_1\in \Sigma} \CC[C_1^\circ].$$
By duality of maximal Cohen-Macaulay modules (see [@e]), it suffices to show that $\Hom_A(\CC[C^\circ]^\Sigma,w_A)\iso A$, but the above filtration only establishes that it has the correct graded pieces, so extra arguments are required. Let $C^\prime$ be a cone of $\Sigma$ of maximum dimension. We observe that $\cal F$ contains a subcomplex ${\cal F}^\prime$ such that $$F^{\prime n}=
\bigoplus_{C_1\subseteq C^\prime} \CC[C_1].$$ As in the case of $\cal F$, the cohomology of ${\cal F}^\prime$ occurs only at $F^{\prime 0}$ and equals $\CC[C^{\prime\circ}]$. By the snake lemma, the cohomology of ${\cal F}/{\cal F}^\prime$ also occurs at the zeroth spot and equals $\CC[C^\circ]^\Sigma/\CC[C^{\prime\circ}]$. By looking at the spectral sequences again, we see that $$\Ext^{>0}(\CC[C^\circ]^\Sigma/\CC[C^{\prime\circ}],w_A)=0$$ and we have a grading-preserving surjection $$\Hom_A(\CC[C^\circ]^\Sigma,w_A)
@>>>\Hom_A(\CC[C^{\prime\circ}],w_A)@>>>0.$$ Since $\Hom_A(\CC[C^\prime],w_A)\iso \CC[C^{\prime\circ}]$, duality of maximal Cohen-Macaulay modules over $A$ shows that $$\Hom_A(\CC[C^{\prime\circ}],w_A)\iso \CC[C^{\prime}]$$ so for every $m\in C^\prime$ the element $[m]$ of $A$ does not annihilate the degree zero element of $\Hom_A(\CC[C^\circ]^\Sigma,w_A)$. By looking at all $C^\prime$ together, this shows that $$\Hom_A(\CC[C^\circ]^\Sigma,w_A)\iso A$$ which finishes the proof.
\[p:nons\] Let $g\in\CC[C]_1$ be $\Sigma$-regular. Then the pairing $\langle\_\,,\_\rangle$ induces a symmetric nondegenerate pairing $\{\_\,,\_\}$ on $R_1(g,C)^\Sigma$, defined by $$\{ x,y\} = \langle x,y'\rangle,$$ where $y'$ is an element of $R_0(g,C^\circ)^\Sigma$ that maps to $y$.
The nondegeneracy of the pairing $\{\_\,,\_\}$ follows from that of $\langle\_\,,\_\rangle$. The pairing is symmetric, because it comes from the commutative product on $\CC[C^\circ]^\Sigma$.
\[dimtilde\] Let $C$ be a Gorenstein cone subdivided by a projective fan $\Sigma$. If $g\in\CC[C]_1$ is $\Sigma$-regular, then the graded dimension of $R_1(g,C)^\Sigma$ is $\tilde S(C,t)$.
We will use the description of Bressler and Lunts [@bl] of locally free flabby sheaves on the finite ringed topological space associated to the cone $C$. We recall here the basic definitions. Consider the set $P$ of all faces of the cone $C$. It is equipped with the topology in which open sets are subfans, i.e. the sets of faces closed under the operation of taking a face. Bressler and Lunts define a sheaf $\cal A$ of graded commutative rings on $P$ whose sections over each open set form the ring of continuous piecewise polynomial functions on the union of all strata of this set. The grading of linear functions is set to $1$, contrary to the convention of [@bl].
They further restrict their attention to the sheaves $\cal F$ of $\cal A$-modules on $P$ that satisfy the following conditions.
$\bullet$ For every face $C_1$ of $C$, the sections of $\cal F$ over the open set corresponding to the union of all faces of $C_1$ form a free module over the ring of polynomial functions on $C_1$.
$\bullet$ ${\cal F}$ is flabby, i.e. all restriction maps are surjective.
We will use the following crucial result.
[@bl] Every sheaf $\cal F$ that satisfies the above two properties is isomorphic to a direct sum of indecomposable graded sheaves ${\cal
L}_{C_1}t^i$, where $C_1$ is a face of $C$ and $t^i$ indicates a shift in grading. For each indecomposable sheaf ${\cal L}_{C_1}$, the space of global sections $\Gamma(P,{\cal L}_{C_1})$ is a module over the polynomial functions on $C$ of graded rank $G([C_1,C]^*,t)$, where $[C_1,C]$ denotes the Eulerian subposet of $P$ that consists of all faces of $C$ that contain $C_1$.
Now let us define a sheaf ${\cal B}(g)$ on $P$ whose sections over the open subset $I\subseteq P$ are $\CC[\cup_{i\in I}C_i]^\Sigma$. It is clearly a flabby sheaf, which can be given a grading by $\deg(\_)$. Moreover, ${\cal B}(g)$ can be given the structure of a sheaf of ${\cal A}$-modules as follows. Every linear function $\varphi$ on a face $C_1$ defines a logarithmic derivative $$\partial_\varphi g:=\sum_{m\in C_1,\deg m =1} \varphi(m)g(m)[m]$$ of $g$, which is an element of degree $1$ in $\CC[C_1]^\Sigma$. Then the action of $\varphi$ is given by multiplication by $\partial_\varphi g$, and this action is extended to all polynomial functions on the cone $C_1$. A similar construction applies to continuous piecewise polynomial functions for any open set of $P$.
Proposition \[Zreg\] assures that ${\cal B}(g)$ satisfies the first condition of Bressler and Lunts, and can therefore be decomposed into a direct sum of ${\cal L}_{C_1}t^i$ for various $C_1$ and $i$. The definition of $R_1(g,C)^\Sigma$ implies that its graded dimension is equal to the graded rank of the stalk of ${\cal B}(g)$ at the point $C\in P$. Since the graded rank of ${\cal B}(g)$ is $S(C,t)$, we conclude that $$S(C,t) = \sum_{C_1\subseteq C} {\rm gr.dim.}R_1(g_{C_1},C_1)^\Sigma G([C_1,C]^*,t).$$ To finish the proof of Theorem \[dimtilde\], it remains to apply Lemma \[Ginverse\].
Decomposition of stringy Hodge numbers for hypersurfaces {#section.bbo}
========================================================
In this section, we prove a generalization of [@bd Theorem 8.2] for all Calabi-Yau hypersurfaces, which gives a decomposition of the stringy Hodge numbers of the hypersurfaces. First, we recall a formula for the stringy Hodge numbers of Calabi-Yau hypersurfaces obtained in [@bb]. Then using a bit of combinatorics, we rewrite this formula precisely to the form of [@bd Theorem 8.2] with $\tilde S$ defined in the previous section.
The stringy Hodge numbers of a Calabi-Yau complete intersection have been calculated in [@bb] in terms of the numbers of integer points inside multiples of various faces of the reflexive polytopes $\Delta$ and $\Delta^*$ as well as some polynomial invariants of partially ordered sets. A special case of the main result in [@bb] is the following description of the stringy $E$-polynomials of Calabi-Yau hypersurfaces.
\[st.formula\] [@bb] Let $K\subset M\oplus{\Bbb Z}$ be the Gorenstein cone over a reflexive polytope $\Delta\subset M$. For every $(m,n)\in (K,K^*)$ with $m\cdot n =0$, denote by $x(m)$ the minimum face of $K$ that contains $m$ and by $x^*(n)$ the dual of the minimum face of $K^*$ that contains $n$. Also, set $A_{(m,n)}(u,v)$ to be $$\frac{(-1)^{\dim(x^*(n))}}{uv}
(v-u)^{\dim(x(m))}(uv -1)^{d+1-\dim(x^*(n))}B([x(m),x^*(n)]^*;u,v)$$ where the function $B$ is defined in Definition \[Q\] in the Appendix. Then $$E_{\rm st}(Y; u,v)=
\sum_{(m,n) \in (K,K^*),m\cdot n =0}
\left(\frac{u}{v}\right)^{{\rm deg}\,m} A_{(m,n)}(u,v)
\left(\frac{1}{uv}\right)^{{\rm deg}\,n}$$ for an ample nondegenerate Calabi-Yau hypersurface $Y$ in $\PP_\Delta={\rm Proj}(\CC[K])$.
The mirror duality $E_{\st}(Y;u,v) = (-u)^{d-1}E_\st(Y^*;u^{-1},v)$ was proved in [@bb] as an immediate corollary of the above formula and the duality property $B(P; u,v) = (-u)^{\rk P} B(P^*;u^{-1},v)$. It was not noticed there that Lemma \[Ginverse\] allows one to rewrite the $B$-polynomials in terms of $G$-polynomials, which we now use to give a formula for $E_\st(Y;u,v)$ that explicitly obeys the mirror duality. The next result is a generalization of Theorem 8.2 in [@bd], with $\tilde S$ from Definition \[d:spol\].
\[estfromtilde\] Let $Y$ be an ample nondegenerate Calabi-Yau hypersurface in $\PP_\Delta={\rm Proj}(\CC[K])$. Then $$E_\st(Y;u,v) = \sum_{C\subseteq K}
(uv)^{-1}(-u)^{\dim C}\tilde S(C,u^{-1}v)
\tilde S(C^*,uv).$$
[*Proof.*]{} First, observe that the formula for $E_{\rm st}(Y; u,v)$ from Theorem \[st.formula\] can be written as $$\sum_{m,n,C_1,C_2} \frac{(-1)^{\dim C_2^*}}{uv}
(v-u)^{\dim(C_1)}B([C_2,C_1^*];u,v)(uv -1)^{\dim C_2}
\left(\frac{u}{v}\right)^{{\rm deg} m}
\left(\frac{1}{uv}\right)^{{\rm deg}n}$$ where the sum is taken over all pairs of cones $C_1\subseteq K, C_2\subseteq K^*$ that satisfy $C_1\cdot C_2 = 0$ and all $m$ and $n$ in the relative interiors of $C_1$ and $C_2$, respectively. We use the standard duality result (see Definition \[d:bd\]) $$\sum_{n\in {\rm int}(C)}
t^{-\deg(n)}= (t-1)^{-\dim C}S(C,t)$$ to rewrite the above formula as $$\frac 1{uv}\sum_{C_1\cdot C_2=0} (-1)^{\dim(C_2^*)}u^{\dim
C_1} B([C_2,C_1^*];u,v)S(C_1,u^{-1}v)S(C_2,uv).$$ Then apply Lemma \[BfromG\] to get $$E_\st(Y;u,v) = \frac 1{uv} \sum_{C\subseteq K} \sum_{C_1\subseteq C,C_2\subseteq C^*}
(-1)^{\dim(C_2^*)}u^{\dim C_1} ~\times$$ $$\times~
G([C_1,C],u^{-1}v) (-u)^{\dim C_1^* -\dim C^*} G([C_2,C^*],uv)
S(C_1,u^{-1}v)S(C_2,uv).$$ It remains to use Definition \[d:spol\].
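As a low-dimensional consistency check of Theorem \[estfromtilde\] (the computation below is ours), consider the anticanonical elliptic curve in $\PP^2$:

```latex
Let $\Delta\subset\ZZ^2$ be the reflexive triangle with vertices $(1,0)$,
$(0,1)$, $(-1,-1)$, so that $\PP_\Delta=\PP^2$ and $Y$ is an elliptic
curve. Every edge of $\Delta$ has lattice length $1$, hence
$\tilde S(C,t)=0$ for all proper faces $0\ne C\subset K$, while a direct
count of lattice points gives
$$S(K,t)=1+t+t^2,\qquad \tilde S(K,t)=t+t^2,\qquad \tilde S(K^*,t)=t+t^2.$$
Only $C=0$ and $C=K$ contribute to the sum, and
$$E_\st(Y;u,v)=\frac{\tilde S(K^*,uv)}{uv}
+\frac{(-u)^3\tilde S(K,u^{-1}v)}{uv}=(1+uv)-(u+v),$$
which is indeed the $E$-polynomial $1-u-v+uv$ of an elliptic curve.
```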
String cohomology construction via intersection cohomology {#section.general}
==========================================================
Here, we construct the string cohomology space for $\QQ$-Gorenstein toroidal varieties, satisfying the assumption of Proposition \[BDvsB\]. The motivation for this construction comes from the conjectural description of the string cohomology space for ample Calabi-Yau hypersurfaces and a look at the formula in [@bd Theorem 6.10] for the stringy $E$-polynomial of a Gorenstein variety with abelian quotient singularities. This immediately leads to a decomposition of the string cohomology space as a direct sum of tensor products of the usual cohomology of the closure of a stratum with the spaces $R_1(g,C)$ from Proposition \[p:simp\]. Then the property that the intersection cohomology of an orbifold is naturally isomorphic to the usual cohomology leads us to the construction of the string cohomology space for $\QQ$-Gorenstein toroidal varieties. We show that this space has the dimension prescribed by Definition \[d:bd\] for Gorenstein complete toric varieties and the nondegenerate complete intersections in them.
\[d:orbdef\] Let $X=\bigcup_{i \in I} X_i$ be a Gorenstein complete variety with abelian quotient singularities, satisfying the assumption of Proposition \[BDvsB\]. The stringy Hodge spaces of $X$ are defined by: $$H_{\rm st}^{p,q}(X):=\bigoplus_{\begin{Sb}i \in I\\
k\ge0 \end{Sb}}H^{p-k,q-k}(\overline{X}_i)\otimes
R_1(\omega_{\sigma_i},\sigma_i)_k,$$ where $\sigma_i$ is the Gorenstein simplicial cone of the singularity along the stratum $X_i$, and $\omega_{\sigma_i}\in\CC[\sigma_i]_1$ are nondegenerate such that, for $\sigma_j\subset \sigma_i$, $\omega_{\sigma_i}$ maps to $\omega_{\sigma_j}$ under the natural projection $\CC[\sigma_i]@>>>\CC[\sigma_j]$.
Since $\overline{X}_i$ is a compact orbifold, the coefficient $e^{p,q}(\overline{X}_i)$ at the monomial $u^p v^q$ in the polynomial $E(\overline{X}_i;u,v)$ is equal to $(-1)^{p+q}h^{p,q}(\overline{X}_i)$, by Remark \[r:epol\]. Therefore, Proposition \[p:simp\] shows that the above decomposition of $H_{\rm st}^{p,q}(X)$ is in correspondence with [@bd Theorem 6.10], and the dimensions $h_{\rm st}^{p,q}(X)$ coincide with those from Definition \[d:bcangor\].
Since we expect that the usual cohomology must be replaced in Definition \[d:orbdef\] by the intersection cohomology for Gorenstein toroidal varieties, the next result is a natural generalization of Theorem 6.10 in [@bd].
\[8.3\] Let $X=\bigcup_{i \in I} X_i$ be a Gorenstein complete toric variety or a nondegenerate complete intersection of Cartier hypersurfaces in the toric variety, where the stratification is induced by the torus orbits. Then $$E_{\rm st}(X;u,v)= \sum_{i \in I} E_{\rm int}(\overline X_i;u,v)
\cdot \tilde S(\sigma_i,uv),$$ where $\sigma_i$ is the Gorenstein cone of the singularity along the stratum $X_i$.
Similarly to Corollary 3.17 in [@bb], we have $$E_{\rm int}(\overline X_i;u,v)=\sum_{X_j\subseteq \overline X_i}E(X_j;u,v)\cdot G([\sigma_i,\sigma_j]^*,uv).$$ Hence, we get $$\sum_{i \in I} E_{\rm int}(\overline X_i;u,v) \cdot \tilde S(\sigma_i,uv) = \sum_{i\in I}\sum_{X_j\subseteq \overline X_i} E(X_j;u,v) G([\sigma_i,\sigma_j]^*,uv)\tilde S(\sigma_i,uv)$$ $$=\sum_{j\in I} E(X_j;u,v) \Bigl(\sum_{\sigma_i\subseteq \sigma_j} G([\sigma_i,\sigma_j]^*,uv)\tilde S(\sigma_i,uv) \Bigr) = \sum_{j\in I} E(X_j;u,v) S(\sigma_j,uv),$$ where at the last step we have used the formula for $\tilde S$ and Lemma \[Ginverse\].
Based on the above theorem, we propose the following conjectural description of the stringy Hodge spaces for $\QQ$-Gorenstein toroidal varieties.
\[d:tordef\] Let $X=\bigcup_{i \in I} X_i$ be a $\QQ$-Gorenstein $d$-dimensional complete toroidal variety, satisfying the assumption of Proposition \[BDvsB\]. The stringy Hodge spaces of $X$ are defined by: $$H_{\rm st}^{p,q}(X):=\bigoplus_{\begin{Sb}i \in I\\
k\ge0 \end{Sb}}H_{\rm int}^{p-k,q-k}(\overline{X}_i)\otimes
R_1(\omega_{\sigma_i},\sigma_i)_k,$$ where $\sigma_i$ is the Gorenstein cone of the singularity along the stratum $X_i$, and $\omega_{\sigma_i}\in\CC[\sigma_i]_1$ are nondegenerate such that, for $\sigma_j\subset \sigma_i$, $\omega_{\sigma_i}$ maps to $\omega_{\sigma_j}$ under the natural projection $\CC[\sigma_i]@>>>\CC[\sigma_j]$. Here, $p,q$ are rational numbers from $[0,d]$, and we assume that $H_{\rm int}^{p-k,q-k}(\overline{X}_i)=0$ if $p-k$ or $q-k$ is not a non-negative integer.
Toric varieties and nondegenerate complete intersections of Cartier hypersurfaces have the stratification induced by the torus orbits, which satisfies the assumptions in the above definition.
String cohomology vs. Chen-Ruan orbifold cohomology {#s:vs}
===================================================
Our next goal is to compare the two descriptions of string cohomology for Calabi-Yau hypersurfaces with the Chen-Ruan orbifold cohomology. Using the work of [@p], we will show that in the case of ample orbifold Calabi-Yau hypersurfaces the three descriptions coincide. We refer the reader to [@cr] for the orbifold cohomology theory and only use [@p] in order to describe the orbifold cohomology for complete simplicial toric varieties and Calabi-Yau hypersurfaces in Fano simplicial toric varieties.
From Theorem 1 in [@p Section 4] and the definition of the orbifold Dolbeault cohomology space we deduce:
Let ${{{\PP}_{\Sigma}}}$ be a $d$-dimensional complete simplicial toric variety. Then the orbifold Dolbeault cohomology space of ${{{\PP}_{\Sigma}}}$ is $$H^{p,q}_{orb}({{{\PP}_{\Sigma}}};\CC)\cong\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\QQ\end{Sb}}
H^{p-l,q-l}(V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t,$$ where $T(\sigma)_l=\{\sum_{\rho\subset\sigma}a_\rho e_\rho\in N:a_\rho\in(0,1), \sum_{\rho\subset\sigma}a_\rho=l\}$ (when $\sigma=0$, set $l=0$ and $T(\sigma)_l=\{0\}$), and $V(\sigma)$ is the closure of the torus orbit corresponding to $\sigma\in\Sigma$. Here, $p$ and $q$ are rational numbers in $[0,d]$, and $H^{p-l,q-l}(V(\sigma))=0$ if $p-l$ or $q-l$ is not integral. (The elements of $\oplus_{0\ne\sigma\in\Sigma,l}T(\sigma)_l$ correspond to the twisted sectors.)
In order to compare this result to the description in Definition \[d:tordef\], we need to specify the $\omega_{\sigma_i}$ for the toric variety ${{{\PP}_{\Sigma}}}$. The stratification of ${{{\PP}_{\Sigma}}}$ is given by the torus orbits: ${{{\PP}_{\Sigma}}}=\cup_{\sigma\in\Sigma}\TT_\sigma$. The singularity of the variety ${{{\PP}_{\Sigma}}}$ along the stratum $\TT_\sigma$ is given by the cone $\sigma$, so we need to specify a nondegenerate $\omega_\sigma\in\CC[\sigma]_1$ for each $\sigma\in\Sigma$. If $\omega_\sigma=\sum_{\rho\subset\sigma}\omega_\rho [e_\rho]$ with $\omega_\rho\ne0$, then one can deduce that $\omega_\sigma$ is nondegenerate using Remark \[r:nond\] and the fact that the nondegeneracy of a hypersurface in a complete simplicial toric variety (here, the one corresponding to a simplex) is equivalent to the condition that the logarithmic derivatives have no common zero. So, picking any nonzero coefficients $\omega_\rho$ for each $\rho\in\Sigma(1)$ gives a nondegenerate $\omega_\sigma\in\CC[\sigma]_1$ satisfying the condition of Definition \[d:tordef\]. For such $\omega_\sigma$, note that the set $Z=\{\sum_{\rho\subset\sigma} (e_\rho\cdot m) \omega_\rho [e_\rho]:\,m\in {\rm Hom}(N,{\Bbb Z})\}$ is the linear span of the $[e_\rho]$ for $\rho\subset\sigma$. Hence, $$R_0(\omega_\sigma,\sigma)_l=(\CC[\sigma]/Z\cdot\CC[\sigma])_l\cong
\bigoplus_{t\in \tilde{T}(\sigma)_l}\CC t,$$ where $\tilde
T(\sigma)_l=\{\sum_{\rho\subset\sigma}a_\rho e_\rho\in
N:a_\rho\in[0,1), \sum_{\rho\subset\sigma}a_\rho=l\}$, and $$R_1(\omega_\sigma,\sigma)_l\cong
\bigoplus_{t\in{T}(\sigma)_l}\CC t.$$ This shows that the orbifold Dolbeault cohomology for complete simplicial toric varieties can be obtained as a special case of the description of string cohomology in Definition \[d:tordef\].
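A concrete instance of this correspondence (our example): the twisted sector of an $A_1$-singularity.

```latex
Let $\sigma\subset N=\ZZ^2$ be the cone generated by $e_{\rho_1}=(1,0)$ and
$e_{\rho_2}=(1,2)$, with $\deg(a,b)=a$, so that the corresponding affine
toric variety is the $A_1$-singularity $\CC^2/\{\pm1\}$. The only lattice
point of the form $a_1e_{\rho_1}+a_2e_{\rho_2}$ with $a_1,a_2\in(0,1)$ is
$\tfrac12 e_{\rho_1}+\tfrac12 e_{\rho_2}=(1,1)$, with $l=1$. Thus there is
a single twisted sector, with degree-shifting number $1$, and
$R_1(\omega_\sigma,\sigma)$ is one-dimensional, concentrated in degree $1$,
in agreement with $\tilde S(\sigma,t)=t$ for the cone over a segment of
lattice length $2$.
```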
We will now explain how the parameter $\omega$ should be related to the complexified Kähler class. We do not have a definition of the “orbifold” Kähler cone, even for simplicial toric varieties. However, we know the Kähler classes in $H^2({{{\PP}_{\Sigma}}},\RR)$.
Let ${{{\PP}_{\Sigma}}}$ be a projective simplicial toric variety, then $H^2({{{\PP}_{\Sigma}}},\RR)\cong PL(\Sigma)/M_\RR$, where $PL(\Sigma)$ is the set of $\Sigma$-piecewise linear functions $\varphi:N_\RR@>>>\RR$, which are linear on each $\sigma\in\Sigma$. The Kähler cone $K(\Sigma)\subset H^2({{{\PP}_{\Sigma}}},\RR)$ of ${{{\PP}_{\Sigma}}}$ consists of the classes of the upper strictly convex $\Sigma$-piecewise linear functions.
One may call $K(\Sigma)$ the “untwisted” part of the orbifold Kähler cone. So, we can introduce the [*untwisted complexified Kähler space*]{} of the complete simplicial toric variety: $$K^{\rm untwist}_\CC({{{\PP}_{\Sigma}}})= \{\omega\in
H^2({{{\PP}_{\Sigma}}},\CC):Im(\omega)\in K(\Sigma)\}/{\rm im}H^2({{{\PP}_{\Sigma}}},\ZZ).$$ Its elements may be called the [*untwisted complexified Kähler classes*]{}. We can find a generic enough $\omega\in K^{\rm
untwist}_\CC({{{\PP}_{\Sigma}}})$ represented by a complex valued $\Sigma$-piecewise linear function $\varphi_\omega:N_\CC@>>>\CC$ such that $\varphi_\omega(e_\rho)\ne0$ for $\rho\in\Sigma(1)$. Setting $\omega_\rho=\exp(\varphi_\omega(e_\rho))$ produces our previous parameters $\omega_\sigma$ for $\sigma\in\Sigma$. This is how we believe $\omega_\sigma$ should relate to the complexified Kähler classes, up to perhaps some instanton corrections.
We next turn our attention to the case of an ample Calabi-Yau hypersurface $Y$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$. Section 4.2 in [@p] works with a generic nondegenerate anticanonical hypersurface. However, one can avoid the use of Bertini’s theorem and state the result without “generic”. It is shown that the nondegenerate anticanonical hypersurface $X$ is a suborbifold of ${{{\PP}_{\Sigma}}}$, the twisted sectors of $Y$ are obtained by intersecting with the closures of the torus orbits and the degree shifting numbers are the same as for the toric variety ${{{\PP}_{\Sigma}}}$. Therefore, we conclude:
\[p:des\] Let $Y\subset{{{\PP}_{\Sigma}}}$ be an ample Calabi-Yau hypersurface in a complete simplicial toric variety. Then $$H^{p,q}_{orb}(Y;\CC)\cong\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}}
H^{p-l,q-l}(Y\cap V(\sigma))\otimes
\bigoplus_{t\in T(\sigma)_l}\CC t,$$ where $T(\sigma)_l=\{\sum_{\rho\subset\sigma}a_\rho e_\rho\in N:a_\rho\in(0,1),
\sum_{\rho\subset\sigma}a_\rho=l\}$ (when $\sigma=0$, set $l=0$ and $\bigoplus_{t\in T(\sigma)_l}\CC t=\CC$).
As in the case of the toric variety, we pick $\omega_\sigma=\sum_{\rho\subset\sigma}\omega_\rho e_\rho$ with $\omega_\rho\ne0$. Then, by the above proposition, $$H_{\rm st}^{p,q}(Y)\cong H^{p,q}_{orb}(Y;\CC).$$
We now want to show that the description in Proposition \[p:des\] is equivalent to the one in Conjecture \[semiampleconj\]. First, note that the proper faces $C^*$ of the Gorenstein cone $K^*$ in Conjecture \[semiampleconj\] are in one-to-one correspondence with the cones $\sigma\in\Sigma$. Moreover, the rings $\CC[C^*]\cong\CC[\sigma]$ are isomorphic in this correspondence. If we take $\omega\in\CC[K^*]^\Sigma_1$ to be $[0,1]+\sum_{\rho\in\Sigma(1)}\omega_\rho[e_\rho,1]$, then $\omega$ is $\Sigma$-regular and $$R_1(\omega_{C^*},C^*)_l\cong R_1(\omega_\sigma,\sigma)_l\cong
\oplus_{t\in{T}(\sigma)_l}\CC t.$$ On the other hand, the Hodge component $H^{p-l,q-l}(Y\cap V(\sigma))$ decomposes into the direct sum $$H^{p-l,q-l}_{\rm toric}(Y\cap V(\sigma))\oplus H^{p-l,q-l}_{\rm res}(Y\cap V(\sigma))$$ of the toric and residue parts. Since $Y\cap V(\sigma)$ is an ample hypersurface, from [@bc Theorem 11.8] and Section \[section.anvar\] it follows that $$H^{p-l,q-l}_{\rm res}(Y\cap V(\sigma))\cong R_1(f_C,C)_{q-l+1},$$ where $C\subset K$ is the face dual to $C^*$ which corresponds to $\sigma$, $p+q-2l=\dim Y\cap V(\sigma)=d-\dim\sigma-1=d-\dim C^*-1$. If $p+q-2l\ne d-\dim C^*-1$, then $H^{p-l,q-l}_{\rm res}(Y\cap V(\sigma))=0$. Hence, we get $$\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}}
H_{\rm res}^{p-l,q-l}(Y\cap V(\sigma))\otimes
\bigoplus_{t\in T(\sigma)_l}\CC t
\cong \bigoplus_{0\ne C\subseteq K} R_1(\omega_{C^*},C^*)^\Sigma_{a}
\otimes R_1(f_C,C)_{b},$$ where $a=(p+q-d+\dim C^*+1)/2$ and $b=(q-p+\dim C)/2$. We are left to show that $$\label{e:lef}
\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}}
H_{\rm toric}^{p-l,p-l}(Y\cap V(\sigma))\otimes
\bigoplus_{t\in T(\sigma)_l}\CC t
\cong R_1(\omega,K^*)^\Sigma_{p+1}.$$ Notice that the dimensions of the spaces on both sides coincide, so it suffices to construct a surjective map between them. This will follow from the following proposition.
\[p:sttor\] Let ${{{\PP}_{\Sigma}}}={\rm Proj}(\CC[K])$ be the Gorenstein Fano simplicial toric variety, where $K$ is as above. Then there is a natural isomorphism: $$H^{p,p}_{\rm st}({{{\PP}_{\Sigma}}})\cong\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}}
H^{p-l,p-l}(V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t\cong
R_0(\omega,K^*)_p^\Sigma,$$ where $\omega=[0,1]+\sum_{\rho\in\Sigma(1)}\omega_\rho[e_\rho,1]$ with $\omega_\rho\ne0$.
First, observe that the dimensions of the spaces in the isomorphisms coincide by our definition of string cohomology, Proposition \[Zreg\] and [@bd Theorem 7.2]. So, it suffices to construct a surjective map between them.
We know the cohomology ring of the toric variety: $$H^{*}(V(\sigma))\cong
\CC[D_\rho:\rho\in\Sigma(1),\rho+\sigma\in\Sigma(\dim\sigma+1)]/
(P(V(\sigma))+SR(V(\sigma))),$$ where $$SR(V(\sigma))=\bigl\langle
D_{\rho_1}\cdots D_{\rho_k}:\{e_{\rho_1},\dots,e_{\rho_k}\}
\not\subset\tau \text{ for all
}\sigma\subset\tau\in\Sigma(\dim\sigma+1)\bigr\rangle$$ is the Stanley-Reisner ideal, and $$P(V(\sigma))=\biggl\langle
\sum_{\rho\in\Sigma(1),\rho+\sigma\in\Sigma(\dim\sigma+1)} \langle
m,e_\rho\rangle D_\rho: m\in M\cap\sigma^\perp\biggr\rangle.$$
Define the maps from $H^{p-l,p-l}(V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t$ to $R_0(\omega,K^*)^\Sigma$ by sending $D_{\rho_1}\cdots D_{\rho_{p-l}}\otimes t$ to $\omega_{\rho_1}[e_{\rho_1}]\cdots \omega_{\rho_{p-l}} [e_{\rho_{p-l}}]\cdot t\in\CC[N]^\Sigma$. One can easily see that these maps are well defined. To finish the proof we need to show that the images cover $R_0(\omega,K^*)^\Sigma$. Every lattice point $[n]$ in the boundary of $K^*$ lies in the relative interior of a face $C\subset K^*$, and can be written as a linear combination of the minimal integral generators of $C$: $$[n]=\sum_{[e_\rho,1]\in C}(a_\rho+b_\rho)[e_\rho,1],$$ where $a_\rho\in(0,1)$ and $b_\rho$ are nonnegative integers. Let $C'\subseteq C$ be the cone spanned by those $[e_\rho,1]$ for which $a_\rho\ne0$. The lattice point $\sum_{[e_\rho,1]\in C'}(a_\rho)[e_\rho,1]$ projects to one of the elements $t$ from $T(\sigma)_l$ for some $l$ and $\sigma$ corresponding to $C'$. Using the relations $\sum_{\rho\in\Sigma(1)}\omega_\rho\langle m,e_\rho\rangle [e_\rho,1]$ in the ring $R_0(\omega,K^*)^\Sigma$, we get that $$[n]=\sum_{[e_\rho,1]\in C'}(a_\rho)[e_\rho,1]+
\sum_{\rho+\sigma\in\Sigma(\dim\sigma+1)}b'_\rho[e_\rho,1],$$ which comes from $H^{p-l,p-l}(V(\sigma))\otimes \CC t$ for an appropriate $p$. The surjectivity now follows from the fact that the boundary points of $K^*$ generate the ring $\CC[K^*]^\Sigma/\langle \omega\rangle$.
The isomorphism (\[e:lef\]) follows from the above proposition and the presentation: $$H_{\rm toric}^{*}(Y\cap V(\sigma))\cong H^*(V(\sigma))/{\rm Ann}([Y\cap V(\sigma)])$$ (see (\[e:ann\])). Indeed, the map constructed in the proof of Proposition \[p:sttor\] produces a well defined map between the right hand side in (\[e:lef\]) and $R_0(\omega,K^*)^\Sigma/Ann([0,1])$ because the annihilator of $[Y\cap V(\sigma)]$ maps to the annihilator of $[0,1]$. On the other hand, $$(R_0(\omega,K^*)^\Sigma/Ann([0,1]))_p\cong R_1(\omega,K^*)^\Sigma_{p+1},$$ which is induced by the multiplication by $[0,1]$ in $R_0(\omega,K^*)^\Sigma$.
We expect that the product structure on $H^{*}_{\rm st}({{{\PP}_{\Sigma}}})$ is given by the ring structure $R_0(\omega,K^*)^\Sigma$. Also, the ring structure on $R_1(\omega,K^*)^\Sigma_{*+1}$ induced from $R_0(\omega,K^*)^\Sigma/Ann([0,1])$ should give a subring of $H^{*}_{\rm st}(Y)$ for a generic $\omega$ in Conjecture \[semiampleconj\]. Moreover, $$\bigoplus_{p,q} R_1(\omega_{C^*},C^*)^\Sigma_{(p+q-d+\dim C^*+1)/2}
\otimes R_1(f_C,C)_{(q-p+\dim C)/2},$$ should be a module over the ring $R_0(\omega,K^*)^\Sigma/Ann([0,1])$: $$a\cdot(b\otimes c)=\bar{a}b\otimes c,$$ for $a\in R_0(\omega,K^*)^\Sigma/Ann([0,1])$ and $(b\otimes c)$ from a component of the above direct sum, where $\bar{a}$ is the image of $a$ under the projection $R_0(\omega,K^*)^\Sigma@>>>R_0(\omega_{C^*},C^*)^\Sigma$.
We can also say something about the product structure on the B-model chiral ring. The space $R_1(f,K)\cong R_0(f,K)/Ann([0,1])$ in Conjecture \[semiampleconj\], which lies in the middle cohomology $\oplus_{p+q=d-1}H^{p,q}_{\rm st}(Y)$, should be a subring of the B-model chiral ring, and $$\bigoplus_{p,q} R_1(\omega_{C^*},C^*)^\Sigma_{(p+q-d+\dim C^*+1)/2}
\otimes R_1(f_C,C)_{(q-p+\dim C)/2},$$ should be a module over the ring $R_1(f,K)$, similarly to the description in the previous paragraph.
These ring structures are consistent with the products on the usual cohomology and the B-model chiral ring $H^*(X,\bigwedge^*T_X)$ of the smooth semiample Calabi-Yau hypersurfaces $X$ in [@m3 Theorem 2.11(a,b)] and [@m2 Theorem 7.3(i,ii)].
Description of string cohomology inspired by vertex algebras {#section.vertex}
============================================================
Here we will give yet another description of the string cohomology spaces of Calabi-Yau hypersurfaces. It will appear as cohomology of a certain complex, which was inspired by the vertex algebra approach to Mirror Symmetry.
We will state the result first in the non-deformed case, and it will be clear what needs to be done in general. Let $K$ and $K^*$ be dual reflexive cones of dimension $d+1$ in the lattices $M$ and $N$ respectively. We consider the subspace $\CC[L]$ of $\CC[K]\otimes \CC[K^*]$ as the span of the monomials $[m,n]$ with $m\cdot n =0$. We also pick non-degenerate elements of degree one $f=\sum_m f_m [m]$ and $g=\sum_n g_n [n]$ in $\CC[K]$ and $\CC[K^*]$ respectively.
Consider the space $$V = \Lambda^*(N_\CC)\otimes \CC[L].$$
The space $V$ is equipped with a differential $D$ given by $$D:= \sum_{m} f_m \contr m \otimes(\pi_L\circ[m]) +
\sum_{n} g_n (\wedge n) \otimes(\pi_L\circ[n])$$ where $[m]$ and $[n]$ means multiplication by the corresponding monomials in $\CC[K]\otimes\CC[K^*]$ and $\pi_L$ denotes the natural projection to $\CC[L]$.
It is straightforward to check that $D^2=0$.
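It may help to record a sketch of this check (our reconstruction, with $\langle m,n\rangle$ denoting the pairing between $M_\CC$ and $N_\CC$, so that the anticommutator of contraction and exterior multiplication is $\{\contr m,\wedge n\}=\langle m,n\rangle$). Write $D=A+B$ with $A=\sum_m f_m \contr m\otimes(\pi_L\circ[m])$ and $B=\sum_n g_n(\wedge n)\otimes(\pi_L\circ[n])$. The operators $\pi_L\circ[m]$ (resp. $\pi_L\circ[n]$) commute among themselves, so $A^2=B^2=0$ by the anticommutativity of contractions (resp. of exterior multiplications), and

```latex
AB+BA \;=\; \sum_{m,n} f_m\, g_n\, \langle m,n\rangle \otimes
            (\pi_L\circ[m])\,(\pi_L\circ[n]).
```

Applied to a monomial $[m',n']\in\CC[L]$, the right-hand side is proportional to $[m+m',n+n']$, which survives the projection $\pi_L$ only if $(m+m')\cdot(n+n')=0$; since $K$ and $K^*$ are dual cones, all four pairings $m\cdot n$, $m\cdot n'$, $m'\cdot n$, $m'\cdot n'$ are nonnegative, so this forces $\langle m,n\rangle=m\cdot n=0$. Hence every surviving term carries coefficient zero and $D^2=0$.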
\[Dcoh\] Cohomology $H$ of $V$ with respect to $D$ is naturally isomorphic to $$\bigoplus_{C\subseteq K} \Lambda^{\dim C^*}C^*_\CC
\otimes R_1(f,C)\otimes R_1(g,C^*)$$ where $C^*_\CC$ denotes the vector subspace of $N_\CC$ generated by $C^*$.
First observe that $V$ contains a subspace $$\bigoplus_{C\subseteq K} \Lambda^* N_\CC \otimes (\CC[C^\circ]\otimes
\CC[C^{*\circ}])$$ which is invariant under $D$. It is easy to calculate the cohomology of this subspace under $D$, because the action commutes with the decomposition $\oplus_C$. For each $C$, the cohomology of $D$ on $\Lambda^* N_\CC \otimes (\CC[C^\circ]\otimes
\CC[C^{*\circ}])$ is naturally isomorphic to $$\Lambda^{\dim C^*}C^*_\CC
\otimes R_0(f,C^\circ)\otimes R_0(g,C^{*\circ}),$$ because $\Lambda^* N_\CC \otimes (\CC[C^\circ]\otimes\CC[C^{*\circ}])$ is a tensor product of the Koszul complex for $\CC[C^\circ]$ and the dual of the Koszul complex for $\CC[C^{*\circ}]$. As a result, we have a map $$\alpha:H_1\to H,~H_1:=\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC
\otimes R_0(f,C^\circ)\otimes R_0(g,C^{*\circ}).$$
Next, we observe that $V$ embeds naturally into the space $$\bigoplus_{C\subseteq K} \Lambda^* N_\CC \otimes (\CC[C]\otimes
\CC[C^{*}])$$ as the subspace of the elements compatible with the restriction maps. This defines a map $$\beta:H\to\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC
\otimes R_0(f,C)\otimes R_0(g,C^{*})=:H_2.$$
We observe that the composition $\beta\circ\alpha$ is precisely the map induced by the embeddings $C^\circ\subseteq C$ and $C^{*\circ}\subseteq C^*$, so its image in $H_2$ is $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*}).$$ As a result, what we need to show is that $\alpha$ is surjective and $\beta$ is injective. We cannot do this directly; instead, we will use spectral sequences associated to two natural filtrations on $V$.
First, consider the filtration $$V=V^0\supset V^1\supset \ldots \supset V^{d+1}\supset V^{d+2}=0$$ where $V^p$ is defined as $\Lambda^*N_\CC$ tensored with the span of all monomials $[m,n]$ for which the smallest face of $K$ that contains $m$ has dimension at least $p$. It is easy to see that the spectral sequence of this filtration starts with $$H_3:=\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_0(f,C^\circ)\otimes R_0(g,C^{*}).$$ Analogously, we have a spectral sequence from $$H_4:=\bigoplus_{C\subseteq K}\Lambda^{\dim C^{*}}C^*_\CC \otimes R_0(f,C)\otimes R_0(g,C^{*\circ})$$ to $H$, which gives us the following diagram. $$\begin{array}{ccccc}
& & H_3 & & \\
& \nearrow & \Downarrow& \searrow & \\
H_1& \rightarrow & H & \rightarrow &H_2 \\
& \searrow & \Uparrow & \nearrow & \\
& & H_4 & &
\end{array}$$
We remark that the spectral sequences mean that $H$ is a subquotient of both $H_3$ and $H_4$, i.e. there are subspaces $I_3^+$ and $I_3^-$ of $H_3$ such that $H\simeq I_3^+/I_3^-$, and similarly for $H_4$. Moreover, the above diagram induces commutative diagrams $$\begin{array}{ccccccccccc}
& & 0 & & &\hspace{20pt} &
& & 0 & & \\
& & \downarrow& & &\hspace{20pt} &
& & \uparrow & & \\
& & I_3^- & & &\hspace{20pt} &
H_1& \rightarrow & H & \rightarrow &H_2 \\
& & \downarrow& & &\hspace{20pt} &
& \searrow & \uparrow & \nearrow & \\
& & I_3^+ & & &\hspace{20pt} &
& & I_4^+ & & \\
& \nearrow & \downarrow& \searrow & &\hspace{20pt} &
& & \uparrow & & \\
H_1& \rightarrow & H & \rightarrow &H_2&\hspace{20pt} &
& & I_4^- & & \\
& & \downarrow& & &\hspace{20pt} &
& & \uparrow & & \\
& & 0 & & &\hspace{20pt} &
& & 0 & & \\
\end{array}$$ with exact vertical lines. Indeed, the filtration $V^*$ induces a filtration on the subspace of $V$ $$\bigoplus_{C\subseteq K} \Lambda^*\otimes\CC[C^\circ]\otimes
\CC[C^{*\circ}].$$ The resulting spectral sequence degenerates immediately, and the functoriality of spectral sequences assures that there are maps from $H_1$ as above. Similarly, the space $$\bigoplus_{C\subseteq K} \Lambda^*\otimes\CC[C]\otimes
\CC[C^{*}]$$ has a natural filtration by the dimension of $C$ that induces the filtration on $V$. Functoriality then gives the maps to $H_4$.
We immediately get $$Im(\beta) \subseteq Im(H_3\to H_2) \cap Im(H_4\to H_2)$$ which implies that $$Im(\beta) = Im(\beta\circ\alpha)=
\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}
C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*}).$$ Analogously, $Ker(\alpha)=Ker(\beta\circ\alpha)$, which shows that $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}
C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*})$$ is a direct summand of $H$.
The fact that $$Ker(\alpha)\supseteq Ker(H_1\to H_4)$$ $$=
\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC
\otimes Ker(R_0(f,C^\circ)\to R_0(f,C))\otimes R_0(g,C^{*\circ})$$ implies that $I_3^-$ contains the image of this space under $H_1\to H_3$, which is equal to $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC
\otimes Ker(R_0(f,C^\circ)\to R_0(f,C))\otimes R_1(g,C^{*}).$$ Similarly, $I_3^+$ is contained in the preimage of $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC
\otimes R_1(f,C)\otimes R_1(g,C^{*})$$ under $H_3\to H_2$, which is $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC
\otimes R_0(f,C^\circ)\otimes R_1(g,C^{*}).$$ As a result, $$H=\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}
C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*}).$$
If one replaces $\CC[K^*]$ by $\CC[K^*]^\Sigma$ in the definition of $\CC[L]$, then the statement and the proof of Theorem \[Dcoh\] remain intact. In addition, one can make a similar statement after replacing $\Lambda^*N_\CC$ by $\Lambda^*M_\CC$ and switching contraction and exterior multiplication in the definition of $D$. It is easy to see that the resulting complex is basically identical, though various gradings are switched. This should correspond to a switch between $A$ and $B$ models.
We will now briefly outline the connection between Theorem \[Dcoh\] and the vertex algebra approach to mirror symmetry, developed in [@Bvertex] and further explored in [@MS]. The vertex algebra that corresponds to the N=2 superconformal field theory is expected to be the cohomology of a lattice vertex algebra ${\rm Fock}_{M\oplus N}$, built out of $M\oplus N$, by a certain differential $D_{f,g}$ that depends on the defining equations $f$ and $g$ of a mirror pair. The space $\Lambda^*(N_\CC)\otimes \CC[L]$ corresponds to a certain subspace of ${\rm Fock}_{M\oplus N}$ such that the restriction of $D_{f,g}$ to this subspace coincides with the differential $D$ of Theorem \[Dcoh\]. We can not yet show that this is precisely the chiral ring of the vertex algebra, so the connection to vertex algebras needs to be explored further.
Appendix. G-polynomials
=======================
A finite graded partially ordered set is called Eulerian if each of its nontrivial intervals contains equally many elements of even and odd rank. We often consider the poset of faces of the Gorenstein cone $K$ over a reflexive polytope $\Delta$ with respect to inclusions. This is an Eulerian poset with the grading given by the dimension of the face. The minimum and maximum elements of a poset are commonly denoted by $\hat 0$ and $\hat 1$.
[@stanley1] [Let $P = \lbrack \hat{0}, \hat{1} \rbrack$ be an Eulerian poset of rank $d$. Define two polynomials $G(P,t)$, $H(P,t) \in {\ZZ} [t]$ by the following recursive rules: $$G(P,t) = H(P,t) = 1\;\; \mbox{\rm if $d =0$};$$ $$H(P,t) = \sum_{ \hat{0} < x \leq \hat{1}} (t-1)^{\rho(x)-1}
G(\lbrack x,\hat{1}\rbrack, t)\;\; (d>0),$$ $$G(P,t) =
\tau_{ < d/2 } \left(
(1-t)H(P,t) \right) \;\;( d>0),$$ where $\tau_{ < r }$ denotes the truncation operator ${\ZZ}\lbrack t \rbrack \rightarrow
{\ZZ}\lbrack t \rbrack$ which is defined by $$\tau_{< r} \left( \sum_i a_it^i \right) = \sum_{i < r}
a_it^i.$$]{} \[Gpoly\]
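The recursion in Definition \[Gpoly\] is concrete enough to compute directly. As an illustrative sketch (ours, not part of the paper), the following Python snippet evaluates $H$ and $G$ for the Boolean lattices $B_d$ (the face posets of simplices), using the fact that every interval $[x,\hat 1]$ in $B_d$ with $\rho(x)=k$ is again a Boolean lattice $B_{d-k}$ and that there are $\binom{d}{k}$ such elements $x$. Polynomials are represented as coefficient sequences indexed by degree.

```python
from functools import lru_cache
from math import comb

def pmul(p, q):
    """Multiply polynomials given as coefficient sequences (index = degree)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    r = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] += a
    for j, b in enumerate(q):
        r[j] += b
    return r

@lru_cache(maxsize=None)
def H_bool(d):
    """H(B_d, t): the sum over x > 0-hat of (t-1)^(rho(x)-1) G([x, 1-hat], t)."""
    if d == 0:
        return (1,)
    acc, pw = [0], [1]            # pw = (t-1)^(k-1), starting at k = 1
    for k in range(1, d + 1):
        term = [comb(d, k) * c for c in pmul(pw, G_bool(d - k))]
        acc = padd(acc, term)
        pw = pmul(pw, [-1, 1])    # multiply by (t - 1)
    return tuple(acc)

@lru_cache(maxsize=None)
def G_bool(d):
    """G(B_d, t) = tau_{< d/2}((1 - t) * H(B_d, t))."""
    if d == 0:
        return (1,)
    s = pmul([1, -1], H_bool(d))  # (1 - t) * H
    out = [c for i, c in enumerate(s) if 2 * i < d]
    while len(out) > 1 and out[-1] == 0:
        out.pop()                 # drop trailing zero coefficients
    return tuple(out)
```

For these posets one finds $H(B_d,t)=1+t+\dots+t^{d-1}$ and $G(B_d,t)=1$, consistent with the $g$-polynomial of a simplex being trivial.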
The following lemma will be extremely useful.
For every Eulerian poset $P=[\hat 0,\hat 1]$ of positive rank, the following holds: $$\sum_{\hat 0\leq x\leq \hat 1}(-1)^{\rk[\hat 0,x]} G([\hat 0,x]^*,t)
G([x,\hat 1],t)
=\sum_{\hat 0\leq x\leq \hat 1}
G([\hat 0,x],t)G([x,\hat 1]^*,t)(-1)^{\rk[x,\hat 1]}
=0$$ where $()^*$ denotes the dual poset. In other words, $G(\_,t)$ and $(-1)^{\rk}
G(\_^*,t)$ are inverses of each other in the algebra of functions on the posets with the convolution product. \[Ginverse\]
[*Proof.*]{} See Corollary 8.3 of [@stanley].
The following polynomial invariants of Eulerian posets have been introduced in [@bb].
\[Q\] Let $P$ be an Eulerian poset of rank $d$. Define the polynomial $B(P; u,v) \in {\ZZ}[ u,v]$ by the following recursive rules: $$B(P; u,v) = 1\;\; \mbox{\rm if $d =0$},$$ $$\sum_{\hat{0} \leq x \leq \hat{1}}
B(\lbrack \hat{0}, x \rbrack; u,v) u^{d - \rho(x)}
G(\lbrack x , \hat{1}\rbrack, u^{-1}v) = G(P ,uv).$$
\[BfromG\] Let $P=[\hat 0,\hat 1]$ be an Eulerian poset. Then $$B(P;u,v) = \sum_{\hat 0\leq x \leq \hat 1} G([x,\hat 1]^*,u^{-1}v)
(-u)^{\rk \hat 1 -\rk x}
G([\hat 0,x],uv).$$
Indeed, one can sum the recursive formulas for $B([\hat 0, y])$ for all $\hat 0\leq y\leq\hat 1$ multiplied by $G([y,\hat
1]^*,u^{-1}v) (-u)^{\rk \hat 1 -\rk y}$ and use Lemma \[Ginverse\].
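Continuing the illustration (again ours, not from the paper): for the Boolean lattice $B_d$, which is self-dual and satisfies $G(B_k,t)=1$ for all $k$, the formula of Lemma \[BfromG\] collapses to $B(B_d;u,v)=\sum_k\binom{d}{k}(-u)^{d-k}=(1-u)^d$, independent of $v$. The snippet below computes this and checks it against the defining recursion of Definition \[Q\], which here reduces to $\sum_k\binom{d}{k}(1-u)^ku^{d-k}=1$.

```python
from math import comb

def pmul(p, q):
    """Multiply polynomials given as coefficient lists (index = degree)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def B_bool(d):
    """B(B_d; u, v) via Lemma [BfromG].

    Every interval of a Boolean lattice and its dual is again Boolean, with
    G = 1, so the sum collapses to sum_k C(d, k) (-u)^(d-k) = (1 - u)^d,
    independent of v.  Returned as coefficients in u (index = degree)."""
    coeffs = [0] * (d + 1)
    for k in range(d + 1):
        coeffs[d - k] += comb(d, k) * (-1) ** (d - k)
    return coeffs

def check_recursion(d):
    """Left-hand side of the defining recursion of Definition [Q] for B_d:
    sum_k C(d, k) B(B_k; u, v) u^(d-k), which should equal G(B_d, uv) = 1."""
    total = [0] * (d + 1)
    for k in range(d + 1):
        # C(d, k) counts the x with rho(x) = k; [0-hat, x] = B_k, G([x, 1-hat]) = 1
        term = pmul(B_bool(k), [0] * (d - k) + [comb(d, k)])
        for i, c in enumerate(term):
            total[i] += c
    return total
```

Running `check_recursion(d)` returns the coefficient list of the constant polynomial $1$, confirming the closed form.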
[BaBrFK]{}
\[AKaMW\][AKMW]{} D. Abramovich, K. Karu, K. Matsuki, J. Włodarczyk, [*Torification and Factorization of Birational Maps*]{}, preprint math.AG/9904135.
\[BaBrFK\][bbfk]{} G. Barthel, J.-P. Brasselet, K.-H. Fieseler, L. Kaup, [*Combinatorial Intersection Cohomology for Fans*]{}, preprint math.AG/0002181.
\[B1\][b1]{} V. V. Batyrev, [*Variations of the mixed Hodge structure of affine hypersurfaces in algebraic tori*]{}, Duke Math. J., [**69**]{} (1993), 349–409.
\[B2\][b2]{} , [*Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties*]{}, J. Algebraic Geometry [**6**]{} (1994), 493–535.
\[B3\][Batyrev.cangor]{} , [*Stringy Hodge numbers of varieties with Gorenstein canonical singularities*]{}, Integrable systems and algebraic geometry (Kobe/Kyoto, 1997), 1–32, World Sci. Publishing, River Edge, NJ, 1998.
\[B4\][Batyrev.nai]{} , [*Non-Archimedean integrals and stringy Euler numbers of log-terminal pairs*]{}, J. Eur. Math. Soc. (JEMS) [**1**]{} (1999), no. 1, 5–33.
\[BBo\][bb]{} V. V. Batyrev, L. A. Borisov, [*Mirror duality and string-theoretic Hodge numbers*]{} Invent. math. [**126**]{} (1996), 183–203.
\[BC\][bc]{} V. V. Batyrev, D. A. Cox, [*On the Hodge structure of projective hypersurfaces in toric varieties*]{}, Duke Math. J. [**75**]{} (1994), 293–338.
\[BDa\][bd]{} V. V. Batyrev, D. Dais, [*Strong McKay correspondence, string-theoretic Hodge numbers and mirror symmetry*]{}, Topology [**35**]{} (1996), 901–929.
\[Bo1\][Bor.locstring]{} L. A. Borisov, [*String cohomology of a toroidal singularity*]{}, J. Algebraic Geom. [**9**]{} (2000), no. 2, 289–300.
\[Bo2\][Bvertex]{} , [*Vertex Algebras and Mirror Symmetry*]{}, Comm. Math. Phys. [**215**]{} (2001), no. 3, 517–557.
\[BreL\][bl]{} P. Bressler, V. Lunts, [*Intersection cohomology on nonrational polytopes*]{}, preprint math.AG/0002006.
\[C1\][c]{} D. A. Cox, [*The homogeneous coordinate ring of a toric variety*]{}, J. Algebraic Geom. [**4**]{} (1995), 17–50.
\[C2\][c2]{} , [*Recent developments in toric geometry*]{}, in Algebraic Geometry (Santa Cruz, 1995), Proceedings of Symposia in Pure Mathematics, [**62**]{}, Part 2, Amer. Math. Soc., Providence, 1997, 389–436.
\[CKat\][ck]{} D. A. Cox, S. Katz, [*Algebraic Geometry and Mirror Symmetry*]{}, Math. Surveys Monogr. [**68**]{}, Amer. Math. Soc., Providence, 1999.
\[ChR\][cr]{} W. Chen, Y. Ruan, [*A new cohomology theory for orbifold*]{}, preprint math.AG/0004129
\[D\][d]{} V. I. Danilov, [*The geometry of toric varieties*]{}, Russian Math. Surveys [**33**]{} (1978), 97–154.
\[DiHVW\][dhvw]{} L. Dixon, J. Harvey, C. Vafa, E. Witten, [*Strings on orbifolds I, II*]{}, Nucl. Physics, [**B261**]{} (1985), [**B274**]{} (1986).
\[DKh\][dk]{} V. Danilov, A. Khovanskii, [*Newton polyhedra and an algorithm for computing Hodge-Deligne numbers*]{}, Math. USSR-Izv. [**29**]{} (1987), 279–298.
\[E\][e]{} D. Eisenbud, [*Commutative algebra with a view toward algebraic geometry*]{}, Graduate Texts in Mathematics [**150**]{}, Springer-Verlag, New York, 1995.
\[F\][f]{} W. Fulton, [*Introduction to toric varieties*]{}, Princeton Univ. Press, Princeton, NJ, 1993.
\[G\][Greene]{}B. R. Greene, [*String theory on Calabi-Yau manifolds*]{}, Fields, strings and duality (Boulder, CO, 1996), 543–726, World Sci. Publishing, River Edge, NJ, 1997.
\[KoMo\][Kollar]{} J. Kollár, S. Mori, [*Birational geometry of algebraic varieties*]{}, With the collaboration of C. H. Clemens and A. Corti. Cambridge Tracts in Mathematics [**134**]{}. Cambridge University Press, Cambridge, 1998.
\[MaS\][MS]{} F. Malikov, V. Schechtman, [*Deformations of chiral algebras and quantum cohomology of toric varieties*]{}, preprint math.AG/0003170.
\[Mat\][ma]{} H. Matsumura, [*Commutative ring theory*]{}. Translated from the Japanese by M. Reid. Second edition. Cambridge Studies in Advanced Mathematics, 8. Cambridge University Press, Cambridge, 1989.
\[M1\][m1]{} A. R. Mavlyutov, [*Semiample hypersurfaces in toric varieties*]{}, Duke Math. J. [**101**]{} (2000), 85–116.
\[M2\][m2]{} , [*On the chiral ring of Calabi-Yau hypersurfaces in toric varieties*]{}, preprint math.AG/0010318.
\[M3\][m3]{} , [*The Hodge structure of semiample hypersurfaces and a generalization of the monomial-divisor mirror map*]{}, in Advances in Algebraic Geometry Motivated by Physics (ed. E. Previato), Contemporary Mathematics, [**276**]{}, 199–227.
\[P\][p]{} M. Poddar, [*Orbifold Hodge numbers of Calabi-Yau hypersurfaces*]{}, preprint math.AG/0107152.
\[S1\][stanley1]{} R. Stanley, [*Generalized $H$-vectors, Intersection Cohomology of Toric Varieties, and Related Results*]{}, Adv. Stud. in Pure Math. [**11**]{} (1987), 187–213.
\[S2\][stanley]{} , [*Subdivisions and local h-vectors*]{}, JAMS [**5**]{} (1992), 805–851.
|
{
"pile_set_name": "arxiv"
}
|
Located on the island of Kefalonia in Greece, this spectacular cave was lost for centuries until it was rediscovered in 1951 by Giannis Petrocheilos. Take a look at this beautiful cave system and the island it is part of.
The famous Myrtos Beach is also on this island.
In Greek mythology, Nymphs were believed to live in these caves.
|
{
"pile_set_name": "pile-cc"
}
|
The bantamweight champion of DEEP, Takafumi Otsuka, will take on Koichi Ishizuka on May 13th at Differ Ariake in Tokyo.
Otsuka was supposed to fight Fernando Vieira for the WSOF-GC bantamweight title in December. However, the Brazilian missed weight at the first weigh-in and never showed up at the second weigh-in. Vieira was nowhere to be found after this; he basically fled from the entire show.
Otsuka became the inaugural WSOF-GC champ, but this means the last time he fought was back in August of last year. That bout, however, was against a Mongolian fighter named Baataryn Azjavkhlan, who was 1-0 at the time.
In terms of competitive fights, his bout against Daisuke Engo in February 2016, more than a year ago, may be the last real test Otsuka went through.
Ishizuka was basically born and raised in DEEP, and he is undefeated in his last ten fights.
For Ishizuka, this must be the opportunity he has been looking for throughout his pro MMA career, so he has to be more motivated than ever.
The only concern is his recent change in training environment. Last year, Ishizuka moved to Aichi for work, which forced him to leave team Brightness Monma, and he joined team ALIVE, which is based in Aichi prefecture.
But Ishizuka has now left ALIVE, and his status is “independent.”
Besides this title fight between Otsuka and Ishizuka, men’s strawweight bout between Haruo Ochi and “Rambo” Kosuke is also confirmed.
These two met all the way back in May of 2011.
This fight took place in Shooto.
“Rambo” almost caught Ochi with an armbar in the first round, but Ochi came back and KO’d Kosuke in the second round. That was “Rambo”’s first pro defeat in seven fights.
|
{
"pile_set_name": "pile-cc"
}
|
The news of the 2015 remastering of Air Jordan retros has resulted in a load of early photos featuring next year’s Jordans. Normally at this time we’d be stuck pondering what was to come based off early product sheets and such, but this time around we’ve got high res previews of everything for your viewing pleasure. This time around: the Air Jordan 7 “French Blue”. So far the group of Spring 2015 Air Jordans has been a newer leaning group, and this retro+ colorway sticks with that trend. See the 2015 Air Jordan 7 “French Blue” below and watch for extended previews right here on Sneaker News.
|
{
"pile_set_name": "pile-cc"
}
|
ChatSua
ChatSua () is a Thai film based on a work by "Orawun" (lyu Sresawek). It was premièred on June 18, 1958, at Sala Chalermkrung Royal Theatre and Sala Chalermbure Royal Theatre. The film was directed by Prateb Gomonpis. It is a sequel to the 1956 film PraiKuarng.
The film was the screen debut of Mitr Chaibancha, as Wai Sukda, and stars Rewadee Sewilai, Win Wunchai, Narmkern bunnuk, Praphasee Satornkid, Naiyana TanomSub, Usanee Isaranun, NoppaMad Sirisopon, Punga Suttirin, Porn Paroch, Pramin Jarujarit, Sail Poonsai, Sompong pongmitr, Sukon Kueawleam and Lortok.
The film grossed over 800,000 baht. The critical response was mostly favourable.
References
Category:Thai films
Category:1958 films
|
{
"pile_set_name": "wikipedia_en"
}
|
Marine Air Control Group 38
Marine Air Control Group 38 (MACG-38) is a United States Marine Corps aviation command and control unit based at Marine Corps Air Station Miramar that is currently composed of five squadrons and one battalion that provide the 3rd Marine Aircraft Wing's tactical headquarters, positive and procedural control to aircraft, and air defense support for the I Marine Expeditionary Force.
Mission
Subordinate units
3rd Low Altitude Air Defense Battalion
Marine Air Control Squadron 1
Marine Air Support Squadron 3
Marine Tactical Air Command Squadron 38
Marine Wing Communications Squadron 38
History
Marine Air Control Group 38 was activated on September 1, 1967 at Marine Corps Air Station El Toro, California. The Group deployed to Saudi Arabia in August 1990 and later supported Operation Desert Storm. Elements of the group have supported Operation Restore Hope, Operation Safe Departure, Operation Southern Watch and Operation Stabilise. The group relocated to MCAS Miramar in October 1998. MACG-38 units began deploying to Kuwait in 2002, and the entire control group eventually took part in the 2003 invasion of Iraq and continued to deploy in support of Operation Iraqi Freedom through early 2009. They were headquartered at Al Asad Airbase in the Al Anbar Province from 2004 through the end of their last Iraq deployment in early 2009.
Most recently, the Group deployed to Camp Leatherneck, Afghanistan in March 2010. They were responsible for providing aviation command and control for the I Marine Expeditionary Force (I MEF) in support of Operation Enduring Freedom. They returned to the United States in the spring of 2011.
See also
United States Marine Corps Aviation
List of United States Marine Corps aircraft groups
List of United States Marine Corps aircraft squadrons
References
External links
Category:United States Marine Corps air control groups
Category:Military units and formations in California
|
{
"pile_set_name": "wikipedia_en"
}
|
---
abstract: 'Let $(R,{\mathfrak{m}})$ be a complete Noetherian local ring and let $M$ be a finite $R$–module of positive Krull dimension $n$. It is shown that any subset $T$ of ${\mbox{Assh}\,}_R(M)$ can be expressed as the set of attached primes of the top local cohomology module ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ for some ideal ${\mathfrak{a}}$ of $R$. Moreover if ${\mathfrak{a}}$ is an ideal of $R$ such that the set of attached primes of ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$, then ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\cong{\mbox{H}\, }^n_{\mathfrak{b}}(M)$ for some ideal ${\mathfrak{b}}$ of $R$ with ${\mbox{dim}\,}_R (R/{\mathfrak{b}})=1$.'
address:
- |
Mohammad T. Dibaei\
Faculty of Mathematical Sciences, Teacher Training University, Tehran, Iran, and Institute for Theoretical Physics and Mathematics (IPM), Tehran, Iran.
- |
Raheleh Jafari\
Faculty of Mathematical Sciences, Teacher Training University, Tehran, Iran
author:
- 'Mohammad T. Dibaei'
- Raheleh Jafari
title: |
Top local cohomology modules\
with specified attached primes
---
Introduction
============
Throughout $(R,{\mathfrak{m}})$ is a commutative Noetherian local ring with maximal ideal ${\mathfrak{m}}$, $M$ is a non-zero finite (i.e. finitely generated) $R$–module with positive Krull dimension $n:={\mbox{dim}\,}_R(M)$ and ${\mathfrak{a}}$ denotes an ideal of $R$. Recall that for an $R$–module $N$, a prime ideal ${\mathfrak{p}}$ of $R$ is said to be an [*attached prime*]{} of $N$, if ${\mathfrak{p}}={\mbox{Ann}\,}_R(N/K)$ for some submodule $K$ of $N$ (see [@MS]). The set of attached primes of $N$ is denoted by ${\mbox{Att}\,}_R(N)$. If $N$ is an Artinian $R$–module so that $N$ admits a reduced secondary representation $N=N_1+\cdots+N_r$ such that $N_i$ is ${\mathfrak{p}}_i$–secondary, $i=1,\ldots,r$, then ${\mbox{Att}\,}_R(N)=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_r\}$ is a finite set.
Denote by ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ the $n$th right derived functor of $$\Gamma_{\mathfrak{a}}(M)=\{x\in M|\, {\mathfrak{a}}^rx=0 \ \mbox{for some positive
integer} \ r \}$$ applied to $M$. It is well-known that ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ is an Artinian module. Macdonald and Sharp, in [@MS], studied ${\mbox{H}\, }^n_{\mathfrak{m}}(M)$ and showed that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{m}}(M))= {\mbox{Assh}\,}_R(M)$ where ${\mbox{Assh}\,}_R(M):=\{{\mathfrak{p}}\in
{\mbox{Ass}\,}_R(M)|\, {\mbox{dim}\,}_R(R/{\mathfrak{p}})=n\}$. It is shown in [@DY1 Theorem A], that for any arbitrary ideal ${\mathfrak{a}}$ of $R$, ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=\{{\mathfrak{p}}\in{\mbox{Ass}\,}_R(M)|\, {\mbox{H}\, }^n_{\mathfrak{a}}(R/{\mathfrak{p}})\neq 0\}$ which is a subset of ${\mbox{Assh}\,}_R(M)$. In [@DY2], the structure of ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ is studied by the first author and Yassemi and they showed that, in case $R$ is complete, for any pair of ideals ${\mathfrak{a}}$ and ${\mathfrak{b}}$ of $R$, if ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))={\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))$, then ${\mbox{H}\, }^n_{\mathfrak{a}}(M) \cong {\mbox{H}\, }^n_{\mathfrak{b}}(M)$. They also raised the following question in [@DY3 Question 2.9] which is the main object of this paper.
[**Question.**]{} For any subset $T$ of ${\mbox{Assh}\,}_R(M)$, is there an ideal ${\mathfrak{a}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M)) = T$?
This paper provides a positive answer for this question in the case $R$ is complete.
Main Result
===========
In this section we assume that $R$ is complete with respect to the ${\mathfrak{m}}$–adic topology. As mentioned above, ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{m}}(M)) =
{\mbox{Assh}\,}_R(M)$ and ${\mbox{Att}\,}_R({\mbox{H}\, }^n_R(M)) = \emptyset$. Also ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M)) \subseteq {\mbox{Assh}\,}_R(M)$ for all ideals ${\mathfrak{a}}$ of $R$. Our aim is to show that as ${\mathfrak{a}}$ varies over ideals of $R$, the set ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$ takes all possible subsets of ${\mbox{Assh}\,}_R(M)$ (see Theorem 2.8). In the following results we always assume that $T$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$.
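The arguments below repeatedly use the following special case of the Lichtenbaum–Hartshorne vanishing theorem; we record it here in the form in which it is applied (our paraphrase, for $R$ complete local and ${\mathfrak{p}}\in{\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/{\mathfrak{p}})=n$):

```latex
% Special case of the Lichtenbaum--Hartshorne theorem used throughout:
\operatorname{H}^{n}_{\mathfrak{a}}(R/\mathfrak{p})\neq 0
\quad\Longleftrightarrow\quad
\mathfrak{a}+\mathfrak{p}\ \text{is an }\mathfrak{m}\text{-primary ideal of }R.
```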
In our first result we find a characterization for a subset of ${\mbox{Assh}\,}_R (M)$ to be the set of attached primes of the top local cohomology of $M$ with respect to an ideal ${\mathfrak{a}}$.
Assume that $n:={\mbox{dim}\,}_R(M)\geq 1$ and that $T$ is a proper non-empty subset of ${\mbox{Assh}\,}_R(M)$. Set ${\mbox{Assh}\,}_R(M)\setminus
T=\{{\mathfrak{q}}_1,\ldots,{\mathfrak{q}}_r\}$. The following statements are equivalent.
1. There exists an ideal ${\mathfrak{a}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=T$.
2. For each $i,\,1\leq i\leq r$, there exists $Q_i\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q_i)=1$ such that $$\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_i \quad \mbox{and}
\quad {\mathfrak{q}}_i\subseteq Q_i.$$
With $Q_i,\, 1\leq i\leq r$, as above, ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=T$ where ${\mathfrak{a}}=\bigcap\limits_{i=1}^rQ_i$.
$(i)\Rightarrow (ii)$. By [@DY1 Theorem A], ${\mbox{H}\, }^n_{\mathfrak{a}}(R/{\mathfrak{p}})\neq 0$ for all ${\mathfrak{p}}\in T$, that is, ${\mathfrak{a}}+{\mathfrak{p}}$ is ${\mathfrak{m}}$–primary for all ${\mathfrak{p}}\in T$ (by the [Lichtenbaum-Hartshorne Theorem]{}). On the other hand, for $1\leq i\leq r$, ${\mathfrak{q}}_i\notin T$, which is equivalent to saying that ${\mathfrak{a}}+{\mathfrak{q}}_i$ is not an ${\mathfrak{m}}$–primary ideal. Hence there exists a prime ideal $Q_i\in {\mbox{Supp}\,}_R(M)$ such that ${\mbox{dim}\,}_R(R/Q_i)=1$ and ${\mathfrak{a}}+{\mathfrak{q}}_i\subseteq Q_i$. It follows that $\underset{{\mathfrak{p}}\in
T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_i$.\
$(ii)\Rightarrow (i)$. Set ${\mathfrak{a}}:=\bigcap\limits_{i=1}^rQ_i$. For each $i, 1\leq i\leq r$, ${\mathfrak{a}}+{\mathfrak{q}}_i\subseteq Q_i$ implies that ${\mathfrak{a}}+{\mathfrak{q}}_i$ is not ${\mathfrak{m}}$–primary and so ${\mbox{H}\, }^n_{\mathfrak{a}}(R/{\mathfrak{q}}_i)= 0$. Thus ${\mbox{Att}\,}_R{\mbox{H}\, }_{\mathfrak{a}}^n(M)\subseteq T$. Assume ${\mathfrak{p}}\in T$ and $Q\in
{\mbox{Supp}\,}(M)$ such that ${\mathfrak{a}}+{\mathfrak{p}}\subseteq Q$. Then $Q_i\subseteq Q$ for some $i, 1\leq i\leq r$. Since ${\mathfrak{p}}\nsubseteq Q_i$, we have $Q_i\neq
Q$, so $Q={\mathfrak{m}}$. Hence ${\mathfrak{a}}+{\mathfrak{p}}$ is an ${\mathfrak{m}}$–primary ideal. Now, by the [Lichtenbaum-Hartshorne Theorem]{} and by [@DY1 Theorem A], it follows that ${\mathfrak{p}}\in{\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$.
If ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\neq 0$, then there is an ideal ${\mathfrak{b}}$ of $R$ such that ${\mbox{dim}\,}_R(R/{\mathfrak{b}})\leq 1$ and ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\cong{\mbox{H}\, }^n_{\mathfrak{b}}(M)$.
If ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))= {\mbox{Assh}\,}_R(M)$, then ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\cong
{\mbox{H}\, }^n_{\mathfrak{m}}(M)$. Otherwise $n\geq 1$ and ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$ is a proper subset of ${\mbox{Assh}\,}_R(M)$. Set ${\mbox{Assh}\,}_R(M)\setminus
{\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M)):=\{{\mathfrak{q}}_1, \cdots, {\mathfrak{q}}_r\}$. By Proposition 2.1, there are $Q_i\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q_i)= 1, \ i=1, \cdots,
r$, such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))= {\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))$ with ${\mathfrak{b}}= {\bigcap\limits_{i=1}^rQ_i}$. Now, by [@DY2 Theorem 1.6], we have ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\cong {\mbox{H}\, }^n_{\mathfrak{b}}(M)$. As ${\mbox{dim}\,}(R/{\mathfrak{b}})= 1$, the proof is complete.
If ${\mbox{dim}\,}_R(M)=1$ then any subset $T$ of ${\mbox{Assh}\,}_R(M)$ is equal to the set ${\mbox{Att}\,}_R({\mbox{H}\, }^1_{\mathfrak{a}}(M))$ for some ideal ${\mathfrak{a}}$ of $R$.
With notations as in Proposition 2.1, we take $Q_i={\mathfrak{q}}_i$ for $i=1,\cdots, r$.
By a straightforward argument one may notice that the condition “complete” is superfluous, for if $T$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$, then $T={\mbox{Att}\,}_R({\mbox{H}\, }^1_{\mathfrak{a}}(M))$, where ${\mathfrak{a}}=\underset{{\mathfrak{p}}\in{\mbox{Assh}\,}_R(M)\setminus T}{\cap}{\mathfrak{p}}$.
The following example illustrates Proposition 2.1.
Set $R=k[[X,Y,Z,W]]$, where $k$ is a field and $X,Y,Z,W$ are independent indeterminates. Then $R$ is a complete Noetherian local ring with maximal ideal ${\mathfrak{m}}=(X,Y,Z,W)$. Consider prime ideals $${\mathfrak{p}}_1=(X,Y) \quad , \quad {\mathfrak{p}}_2=(Z,W)\quad , \quad {\mathfrak{p}}_3=(Y,Z)
\quad , \quad {\mathfrak{p}}_4=(X,W)$$ and set $\displaystyle
M=\frac{R}{{\mathfrak{p}}_1{\mathfrak{p}}_2{\mathfrak{p}}_3{\mathfrak{p}}_4}$ as an $R$–module, so that we have ${\mbox{Assh}\,}_R(M)=\{{\mathfrak{p}}_1,{\mathfrak{p}}_2,{\mathfrak{p}}_3,{\mathfrak{p}}_4\}$ and ${\mbox{dim}\,}_R(M)=2$. We get $\{{\mathfrak{p}}_i\}={\mbox{Att}\,}_R({\mbox{H}\, }^2_{{\mathfrak{a}}_i}(M))$, where ${\mathfrak{a}}_1={\mathfrak{p}}_2,
{\mathfrak{a}}_2={\mathfrak{p}}_1, {\mathfrak{a}}_3={\mathfrak{p}}_4, {\mathfrak{a}}_4={\mathfrak{p}}_3$, and $\{{\mathfrak{p}}_i,{\mathfrak{p}}_j\}={\mbox{Att}\,}_R({\mbox{H}\, }^2_{{\mathfrak{a}}_{ij}}(M))$, where $$\begin{array}{l}
{\mathfrak{a}}_{12}=(Y^2+YZ,Z^2+YZ,X^2+XW,W^2+WX),\\
{\mathfrak{a}}_{34}=(Z^2+ZW,X^2+YX,Y^2+YX,W^2+WZ),\\
{\mathfrak{a}}_{13}=(Z^2+XZ,W^2+WY,X^2+XZ),\\
{\mathfrak{a}}_{14}=(W^2+WY,Z^2+ZY,Y^2+YW),\\
{\mathfrak{a}}_{23}=(X^2+XZ,Y^2+WY,W^2+ZW),\\
{\mathfrak{a}}_{24}=(X^2+XZ,Y^2+WY,Z^2+ZW).\\
\end{array}$$ Finally, we have $\{{\mathfrak{p}}_i,{\mathfrak{p}}_j,{\mathfrak{p}}_k\}={\mbox{Att}\,}_R({\mbox{H}\, }^2_{{\mathfrak{a}}_{ijk}}(M))$, where ${\mathfrak{a}}_{123}=(X,W,Y+Z)$, ${\mathfrak{a}}_{234}=(X,Y,W+Z)$, ${\mathfrak{a}}_{134}=(Z,W,Y+X)$.\
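For the monomial primes of this example, both conditions in Proposition 2.1(ii) reduce to set inclusions among generator sets: for a prime $Q$, $\bigcap_{{\mathfrak{p}}\in T}{\mathfrak{p}}\subseteq Q$ holds iff ${\mathfrak{p}}\subseteq Q$ for some ${\mathfrak{p}}\in T$. A small Python sketch (ours, not part of the paper) searches for monomial witnesses $Q_i$ among the height-3 primes generated by variables:

```python
from itertools import combinations

VARS = {"X", "Y", "Z", "W"}
# The monomial primes of the example, each encoded by its set of generators.
PRIMES = {"p1": {"X", "Y"}, "p2": {"Z", "W"}, "p3": {"Y", "Z"}, "p4": {"X", "W"}}

def find_Q(T, q):
    """Look for a *monomial* height-3 prime Q (so dim R/Q = 1) with q <= Q and
    the intersection of the primes in T not contained in Q.  For a prime Q
    that intersection lies in Q iff some member of T does, so both conditions
    of Proposition 2.1(ii) become set inclusions of generator sets."""
    for gens in combinations(sorted(VARS), 3):
        Q = set(gens)
        if q <= Q and not any(PRIMES[name] <= Q for name in T):
            return Q
    return None

def criterion_holds(T):
    """Proposition 2.1(ii): a suitable Q_i must exist for every q_i outside T."""
    return all(find_Q(T, q) is not None
               for name, q in PRIMES.items() if name not in T)

# Singletons succeed with monomial witnesses, e.g. T = {p1}:
assert criterion_holds({"p1"})
# The pairs {p1,p2} and {p3,p4} admit no monomial witness, which is why the
# example must resort to non-monomial ideals such as a_12 and a_34 above.
assert not criterion_holds({"p1", "p2"})
```

The search confirms that every singleton admits monomial witnesses, while the two “disjoint” pairs do not, matching the non-monomial choice of ${\mathfrak{a}}_{12}$ and ${\mathfrak{a}}_{34}$ in the example.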
Assume that $n:={\mbox{dim}\,}_R(M)\geq 2$, and that $T$ is a non-empty subset of ${\mbox{Assh}\,}_R(M)$ such that $\underset{{\mathfrak{p}}\in T}{\bigcap} {\mathfrak{p}}\nsubseteq \underset
{{\mathfrak{q}}\in {\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})}{\bigcap} {\mathfrak{q}}$, where $T'={\mbox{Assh}\,}_R(M)\setminus T$. Then there exists a prime ideal $Q\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q)=1$ and ${\mbox{Att}\,}_R({\mbox{H}\, }^n_Q(M))=T.$
Set $s:={\mbox{ht}\,}_M(\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$. We have $s\leq n-1$, otherwise ${\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})= \{{\mathfrak{m}}\}$ which contradicts the condition $\underset{{\mathfrak{p}}\in T}{\bigcap} {\mathfrak{p}}\nsubseteq \underset
{{\mathfrak{q}}\in {\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})}{\bigcap} {\mathfrak{q}}$. As $R$ is catenary, we have ${\mbox{dim}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in
T'}{\mathfrak{p}})=n-s$. We first prove, by induction on $j$, $0\leq j\leq
n-s-1$, that there exists a chain of prime ideals $Q_0 \subset Q_1
\subset \cdots \subset Q_j \subset {\mathfrak{m}}$ such that $Q_0\in{\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$, ${\mbox{dim}\,}_R(R/Q_j)=n-s-j$ and $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_j$. There is $Q_0\in{\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$ such that $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_0$. Note that ${\mbox{dim}\,}_R(R/Q_0)={\mbox{dim}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})=n-s$. Now, assume that $0<j\leq n-s-1$ and that we have proved the existence of a chain $Q_0 \subset Q_1 \subset \cdots \subset Q_{j-1}$ of prime ideals such that $Q_0\in{\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$, ${\mbox{dim}\,}_R(R/Q_{j-1})=n-s-(j-1)$ and that $\underset{{\mathfrak{p}}\in
T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_{j-1}$. Note that we have $n-s-(j-1)=n-s+1-j\geq 2$. Therefore the set $V$ defined as\
$$\begin{array}{ll}
V= \{{\mathfrak{q}}\in {\mbox{Supp}\,}_R(M) |& Q_{j-1}\subset {\mathfrak{q}}\subset {\mathfrak{q}}'\subseteq
{\mathfrak{m}}, {\mbox{dim}\,}_R(R/{\mathfrak{q}})=n-s-j,\\ &
{\mathfrak{q}}'\in{\mbox{Spec}\,}(R)\, \mbox{and}\, {\mbox{dim}\,}_R(R/{\mathfrak{q}}')=n-s-j-1\}
\end{array}$$\
is non-empty and so, by Ratliff’s weak existence theorem [@M Theorem 31.2], is not finite. As $\underset{{\mathfrak{p}}\in
T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_{j-1}$, we have $Q_{j-1}\subset
Q_{j-1}+\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}$. If, for ${\mathfrak{q}}\in V$, $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\subseteq {\mathfrak{q}}$, then ${\mathfrak{q}}$ is a minimal prime of $Q_{j-1}+\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}$. As $V$ is an infinite set, there is $Q_j\in V$ such that $\underset{{\mathfrak{p}}\in
T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_j$. Thus the induction is complete. Now by taking $Q:=Q_{n-s-1}$ and by Proposition 2.1, the claim follows.
Assume that $n:={\mbox{dim}\,}_R(M)\geq 2$ and $T$ is a non-empty subset of ${\mbox{Assh}\,}_R(M)$ with $|T|=|{\mbox{Assh}\,}_R(M)|-1$. Then there is an ideal ${\mathfrak{a}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=T$.
Note that ${\mbox{Assh}\,}_R(M)\setminus T$ is a singleton set $\{{\mathfrak{q}}\}$, say, and so ${\mbox{ht}\,}_M({\mathfrak{q}})=0$ and $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq
{\mathfrak{q}}$. Therefore, by Lemma 2.5, the result follows.
Assume that $n:={\mbox{dim}\,}_R(M)\geq 2$ and ${\mathfrak{a}}_1$ and ${\mathfrak{a}}_2$ are ideals of $R$. Then there exists an ideal ${\mathfrak{b}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{1}}(M))\cap{\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{2}}(M))$.
Set $T_{1}={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{1}}(M))$ and $T_{2}={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{2}}(M))$. We may assume that $T_1\bigcap
T_2$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$. Assume that ${\mathfrak{q}}\in {\mbox{Assh}\,}_R(M)\setminus (T_1\bigcap T_2)=({\mbox{Assh}\,}_R(M)\setminus
T_1)\bigcup({\mbox{Assh}\,}_R(M)\setminus T_2) $. By Proposition 2.1, there exists $Q\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q)=1$ such that ${\mathfrak{q}}\subseteq Q$ and $\bigcap_{{\mathfrak{p}}\in T_1\bigcap T_2}{\mathfrak{p}}\nsubseteq
Q$. Now, by Proposition 2.1, again there exists an ideal ${\mathfrak{b}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))=T_1\bigcap T_2$.
Now we are ready to present our main result.
Assume that $T\subseteq {\mbox{Assh}\,}_R(M)$, then there exists an ideal ${\mathfrak{a}}$ of $R$ such that $T={\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$.
By Corollary 2.3, we may assume that ${\mbox{dim}\,}_R(M)\geq 2$ and that $T$ is a non-empty proper subset of ${\mbox{Assh}\,}_R(M)$. Set $T=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_t\}$ and ${\mbox{Assh}\,}_R(M)\setminus
T=\{{\mathfrak{p}}_{t+1},\ldots,{\mathfrak{p}}_{t+r}\}$. We use induction on $r$. For $r=1$, Corollary 2.6 proves the first step of the induction. Assume that $r>1$ and that the case $r-1$ is proved. Set $T_1=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_t,{\mathfrak{p}}_{t+1}\}$ and $T_2=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_t,{\mathfrak{p}}_{t+2}\}$. By the induction assumption there exist ideals ${\mathfrak{a}}_1$ and ${\mathfrak{a}}_2$ of $R$ such that $T_1={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_1}(M))$ and $T_2={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_2}(M))$. Now by Lemma 2.7 there exists an ideal ${\mathfrak{a}}$ of $R$ such that $T=T_1\bigcap T_2={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}}(M))$.
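The combinatorial core of this induction can be phrased set-theoretically (our illustration, not part of the paper): Corollary 2.6 realizes every co-singleton subset of ${\mbox{Assh}\,}_R(M)$, Lemma 2.7 closes the realizable sets under pairwise intersection, and every proper non-empty subset is an intersection of the co-singletons containing it:

```python
from functools import reduce

assh = {"p1", "p2", "p3", "p4"}          # stand-in for Assh_R(M)

def realized(T, assh):
    """Express a proper non-empty subset T as an intersection of co-singleton
    sets, each of which Corollary 2.6 realizes as Att of a top local
    cohomology module; Lemma 2.7 then realizes the intersection itself."""
    co_singletons = [assh - {q} for q in assh - T]
    return reduce(set.intersection, co_singletons)

assert realized({"p2"}, assh) == {"p2"}
assert realized({"p1", "p4"}, assh) == {"p1", "p4"}
```

The trivial cases $T={\mbox{Assh}\,}_R(M)$ and $T=\emptyset$ are covered directly by ${\mathfrak{a}}={\mathfrak{m}}$ and ${\mathfrak{a}}=R$, respectively.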
(See [@C Corollary 1.7]) With the notations as in Theorem 2.8, the number of non-isomorphic top local cohomology modules of $M$ with respect to all ideals of $R$ is equal to $2^{|{\mbox{Assh}\,}_R (M)|}$.
This follows from Theorem 2.8 and [@DY2 Theorem 1.6].
[**Acknowledgment.**]{} The authors would like to thank the referee for her/his comments.
[10]{}
F. W. Call, *On local cohomology modules*, J. Pure Appl. Algebra **43** (1986), no. 2, 111–117.
M. T. Dibaei and S. Yassemi, *Some rigidity results for highest order local cohomology modules*, Algebra Colloq., to appear.
M. T. Dibaei and S. Yassemi, *Top local cohomology modules*, Algebra Colloq., to appear.
M. T. Dibaei and S. Yassemi, *Attached primes of the top local cohomology modules with respect to an ideal*, Arch. Math. (Basel) **84** (2005), no. 4, 292–297.
I. G. Macdonald and R. Y. Sharp, *An elementary proof of the non-vanishing of certain local cohomology modules*, Quart. J. Math. Oxford **23** (1972), 197–204.
H. Matsumura, *Commutative ring theory*, Cambridge Studies in Advanced Mathematics, 8. Cambridge University Press, Cambridge, 1986.
Promethium: uses
The following uses for promethium are gathered from a number of sources as well as from anecdotal comments. I'd be delighted to receive corrections as well as additional referenced uses (please use the feedback mechanism to add uses).
shows promise as a portable X-ray unit
possibly useful as a heat source to provide auxiliary power for space probes and satellites
Susy and Geno, Inseparable!
Susy and Geno’s long-awaited reunion finally took place on March 11 at Market-Market Mall in Taguig!
A few weeks ago, Susy started a massive search for her missing friend Geno. Susy even put up a Facebook page where all info, photos and videos related to the search were posted.
Finally after weeks of anticipation, Susy and Geno reunited again where the two met up not only with each other but with their loyal and very enthusiastic supporters, waving banners and placards expressing their unwavering support.
Geno arrived at the activity center holding a fresh bouquet for Susy. It was a wonderful day for Susy and Geno and for their solid fans club. After long years of waiting, the two best friends shared a long and warm embrace.
Check out this YouTube video dance performance from Susy and Geno!
The two gladly gave a dance number that people requested. Afterwards, the pair mingled with the crowd, who grabbed the chance to take photos with them.
The reunion was also the first public appearance in many years for the faces of Sustagen Milk in the 80’s and 90’s, who disappeared from the public eye, only to re-emerge two decades later, starting with Susy’s return last February. Only then would we find out that she and Geno had actually lost touch through the years.
Meanwhile, Susy and Geno’s friends from Sustagen also did their part, providing free milk for all guests and fans.
It was a lovely day for Susy and Geno and for their loyal supporters. I’m sure happy memories came to you as you watched them reunited.
Ratno Dolne
Ratno Dolne () is a village in the administrative district of Gmina Radków, within Kłodzko County, Lower Silesian Voivodeship, in south-western Poland.
It lies approximately east of Radków, north-west of Kłodzko, and south-west of the regional capital Wrocław.
---
abstract: 'We present here a microscopic analysis of the cooperative light scattering on an atomic system consisting of $\Lambda$-type configured atoms with the spin-degenerate ground state. The results are compared with a similar system consisting of standard “two-level” atoms of the Dicke model. We discuss advantages of the considered system in context of its possible implications for light storage in a macroscopic ensemble of dense and ultracold atoms.'
address: |
${}^1$Department of Theoretical Physics, St-Petersburg State Polytechnic University, 195251, St.-Petersburg, Russia\
${}^2$Department of Physics, St-Petersburg State University, 198504, St-Petersburg, Russia
author:
- 'A.S. Sheremet${}^1$, A.D. Manukhova${}^2$, N.V. Larionov${}^1$, D.V. Kupriyanov${}^1$'
title: |
Cooperative light scattering on an atomic system with\
degenerate structure of the ground state
---
Introduction
============
A significant range of studies of ultracold atomic systems has focused on their complex quantum behavior in various interaction processes. Among these, special attention has been paid to the quantum interface between light and matter, and to quantum memory in particular [@PSH; @Simon; @SpecialIssueJPhysB]. Most of the schemes for light storage in atomic ensembles are based on the idea of the $\Lambda$-type conversion of a signal pulse into the long-lived spin coherence of the atomic ground state. The electromagnetically induced transparency (EIT) protocol in a warm atomic ensemble was successfully demonstrated in Ref. [@NGPSLW], and also in Ref. [@CDLK], where a single-photon entangled state was stored in two ensembles of cold atoms with an efficiency of 17%. Recent experiments on the conversion of a spin polariton mode into a cavity mode with efficiency close to 90% [@STTV] and on narrow-bandwidth biphoton preparation in a double $\Lambda$-system under EIT conditions [@DKBYH] show promising potential for developing a quantum interface between light and atomic systems. However, further improvement of atomic memory efficiencies is a challenging and not straightforward experimental task. In the case of warm atomic vapors, any increase of the sample optical depth meets a serious barrier for the EIT effect because of the rather complicated and mainly negative role of atomic motion and Doppler broadening, which manifest in destructive interference among the different hyperfine transitions of alkali-metal atoms [@MSLOFSBKLG]. In the case of an ultracold and dilute atomic gas, which can be prepared in a magneto-optical trap (MOT), optical depths around hundreds are feasible for some experimental designs [@FGCK], but there are certain challenges in accumulating so many atoms and making such a system controllable. One possible solution requires special arrangements for effective light storage in a MOT in the diffusion regime; see Ref. [@GSOH].
Recent progress in experimental studies of the light localization phenomenon in dense and strongly disordered atomic systems [@Kaiser; @BHSK] encourages us to think that the storage protocols for light could be organized more effectively if the atoms interacted with the field cooperatively in a dense configuration. If an atomic cloud contains more than one atom in the volume scaled by the radiation wavelength, the essential optical thickness can be attained with a smaller number of atoms than is typically needed in a dilute configuration. In the present paper we address the problem of light scattering by such an atomic system, which has intrinsically cooperative behavior. Although the problems of cooperative or dependent light scattering and the super-radiance phenomenon have been well established in atomic physics and quantum optics for decades (see Refs. [@BEMST; @Akkermns]), a microscopic analysis for atoms with degenerate ground states is still quite poorly represented in the literature [@Grubellier]. The microscopic calculations reported so far have been done mostly for “two-level” atoms and were basically motivated by the problem of a mesoscopic description of light transport through disordered media and by Anderson-type localization, where the transition from weak to strong disorder plays a crucial role; see Refs. [@RMO; @GeroAkkermns; @SKKH].
In this paper we develop a microscopic theory of cooperative light scattering from an atomic system consisting of $\Lambda$-type configured atoms with the spin-degenerate ground state. The results are compared with a similar system of “two-level” atoms of the Dicke model. We discuss advantages of the considered system in the context of its possible implications for the problem of light storage in a macroscopic ensemble of dense and ultracold atoms.
Theoretical framework
=====================
Transition amplitude and the scattering cross section
-----------------------------------------------------
The quantum-posed description of the photon scattering problem is based on the formalism of $T$ matrix, which is defined by $$\hat{T}(E)=\hat{V}+\hat{V}\frac{1}{E-\hat{H}}\hat{V},%
\label{2.1}%$$ where $\hat{H}$ is the total Hamiltonian consisting of the nonperturbed part $\hat{H}_0$ and an interaction term $\hat{V}$ such that $\hat{H}=\hat{H}_0+\hat{V}$. The energy argument $E$ is an arbitrary complex parameter in Eq.(\[2.1\]). Then the scattering process, evolving from initial state $|i\rangle$ to the final state $|f\rangle$, is expressed by the following relation between the differential cross section and the transition amplitude, given by the relevant $T$-matrix element considered as a function of the initial energy $E_i$: $$d\sigma_{i\to f}=\frac{{\cal V}^2}{\hbar^2
c^4}\frac{\omega'^2}{(2\pi)^2}%
\left|T_{g'\mathbf{e}'\mathbf{k}',g\,\mathbf{e\,k}}(E_i+i0)\right|^2d\Omega%
\label{2.2}%$$ Here the initial state $|i\rangle$ is specified by the incoming photon’s wave vector $\mathbf{k}$, frequency $\omega\equiv\omega_k=c\,k$, and polarization vector $\mathbf{e}$, and the atomic system populates a particular ground state $|g\rangle$. The final state $|f\rangle$ is specified by a similar set of quantum numbers, which are additionally distinguished by the prime sign, and the solid angle $\Omega$ is directed along the wave vector of the outgoing photon $\mathbf{k}'$. The presence of the quantization volume ${\cal V}$ in this expression is caused by the second quantized structure of the interaction operators; see below. The scattering process conserves the energy of the input and output channels, such that $E_i=E_f$.
Our description of the interaction of the electromagnetic field with an atomic system is performed in the dipole approximation. This means that the original Hamiltonian, introduced in the Coulomb gauge and valid for any neutral charge system, has been unitarily transformed to the dipole-type interaction under the assumption that the atomic size is much smaller than a typical wavelength of the field modes actually contributing to the interaction dynamics. Such a long-wavelength dipole approximation (see Ref. [@ChTnDRGr] for derivation details) leads to the following interaction Hamiltonian for an atomic ensemble consisting of $N$ dipole-type scatterers interacting with the quantized electromagnetic field: $$\begin{aligned}
\hat{V}&=&-\sum_{a=1}^{N}%
\hat{\mathbf{d}}^{(a)}\hat{\mathbf{E}}(\mathbf{r}_a)+\hat{H}_{\mathrm{self}},%
\nonumber\\%
\hat{H}_{\mathrm{self}}&=&\sum_{a=1}^{N}\frac{2\pi}{{\cal V}}\sum_{s}\left(\mathbf{e}_s\hat{\mathbf{d}}^{(a)}\right)^2%
\label{2.3}%\end{aligned}$$ The first and most important term is normally interpreted as interaction of an $a$th atomic dipole $\mathbf{d}^{(a)}$ with electric field $\hat{\mathbf{E}}(\mathbf{r})$ at the point of dipole location. However, strictly defined in the dipole gauge, the latter quantity performs the microscopic displacement field, which can be expressed by a standard expansion in the basis of plane waves $s\equiv{\mathbf{k},\alpha}$ (where $\alpha=1,2$ numerates two orthogonal transverse polarization vectors $\mathbf{e}_s\equiv\mathbf{e}_{\mathbf{k}\alpha}$ for each $\mathbf{k}$) $$\begin{aligned}
\lefteqn{\hat{\mathbf{E}}(\mathbf{r})\equiv \hat{\mathbf{E}}^{(+)}(\mathbf{r})%
+\hat{\mathbf{E}}^{(-)}(\mathbf{r})}%
\nonumber\\%
&&=\sum_{s}\left(\frac{2\pi\hbar\omega_s}{{\cal V}}\right)^{1/2}%
\left[i\mathbf{e}_s a_s\mathrm{e}^{i\mathbf{k}_s\mathbf{r}}%
-i\mathbf{e}_s a_s^{\dagger}\mathrm{e}^{-i\mathbf{k}_s\mathbf{r}}\right]%
\nonumber\\%
&&=\hat{\mathbf{E}}_{\bot}(\mathbf{r})+\sum_{b=1}^{N}\frac{4\pi}{{\cal V}}%
\sum_{s}\mathbf{e}_s(\mathbf{e}_s\hat{\mathbf{d}}^{(b)})%
\mathrm{e}^{i\mathbf{k}_s(\mathbf{r}-\mathbf{r}_b)}%
\label{2.4}%\end{aligned}$$Here $a_s$ and $a_s^{\dagger}$ are the annihilation and creation operators for the $s$th field’s mode and the quantization scheme includes the periodic boundary conditions in the quantization volume ${\cal V}$. The bottom line in Eq.(\[2.4\]) indicates the important difference between the actual transverse electric field denoted as $\hat{\mathbf{E}}_{\bot}(\mathbf{r})$ and the displacement field. The difference cannot be ignored at the distances comparable with either atomic size or the radiation wavelength, which is the subject of the present report. For such a dense configuration the definitions (\[2.3\]) and (\[2.4\]) should be clearly understood.
Let us make a few remarks. The second term in Eq.(\[2.3\]) reveals a nonconverging self-energy (self-action) of the dipoles. This term is often omitted in practical calculations since it does not principally affect the dipoles’ dynamics, particularly when the difference between transverse electric and displacement fields is small. It can be also formally incorporated into the internal Hamiltonian associated with the atomic dipoles. However, as was pointed out in Ref. [@SKKH] via tracing the Heisenberg dynamics of atomic variables, the self-action term is mostly compensated by the self-contact dipole interaction. The latter manifests itself in the dipoles’ dynamics when $\mathbf{r}=\mathbf{r}_a=\mathbf{r}_b$ for interaction of a specific $a$-th dipole in Hamiltonian (\[2.3\]) with the longitudinal field created by the same dipole in the second term in Eq. (\[2.4\]). Both these nonconverging self-action and self-contact interaction terms can be safely renormalized in evaluation of a single-particle contribution into the self-energy part of the perturbation theory expansion for the resolvent operator; see below.
Resolvent operator and $N$-particle Green’s function {#II.B}
----------------------------------------------------
The transition amplitude (\[2.1\]) can be simplified if we substitute into it the interaction operator (\[2.3\]), keeping only the terms that annihilate the incoming photon in the input state and create the outgoing photon in the output state. Such a simplification is in accordance with the standard rotating wave approximation, which is safely fulfilled for a near-resonance scattering process. As a consequence of this approximation the transition amplitude is determined by the complete resolvent operator projected onto the vacuum state of the field subsystem and onto the singly excited state of the atomic subsystem: $$\tilde{\hat{R}}(E)=\hat{P}\,\hat{R}(E)\,\hat{P}\equiv \hat{P}\frac{1}{E-\hat{H}}\hat{P}.%
\label{2.5}%$$ Here we defined the following projector $$\begin{aligned}
\lefteqn{\hspace{-0.8cm}\hat{P}=\sum_{a=1}^{N}\;\sum_{\{m_j\},j\neq a}\;\sum_{n}%
|m_1,\ldots,m_{a-1},n,m_{a+1},\ldots m_N\rangle}%
\nonumber\\%
&&\hspace{-0.5cm}\langle m_1,\ldots,m_{a-1},n,m_{a+1},\ldots,m_N|\times|0\rangle\langle 0|_{\mathrm{Field}}%
\label{2.6}%\end{aligned}$$ which selects in the atomic Hilbert subspace the entire set of the states where any $j$th of $N-1$ atoms populates a Zeeman sublevel $|m_j\rangle$ in its ground state and one specific $a$th atom (with $a$ running from $1$ to $N$ and $j\neq a$) populates a Zeeman sublevel $|n\rangle$ of its excited state. The field subspace is projected onto its vacuum state and the operator $\tilde{\hat{R}}(E)$ can be further considered as a matrix operator acting only in atomic subspace. The elements of the $T$ matrix can be directly expressed by the resolvent operator as follows: $$\begin{aligned}
\lefteqn{T_{g'\mathbf{e}'\mathbf{k}',g\,\mathbf{e\,k}}(E)=\frac{2\pi\hbar\sqrt{\omega'\omega}}{{\cal V}}%
\sum_{b,a=1}^{N}\;\sum_{n',n}}%
\nonumber\\%
&&\hspace{1 cm}(\mathbf{d}\mathbf{e}')_{n'm'_b}^{*}(\mathbf{d}\mathbf{e})_{nm_a}%
\mathrm{e}^{-i\mathbf{k}'\mathbf{r}_b+i\mathbf{k}\mathbf{r}_a}%
\nonumber\\%
&&\langle\ldots m'_{b-1},n',m'_{b+1}\ldots |\tilde{\hat{R}}(E)%
|\ldots m_{a-1},n,m_{a+1}\ldots \rangle%
\nonumber\\%
&&\label{2.7}%\end{aligned}$$ This is a generalization of the well-known Kramers-Heisenberg formula [@BerstLifshPitvsk] to the scattering of a photon by a many-particle system of atomic dipoles. The selected matrix element runs over all the possibilities in which the incoming photon is absorbed by any $a$th atom and the outgoing photon is emitted by any $b$th atom of the ensemble, including the possible coincidence $a=b$. The initial atomic state is given by $|g\rangle\equiv|m_1,\ldots,m_N\rangle$ and the final atomic state is given by $|g'\rangle\equiv|m'_1,\ldots,m'_N\rangle$.
The projected resolvent operator contributing to Eq. (\[2.7\]) is defined in a Hilbert subspace of finite size with dimension $d_eN\,d_g^{N-1}$, where $d_e$ is the degeneracy of the atomic excited state and $d_g$ is the degeneracy of its ground state. The matrix elements of the operator $\tilde{\hat{R}}(E)$ can be linked with the $N$-particle causal Green’s function of the atomic subsystem via the following Laplace-type integral transformation: $$\begin{aligned}
\lefteqn{\langle\ldots m'_{b-1},n',m'_{b+1}\ldots |\tilde{\hat{R}}(E)%
|\ldots m_{a-1},n,m_{a+1}\ldots \rangle}%
\nonumber\\%
&&\times\delta\left(\mathbf{r}'_1-\mathbf{r}_1\right)\ldots\delta\left(\mathbf{r}'_b-\mathbf{r}_b\right)\ldots%
\delta\left(\mathbf{r}'_a-\mathbf{r}_a\right)\ldots\delta\left(\mathbf{r}'_N-\mathbf{r}_N\right)%
\nonumber\\%
&&=-\frac{i}{\hbar}\int_0^{\infty}dt\,\,\exp\left[+\frac{i}{\hbar}E\,t\right]\,%
\nonumber\\%
G^{(N)}\left(1',t;\ldots ;b',t;\ldots ;N',t|1,0;\ldots ;a,0;\ldots;N,0\right)%
\label{2.8}%\end{aligned}$$ where on the right side we denoted $j=m_j,\mathbf{r}_j$ (for $j\neq
a$) and $j'=m'_j,\mathbf{r}'_j$ (for $j'\neq b$), and for the specific atoms $a=n,\mathbf{r}_a$ and $b'=n',\mathbf{r}'_b$. Here $\mathbf{r}_j=\mathbf{r}'_j$, for any $j=1,\ldots,N$, is the spatial location of the $j$th atom, which is assumed to be conserved in the scattering process. This circumstance is expressed by the sequence of $\delta$ functions in Eq. (\[2.8\]). The causal Green’s function is given by the vacuum expectation value of the following chronologically ($T$)-ordered product of atomic second quantized $\Psi$ operators introduced in the Heisenberg representation $$\begin{aligned}
\lefteqn{\hspace{-0.5cm}G^{(N)}\left(1',t'_1;\ldots ;b',t'_b;\ldots ;N',t'_N|1,t_1;\ldots ;a,t_a;\ldots;N,t_N\right)}%
\nonumber\\%
&&=\langle T \Psi_{m'_1}(\mathbf{r}'_1,t'_1)\ldots\Psi_{n'}(\mathbf{r}'_b,t'_b)\ldots%
\Psi_{m'_N}(\mathbf{r}'_N,t'_N)%
\nonumber\\%
&&\Psi_{m_N}^{\dagger}(\mathbf{r}_N,t_N)\ldots%
\Psi_{n}^{\dagger}(\mathbf{r}_a,t_a)\ldots\Psi_{m_1}^{\dagger}(\mathbf{r}_1,t_1)\rangle,%
\nonumber\\%
\label{2.9}%\end{aligned}$$ where $\Psi_{\ldots}(\ldots)$ and $\Psi_{\ldots}^{\dagger}(\ldots)$ are respectively the annihilation and creation operators for an atom in a particular state and coordinate. All the creation operators in this product contribute to transform (\[2.8\]) at the initial time “$0$” and all the annihilation operators are considered at a later time $t>0$. That allows us to ignore effects of either bosonic or fermionic quantum statistics associated with the atomic subsystem, as long as we neglect any possible overlap in atomic locations and consider the atomic dipoles as classical objects randomly distributed in space. We ordered the operators in Eq. (\[2.9\]) in such a way that in the fermionic case (under the anticommutation rule) and without interaction it generates the product of independent individual single-particle Green’s functions associated with each atom, with positive overall sign.
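As a side check of the subspace dimension $d_eN\,d_g^{N-1}$ quoted above, a minimal Python sketch (the degeneracies are those of the example systems considered later in the text) reproduces both the $405\times405$ resolvent matrix used below for five $\Lambda$-type atoms and the linear $3N$ scaling for V-type atoms:

```python
def resolvent_dim(N, d_g, d_e):
    """Dimension of the single-excitation subspace: one of N atoms
    occupies one of d_e excited sublevels, while the remaining N-1
    atoms each occupy one of d_g ground sublevels."""
    return d_e * N * d_g ** (N - 1)

# Lambda-type atoms: F0 = 1 ground (d_g = 3), F = 0 excited (d_e = 1)
print(resolvent_dim(5, 3, 1))   # 405 -> the 405 x 405 resolvent matrix
# V-type atoms: F0 = 0 ground (d_g = 1), F = 1 excited (d_e = 3)
print(resolvent_dim(5, 1, 3))   # 15 -> linear scaling, 3N
```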
The perturbation theory expansion of the $N$-particle Green’s function (\[2.9\]) can be visualized by a series of diagrams in accordance with the standard rules of the vacuum diagram technique; see Ref. [@BerstLifshPitvsk]. After rearrangement the diagram expansion can be transformed to the following generalized Dyson equation: $$\scalebox{1.0}{\includegraphics*{eq2.10.eps}}%
\label{2.10}$$ where the long straight lines with arrows correspond to the individual causal single-particle Green’s functions of each atom in the ensemble, such that the first term on the right side represents the undisturbed $N$-particle propagator (\[2.9\]). The dashed block edged by short lines with arrows is the complete collective $N$-particle Green’s function dressed by the interaction. In each diagram block of equation (\[2.10\]) we indicated by $a,b,c$ (running from $1$ to $N$) the presence of one specific input as well as an output line associated with the single excited state equally shared by all the atoms of the ensemble. The sum of tight diagrams, which cannot be reduced to the product of lower order contributions linked by undisturbed atomic propagators, builds the block of the so-called self-energy part $\Sigma$. In its analytical form the diagram equation (\[2.10\]) is an integral equation for $G^{(N)}(\ldots)$. After transformation to the energy representation (\[2.8\]) the integral equation can be recast as a set of algebraic equations for the matrix of the projected resolvent operator $\tilde{\hat{R}}(E)$, which can then be solved numerically. The crucial requirement for this is knowledge of the self-energy part (the quasi-energy operator acting in the atomic subspace), which, as we show below, can be approximated by the lower orders of the perturbation theory expansion.
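The numerical step just described reduces, once the self-energy matrix is specified, to a matrix inversion. The sketch below illustrates only this final step; the dimension and the toy uniform self-energy are placeholder assumptions, not the multilevel matrices of the actual calculation:

```python
import numpy as np

def resolvent(E, H0, Sigma):
    """Solve the algebraic system (E - H0 - Sigma) R = 1 for the
    projected resolvent matrix R(E) by direct inversion."""
    dim = H0.shape[0]
    return np.linalg.inv(E * np.eye(dim) - H0 - Sigma)

dim = 4                               # e.g. the 4 x 4 block of Sec. III
H0 = np.diag(np.full(dim, 1.0))       # undisturbed excited energies (hbar*w0 = 1)
Sigma = -0.005j * np.eye(dim)         # toy self-energy: uniform decay -i*gamma/2
R = resolvent(1.01, H0, Sigma)        # probe detuned by 0.01 from resonance
print(R[0, 0])                        # 1/(0.01 + 0.005j) = 80 - 40j
```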
The self-energy part
--------------------
In the lower order of perturbation theory the self-energy part consists of two contributions having single-particle and double-particle structures. Each specific line in the graph equation (\[2.10\]) associated with excitation of an $a$th atom generates the following irreducible self-energy diagram: $$\begin{aligned}
\lefteqn{\scalebox{1.0}{\includegraphics*{eq2.11.eps}}}
\nonumber\\%
&&\Rightarrow\sum_{m}\int\frac{d\omega}{2\pi} d^{\mu}_{n'm}d^{\nu}_{mn}%
iD^{(E)}_{\mu\nu}(\mathbf{0},\omega)%
\nonumber\\%
&&\times\frac{1}{E-\hbar\omega-E_m+i0}%
\equiv\Sigma^{(a)}_{n'n}(E),%
\label{2.11}\end{aligned}$$ which is analytically decoded with applying transformation (\[2.8\]) in the energy representation. Here the internal wavy line expresses the causal-type vacuum Green’s function of the chronologically ordered polarization components of the field operators $$iD^{(E)}_{\mu\nu}(\mathbf{R},\tau)=\left\langle T\hat{E}_{\mu}(\mathbf{r}',t')%
\hat{E}_{\nu}(\mathbf{r},t)\right\rangle,%
\label{2.12}%$$ which depends only on difference of its arguments $\mathbf{R}=\mathbf{r}'-\mathbf{r}$ and $\tau=t'-t$ and has the following Fourier image: $$\begin{aligned}
\lefteqn{D^{(E)}_{\mu\nu}(\mathbf{R},\omega)=\int_{-\infty}^{\infty} d\tau\,\mathrm{e}^{i\omega\tau}%
D^{(E)}_{\mu\nu}(\mathbf{R},\tau)}%
\nonumber\\%
&=&-\hbar\frac{|\omega|^3}{c^3}\left\{i\frac{2}{3}h^{(1)}_0\left(\frac{|\omega|}{c}R\right)\delta_{\mu\nu}\right.%
\nonumber\\%
&&\left.+\left[\frac{X_{\mu}X_{\nu}}{R^2}-\frac{1}{3}\delta_{\mu\nu}\right]%
ih^{(1)}_2\left(\frac{|\omega|}{c}R\right)\right\};%
\label{2.13}%\end{aligned}$$ see Ref. [@BerstLifshPitvsk]. Here $h^{(1)}_L(\ldots)$ with $L=0,2$ are the spherical Hankel functions of the first kind. As follows from Eq. (\[2.11\]), the Green’s function (\[2.13\]) enters that expression in a self-interacting form with spatial argument $\mathbf{R}\to\mathbf{0}$. As a consequence the expression (\[2.11\]) diverges in the limit $R\to 0$ and the integration over $\omega$ is also divergent. Part of the divergent terms should be associated with the longitudinal self-contact interaction. These terms are compensated by the dipolar self-action; see Eq. (\[2.3\]) and the related remark given above. The residual divergence has a radiative nature and demonstrates the general incorrectness of the Lamb-shift calculation in the assumptions of the long-wavelength dipole approximation. Finally we follow the standard renormalization rule, $$\begin{aligned}
\Sigma^{(a)}_{n'n}(E)&=&\Sigma^{(a)}(E)\delta_{n'n},%
\nonumber\\%
\Sigma^{(a)}(E)&\approx&\Sigma^{(a)}(\hbar\omega_0)=\hbar\Delta_{\mathrm{L}}-i\hbar\frac{\gamma}{2},%
\label{2.14}%\end{aligned}$$ where $\Delta_{\mathrm{L}}\to\infty$ is incorporated into the physical energy of the atomic state. To introduce the single-atom natural decay rate $\gamma$ we applied the Wigner-Weisskopf pole approximation and substituted the energy $E=\hbar\omega_k+E_g$ by its near-resonance mean estimate $E\approx E_n$, with the assumption that the atomic ground state is the zero-energy level such that $E_g=\sum_{j=1}^{N}E_{m_j}=E_m=0$. Then the energy of the excited state is given by $E_n=\hbar\omega_0$, where $\omega_0$ is the transition frequency.
In the lower order of perturbation theory, the double-particle contribution to the self-energy part consists of two complementary diagrams: $$\begin{aligned}
\lefteqn{\scalebox{1.0}{\includegraphics*{eq2.15.eps}}}
\nonumber\\%
&&\hspace{-0.5cm}\Rightarrow\int\frac{d\omega}{2\pi} d^{\mu}_{n'm}d^{\nu}_{m'n}%
iD^{(E)}_{\mu\nu}(\mathbf{R}_{ab},\omega)%
\nonumber\\%
&&\hspace{-0.5cm}\times\frac{1}{E-\hbar\omega-E_m-E_{m'}+i0}%
\equiv\Sigma^{(ab+)}_{m'n';nm}(E)
\label{2.15}\end{aligned}$$ and $$\begin{aligned}
\lefteqn{\scalebox{1.0}{\includegraphics*{eq2.16.eps}}}
\nonumber\\%
&&\hspace{-0.5cm}\Rightarrow\int\frac{d\omega}{2\pi} d^{\mu}_{n'm}d^{\nu}_{m'n}%
iD^{(E)}_{\mu\nu}(\mathbf{R}_{ab},\omega)%
\nonumber\\%
&&\frac{1}{E+\hbar\omega-E_n-E_{n'}+i0}%
\equiv\Sigma^{(ab-)}_{m'n';nm}(E),
\label{2.16}\end{aligned}$$ which are responsible for the excitation transfer from atom $a$ to atom $b$ separated by a distance $R_{ab}$. The vector components of the dipole matrix elements $d^{\nu}_{m'n}$ and $d^{\mu}_{n'm}$ are associated with atoms $a$ and $b$, respectively. In the pole approximation $E\approx E_n=\hbar\omega_0$ the $\delta$-function features dominate in the spectral integrals (\[2.15\]) and (\[2.16\]) and the sum of both terms gives $$\begin{aligned}
\Sigma^{(ab)}_{m'n';nm}(E)&\approx&\Sigma^{(ab+)}_{m'n';nm}(\hbar\omega_0)+%
\Sigma^{(ab-)}_{m'n';nm}(\hbar\omega_0)%
\nonumber\\%
&=&\frac{1}{\hbar}\,d^{\mu}_{n'm}d^{\nu}_{m'n}\,D^{(E)}_{\mu\nu}(\mathbf{R}_{ab},\omega_0).%
\label{2.17}%\end{aligned}$$ The derived expression has a clear physical meaning. For closely spaced atoms the real component of the double-particle contribution to the self-energy part reproduces the static interaction between two atomic dipoles. Its imaginary component is responsible for the formation of the cooperative dynamics of the excitation decay in the entire radiation process. For long distances, when the atomic dipoles are separated by the radiation zone, this term describes radiation interference between any pair of distant atoms, which decreases only weakly with interatomic separation. For short distances, or in a dense sample, the cooperative effects become extremely important and the scattering process becomes strongly dependent on a particular atomic configuration.
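The near-field limit of Eq. (\[2.17\]) can be checked directly from Eq. (\[2.13\]). The Python sketch below (units $\hbar=c=1$; the closed forms of $h^{(1)}_0$ and $h^{(1)}_2$ are standard) evaluates $D^{(E)}_{\mu\nu}(\mathbf{R},\omega)$ and verifies that for $\omega R/c\ll 1$ its real part approaches the static dipole-dipole kernel $\hbar\left(\delta_{\mu\nu}-3n_{\mu}n_{\nu}\right)/R^3$ with $\mathbf{n}=\mathbf{R}/R$:

```python
import numpy as np

def h1(L, x):
    """Closed-form spherical Hankel functions of the first kind, L = 0, 2."""
    if L == 0:
        return -1j * np.exp(1j * x) / x
    if L == 2:
        return (1j - 3.0 / x - 3j / x**2) * np.exp(1j * x) / x
    raise ValueError("only L = 0 and L = 2 are needed here")

def D_E(R_vec, omega, hbar=1.0, c=1.0):
    """Fourier image D^{(E)}_{mu nu}(R, omega) of Eq. (2.13)."""
    R = np.linalg.norm(R_vec)
    x = abs(omega) * R / c
    n = np.asarray(R_vec) / R
    delta = np.eye(3)
    return -hbar * abs(omega)**3 / c**3 * (
        1j * (2.0 / 3.0) * h1(0, x) * delta
        + 1j * h1(2, x) * (np.outer(n, n) - delta / 3.0))

# Near zone: separation much smaller than the wavelength, n along z
R_vec = np.array([0.0, 0.0, 1e-3])
D = D_E(R_vec, omega=1.0)
n = np.array([0.0, 0.0, 1.0])
static = (np.eye(3) - 3.0 * np.outer(n, n)) / 1e-9   # R^3 = 1e-9
print(np.allclose(D.real, static, rtol=1e-4))        # True
```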
It is a challenging problem to further improve the self-energy part by taking into consideration the higher orders of the perturbation theory expansion. Here we only substantiate the validity and sufficiency of the lower order approximation for the considered configuration. The main physical reason for this is the weakness of the interaction, which justifies ignoring any deviation from the free dynamics of atomic variables on the short time scale associated with light retardation over distances of a few wavelengths. The main cooperation in the radiative dynamics then arises among neighboring dipoles, which can effectively interact via the static longitudinal electric field. The diagram (\[2.16\]), in contrast to (\[2.15\]), is most important for evaluation of the static interaction, since in this graph the field propagator preferably links points with coincident times on the atomic lines. As a consequence, the presence of such diagram fragments as parts of any irreducible diagrams in higher orders would make the overall contribution small and negligible, simply because the static dipole-dipole interaction only weakly affects the dipoles’ dynamics during the short retardation time, which can be roughly estimated by the wave period $2\pi/\omega_0$. For the same reason we can ignore any vertex-type corrections to the diagram (\[2.14\]). Another part of the self-energy diagrams in higher orders can be associated with corrections to the static interaction itself. If the atomic system were so dense that the atoms were separated by distances comparable with the atomic size (much shorter than the radiation wavelength), then the description of the static interaction in the simplest dipole model would be inconsistent and insufficient. This correction is evidently negligible for an atomic ensemble with a density of a few atoms per cubic radiation wavelength.
In this case the higher order static corrections are negligible as long as the dipole-dipole interaction is much weaker than the internal transition energy. We finally conclude that, for the considered atomic systems, the self-energy part is correctly reproduced by the introduced lower order contributions.
Results and discussion
======================
Cooperative scattering from the system of two atoms
---------------------------------------------------
Let us apply the developed theory to the calculation of the total cross section for the process of light scattering from a system consisting of two atoms. We consider two complementary examples where the scattering atoms have different but similar Zeeman state structure. In the first example we consider V-type atoms, which have total angular momentum $F_0=0$ in the ground state and $F=1$ in the excited state. Such atoms are the standard objects in discussions of the Dicke problem (see Ref. [@BEMST]), and each atom constitutes a “two-level” energy system sensitive to the vector properties of light. In an alternative example we consider $\Lambda$-type atoms, which can also be understood as an inverted “two-level” system, with total angular momentum $F_0=1$ in the ground state and $F=0$ in the excited state. For the latter example we assume in the scattering scenario that the atoms initially populate the particular Zeeman sublevel of the ground state with the highest projection of the angular momentum. Both the excitation schemes and transition diagrams in the laboratory reference frame are displayed in Fig. \[fig1\].
![(Color online) The excitation diagram of “two-level” V-type atom (left) and overturned “two-level” $\Lambda$-type atom (right). In both the configurations the light scattering is considered for the left-handed $\sigma_{-}$ polarization mode. The $\Lambda$-atom populates the Zeeman sublevel with the highest angular momentum projection.[]{data-label="fig1"}](figure1.eps)
In Figs. \[fig2\]-\[fig4\] we reproduce the spectral dependencies of the total cross section for photon scattering from a system consisting of two atoms separated by different distances $R$ and for different spatial orientations. The variation of the interatomic separation from $R=10\lambdabar$ (independent scatterers) to $R=0.5\lambdabar$ (strongly dependent scatterers) transforms the scattering process from independent to cooperative dynamics. In the plotted graphs the frequency spectra, reproduced as functions of the detuning $\Delta=\omega-\omega_0$ of the probe frequency $\omega$ from the undisturbed atomic resonance $\omega_0$, are scaled by the natural radiation decay rate of a single atom $\gamma$, which is significantly different for the $\Lambda$- and V-type energy configurations, such that $\gamma(\Lambda)=3\gamma(V)$. As a consequence the near-field effects responsible for the resonance structure of the resolvent operator and the cross section manifest themselves more perceptibly for the V-type atoms, which are traditionally considered in many discussions of the Dicke system in the literature. In the symmetric collinear excitation geometry, when the internal reference frame coincides with the laboratory frame (see Fig. \[fig2\]), the left-handed $\sigma_{-}$ excitation channel shown in Fig. \[fig1\] is the only one allowed for either the V- or $\Lambda$-type transition scheme. In such a symmetric configuration the interatomic interaction via the longitudinal as well as the radiative transverse fields splits the excitation transition into two resonance lines. For the case of V-type excitation the observed resonances demonstrate either superradiant or subradiant nature. This is an evident indicator of the well-known Dicke effect of either cooperative or anticooperative contribution of the atomic dipoles to the entire radiation and scattering processes; see Refs. [@ChTnDRGr; @BEMST].
For the system of two $\Lambda$-type atoms separated by the same distances the atomic line also splits into two resonances, but they are less resolved and have relatively comparable linewidths. The spectral widths indicate a slight cooperative modification, a much weaker effect than in the case of V-type atoms. The physical reason for this is the contribution of the Raman scattering channels, which are insensitive to the effects of dependent scattering.
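The splitting and width asymmetry described above can be illustrated with a deliberately simplified single-excitation model of two identical two-level atoms (a toy sketch, not the full multilevel resolvent calculation of this paper; the numerical value of $\Sigma^{(ab)}$ below is purely illustrative):

```python
import numpy as np

gamma = 1.0                      # single-atom natural decay rate
sigma_ab = 0.3 - 0.4j            # illustrative value of Sigma^(ab)/hbar

# Effective non-Hermitian Hamiltonian in the basis {|e,g>, |g,e>}:
# diagonal entries carry the single-atom pole -i*gamma/2, while the
# off-diagonal coupling is the double-particle self-energy of Eq. (2.17).
H = np.array([[-0.5j * gamma, sigma_ab],
              [sigma_ab, -0.5j * gamma]])
vals = np.linalg.eigvals(H)      # poles of the two-atom resolvent

# Symmetric/antisymmetric superpositions are shifted by +/-Re(sigma_ab)
# and acquire widths gamma -/+ 2*Im(sigma_ab): one broadened
# (superradiant) and one narrowed (subradiant) resonance line.
widths = sorted(-2.0 * vals.imag)
print(widths)
```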
![(Color online) Spectral dependencies of the total cross section for a photon scattering from the system of two “two-level” V-type atoms (upper panel) and $\Lambda$-type atoms (lower panel) in the collinear excitation geometry; see inset. In the case of V-type atoms, in accordance with predictions of the Dicke model [@ChTnDRGr; @BEMST], the observed resonances demonstrate either super- or subradiant behavior when interatomic separation $R$ becomes shorter. In the case of $\Lambda$-type atoms the resonances are less resolved and both have a line width comparable with atomic natural decay rate.[]{data-label="fig2"}](figure2.eps)
If both atoms are located in the wavefront plane of the driving field, as shown in Fig. \[fig3\], the spectral dependence of the cross section is also described by two resonance features. Referring to the excitation scheme defined in the laboratory frame (see Fig. \[fig1\]), in this specific planar geometry the double-particle self-energy part (\[2.17\]) can couple only the states $|1,\pm 1\rangle$ related to either the upper (V-type) or lower ($\Lambda$-type) atomic levels. As a consequence the resolvent operator $\tilde{\hat{R}}(E)$ has a block structure and only its 4 $\times$ 4 block, built in the subspace $|0,0\rangle_1|1,\pm 1\rangle_2,\, |1,\pm
1\rangle_1|0,0\rangle_2$, can actually contribute to the scattering process. Here the states are subscripted by the atomic number $a,b=1,2$. The eigenstates of this matrix have different parities, $g$ (even) and $u$ (odd), reflecting their symmetry or antisymmetry under transposition of the atomic states; see Ref. [@LaLfIII].[^1] The observed resonances can be associated with the two even-parity states symmetrically sharing the single excitation in the system of two atoms. Such a selection rule is a consequence of the evident symmetry of the configuration, shown in the inset of Fig. \[fig3\], under rotation by any angle around the $z$ axis, such that the allowed transition amplitude should be insensitive to the atoms’ positions. In contrast to the collinear geometry, in the planar geometry both resonances have identical shapes and linewidths. It is also interesting that for this specific excitation geometry the atomic systems of both V and $\Lambda$ types demonstrate similar spectral behavior.
![(Color online) Same as in Fig. \[fig2\] but for planar excitation geometry. In both the excitation schemes for either V- or $\Lambda$-type atoms there is a symmetric resonance structure; see the text.[]{data-label="fig3"}](figure3.eps)
In general, for a random orientation of the diatomic system, shown in Fig. \[fig4\], there are four resonances. These resonances can be naturally specified in the internal reference frame, where the quantization axis is directed along the internuclear axis, by following the standard definitions of diatomic molecular terms; see Ref. [@LaLfIII]. There are two terms $\Sigma_g$ and $\Sigma_u$ of different parity and two doubly degenerate terms $\Pi_g$ and $\Pi_u$, which also have different parities. The terms defined here are associated with the symmetry of the self-energy part and are specified by the transition type in the internal frame, such that the transition dipole moment can have either projection $0$ ($\Sigma$ term) or $\pm 1$ ($\Pi$ term). For a random orientation all these resonances can be excited, and in the case of V-type atoms the odd-parity resonances have subradiant nature while the even-parity ones are superradiant. In contrast, in the case of $\Lambda$-type atoms the observed resonances are less resolved and have comparable widths; two of them have rather small amplitudes (see the lower panel of Fig. \[fig4\]). The previous configurations with the collinear and planar excitation geometries respectively correspond to the excitation of the $\Pi_g$ and $\Pi_u$, and $\Sigma_g$ and $\Pi_g$, resonance pairs. Summarizing the results, we can point out that all the plotted dependencies demonstrate a significant difference in the cooperative scattering dynamics of the two similar quantum systems shown in Fig. \[fig1\].
![(Color online) Same as in Fig. \[fig2\] but for random excitation geometry. For V-type atoms there are two superradiant and two subradiant resonances. For $\Lambda$-type atoms the four resonances are less resolved and have line widths comparable with a single-atom natural decay rate.[]{data-label="fig4"}](figure4.eps)
Cooperative scattering from a collection of $\Lambda$-type atoms randomly distributed in space
----------------------------------------------------------------------------------------------
Evaluation of the resolvent operator for a many-particle system is a challenging task and its solution depends on the type of transition driven in the atomic system. For V-type atoms the problem can be solved even for a macroscopic atomic ensemble, since the number of equations rises linearly with the number of atoms; see the relevant estimate given in Sec. \[II.B\]. In Ref. [@SKKH] the transformation of light scattering on a macroscopic atomic ensemble consisting of V-type atoms was analyzed as a function of the sample density. In particular, the authors demonstrated how the smooth spectral dependence of the cross section, observed in the limit of a dilute and weakly disordered distribution of atomic scatterers, transforms into a random speckle resonance structure in the case of a strongly disordered and dense distribution containing more than one atom per cubic wavelength. The presence of narrow subradiant Dicke-type resonance modes revealed a microcavity structure built up in an environment of randomly distributed atomic scatterers, which can be posed as a certain analog of Anderson-type localization of light.
Our analysis in the previous section indicates that in the example of the $\Lambda$-type atoms the subradiant modes do not manifest themselves, and such a system would not be suitable for observation of the localization effects. For coherent mechanisms of quantum memory, which we keep in mind as the most interesting implication, the existence of the localization regime would be useful but not a crucially important feature of the light propagation process. However, the spectral profile of the scattering cross section and its dependence on the atomic density and sensitivity to the level of disorder are very important, for example, for further consideration of an EIT-based memory scheme.
Below we consider an example of an atomic system consisting of five $\Lambda$-type atoms, which is described by the $405\times 405$ square matrix of the resolvent operator $\tilde{R}(E)$. With evident provisos, but at least qualitatively, the system can be considered as having many particles and can show a tendency toward macroscopic behavior. We show how the scattering process is modified when the configuration is made denser and how this corresponds with the description of the problem in terms of macroscopic Maxwell theory. In the macroscopic description the atomic system can be approximated by a homogeneous dielectric sphere of small radius, which scatters light in accordance with the Rayleigh mechanism; see Ref. [@BornWolf]. We fix the parameters of the dielectric sphere by the same density of atoms as in the compared microscopic random distribution. The calculation of the dielectric susceptibility was made similarly to that done earlier in Ref. [@SKKH] and we will publish the calculation details elsewhere. The key point of our numerical analysis is to verify the presence of the Zeeman structure, which manifests itself via the Raman scattering channels in the observed total scattering cross section.
In Fig. \[fig5\] we show how the scattering cross section is modified with varying atomic density $n_0$, scaled by the reduced radiation wavelength $\lambdabar$, from $n_0\lambdabar^3=0.1$ (dilute configuration) to $n_0\lambdabar^3=1$ (dense configuration). There are two reference dependencies shown in these plots, indicated by dashed and solid black curves. The dashed curve is the spectral profile of the single-atom cross section $\sigma_0=\sigma_0(\Delta)$ multiplied by the number of atomic scatterers $N=5$. The solid black curve is evaluated via the self-consistent macroscopic Maxwell description and reproduces the scattering cross section for a Rayleigh particle represented by a small dielectric sphere. The other dependencies show the results of microscopic calculations of the scattering cross section: for a particular random configuration (green \[dashed light gray\], visualized in the insets) and the microscopic spectral profiles averaged over many random configurations (red \[dash-dotted dark gray\]).
The upper panel of Fig. \[fig5\] relates to the low-density (i.e., dilute or weakly disordered) regime, which is insensitive to any specific location of the atomic scatterers in space. Indeed, the exact result evaluated with the microscopic model is perfectly reproduced by the simplest approximation of the cross section by the sum of the partial contributions of all five atoms considered as independent scatterers. This confirms the traditional vision of light propagation through a multiparticle atomic ensemble as through a system of independent scatterers, which underlies many practical scenarios of interaction of atomic systems with external fields. The Raman channel manifests itself in the scattering process as a direct consequence of the Zeeman degeneracy of the atomic ground state. In contrast, the central and bottom panels of Fig. \[fig5\] show how the scattering process is modified in the situation of high density and strong disorder, when the near-field effects manifest themselves. The system evidently demonstrates cooperative behavior and the scattering mechanism becomes extremely sensitive to the specific distribution of the scatterers in space. The spectral profile is described by several resonances whose locations, amplitudes, and widths are unique for each specific configuration. However, there is a certain tendency for the microscopically calculated scattering profile to approach the rough macroscopic prediction. The latter keeps only the Rayleigh channel as observable in the self-consistent macroscopic model of the scattering process. It is interesting that for any configuration created randomly in the spatial distribution of the atomic scatterers, one of the observed resonances is preferably located in the vicinity of zero detuning, $\Delta\sim 0$. As a consequence, after the configuration averaging, the system demonstrates scattering characteristics qualitatively similar to those reproduced by the macroscopic model.
Application to atomic memory problem
------------------------------------
The considered system of $\Lambda$-type atoms has certain potential for light-assisted coherent redistribution of atoms in the ground state, initiated by the simultaneous action of strong control and weak signal modes, that is, for realization of an atomic memory protocol. Let us discuss the applicability and advantage of such a dense configuration of atoms for realization of light storage in atomic memories. At present most of the experiments and the supporting theoretical discussions operate with dilute configurations of atoms either confined in a MOT at low temperature or existing in a warm vapor phase; see Refs. [@PSH; @Simon; @SpecialIssueJPhysB]. For such systems the standard conditions for realization of either EIT- or Raman-based storage schemes require an optical depth of the order of hundreds, such that the macroscopic ensemble would typically consist of billions of atoms. The optimization of the memory protocol with respect to optical depth, pulse shape, etc., has been the subject of many discussions in the literature; see Ref. [@NGPSLW] and references therein. There would be an evident advantage in developing a memory unit with fewer atoms but with the same optical depth of the sample. This immediately readdresses the basic problem of cooperative light scattering by a dense system of $\Lambda$-type configured atoms.
The presented microscopic analysis of the scattering process in such systems shows that in the strong disorder regime the spectral profile of the cross section is generally described by a rather complicated and randomized resonance structure, to which both the longitudinal and transverse parts of the self-energy interaction in the resolvent operator contribute. This spectrum is unique for each particular configuration of the atomic scatterers and bears only a slight signature of the original undisturbed atomic spectrum. This circumstance is a direct consequence of the complicated cooperative dynamics, which reflects the microcavity nature of light interaction with a strongly disordered atomic ensemble.
To determine possible implications of our results for the problem of atomic memories, we should extend the presented calculations toward ensembles consisting of a macroscopic number of atoms. Such an extension is not straightforward, since the number of contributing equations rises exponentially with the number of atoms, and certain simplifying approximations are evidently needed. In this sense our calculations of the scattering cross section performed for a small collection of atoms can be considered a precursor to calculation of the transmittance coefficient, which would be a key characteristic in the macroscopic description of the problem. Our calculations indicate a preferable contribution of the Rayleigh mechanism to the overall cooperative scattering process for a density and disorder level near the Ioffe-Regel bound $n_0\lambdabar^3\sim 1$. It is important that in this case one of the absorption resonances is located in the spectral domain near zero detuning for any atomic configuration, which provides the desirable conditions for further observation of the EIT phenomenon. The presence of the control mode, tuned to this predictable resonance point and applied in any “empty” arm of the $\Lambda$ scheme (see Fig. \[fig1\]), would make the atomic sample transparent for a signal pulse. Due to the controllable spectral dispersion, the signal pulse could be delayed and effectively converted into a long-lived spin coherence.
Realization of this scheme requires essentially fewer atoms than for the dilute ensembles prepared in warm vapors and in MOT experiments. Roughly, for a fixed optical depth $b_0\sim n_0\lambdabar^2L$, where $L$ is the sample length, and for $n_0\lambdabar^3\ll 1$, the required number of atoms, allowing for diffraction losses, should be more than $b_0^2/n_0\lambdabar^3$. This number can be minimized if we approach the dense configuration $n_0\lambdabar^3\sim 1$ and make the near-field effects manifest. We are currently working on a self-consistent modification of the presented calculation scheme to make it applicable to a multiatomic ensemble and then to describe the problem in the macroscopic limit. This can be done if we take into consideration the near-field effects only for neighboring atoms separated by a distance of a wavelength. For intermediate densities with $n_0\lambdabar^3\sim 1$ we can soften our original estimate, given in Sec. \[II.B\], for the number of equations to be solved, and can expect that the actual number would scale as $d_eN\,d_g^{n-1}$. Here $n-1\sim n_0\lambdabar^3$ is the variable parameter denoting the number of neighboring atoms whose near fields interfere with those of a selected atom. Our preliminary analysis shows that such a calculation algorithm should yield a rapidly converging series with increasing $n$ and would allow us to include the control mode in the entire calculation procedure. Such a modification of the performed calculation scheme would be practically important and generally interesting for better understanding the microscopic nature of $\Lambda$-type optical interaction in macroscopic atomic systems existing in the strong disorder regime.
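The atom-number estimate above is simple to evaluate numerically. In the sketch below the optical depth $b_0=100$ is an assumed representative value (the text says only "of the order of hundreds"); the dense regime reduces the required atom number by the factor $n_0\lambdabar^3$:

```python
def atoms_required(b0, n0_lambdabar3):
    """Rough lower bound N > b0^2 / (n0 * lambdabar^3) on the atom number
    needed to reach optical depth b0, allowing for diffraction losses,
    at dimensionless density n0*lambdabar^3."""
    return b0**2 / n0_lambdabar3

b0 = 100.0                            # assumed representative optical depth
print(atoms_required(b0, 0.01))       # dilute ensemble: 1e6 atoms
print(atoms_required(b0, 1.0))        # dense, near Ioffe-Regel: 1e4 atoms
```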
Summary
=======
In this paper we have studied the problem of light scattering on a collection of atoms with a degenerate structure of the ground state, which cooperatively interact with the scattered light. We have discussed how the scattering process differs between such a system of atoms and the well-known object of the Dicke problem, an ensemble of two-level V-type atoms. The investigation is specifically focused on understanding the principal aspects of the scattering processes that can occur and how they vary as the atomic density is varied from low values to levels where the mean separation between atoms is on the order of the radiation wavelength. For both the $\Lambda$- and V-type systems the spectral profile of the scattering cross section strongly depends on the particular atomic spatial configuration. However, in the case of the degenerate ground state, the presence of Raman scattering channels washes out the visible signature of the super- and subradiant excitation modes in the resolvent spectrum, which are normally resolved in a system consisting of two-level atoms. We have discussed the advantages of the considered system in the context of its possible implications for the problem of light storage in a macroscopic ensemble of dense and ultracold atoms, and we point out that the quantum memory protocol can be effectively organized with essentially fewer atoms than in the dilute configuration regime.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
We thank Elisabeth Giacobino, Igor Sokolov, Ivan Sokolov, and Julien Laurat for fruitful discussions. The work was supported by RFBR 10-02-00103, by the CNRS-RFBR collaboration (CNRS 6054 and RFBR 12-02-91056) and by Federal Program “Scientific and Scientific-Pedagogical Personnel of Innovative Russia on 2009-2013” (Contract No. 14.740.11.1173 ).
[10]{}
K. Hammerer, A. S[ø]{}rensen, and E. Polzik, Rev. Mod. Phys. **82**, 1041 (2010).
C. Simon et al, Eur. Phys. J. D **58**, 1 (2010).
J. Phys. B: At. Mol. Opt. Phys. **45**, No. 12 (2012), special issue on quantum memory.
I. Novikova, A.V. Gorshkov, D.F. Phillips, A.S. S[ø]{}rensen, M.D. Lukin, and R.L. Walsworth, Phys. Rev. Lett. **98** 243602 (2007); I. Novikova, N.B. Phillips, A.V. Gorshkov, Phys. Rev. A **78** 021802(R) (2008).
K.S. Choi, H. Deng, J. Laurat, and H.J. Kimble, Nature (London) **452** 67 (2008).
J. Simon, H. Tanji, J.K. Thompson, V. Vuletić, Phys. Rev. Lett. **98** 183601 (2007).
S. Du, P. Kolchin, C. Belthangady, G. Y. Yin, and S. E. Harris, Phys. Rev. Lett. **100** 183603 (2008)
M. Scherman, O.S. Mishina, P. Lombardi, J. Laurat, E. Giacobino, Optics Express **20**, 4346 (2012); O.S. Mishina, M. Scherman, P. Lombardi, J. Ortalo, D. Felinto, A.S. Sheremet, A. Bramati, D.V. Kupriyanov, J. Laurat, and E. Giacobino, Phys. Rev. A **83** 053809 (2011).
L.S. Froufe-P[é]{}rez, W. Guerin, R. Carminati, R. Kaiser Phys. Rev. Lett. **102** 173903 (2009); W. Guerin, N. Mercadier, F. Michaud, D. Brivio, L.S. Froufe-P[é]{}rez, R. Carminati, V. Eremeev, A. Goetschy, S.I. Skipetrov, R. Kaiser, J. of Opt. **12** 024002 (2010).
L.V. Gerasimov, I.M. Sokolov, R.G. Olave, M.D. Havey, J. Opt. Soc. Am. B **28** 1459 (2011); L.V. Gerasimov, I.M. Sokolov, R.G. Olave, M.D. Havey, J. Phys. B: At. Mol. Opt. Phys. **45** 124012 (2012).
R. Kaiser, J. Mod. Opt. **56** 2082 (2009).
S. Balik, M.D. Havey, I.M. Sokolov, D.V. Kupriyanov, Phys. Rev. A **79** 033418 (2009); I.M. Sokolov, D.V. Kupriyanov, R.G. Olave, M.D. Havey J. Mod. Opt. **57** 1833 (2010).
M.G. Benedict, A.M. Ermolaev, V.A. Malyshev, I.V. Sokolov, and E.D. Trifonov, *Super-radiance: Multiatomic coherent emission* (Institute of Physics Publishing, Techno House, Redcliffe Way, Bristol BS1 6NX UK, 1996).
E. Akkermans and G. Montambaux, *Mesoscopic Physics of Electrons and Photons* (Cambridge University Press, Cambridge, 2007).
A. Crubellier, Phys. Rev. A **15**, 2430 (1977).
M. Rusek, J. Mostowski, and A. Orlowski, Phys. Rev. A **61** 022704 (2000 ); F.A. Pinheiro, M. Rusek, A. Orlowski, B.A. van Tiggelen, Phys. Rev. E **69** 026605 (2004).
A. Gero, E. Akkermans, Phys. Rev. A **75** 053413 (2007); E. Akkermans, A. Gero, R. Kaiser, Phys. Rev. Lett. **101** 103602 (2008).
I.M. Sokolov, M.D. Kupriyanova, D.V. Kupriyanov, M.D. Havey Phys. Rev. A **79** 053405 (2009).
C. Cohen-Tannoudji, J. Dupont-Roc, G. Grynberg *Atom-Photon Interactions. Basic Processes and Applications* (John Wiley, New York, 1992).
V.B. Beresteskii, E.M. Lifshits, L.P. Pitaevskii, *Course of Theoretical Physics: Quantum Electrodynamics* (Oxford: Pergamon Press, Oxford, 1981).
L.D. Landau E.M. Lifshits *Course of Theoretical Physics: Quantum Mechanics* (Pergamon Press, Oxford, 1981).
M. Born, E. Wolf, *Principles of Optics* (Pergamon Press, Oxford 1964).
[^1]: By “parity” we mean symmetry of the self-energy part to the transposition of atoms. This is similar to the parity definition for homonuclear diatomic molecules in chemistry; see Ref. [@LaLfIII]
---
abstract: 'The purpose of this note is to use the scaling principle to study the boundary behaviour of some conformal invariants on planar domains. The focus is on the Aumann–Carathéodory rigidity constant, the higher order curvatures of the Carathéodory metric and two conformal metrics that have been recently defined.'
address:
- 'ADS: Department of Mathematics, Indian Institute of Science, Bangalore 560012, India'
- 'KV: Department of Mathematics, Indian Institute of Science, Bangalore 560 012, India'
author:
- Amar Deep Sarkar and Kaushal Verma
title: Boundary behaviour of some conformal invariants on planar domains
---
Introduction
============
The scaling principle in several complex variables provides a unified paradigm to address a broad array of questions ranging from the boundary behaviour of biholomorphic invariants to the classification of domains with non-compact automorphism group. In brief, the idea is to blow up a small neighbourhood of a smooth boundary point, say $p$ of a given domain $D \subset \mathbb C^n$ by a family of non-isotropic dilations to obtain a limit domain which is usually easier to deal with. The choice of the dilations is dictated by the Levi geometry of the boundary near $p$ and the interesting point here is that the limit domain is not necessarily unique. For planar domains, this method is particularly simple and the limit domain always turns out to be a half space if $p$ is a smooth boundary point.
The purpose of this note is to use the scaling principle to understand the boundary behaviour of some conformal invariants associated to a planar domain. We will focus on the Aumann–Carathéodory rigidity constant [@MainAumannCaratheodory], the higher order curvatures of the Carathéodory metric [@BurbeaPaper], a conformal metric arising from holomorphic quadratic differentials [@SugawaMetric] and finally, the Hurwitz metric [@TheHurwitzMetric]. These analytic objects have nothing to do with one another, except of course that they are all conformal invariants, and it is precisely this disparity that makes them particularly useful to emphasize the broad utility of the scaling principle as a technique even on planar domains.
While each of these invariants requires a different set of conditions on $D$ to be defined in general, we will assume that $D \subset \mathbb C$ is bounded – all the invariants are well defined in this case and so is $\lambda_D(z) \vert dz \vert$, the hyperbolic metric on $D$. Assuming this will not entail any great loss of generality but will instead assist in conveying the spirit of what is intended with a certain degree of uniformity. Any additional hypotheses on $D$ that are required will be explicitly mentioned. Let $\psi$ be a $C^2$-smooth local defining function for $\partial D$ near $p \in \partial D$. In what follows, $\mathbb D \subset \mathbb C$ will denote the unit disc. The question is to determine the asymptotic behaviour of these invariants near $p$. Each of the subsequent paragraphs contains a brief description of these invariants followed by the corresponding results and the proofs will follow in subsequent sections after a description of the scaling principle for planar domains.
*Higher order curvatures of the Carathéodory metric*:
-----------------------------------------------------
Suita [@SuitaI] showed that the density $c_D(z)$ of the Carathéodory metric is real analytic and that its curvature $$\kappa(z) = - c_D^{-2}(z) \Delta \log c_D(z)$$ is at most $-4$ for all $z \in D$. In higher dimensions, this metric is not smooth in general.
For $j, k \ge 0$, let $\partial^{j \overline k} c_D$ denote the partial derivative $\partial^{j+k} c_D/\partial z^j \partial \overline z^k$. Write $((a_{jk}))_{j, k \ge 0}^n$ for the $(n+1) \times (n+1)$ matrix whose $(j, k)$-th entry is $a_{jk}$. For $n \ge 1$, Burbea [@BurbeaPaper] defined the $n$-th order curvature of the Carathéodory metric $c_D(z) \vert dz \vert$ by $$\kappa_n(z: D) = -4c_D(z)^{-(n+1)^2} J^D_n(z)$$ where $J^D_n(z) = \det ((\partial^{j \overline k} c_D))_{j, k \ge 0}^n$. Note that $$\kappa(z) = \kappa_1(z : D)$$ which can be seen by expanding $J^D_1(z)$. Furthermore, if $f : D \rightarrow D'$ is a conformal equivalence between planar domains $D, D'$, then the equality $$c_D(z) = c_{D'}(f(z)) \vert f'(z) \vert$$ upon repeated differentiation shows that the mixed partials of $c_D(z)$ can be expressed as a combination of the mixed partials of $c_{D'}(f(z))$ where the coefficients are rational functions of the derivatives of $f$ – the denominators of these rational functions only involve $f'(z)$ which is non-vanishing in $D$. By using elementary row and column operations, it follows that $$J^D_n(z) = J^{D'}_n(f(z)) \vert f'(z) \vert^{(n+1)^2}$$ and this implies that $\kappa_n(z: D)$ is a conformal invariant for every $n \ge 1$. If $D$ is equivalent to $\mathbb D$, a calculation shows that $$\kappa_n(z: D) = -4 \left( \prod_{k=1}^n k ! \right)^2$$ for each $z \in D$. For a smoothly bounded (and hence of finite connectivity) $D$, Burbea [@BurbeaPaper] showed, among other things, that $$\kappa_n(z: D) \le -4 \left( \prod_{k=1}^n k ! \right)^2$$ for each $z \in D$. This can be strengthened as follows:
\[T:HigherCurvature\] Let $D \subset \mathbb C$ be a smoothly bounded domain. For every $p \in \partial D$ $$\kappa_n(z: D) \rightarrow -4 \left( \prod_{k=1}^n k ! \right)^2$$ as $z \rightarrow p$.
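A quick numerical sanity check of the normalization (ours, not from the paper): on the unit disc the Carathéodory density is $c(z) = 1/(1-\vert z \vert^2)$, and the curvature $\kappa_1(z) = -c^{-2}(z)\Delta \log c(z)$ should equal $-4$ at every point, matching the limit $-4\left(\prod_{k=1}^n k!\right)^2$ for $n = 1$:

```python
import math

# Finite-difference check that kappa_1 = -c^{-2} * Laplacian(log c) = -4
# on the unit disc, where c(z) = 1/(1 - |z|^2) is known in closed form.

def c_disc(x, y):
    """Caratheodory (= hyperbolic) density of the unit disc at x + iy."""
    return 1.0 / (1.0 - (x * x + y * y))

def kappa1_disc(x, y, h=1e-4):
    """Five-point finite-difference approximation of -c^{-2} Laplacian(log c)."""
    u = lambda a, b: math.log(c_disc(a, b))
    lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
           - 4.0 * u(x, y)) / (h * h)
    return -lap / c_disc(x, y) ** 2

k = kappa1_disc(0.3, 0.1)  # close to -4 at any interior point
```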
*The Aumann–Carathéodory rigidity constant*:
--------------------------------------------
Recall that the Carathéodory metric $c_D(z) \vert dz \vert$ is defined by $$c_D(z) = \sup \left\{ \vert f'(z) \vert : f : D \rightarrow \mathbb D \; \text{holomorphic and} \; f(z) = 0 \right\}.$$ Let $D$ be non-simply connected and fix $a \in D$. Aumann–Carathéodory [@MainAumannCaratheodory] showed that there is a constant $\Omega(D, a)$, $0\le \Omega(D, a) < 1$, such that if $f$ is any holomorphic self-mapping of $D$ fixing $a$ and $f$ is [*not*]{} an automorphism of $D$, then $\vert f'(a) \vert \le \Omega(D, a)$. For an annulus $\mathcal A$, this constant was explicitly computed by Minda [@AumannCaratheodoryRigidityConstant] and a key ingredient was to realize that $$\Omega(\mathcal A, a) = c_{\mathcal A}(a)/\lambda_{\mathcal A}(a).$$ The explicit formula for $\Omega(\mathcal A, a)$ also showed that $\Omega(\mathcal A, a) \rightarrow 1$ as $a \rightarrow \partial \mathcal A$. For non-simply connected domains $D$ with higher connectivity, [@AumannCaratheodoryRigidityConstant] also shows that $$c_{D}(a)/\lambda_{D}(a) \le \Omega(D, a) < 1.$$ Continuing this line of thought further, Minda in [@HyperbolicMetricCoveringAumannCaratheodoryConstant] considers a pair of bounded domains $D, D'$ with base points $a \in D, b \in D'$ and the associated ratio $$\Omega(a, b) = \sup \{ \left(f^{\ast}(\lambda_{D'})/\lambda_D\right)(a): f \in \mathcal N(D, D'), f(a) = b \}$$ where $\mathcal N(D, D')$ is the class of holomorphic maps $f : D \rightarrow D'$ that are [*not*]{} coverings. 
Note that $$\left(f^{\ast}(\lambda_{D'})/\lambda_D\right)(a) = \left( \lambda_{D'}(b) \;\vert f'(a) \vert \right) / \lambda_D(a).$$ Among other things, Theorems 6 and 7 of [@HyperbolicMetricCoveringAumannCaratheodoryConstant] respectively show that $$c_D(a)/\lambda_D(a) \le \Omega(a, b) < 1$$ and $$\limsup_{a \rightarrow \partial D} \Omega(a, b) = 1.$$ Note that the first result shows that the lower bound for $\Omega(a, b)$ is independent of $b$ while the second one, which requires $\partial D$ to satisfy an additional geometric condition, is a statement about the boundary behaviour of $\Omega(a, b)$. Here is a result that supplements these statements and emphasizes their local nature:
\[T:AumannCaratheodoryConstant\] Let $D, D' \subset \mathbb C$ be bounded domains and $p \in \partial D$ a $C^2$-smooth boundary point. Then $\Omega(D, z) \rightarrow 1$ as $z \rightarrow p$. Furthermore, for every fixed $w \in D'$, $\Omega(z, w) \rightarrow 1$ as $z \rightarrow p$.
*Holomorphic quadratic differentials and a conformal metric*:
-------------------------------------------------------------
We begin by recalling a construction due to Sugawa [@SugawaMetric]. Let $R$ be a Riemann surface and $\phi$ a holomorphic $(m, n)$ form on it. In local coordinates $(U_{\alpha}, z_{\alpha})$, $\phi = \phi_{\alpha}(z_{{\alpha}})\, dz^m_{{\alpha}} d\overline z^n_{{\alpha}}$ where $\phi_{\alpha} : U_{\alpha} \rightarrow \mathbb C$ is a family of holomorphic functions satisfying $$\phi_{\alpha}(z_{\alpha}) = \phi_{\beta} (z_{\beta}) \left(\frac{d z_{\beta}}{d z_{\alpha}}\right)^m \left(\frac{d \overline z_{\beta}}{d \overline z_{\alpha}}\right)^n$$ on the intersection $U_{\alpha} \cap U_{\beta}$. For holomorphic $(2,0)$ forms, this reduces to $$\phi_{\alpha}(z_{\alpha}) = \phi_{\beta} (z_{\beta}) \left(\frac{d z_{\beta}}{d z_{\alpha}}\right)^2$$ and this in turn implies that $$\Vert \phi \Vert_1 = \int_R \vert \phi \vert$$ is well defined. Consider the space $$A(R) = \left\lbrace \phi = \phi(z) \;dz^2 \; \text{a holomorphic}\; (2,0)\; \text{form on}\; R \; \text{with} \; \Vert \phi \Vert_1 < \infty \right \rbrace$$ of integrable holomorphic $(2,0)$ forms on $R$. Fix $z \in R$ and for each local coordinate $(U_{{\alpha}}, z_{{\alpha}})$ containing it, let $$q_{R, {\alpha}}(z) = \sup \left\lbrace \vert \phi_{{\alpha}}(z) \vert^{1/2} : \phi \in A(R) \; \text{with} \; \Vert \phi \Vert_1 \le \pi \right\rbrace.$$ Theorem 2.1 of [@SugawaMetric] shows that if $R$ is non-exceptional, then for each $z_0 \in U_{{\alpha}}$ there is a unique extremal differential $\phi \in A(R)$ ($\phi = \phi_{{\alpha}}(z_{{\alpha}}) dz^2_{{\alpha}}$ in $U_{{\alpha}}$) with $\Vert \phi \Vert_1 = \pi$ such that $$q_{R, {\alpha}}(z_0) = \vert \phi_{{\alpha}}(z_0) \vert^{1/2}.$$ If $(U_{\beta}, z_{\beta})$ is another coordinate system around $z$, then the corresponding extremal differential $\phi_{\beta}$ is related to $\phi_{\alpha}$ as $$\overline{w'(z_0)} \;\phi_{\beta} = w'(z_0) \; \phi_{\alpha}$$ where $w = z_{\beta} \circ z_{{\alpha}}^{-1}$.
Hence $\vert \phi_{{\alpha}} \vert$ is intrinsically defined and this leads to the conformal metric $q_R(z) \vert dz \vert$ with $q_R(z) = q_{R, {\alpha}}(z)$ for some (and hence every) chart $U_{{\alpha}}$ containing $z$.
It is also shown in [@SugawaMetric] that the density $q_R(z)$ is continuous, $\log q_R$ is subharmonic (or identically $-\infty$ on $R$) and $q_{\mathbb D}(z) = 1/(1 - \vert z \vert^2)$ – therefore, this reduces to the hyperbolic metric on the unit disc. In addition, [@SugawaMetric] provides an estimate for this metric on an annulus. We will focus on the case of bounded domains.
\[T:SugawaMetric\] Let $D \subset {\mathbb}C$ be a bounded domain and $p \in \partial D$ a $C^2$-smooth boundary point. Then $$q_D(z) \approx 1/{\rm dist} (z, \partial D)$$ for $z$ close to $p$.
Here and in what follows, we use the standard convention that $A \approx B$ means that there is a constant $C > 1$ such that $A/B, B/A$ are both bounded above by $C$. In particular, this statement shows that the metric $q_D(z) \vert dz \vert$ is comparable to the quasi-hyperbolic metric near $C^2$-smooth points. Thus, if $D$ is globally $C^2$-smooth, then $q_D(z) \vert dz \vert$ is comparable to the quasi-hyperbolic metric everywhere on $D$.
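The convention $A \approx B$ can be checked on the unit disc, where the density is known exactly: $q_{\mathbb D}(z) = 1/(1-\vert z\vert^2)$ and ${\rm dist}(z, \partial \mathbb D) = 1 - \vert z \vert$, so the product $q_{\mathbb D}(z)\,{\rm dist}(z, \partial \mathbb D) = 1/(1+\vert z\vert)$ stays between $1/2$ and $1$. A short computation (ours) illustrates this:

```python
# Consistency check of q(z) ~ 1/dist(z, boundary) on the unit disc,
# where q(z) = 1/(1-|z|^2) is known exactly.

def q_disc(z):
    return 1.0 / (1.0 - abs(z) ** 2)

def dist_disc(z):
    return 1.0 - abs(z)

# q * dist = 1/(1+|z|) tends to 1/2 as z approaches the boundary:
ratios = [q_disc(r) * dist_disc(r) for r in (0.9, 0.99, 0.999)]
```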
*The Hurwitz metric*:
---------------------
The other conformal metric that we will discuss here is the Hurwitz metric that has been recently defined by Minda [@TheHurwitzMetric]. We begin by recalling its construction which is reminiscent of that for the Kobayashi metric but differs from it in the choice of holomorphic maps which are considered: for a domain $D \subset \mathbb C$ and $a \in D$, let $\mathcal{O}(a, D)$ be the collection of all holomorphic maps $f : \mathbb{D} \rightarrow D$ such that $f(0) = a$ and $f'(0) > 0$. Let $\mathcal{O}^{\ast}(a, D) \subset \mathcal{O}(a, D)$ be the subset of all those $f \in \mathcal{O}(a, D)$ such that $f(z) \not= a$ for all $z$ in the punctured disc $\mathbb{D}^{\ast}$. Set $$r_D(a) = \sup \left\lbrace f'(0) : f \in \mathcal{O}^{\ast}(a, D) \right\rbrace.$$ The Hurwitz metric on $D$ is $\eta_D(z) \vert dz \vert$ where $$\eta_D(a) = 1/r_D(a).$$ Of the several basic properties of this conformal metric that were explored in [@TheHurwitzMetric], we recall the following two: first, for a given $a \in D$, let $\gamma \subset D^{\ast} = D\setminus \{a\}$ be a small positively oriented loop that goes around $a$ once. This loop generates an infinite cyclic subgroup of $\pi_1(D^{\ast})$ to which there is an associated holomorphic covering $G : \mathbb{D}^{\ast} \rightarrow D^{\ast}$. This map $G$ extends holomorphically to $G : \mathbb{D} \rightarrow D$ with $G(0) = a$ and $G'(0) \not= 0$. This covering depends only on the free homotopy class of $\gamma$ and is unique up to precomposition with a rotation around the origin. Hence, it is possible to arrange $G'(0) > 0$. Minda calls this the Hurwitz covering associated with $a \in D$. Using this it follows that every $f \in \mathcal{O}^{\ast}(a, D)$ lifts to $\tilde f : \mathbb{D}^{\ast} \rightarrow \mathbb{D}^{\ast}$. This map extends to a self map of $\mathbb{D}$ and the Schwarz lemma shows that $\vert f'(0) \vert \le G'(0)$.
The conclusion is that the extremals for this metric can be described in terms of the Hurwitz coverings.
\[T:TheHurwitzMetric\] Let $D \subset \mathbb{C}$ be bounded. Then $\eta_D(z)$ is continuous. Furthermore, if $p \in \partial D$ is a $C^2$-smooth boundary point, then $$\eta_D(z) \approx 1/{\rm dist} (z, \partial D)$$ for $z$ close to $p$.
A consequence of Theorems 1.3 and 1.4 is that both $q_D(z) \vert dz \vert$ and $\eta_D(z) \vert dz \vert$ are equivalent metrics near smooth boundary points.
Finally, in section 7, we provide some estimates for the generalized curvatures of $ q_D(z) \vert dz \vert $ and $ \eta_D(z)\vert dz \vert $.
Scaling of planar domains
=========================
The scaling principle for planar domains has been described in detail in [@ScalingInHigherDimensionKrantzKimGreen]. A simplified version which suffices for the applications presented later can be described as follows:\
Let $ D $ be a domain in $ {\mathbb{C}}$ and $ p \in \partial D $ a $ C^2 $-smooth boundary point. This means that there is a neighborhood $ U $ of $ p $ and a $ C^2 $-smooth real function $ \psi $ such that $$U \cap D = \{\psi < 0\}, \quad U \cap \partial D = \{\psi = 0\}$$ and $$d \psi \neq 0 \quad \text{on} \quad U \cap \partial D.$$ Let $ p_j $ be a sequence of points in $ D $ converging to $ p $. Suppose $ \tau(z)\vert dz \vert $ is a conformal metric on $ D $ whose behaviour near $ p $ is to be studied. The affine maps
$$\label{Eq:ScalingMap}
T_j(z) = \frac{z - p_j}{-\psi(p_j)}$$
satisfy $ T_j(p_j) = 0 $ for all $ j $ and since $ \psi(p_j) \to 0$, it follows that the $ T_j $’s expand a fixed neighborhood of $ p $. To make this precise, write $$\psi(z) = \psi(p) + 2Re\left( \frac{\partial \psi}{\partial z}(p)(z - p)\right) + o(\vert z - p\vert)$$ in a neighborhood of $ p $. Let $ K $ be a compact set in $ {\mathbb{C}}$. Since $ \psi(p_j) \to 0 $, it follows that $ T_j(U) $ is an increasing family of open sets that exhaust $ {\mathbb{C}}$ and hence $ K \subset T_j(U) $ for all large $ j $. Expanding $ \psi $ in a Taylor series at $ p_j $, the functions $$\psi \circ T^{-1}_{j}(z) = \psi \left( p_j + z\left( -\psi(p_j)\right) \right)
= \psi(p_j) + 2Re\left( \frac{\partial \psi}{\partial z}(p_j)z \right) (-\psi(p_j)) + \psi(p_j)^2 o(1)$$ are therefore well defined on $ K $ and the domains $ D_j' = T_j(U \cap D) $ are defined by $$\psi_j(z) = \frac{1}{-\psi(p_j)}\psi \circ T_j^{-1}(z) = -1 + 2Re\left( \frac{\partial \psi}{\partial z}(p_j)z \right) + (-\psi(p_j))o(1).$$ It can be seen that $ \psi_j(z) $ converges to $$\psi_{\infty}(z) = -1 + 2Re\left( \frac{\partial \psi}{\partial z}(p)z \right)$$ uniformly on $ K $ as $ j \to \infty $. At this stage, let us recall the Hausdorff metric on subsets of a metric space.
Given a set $ S \subseteq {\mathbb{C}}^n $, let $ S_{\epsilon} $ denote the $ \epsilon $-neighborhood of $ S $ with respect to the standard Euclidean distance on $ {\mathbb{C}}^n $. The Hausdorff distance between compact sets $ X, Y \subset {\mathbb{C}}^n $ is given by $$d_H(X,Y) = \inf\{\epsilon > 0 : X \subset Y_{\epsilon}\,\, \text{and} \,\, Y \subset X_{\epsilon}\}.$$ It is known that $ d_H $ defines a complete metric on the space of compact subsets of $ {\mathbb{C}}^n $. To deal with non-compact but closed sets there is a topology arising from a family of local Hausdorff pseudo-distances. It is defined as follows: fix an $ R > 0 $ and for a closed set $ A \subset {\mathbb{C}}^n $, let $ A^R = A \cap \overline B(0, R)$ where $ B(0, R) $ is the ball centred at the origin with radius $ R $. Then, for closed sets $ A, B \subset {\mathbb{C}}^n$, set $$d_H^{(R)}(A, B) = d_H\left(A^R, B^R\right).$$ We will say that a sequence of closed sets $ A_n $ converges to a closed set $ A $ if there exists $ R_0 $ such that $$\lim_{n \to \infty}d_H\left(A_n^R, A^R\right) = 0$$ for all $ R \geq R_0 $. Since $ \psi_j \to \psi_{\infty} $ uniformly on every compact subset in $ {\mathbb{C}}$, it follows that the closed sets $ \overline{D_j'} = T_j(\overline{U \cap D})$ converge to the closure of the half-space $${\mathcal{H}}= \{z : -1 + 2Re\left( \frac{\partial \psi}{\partial z}(p)z \right) < 0 \} = \{z: Re(\overline{\omega}z-1)<0\}$$ where $\omega = (\partial\psi/\partial x)(p) + i(\partial\psi/\partial y)(p)$, in the Hausdorff sense as described above. As a consequence, every compact $ K \subset {\mathcal{H}}$ is eventually contained in $ D_j' $. Similarly, every compact $ K \subset {\mathbb{C}}\setminus \overline{{\mathcal{H}}} $ eventually has no intersection with $ \overline{D_j'} $.
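The Hausdorff distance above can be transcribed directly for finite point sets, for which the infimum reduces to a max–min over pairwise distances (an illustration of $d_H$ only; the text applies it to closed subsets of ${\mathbb{C}}^n$):

```python
# Hausdorff distance between finite point sets in C, using the
# equivalent max-min form of d_H(X, Y).

def hausdorff(X, Y):
    """d_H(X, Y) = max(sup_x inf_y |x-y|, sup_y inf_x |x-y|)."""
    d_xy = max(min(abs(x - y) for y in Y) for x in X)
    d_yx = max(min(abs(x - y) for x in X) for y in Y)
    return max(d_xy, d_yx)

# Two finite samples in C; the point 1 + 0.5j lies 0.5 away from X:
X = [0j, 1 + 0j]
Y = [0j, 1 + 0.5j]
d = hausdorff(X, Y)
```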
It can be seen that the same property holds for the domains $ D_j = T_j(D) $, i.e., they converge to the half-space $ {\mathcal{H}}$ in the Hausdorff sense.
Now coming back to the metric $ \tau(z)\vert dz \vert $, the pull-backs $$\tau_j(z) \vert dz \vert = \tau_D(T_j^{-1}(z)) \vert (T_j^{-1})^{\prime}(z)\vert \vert dz \vert$$ are well-defined conformal metrics on $ D_j $ which satisfy $$\tau_j(0) \vert dz \vert = \tau_D(p_j)(-\psi(p_j))\vert dz \vert.$$ Therefore, to study $ \tau(p_j) $, it is enough to study $ \tau_j(0) $. This is exactly what will be done in the sequel.
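A minimal numerical illustration of the scaling (our example, not from the paper): take $D = \mathbb D = \{\psi < 0\}$ with $\psi(z) = \vert z \vert^2 - 1$ and the boundary point $p = 1$ approached along $p_j = 1 - 1/j$. Here $\partial\psi/\partial z(p) = \bar p = 1$, so the rescaled defining functions $\psi_j$ should converge to $\psi_{\infty}(z) = -1 + 2 Re\, z$, the defining function of the limiting half-plane:

```python
# Scaling of the unit disc D = {psi < 0}, psi(z) = |z|^2 - 1, at p = 1
# along p_j = 1 - 1/j; psi_j converges to psi_inf(z) = -1 + 2 Re z.

def psi(z):
    return abs(z) ** 2 - 1.0

def psi_j(z, p_j):
    t = -psi(p_j)                 # dilation factor -psi(p_j) > 0
    return psi(p_j + t * z) / t   # = (1/(-psi(p_j))) * (psi o T_j^{-1})(z)

def psi_inf(z):                   # defining function of the half-plane H
    return -1.0 + 2.0 * z.real

z0 = 0.2 + 0.3j                   # a fixed test point
errs = [abs(psi_j(z0, 1.0 - 1.0 / j) - psi_inf(z0)) for j in (10, 100, 1000)]
```

The errors decrease roughly linearly in $1/j$, as expected from the $(-\psi(p_j))\,o(1)$ remainder above.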
Proof of Theorem \[T:HigherCurvature\]
======================================
It is known (see [@InvariantMetricJarnicki section 19.3] for example) that if $ D \subset {\mathbb{C}}$ is bounded and $ p \in \partial D $ is a $ C^2 $-smooth boundary point, then $$\lim_{z \to p}\frac{c_{U \cap D}(z)}{c_D(z)} = 1$$ where $ U $ is a neighborhood of $ p $ such that $ U \cap D $ is simply connected. Here is a version of this statement that we will need:
Let $ D \subset {\mathbb{C}}$ be a bounded domain and $ p \in \partial D $ a $ C^2 $-smooth boundary point. Let $ U $ be a neighborhood of $ p $ such that $ U \cap D $ is simply connected. Let $ \psi $ be a defining function of $\partial D $ near the point $ p $. Then $$\lim_{z \to p}c_{D}(z)(-\psi(z)) = c_{{\mathcal{H}}}(0).$$
Let $ \{p_j\} $ be a sequence in $ D $ which converges to $ p $. Consider the affine map $$T_j(z) = \frac{z - p_j}{-\psi(p_j)}$$ whose inverse is given by $$T_j^{-1}(z) = -\psi(p_j)z + p_j.$$ Let $D_j = T_j(D)$ and $ D'_j = T_j({U \cap D}) $. Note that $ \{D_j\} $ and $ D'_j $ converge to the half-space $ {\mathcal{H}}$ both in the Hausdorff and Carathéodory kernel sense.
Let $z \in {\mathcal{H}}$. Then $z \in D'_j$ for $j $ large. Since $D'_j$ is simply connected, there is a biholomorphic map $f_j : \mathbb{D} {\longrightarrow}D'_j$ with $f_j(0) = z$ and $f_j^{\prime}(0) > 0$. The domains $ D'_j $ converge to the half-space $ {\mathcal{H}}$ and therefore the Carathéodory kernel convergence theorem (see [@CaratheodoryKernelConvergence] for example) shows that $f_j$ admits a holomorphic limit $f : \mathbb{D} {\longrightarrow}{\mathcal{H}}$ which is a biholomorphism. Note that $f(0) = z$ and $f'(0) > 0$. We know that in the case of simply connected domains, the Carathéodory and hyperbolic metrics coincide and so $$c_{D'_j}(z) = \lambda_{D'_j}(z)$$ for all large $ j $ and hence $$c_{{\mathcal{H}}}(z) = \lambda_{{\mathcal{H}}}(z).$$ It is known that $$\lambda_{D'_j}(z) = \frac{1}{f_j^{\prime}(0)} \,\, \mbox{and} \,\, \lambda_{{\mathcal{H}}}(z) = \frac{1}{f^{\prime}(0)}.$$ From this we conclude that $c_{D'_j}(z)$ converges to $c_{{\mathcal{H}}}(z)$ as $j \to \infty$.
Under the biholomorphism $ T_j^{-1} $, the pull back metric $$(T_j^{-1})^*(c_{{U \cap D}})(z) = c_{D'_j}(z)$$ for all $ z \in D'_j$. That is $$c_{{U \cap D}}(T_j^{-1}(z))\vert (T_j^{-1})^{\prime}(z)\vert = c_{D'_j}(z).$$ Putting $ z = 0 $, we obtain $$c_{{U \cap D}}(p_j)(-\psi(p_j)) = c_{D'_j}(0).$$ As we have seen above, $ c_{D'_j}(z) $ converges to $ c_{{\mathcal{H}}}(z) $, for all $ z \in {\mathcal{H}}$, as $ j \to \infty $. Therefore, $ c_{{U \cap D}}(p_j)(-\psi(p_j)) $, which is equal to $ c_{D'_j}(0) $, converges to $ c_{{\mathcal{H}}}(0) $ as $ j \to \infty $. Since $ \{p_j\} $ is an arbitrary sequence, we conclude that $$\lim_{z \to p}c_{{U \cap D}}(z)(-\psi(z)) = c_{{\mathcal{H}}}(0).$$ Since $$\lim_{z \to p}\frac{c_{U \cap D}(z)}{c_D(z)} = 1,$$ we get $$\lim_{z \to p}c_{D}(z)(-\psi(z)) = c_{{\mathcal{H}}}(0).$$
The Ahlfors map, the Szegö kernel and the Garabedian kernel of the half space
$${\mathcal{H}}= \{z : Re(\bar \omega z - 1) < 0\}$$ at $ a \in {\mathcal{H}}$ are given by $$f_{{\mathcal{H}}}(z, a) = \vert \omega \vert\frac{z - a}{2 - \omega \bar a - \bar \omega z},$$ $$S_{{\mathcal{H}}}(z, a) = \frac{1}{2 \pi}\frac{\vert \omega \vert}{2 - \omega \bar a - \bar \omega z}$$ and $$L_{{\mathcal{H}}}(z, a) = \frac{1}{2 \pi}\frac{1}{z - a}$$ respectively.
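These closed-form kernels can be checked numerically (our sketch, with an arbitrary choice of $\omega$ and of the two points): they satisfy the identity $L_{{\mathcal{H}}}(z, a) = S_{{\mathcal{H}}}(z, a)/f_{{\mathcal{H}}}(z, a)$, and on the diagonal $2\pi S_{{\mathcal{H}}}(z, z)$ equals $1/(2\,{\rm dist}(z, \partial {\mathcal{H}}))$, the hyperbolic (= Carathéodory) density of the half-plane:

```python
import math

# Check of the half-space kernels: L = S/f and 2*pi*S(z,z) = 1/(2*dist).
# omega and the two points below are arbitrary choices with a, z in H.
omega = 1.0 + 2.0j   # H = {Re(conj(omega)*z - 1) < 0}

def f(z, a):   # Ahlfors map of H with zero at a
    return abs(omega) * (z - a) / (2 - omega * a.conjugate()
                                   - omega.conjugate() * z)

def S(z, a):   # Szego kernel of H
    return abs(omega) / (2 * math.pi * (2 - omega * a.conjugate()
                                        - omega.conjugate() * z))

def L(z, a):   # Garabedian kernel of H
    return 1.0 / (2 * math.pi * (z - a))

a, z = 0.1 + 0.1j, 0.2 - 0.3j   # two points of H
dist = (1 - (omega.conjugate() * z).real) / abs(omega)  # dist(z, boundary)
```

The identity $L = S/f$ is immediate on cancelling the common factor $2 - \omega\bar a - \bar\omega z$, and the diagonal relation recovers $c_{{\mathcal{H}}} = \lambda_{{\mathcal{H}}}$ for the half-plane.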
Let $ D $ be a $ C^{\infty} $-smooth bounded domain. Choose $ p \in \partial D $ and a sequence $ p_j $ in $ D $ that converges to $ p $. The sequence of scaled domains $ D_j = T_j(D) $, where $ T_j $ are as in \[Eq:ScalingMap\], converges to the half-space $ {\mathcal{H}}$ as before.
Fix $ a \in {\mathcal{H}}$ and note that $ a \in D_j $ for $ j $ large. Note that $ 0 \in D_j $ for $ j \geq 1 $. Let $ f_j(z, a) $ be the Ahlfors map such that $ f_j(a, a) = 0 $, $ f_j^{\prime}(a, a) > 0 $ and suppose that $ S_j(z, a) $ and $ L_j(z, a) $ are the Szegö and Garabedian kernels for $ D_j $ respectively.
\[Prop:ConvergenceAhlforsSzegoGarabeidian\] In this situation, the sequence of Ahlfors maps $ f_j(z, a) $ converges to $ f_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}$. The Szegő kernels $ S_j(z, a) $ converge to $ S_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}$. Moreover, $ S_j(z, w) $ converges to $ S_{{\mathcal{H}}}(z, w) $ uniformly on every compact subset of $ {\mathcal{H}}\times {\mathcal{H}}$. Finally, the Garabedian kernels $ L_j(z, a) $ converge to $ L_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}\setminus \{a\} $.
In the proof of the previous proposition we have seen that $ c_{D_j}(a) $ converges to $ c_{{\mathcal{H}}}(a) $ as $ j \to \infty $. By definition $f_j^{\prime}(a, a) = c_{D_j}(a)$ and $ f_{{\mathcal{H}}}^{\prime}(a, a) = c_{{\mathcal{H}}}(a) $. Therefore, $f_j^{\prime}(a, a)$ converges to $f_{{\mathcal{H}}}^{\prime}(a, a)$ as $ j \to \infty $.
Now, we shall show that $ f_j(z, a) $ converges to $f_{{\mathcal{H}}}(z, a)$ uniformly on compact subsets of ${\mathcal{H}}$. Since the sequence of Ahlfors maps $\{f_j(z, a)\}$ forms a normal family of holomorphic functions and $ f_j(a, a) = 0 $, there exists a subsequence $\{f_{k_j}(z, a)\}$ of $\{f_j(z, a)\}$ that converges to a holomorphic function $f$ uniformly on every compact subset of the half-space ${\mathcal{H}}$. Then $f(a) = 0$ and, since $f_{k_j}^{\prime}(a, a)$ converges to $f_{{\mathcal{H}}}^{\prime}(a, a)$, we have $f^{\prime}(a) = f_{{\mathcal{H}}}^{\prime}(a, a)$. Thus, we have $f: {\mathcal{H}}\longrightarrow \mathbb{D}$ such that $f(a) = 0$ and $f^{\prime}(a) = f_{{\mathcal{H}}}^{\prime}(a, a)$. By the uniqueness of the Ahlfors map, we conclude that $f(z) = f_{{\mathcal{H}}}(z, a)$ for all $ z \in {\mathcal{H}}$. Thus, every limiting function of the sequence $\{f_j(z, a)\}$ is equal to $f_{{\mathcal{H}}}(z, a)$. Hence, we conclude that $\{f_j(z, a)\}$ converges to $f_{{\mathcal{H}}}(z, a)$ uniformly on every compact subset of $ {\mathcal{H}}$.
Next, we shall show that $ S_j(\zeta, z) $ converges to $ S_{{\mathcal{H}}}(\zeta, z) $ uniformly on compact subsets of ${\mathcal{H}}\times {\mathcal{H}}$. First, we show that ${S_j}(\zeta, z)$ is locally uniformly bounded. Let $z_0, \,\zeta_0 \in {\mathcal{H}}$ and choose $ r_0 > 0 $ such that the closed balls $ \overline{B}(z_0, r_0) $, $ \overline{B}(\zeta_0, r_0) \subset {\mathcal{H}}$. Since $ D_j $ converges to $ {\mathcal{H}}$, $ \overline{B}(z_0, r_0) $, $ \overline{B}(\zeta_0, r_0) \subset D_j $ for $ j $ large. By the monotonicity of the Carathéodory metric $${c_{D_j}}(z) \leq \frac{r_0}{r_0^2 - |z - z_0|^2}$$ for all $ z \in B(z_0,r_0)$, and $${c_{D_j}}(\zeta) \leq \frac{r_0}{r_0^2 - |\zeta - \zeta_0|^2}$$ for all $ \zeta \in B(\zeta_0,r_0) $ and for $j$ large. Therefore, if $ 0 < r < r_0 $ and $ (z, \zeta) \in \overline{B}(z_0, r) \times \overline{B}(\zeta_0, r) $, it follows that $${c_{D_j}}(z) \leq \frac{r_0}{r_0^2 - r^2}$$ and $${c_{D_j}}(\zeta) \leq \frac{r_0}{r_0^2 - r^2}$$ for $j$ large. Using the fact that ${c_{D_j}}(z) = 2\pi{S_j}(z,z)$, we have $${S_j}(z, z) \leq \frac{1}{2 \pi} \frac{r_0}{r_0^2 - r^2}$$ for all $ z \in \overline{B}(z_0,r) $ and $${S_j}(\zeta, \zeta) \leq \frac{1}{2 \pi} \frac{r_0}{r_0^2 - r^2}$$ for all $ \zeta \in \overline{B}(\zeta_0,r) $ and for $j$ large. By the Cauchy-Schwarz inequality, $$| {S_j}(\zeta, z) |^2 \leq | {S_j}(\zeta, \zeta)| | {S_j}(z, z)|$$ which implies $$| {S_j}(\zeta, z) | \leq \frac{1}{2 \pi} \frac{r_0}{r_0^2 - r^2}$$ for all $ (\zeta, z) \in \overline{B}(\zeta_0,r) \times \overline{B}(z_0,r) $ and for $ j $ large. This shows that ${S_j}(\zeta, z)$ is locally uniformly bounded. Hence the functions ${S_j}(\zeta,z)$, holomorphic in the two variables $(\zeta, \overline{z})$, form a normal family. Now, we claim that the sequence $\{{S_j}(\zeta,z)\}$ converges to a unique limit. Let $S$ be a limiting function of $\{{S_j}(\zeta,z)\}$.
Since $S$ is holomorphic in $\zeta, \overline{z}$, the power series expansion of the difference $S - S_{{\mathcal{H}}}$ around the point $0$ has the form $$S(\zeta, z) - S_{{\mathcal{H}}}(\zeta, z) = \sum_{l,k =0}^{\infty}a_{l,k}\zeta^l\overline{z}^k.$$ Recall that $2\pi S_{j}(z,z) = {c_{D_j}}(z)$ converges to $c_{{\mathcal{H}}}(z)$ as $ j \to \infty $ and $2\pi S_{{\mathcal{H}}}(z,z) = c_{{\mathcal{H}}}(z)$. From this we infer that $S(z, z) = S_{{\mathcal{H}}}(z, z)$ for all $ z \in {\mathcal{H}}$ and hence $$\sum_{l,k =0}^{\infty}a_{l,k}z^l\overline{z}^k = 0.$$ By substituting $z = |z|e^{i\theta}$ in the above equation, we get $$\sum_{l,k =0}^{\infty}a_{l,k}|z|^{(l + k)}e^{i(l - k)\theta} = 0$$ and hence $$\sum_{l +k = n}a_{l,k}e^{i(l - k)\theta} = 0$$ for all $ n \geq 0$. It follows that $a_{l,k} = 0$ for all $l, k \geq 0 $. Hence we have $$S(\zeta, z) = S_{{\mathcal{H}}}(\zeta, z)$$ for all $ \zeta, z \in {\mathcal{H}}$. So any limiting function of $\{{S_j}(\zeta,z)\}$ is equal to $ S_{{\mathcal{H}}}(\zeta,z) $. This shows that $\{{S_j}(\zeta,z)\}$ converges to $ S_{{\mathcal{H}}}(\zeta,z) $ uniformly on compact subsets of $ {\mathcal{H}}\times {\mathcal{H}}$.
Finally, we show that the sequence of the Garabedian kernel functions $\{L_j(z, a)\}$ also converges to the Garabedian kernel function $ L_{{\mathcal{H}}}(z, a) $ of the half-space $ {\mathcal{H}}$ uniformly on every compact subset of ${\mathcal{H}}\setminus \{a\}$. This will be done by showing that $\{L_j(z, a)\}$ is a normal family and has a unique limiting function. To show that $\{L_j(z, a)\}$ is a normal family, it is enough to show that $L_j(z, a)$ is locally uniformly bounded on $ {\mathcal{H}}\setminus \{a\} $.
Since $ z = a $ is the only zero of $ f_{{\mathcal{H}}}(z, a) $, it follows that for an arbitrary compact set $ K \subset {\mathcal{H}}\setminus \{a\} $, the infimum of $ \vert f_{{\mathcal{H}}}(z, a) \vert $ on $ K $ is positive. Let $ m > 0 $ be this infimum. As $ f_j(z, a) \to f_{{\mathcal{H}}}(z, a)$ uniformly on $ K $, $$\vert f_j(z, a) \vert > \frac{m}{2}$$ and hence $$\frac{1}{\vert f_j(z, a) \vert} < \frac{2}{m}$$ for all $ z \in K $ and $ j $ large. Since $ S_j(z, a) $ converges to $ S_{{\mathcal{H}}}(z, a) $ uniformly on $ K $, there exists an $ M > 0 $ such that $ \vert S_j(z, a) \vert \leq M $ for all $ z \in K $ and $ j $ large. As $$L_j(z, a) = \frac{S_j(z, a)}{f_j(z, a)}$$ for all $z \in D_j \setminus \{a\}$, we obtain $$|L_j(z, a)| \leq \frac{2M}{m}$$ for all $z \in K$ and for $j$ large. This shows that $L_j(z, a)$ is locally uniformly bounded on $ {\mathcal{H}}\setminus \{a\} $, and hence a normal family.
Finally, any limit of $ L_j(z, a) $ must be $ S_{{\mathcal{H}}}\big/ f_{{\mathcal{H}}}(z,a) $ on $ {\mathcal{H}}\setminus \{a\} $ and hence $ L_j(z, a) $ converges to $ L_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}\setminus \{a\} $.
This generalizes the main result of [@SuitaI] to the case of a sequence of domains $ D_j $ that converges to $ {\mathcal{H}}$ as described above.
\[Pr:UniformConvergenceOfCaraParDerCara\] Let $\{D_j\}$ be the sequence of domains that converge to the half-space ${\mathcal{H}}$ in the Hausdorff sense as in the previous proposition. Then the sequence of Carathéodory metrics $c_{D_j}$ of the domains $D_j$ converges to the Carathéodory metric $c_{{\mathcal{H}}}$ of the half-space $ {\mathcal{H}}$ uniformly on every compact subset of ${\mathcal{H}}$. Moreover, all the partial derivatives of $c_{D_j}$ converge to the corresponding partial derivatives of $c_{{\mathcal{H}}}$, and the sequence of curvatures $ \kappa(z : D_j) $ converges to $ \kappa(z : {\mathcal{H}}) = -4 $, which is the curvature of the half-space $ {\mathcal{H}}$, uniformly on every compact subset of $ {\mathcal{H}}$.
Since $$c_{D_j}(z) = 2 \pi S_j(z, z)$$ and $ S_j(z, z) $ converges to $ S_{{\mathcal{H}}}(z, z) $ uniformly on compact subsets of $ {\mathcal{H}}$, we conclude that the sequence of Carathéodory metrics $c_{D_j}$ of the domains $D_j$ also converges to the Carathéodory metric $c_{{\mathcal{H}}}$ of the half-space $ {\mathcal{H}}$ uniformly on every compact subset of ${\mathcal{H}}$.
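For the unit disk, where both quantities are explicit, the identity $c_D(z) = 2\pi S(z, z)$ driving this argument can be sanity-checked numerically. The sketch below (the disk is used only as a model domain, not as one of the $D_j$) compares the Szegő kernel $S(z,\zeta) = 1/(2\pi(1 - z\bar\zeta))$ with the Carathéodory density $1/(1 - |z|^2)$:

```python
import numpy as np

def szego_disk(z, zeta):
    # Szego kernel of the unit disk: S(z, zeta) = 1 / (2*pi*(1 - z*conj(zeta)))
    return 1.0 / (2.0 * np.pi * (1.0 - z * np.conjugate(zeta)))

def caratheodory_disk(z):
    # Caratheodory density of the unit disk: c(z) = 1 / (1 - |z|^2)
    return 1.0 / (1.0 - abs(z) ** 2)

# the identity c(z) = 2*pi*S(z, z), checked at a few sample points
for z in [0.0, 0.3 + 0.4j, -0.5j, 0.9]:
    assert abs(2 * np.pi * szego_disk(z, z) - caratheodory_disk(z)) < 1e-12
```

The sample points are arbitrary interior points of the disk; the identity holds exactly, so the tolerance only absorbs floating-point error.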
To show that the derivatives of $ c_{D_j} $ converge to the corresponding derivatives of $ c_{{\mathcal{H}}} $, it is enough to show the convergence in a neighborhood of a point in ${\mathcal{H}}$.
Let $ D^2 = D^2((z_0, \zeta_0); (r_1, r_2)) $ be a bidisk around the point $ (z_0, \zeta_0) \in {\mathcal{H}}\times {\mathcal{H}}$ which is relatively compact in ${\mathcal{H}}\times {\mathcal{H}}$. Then $ D^2 $ is relatively compact in $D_j \times D_j$ for all large $j$. Let $C_1 = \{\xi_1 : |\xi_1 -z_0| = r_1\}$ and $C_2 = \{\xi_2 : |\xi_2 -\zeta_0| = r_2\}$. Since $S_j(z, \zeta)$ is holomorphic in $z$ and antiholomorphic in $\zeta$, the Cauchy integral yields $$\frac{\partial^{m +n}S_j(z_0, \zeta_0)}{\partial z^m\partial \bar{\zeta}^n } = \frac{m!n!}{(2\pi i)^2} \int_{C_1} \int_{C_2}\frac{S_j(\xi_1, \xi_2)}{(\xi_1 - z_0)^{m + 1}(\overline{\xi_2 - \zeta_0})^{n + 1}}\,\overline{d\xi_2}\, d\xi_1$$ and the standard length estimate shows that $$\left|\frac{\partial^{m +n}S_j(z_0, \zeta_0)}{\partial z^m\partial \bar{\zeta}^n }\right| \leq
\frac{m!n!}{r_1^m r_2^n}\sup_{(\xi_1, \xi_2) \in C_1 \times C_2} |S_j(\xi_1, \xi_2)|.$$ Applying the same estimate to the function $S_j - S_{{\mathcal{H}}}$, we get $$\left|\frac{\partial^{m +n}}{\partial z^m\partial \bar{\zeta}^n }( S_j - S_{{\mathcal{H}}} )(z_0, \zeta_0)\right|
\leq
\frac{m!n!}{r_1^m r_2^n}\sup_{(\xi_1, \xi_2) \in C_1 \times C_2} | S_j(\xi_1, \xi_2) - S_{{\mathcal{H}}}(\xi_1, \xi_2) |.$$ Since $S_j \to S_{\mathcal{H}}$ uniformly on $ \overline{D^2} $, and since this estimate holds with fixed polyradii at every point of a compact subset of $ {\mathcal{H}}\times {\mathcal{H}}$, all the partial derivatives of $S_j$ converge to the corresponding partial derivatives of $S_{\mathcal{H}}$ uniformly on every compact subset of ${\mathcal{H}}\times {\mathcal{H}}$; in particular, the same holds on the diagonal, i.e., for $ z \mapsto S_j(z, z) $.
Recall that the curvature of the Carathéodory metric $ c_{D_j} $ is given by $$\kappa(z : c_{D_j}) = -c_{D_j}(z)^{-2}\Delta \log c_{D_j}(z)$$ which upon simplification gives $$\kappa(z : c_{D_j}) = 4c_{D_j}^{-4}\left( \partial^{0\bar{1}}c_{D_j}\partial^{1\bar{0}}c_{D_j} - c_{D_j}\partial^{1\bar{1}}c_{D_j} \right)$$ where $ \partial^{i \bar j}c_{D_j} = \partial^{i + j}c_{D_j}\big/\partial z^i \partial \bar z^j $ for $ i, j = 0, 1 $. Since all the partial derivatives of $ c_{D_j} $ converge uniformly on compact subsets of $ {\mathcal{H}}$ to the corresponding partial derivatives of $ c_{{\mathcal{H}}} $, and since $ c_{{\mathcal{H}}} $ is bounded below away from zero on compact subsets of $ {\mathcal{H}}$, the expression $$4c_{D_j}^{-4}\left( \partial^{0\bar{1}}c_{D_j}\partial^{1\bar{0}}c_{D_j} - c_{D_j}\partial^{1\bar{1}}c_{D_j} \right)$$ converges to $$4c_{{\mathcal{H}}}^{-4}\left( \partial^{0\bar{1}}c_{{\mathcal{H}}}\partial^{1\bar{0}}c_{{\mathcal{H}}} - c_{{\mathcal{H}}}\partial^{1\bar{1}}c_{{\mathcal{H}}} \right)$$ as $ j \to \infty $, uniformly on compact subsets of $ {\mathcal{H}}$. Hence the sequence of curvatures $ \kappa(z : c_{D_j}) $ converges to $ \kappa(z : c_{{\mathcal{H}}}) = -4$ as $ j \to \infty $.
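Since the curvature computation above is routine but error-prone, it can be verified symbolically. The sketch below checks that $\kappa = -c^{-2}\Delta \log c$ equals $-4$ for the explicit density $c(z) = 1/(1 - |z|^2)$ of the unit disk, a model domain biholomorphic to ${\mathcal{H}}$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
c = 1 / (1 - x**2 - y**2)  # Caratheodory (= hyperbolic) density of the unit disk

# curvature of the metric c|dz|: kappa = -c^{-2} * Laplacian(log c)
lap_log_c = sp.diff(sp.log(c), x, 2) + sp.diff(sp.log(c), y, 2)
kappa = sp.simplify(-lap_log_c / c**2)

assert sp.simplify(kappa + 4) == 0  # constant curvature -4
```

The same symbolic computation with any smooth positive density in place of `c` reproduces the general formula used in the proof.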
Note that the higher order curvatures of the Carathéodory metrics of the domains $ D_j $ are given by $$\kappa_n(z : c_{D_j}) =c_{D_j}(z)^{-(n + 1)^2}\big/ J^{D_j}_n(z).$$ Appealing to the convergence of $c_{D_j}$ and its partial derivatives to $c_{{\mathcal{H}}}$ and its corresponding partial derivatives uniformly on every compact subset of $ {\mathcal{H}}$, we infer that $\kappa_n(z : c_{D_j})$ converges to $\kappa_n(z : c_{{\mathcal{H}}}) = - 4 \left(\prod_{k = 1}^n k!\right)^2$ uniformly on every compact subset of ${\mathcal{H}}$. The fact that $$\kappa_n(z : c_{{\mathcal{H}}}) = - 4 \left(\prod_{k = 1}^n k!\right)^2$$ follows from a direct calculation with the Carathéodory metric on $ {\mathcal{H}}$.
Proof of Theorem \[T:AumannCaratheodoryConstant\]
=================================================
Scale $ D $ near $ p $ as explained earlier along a sequence $ p_j \rightarrow p $. Let $ D_j = T_j(D) $ as before and let $ \Omega_j $ be the Aumann-Carathéodory constant of $ D_j $ and $ D' $. Then $ \Omega_j(0, w) = \Omega(p_j, w) $ for all $ j $. So it suffices to study the behaviour of $ \Omega_j(0, w) $ as $ j \rightarrow \infty $.
Let $ z \in D_j $ and let $ \Omega_{j}(z, w) $ be the Aumann-Carathéodory constant at $ z $ and $ w $. We shall show that $ \Omega_{j}(z, w) $ converges to $1$ as $ j \to \infty $. Let $ c_{D_j}$ and $ \lambda_{D_j}$ be the Carathéodory and hyperbolic metrics on $D_j$ respectively, for all $j\geq 1$. Using the inequality $$\Omega_{0} \leq \Omega \leq 1,$$ we have $$\label{Eq:RatioCaraHyper}
\frac{ c_{D_j}(z) }{ \lambda_{D_j}(z) }\leq \Omega_{j}(z, w) \leq 1.$$ By Proposition \[Pr:UniformConvergenceOfCaraParDerCara\], $c_{D_j}(z)$ converges to $c_{{\mathcal{H}}}(z)$ uniformly on every compact subset of ${\mathcal{H}}$ as $ j \to \infty $. Again, using the scaling technique, the hyperbolic metric $\lambda_{D_j}(z) $ converges to $ \lambda_{{\mathcal{H}}}(z)$ uniformly on every compact subset of ${\mathcal{H}}$ as $ j \to \infty $. For simply connected domains, in particular for the half-space $ {\mathcal{H}}$, the hyperbolic and Carathéodory metrics coincide, so $ c_{{\mathcal{H}}}(z) = \lambda_{{\mathcal{H}}}(z) $. Consequently, by (\[Eq:RatioCaraHyper\]), we conclude that $ \Omega_{j}(z, w) $ converges to $1$, as $ j \to \infty $, uniformly on every compact subset of $ {\mathcal{H}}$. In particular, we have $ \Omega_{j}(0, w) \to 1 $ as $ j \to \infty $. This completes the proof.
Let $D \subset {\mathbb{C}}$ be a domain and let $ p \in \partial D $ be a $ C^2 $-smooth boundary point. Then the Aumann-Carathéodory rigidity constant satisfies $$\Omega_D(z) \rightarrow 1$$ as $ z \rightarrow p $.
Proof of Theorem \[T:SugawaMetric\]
===================================
For a fixed $ p \in D $, let $ \phi_D(\zeta, p) $ be the extremal holomorphic differential for the metric $ q_D(z) \vert dz \vert $. Recall Lemma 2.2 from [@SugawaMetric], which describes how $ \phi_D(\zeta, p) $ is transformed by a biholomorphic map: if $ F : D \longrightarrow D' $ is a biholomorphic map, then $$\phi_{D'}(F(\zeta), F(p)) \left(F^{\prime}(\zeta)\right)^2\frac{\overline{F^{\prime}(p)}}{F^{\prime}(p)} = \phi_D(\zeta, p).$$
\[L:ExtremalFunctionSugawaMetricHalfSpace\] Let ${\mathcal{H}}= \{z \in {\mathbb{C}}: Re(\overline{\omega}z -1) < 0\}$ be a half-space, and let $\omega_0 \in {\mathcal{H}}$. Then the extremal function of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ of ${\mathcal{H}}$ at $\omega_0 \in {\mathcal{H}}$ is $$\phi_{{\mathcal{H}}}(z, \omega_0) = |\omega|^2 \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)^2}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^4}$$ for all $z \in {\mathcal{H}}$.
The extremal function for $q_{\mathbb{D}}$ at the point $z = 0$ is $$\label{Eq: HurAtZeroDisk}
\phi_{\mathbb{D}}(\zeta, 0) = 1$$ for all $\zeta \in \mathbb{D}$. Also, $$f(z) = \frac{|\omega|(z - \omega_0)}{2 - \omega\overline{\omega_0} - \overline{\omega}z}$$ is a Riemann map from $ {\mathcal{H}}$ onto the unit disk $ {\mathbb{D}}$ with $ f(\omega_0) = 0 $. Differentiating $ f $ gives $$f^{\prime}(z) = |\omega| \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^2}$$ for all $ z \in {\mathcal{H}}$; note that $ f^{\prime}(\omega_0) > 0 $, since $ 2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0 = 2 - 2\,{\rm Re}(\overline{\omega}\omega_0) > 0 $. Substituting $f^{\prime}(z)$ and $f^{\prime}(\omega_0)$ in the transformation formula, we get $$\label{Eq:TansForwithPhiD}
\phi_{{\mathcal{H}}}(z, \omega_0) = |\omega|^2 \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)^2}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^4}\phi_{{\mathbb{D}}}(f(z), f(\omega_0)).$$ Since $f(\omega_0) = 0$, (\[Eq: HurAtZeroDisk\]) gives $\phi_{{\mathbb{D}}}(f(z), f(\omega_0)) = 1$, and so from (\[Eq:TansForwithPhiD\]) $$\phi_{{\mathcal{H}}}(z, \omega_0) = |\omega|^2 \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)^2}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^4}$$ for all $z \in {\mathcal{H}}$.
\[L:ConvergenceOfIntegral\] Let the sequence of domains $ D_j $ converge to $ {\mathcal{H}}$ as before, and let $ z_j $ be a sequence in $ {\mathcal{H}}$ that converges to $ z_0 \in {\mathcal{H}}$. Let $ \phi_{{\mathcal{H}},j} $ be the extremal function of $ q_{{\mathcal{H}}}(z)\vert dz \vert $ at $ z_j $ for all $ j $ and let $ \phi_{{\mathcal{H}}} $ be the extremal function of $ q_{{\mathcal{H}}}(z)\vert dz \vert $ at $ z_0 $. Then $$\int_{D_j}|\phi_{{\mathcal{H}},j}| \to \int_{{\mathcal{H}}}\vert \phi_{{\mathcal{H}}}\vert = \pi$$ as $j \to \infty$.
By Lemma \[L:ExtremalFunctionSugawaMetricHalfSpace\], the extremal functions of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ of the half-space ${\mathcal{H}}$ at the points $z_j$ and $z_0$ are given by $$\phi_{{\mathcal{H}}, j}(z, z_j) = |\omega|^2 \frac{(2 - \omega\overline{z_j} - \overline{\omega}z_j)^2}{(2 - \omega\overline{z_j} - \overline{\omega}z)^4}$$ and $$\phi_{{\mathcal{H}}}(z, z_0) = |\omega|^2 \frac{(2 - \omega\overline{z_0} - \overline{\omega}z_0)^2}{(2 - \omega\overline{z_0} - \overline{\omega}z)^4}$$ respectively, for all $z \in {\mathcal{H}}$ and for all $j \geq 1$. Substituting $Z_j = \frac{2 - \omega\overline{z_j}}{\overline{\omega}}$ and $Z_0 = \frac{2 - \omega\overline{z_0}}{\overline{\omega}}$, we can rewrite these as $$\phi_{{\mathcal{H}}, j}(z,z_j) = \frac{|\omega|^2}{\overline{\omega}^4}\frac{(2 - \omega\overline{z_j} - \overline{\omega}z_j )^2}{(Z_j - z)^4}$$ and $${\phi_{\mathcal{H}}}(z,z_0) = \frac{|\omega|^2}{\overline{\omega}^4}\frac{(2 - \omega\overline{z_0} - \overline{\omega}z_0 )^2}{(Z_0 - z)^4}$$ respectively, for all $z \in {\mathcal{H}}$ and for all $j \geq 1$.
Note that $ Z_0 \notin {\mathcal{H}}$ and hence $ Z_j \notin {\mathcal{H}}$ for $ j $ large. Define $$\phi_j(z) = {\chi_{D_j}}(z)|\phi_{{\mathcal{H}},j}(z, z_j)|$$ and $$\phi(z) = {\chi_{\mathcal{H}}}(z) |{\phi_{\mathcal{H}}}(z, z_0)|.$$ Here $\chi_{A}$ denotes the characteristic function of a set $A \subset {\mathbb{C}}$. Note that $\phi_j$ and $ \phi $ are measurable functions on $ {\mathbb{C}}$ and $ \phi_j \to \phi $ pointwise almost everywhere on ${\mathbb{C}}$.
Next, we shall show that there exists a measurable function $g$ on ${\mathbb{C}}$ satisfying $$\vert \phi_j \vert \leq g$$ for all $j\geq 1$ and $$\int_{{\mathbb{C}}}|g| < \infty.$$
Since $ Z_j \to Z_0 \notin {\mathcal{H}}$ and $ D_j \to {\mathcal{H}}$ in the Hausdorff sense, there exists $ R > 0 $ such that $ Z_j \in B(Z_0, R\big/2) $ and $ \overline{B}(Z_0, R) \subset {\mathbb{C}}\setminus D_j $ for all $ j $ large; for simplicity, assume that this holds for all $ j \geq 1 $. First, we note that for $ z \in {\mathbb{C}}\setminus B(Z_0, R) $, $$|Z_j - z| \geq |Z_0 - z| - |Z_0 - Z_j| \geq \frac{R}{2}$$ for all $j \geq 1$. Again, by the triangle inequality, we have $$\left|\frac{|Z_0 - z|}{|Z_j - z|} - 1 \right| \leq \frac{|Z_0 - Z_j|}{|Z_j - z|} \leq 1$$ for all $j \geq 1$ and for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$. Therefore $$\frac{1}{|Z_j - z|} \leq \frac{2}{|Z_0 - z|}$$ for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$ and for all $j \geq 1$, and this implies that $$\frac{1}{|\omega|^2} \frac{|2 - \omega\overline{z_j} - \overline{\omega}z_j |^2}{|Z_j - z|^4} \leq \frac{16}{|\omega|^2}\frac{|2 - \omega\overline{z_j} - \overline{\omega}z_j |^2}{|Z_0 - z|^4}$$ for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$ and for all $j \geq 1$. Note that there exists $ K > 0 $ such that $$|2 - \omega\overline{z_j} - \overline{\omega}z_j |^2 \leq K$$ for all $j \geq 1$, since $ \{z_j \} $ converges. Hence $$|\phi_j(z)| \leq \frac{16 K}{|\omega|^2}\frac{1}{|Z_0 - z|^4}$$ for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$ and for all $j \geq 1$. Since $ \phi_j $ vanishes on $ \overline{B}(Z_0, R) $ (as $ D_j \cap \overline{B}(Z_0, R) = \emptyset $), if we set $$g(z) = \begin{cases} \frac{16 K}{|\omega|^2}\frac{1}{|Z_0 - z|^4}, &\mbox{if} \,\, z \in {\mathbb{C}}\setminus B(Z_0, R)\\
0 & \mbox{if}\,\, z \in {\bar{B}(Z_0, R)}\end{cases}$$ we get that $$|\phi_j(z)| \leq g(z)$$ for all $z \in {\mathbb{C}}$ and for all $j \geq 1$.
Now note that $$\begin{split}
\int_{{\mathbb{C}}}g &= \int_{{\mathbb{C}}\setminus B(Z_0, R)} \frac{16K}{|\omega|^2}\frac{1}{|Z_0 - z|^4}
= \frac{16K}{|\omega|^2}\int_{R}^{\infty} \int_0^{2\pi} \frac{1}{r^3}\,d\theta\, dr \\
&= \frac{16K}{|\omega|^2} \frac{\pi}{R^2} < \infty.
\end{split}$$ The dominated convergence theorem then shows that $$\int_{{\mathbb{C}}}\phi_j
\to
\int_{{\mathbb{C}}}\phi$$ as $ j \to \infty $. However, by construction, $$\int_{{\mathbb{C}}}\phi_j = \int_{D_j}\vert\phi_{{\mathcal{H}}, j}(z,z_j)\vert \quad \text{and} \quad \int_{{\mathbb{C}}}\phi = \int_{{\mathcal{H}}}\vert\phi_{{\mathcal{H}}}(z, z_0)\vert,$$ while, since $ \phi_{{\mathcal{H}}} $ is the extremal function at $ z_0 $, $$\int_{{\mathcal{H}}}\vert\phi_{{\mathcal{H}}}(z, z_0)\vert = \pi.$$ This completes the proof.
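The normalization $\int_{{\mathcal{H}}}|\phi_{{\mathcal{H}}}| = \pi$ can also be confirmed numerically in a concrete case. Taking $\omega = 1$ (so ${\mathcal{H}}= \{ {\rm Re}\, z < 1 \}$) and $z_0 = 0$ gives $\phi_{{\mathcal{H}}}(z, 0) = 4/(2 - z)^4$, and a direct two-dimensional quadrature recovers $\pi$:

```python
import numpy as np
from scipy.integrate import dblquad

# |phi_H(z, 0)| = 4 / |2 - z|^4, z = x + i*y, integrated over the half-plane Re z < 1
val, err = dblquad(
    lambda y, x: 4.0 / ((2.0 - x) ** 2 + y ** 2) ** 2,
    -np.inf, 1.0,            # x ranges over (-inf, 1)
    lambda x: -np.inf,       # y ranges over the whole real line
    lambda x: np.inf,
)
assert abs(val - np.pi) < 1e-4
```

The exact value follows from $\int_{-\infty}^{\infty} dy/(a^2 + y^2)^2 = \pi/(2a^3)$ followed by a one-dimensional integral in $x$; the quadrature only cross-checks it.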
\[Pr:ConvergenceOfTheSugawaMetricUnderScaling\] Let $\{D_j\}$ be the sequence of domains that converges to the half-space ${\mathcal{H}}$ as in Proposition \[Prop:ConvergenceAhlforsSzegoGarabeidian\]. Then $q_{D_j}$ converges to $q_{{\mathcal{H}}}$ uniformly on compact subsets of $ {\mathcal{H}}$.
If possible, assume that $q_{D_j}$ does not converge to $q_{{\mathcal{H}}}$ uniformly on every compact subset of ${\mathcal{H}}$. Then there exist a compact subset $K$ of ${\mathcal{H}}$ – without loss of generality, we may assume that $K \subset D_j$ for all $j \geq 1$ – an $\epsilon_{0} >0 $, a sequence of integers $\{ k_j \}$ and points $\{ z_{k_j}\}\subset K$ such that $$|q_{D_{k_j}}(z_{k_j})- q_{\mathcal{H}}(z_{k_j})|>\epsilon_0.$$ Since $K$ is compact, we may assume that $z_{k_j}$ converges to a point $z_0 \in K$. Using the continuity of $ q_{{\mathcal{H}}}(z) \vert dz \vert $, we have $$|q_{\mathcal{H}}(z_{k_j})- q_{\mathcal{H}}(z_0)|< \epsilon_0\big/2$$ for $j$ large, which implies $$|q_{D_{k_j}}(z_{k_j})- q_{\mathcal{H}}(z_0)|> \epsilon_0\big/2$$ by the triangle inequality. There exist extremal functions $\phi_{k_j} \in A(D_{k_j})$ with $ \Vert \phi_{k_j} \Vert_1 \leq \pi $ such that $$q_{D_{k_j}}^2(z_{k_j}) = \vert\phi_{k_j}(z_{k_j})\vert.$$ We claim that the collection $\{\phi_{k_j}\}$ is a normal family. To show this, it is enough to show that $\{\phi_{k_j}\}$ is locally uniformly bounded.
Let $\zeta_0 \in {\mathcal{H}}$ and $R>0$ be such that $\overline{B}(\zeta_0, 2R) \subset {\mathcal{H}}$. Then $\overline{B}(\zeta_0, 2R) \subset D_{k_j}$ for all $j$ large. By the mean value property, $$|\phi_{k_j}(\zeta)| = \frac{1}{2\pi} \left|\int_{0}^{2\pi}\phi_{k_j}(\zeta + re^{i\theta})d\theta\right|
\leq \frac{1}{2\pi} \int_{0}^{2\pi}|\phi_{k_j}(\zeta + re^{i\theta})|d\theta$$ for $ 0 < r < R $. Multiplying by $ r $ and integrating from $ 0 $ to $ R $ gives $$\frac{R^2}{2}|\phi_{k_j}(\zeta)| \leq \frac{1}{2\pi} \int_0^R\int_{0}^{2\pi}|\phi_{k_j}(\zeta + re^{i\theta})|\,r\, d\theta\, dr
= \frac{1}{2\pi}\iint_{B(\zeta, R)}|\phi_{k_j}(z)|,$$ that is, $$|\phi_{k_j}(\zeta)|
\leq \frac{1}{\pi R^2} \iint_{B(\zeta, R)}|\phi_{k_j}(z)|
\leq \frac{1}{\pi R^2} \iint_{D_{k_j}}|\phi_{k_j}(z)|
\leq \frac{\pi}{\pi R^2} = \frac{1}{R^2}$$ for all $\zeta \in B(\zeta_0, R)$, since $ B(\zeta, R) \subset B(\zeta_0, 2R) \subset D_{k_j} $. This proves that the family $\{\phi_{k_j}\}$ is locally uniformly bounded and hence a normal family.
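The key step above is the sub-mean-value estimate $|\phi(\zeta)| \leq (\pi R^2)^{-1}\int_{B(\zeta, R)}|\phi|$ for holomorphic $\phi$. A quick numerical sanity check of this area-mean inequality, with the sample function $\phi(z) = e^z$ chosen only for illustration, is:

```python
import numpy as np
from scipy.integrate import dblquad

# area average of |phi| over B(0, R) for the sample function phi(z) = exp(z)
R = 1.0
integral, _ = dblquad(
    lambda th, r: abs(np.exp(r * np.exp(1j * th))) * r,  # |phi| in polar coordinates
    0.0, R,                  # r ranges over (0, R)
    lambda r: 0.0,           # theta ranges over (0, 2*pi)
    lambda r: 2 * np.pi,
)
avg = integral / (np.pi * R ** 2)
assert avg >= abs(np.exp(0))  # area mean dominates |phi(0)| = 1
```

This is just subharmonicity of $|\phi|$; in the proof it is combined with the bound $\Vert \phi_{k_j} \Vert_1 \leq \pi$ to get the uniform bound $1/R^2$.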
Next, we shall show that $$\limsup_j q_{D_{k_j}}(z_{k_j}) \leq q_{\mathcal{H}}(z_0).$$ First, we note that the uniform bound $\int_{D_{k_j}}|\phi_{k_j}| \leq \pi$ implies that no limiting function of the sequence $\{\phi_{k_j}\}$ diverges to infinity uniformly on any compact subset of ${\mathcal{H}}$.
Now, by definition, there exists a subsequence $\{\phi_{l_{k_j}}\}$ of the sequence $\{\phi_{k_j}\}$ such that $|\phi_{l_{k_j}}(z_{l_{k_j}})|$ converges to $\limsup_{j}|\phi_{k_j}(z_{k_j})|$ as $ j \to \infty$. Let $\phi$ be a limiting function of $\{\phi_{l_{k_j}}\}$. For simplicity, we may assume that $\{\phi_{l_{k_j}}\}$ converges to $\phi$ uniformly on every compact subset of ${\mathcal{H}}$, so that $|\phi_{l_{k_j}}(z_{l_{k_j}})| \to |\phi(z_0)|$. We claim that $\phi \in A({\mathcal{H}})$ with $ \Vert \phi \Vert_1 \leq \pi $. That is, we need to show $$\int_{{\mathcal{H}}}|\phi| \leq \pi.$$ To show this, we take an arbitrary compact subset $K$ of ${\mathcal{H}}$. Then $K \subset D_{l_{k_j}}$ and, by the triangle inequality, $$\int_{K}|\phi| \leq \int_{K} |\phi - \phi_{l_{k_j}}| + \int_{K}|\phi_{l_{k_j}}| \leq \int_{K} |\phi - \phi_{l_{k_j}}| + \int_{D_{l_{k_j}}}|\phi_{l_{k_j}}|$$ for $j$ large. Since $\int_{D_{l_{k_j}}}|\phi_{l_{k_j}}| \leq \pi$ and $\{\phi_{l_{k_j}}\}$ converges to $\phi$ uniformly on $K$, the term $\int_{K} |\phi - \phi_{l_{k_j}}|$ converges to $0$ as $ j \to \infty$, so $\int_K|\phi| \leq \pi$. Taking the supremum over all compact subsets $ K \subset {\mathcal{H}}$ gives $$\int_{{\mathcal{H}}}|\phi| \leq \pi.$$ This shows that $\phi$ is a candidate in the family that defines $ q_{{\mathcal{H}}}(z) $.
By the definition of $ q_{{\mathcal{H}}}(z) \vert dz \vert $, we have $$|\phi(z_0)| \leq q_{\mathcal{H}}^2(z_0),$$ which implies $$\limsup_j|\phi_{k_j}(z_{k_j})| \leq q_{\mathcal{H}}^2(z_0).$$ Now, since $q_{D_{k_j}}^2(z_{k_j}) = |\phi_{k_j}(z_{k_j})|$, we obtain $$\limsup_j q_{D_{k_j}}^2(z_{k_j}) \leq q_{\mathcal{H}}^2(z_0)$$ or $$\label{Eqn: Eqsup}
\limsup_j q_{D_{k_j}}(z_{k_j}) \leq q_{\mathcal{H}}(z_0).$$
Next, we show that $$q_{\mathcal{H}}(z_0) \leq \liminf_j q_{D_{k_j}}(z_{k_j}),$$ and this will lead to a contradiction to our assumption.
As we have seen in Lemma \[L:ExtremalFunctionSugawaMetricHalfSpace\], the extremal functions of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ at the points $z_j$ and $z_0$ of the half-space ${\mathcal{H}}$ are given by rational functions with poles at $Z_j = (2 - \omega\overline{z_j})\big/\overline{\omega}$ and $Z_0 = (2 - \omega\overline{z_0})\big/\overline{\omega}$ respectively; to ease notation, we write $ z_j $ for $ z_{k_j} $ and $ D_j $ for $ D_{k_j} $ in what follows. The Hausdorff convergence of $ D_j $ to $ {\mathcal{H}}$ as $ j \to \infty $ implies that there exists $R>0$ such that $Z_j \in B(Z_0, R\big/2)$ and $\overline{B}(Z_0, R) \subset {\mathbb{C}}\setminus \overline{D_j}$ for $j$ large. This ensures that $$\psi_j(z) = \pi \phi_j(z)\big / M_j$$ is a well-defined holomorphic function on $ D_j $, where $ \phi_j $ is the extremal function of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ at $ z_j $ and $$M_j = \int_{D_j} |\phi_j|.$$ By Lemma \[L:ConvergenceOfIntegral\], $ M_j \to \pi $; in particular, $ M_j $ is finite and positive for $j$ large. From the definition of the function $\psi_j$ it follows that $\int_{D_j} |\psi_j| = \pi$ for all such $j$. Therefore $ \psi_j \in A(D_j) $ with $ \Vert \psi_j \Vert_1 \leq \pi $, so $ \psi_j $ is a competitor in the extremal problem defining $ q_{D_j} $, and hence $$\pi \vert\phi_{j}(z_j)\vert\big/ M_j = \vert\psi_j(z_j)\vert \leq q_{D_j}^2(z_j).$$
Since $M_j$ converges to $\pi$ as $ j \to \infty$ by Lemma \[L:ConvergenceOfIntegral\], and since, by the explicit formulae for the extremal functions of $ {\mathcal{H}}$, $\phi_j(z_j)$ converges to $\phi(z_0)$, the value of the extremal function of the half-space at the point $ z_0 $, taking $\liminf$ on both sides gives $$q_{\mathcal{H}}^2(z_0) \leq \liminf_j q_{D_j}^2(z_j).$$ That is, $$\label{Eqn: Eqinf}
q_{\mathcal{H}}(z_0) \leq \liminf_j q_{D_j}(z_j).$$
By (\[Eqn: Eqsup\]) and (\[Eqn: Eqinf\]), we conclude that $$\lim_{j \to \infty} q_{D_{k_j}}(z_{k_j}) = q_{\mathcal{H}}(z_0),$$ which contradicts $$|q_{D_{k_j}}(z_{k_j})- q_{\mathcal{H}}(z_0)|> \epsilon_0\big /2.$$
\[C:AsymptoticqD\] Let $ D \subset {\mathbb{C}}$ be a bounded domain. Suppose $ p \in \partial D $ is a $ C^2 $-smooth boundary point. Then $$\lim_{z \to p}q_D(z)(-\psi(z)) = q_{{\mathcal{H}}}(0) = \vert \omega \vert \big/2.$$
Let $ D_j = T_j(D) $ where $ T_j(z) = (z - p_j)\big/(-\psi(p_j)) $. By Proposition \[Pr:ConvergenceOfTheSugawaMetricUnderScaling\], $ q_{D_j} \to q_{{\mathcal{H}}} $ uniformly on compact subsets of $ {\mathcal{H}}$. In particular, $q_{D_j}(0) \to q_{{\mathcal{H}}}(0) $ as $ j \to \infty $. But $ q_{D_j}(0) = q_{D}(p_j) (-\psi(p_j)) $ and hence $ q_{D}(p_j) (-\psi(p_j)) \to q_{{\mathcal{H}}}(0) $. Since $ \{p_j\} $ is an arbitrary sequence converging to $ p $, it follows that $$\lim_{z \to p}q_D(z)(-\psi(z)) = q_{{\mathcal{H}}}(0) = \vert \omega \vert \big / 2.$$
As a consequence of Corollary \[C:AsymptoticqD\], we have $$q_D(z) \approx 1\big /{\rm dist}(z, \partial D)$$ near $ p $.
Proof of Theorem \[T:TheHurwitzMetric\]
=======================================
The continuity of $ \eta_D $ is a consequence of the following observation.
\[T:contHurM\] Let $D \subset \mathbb{C}$ be a domain. Fix $a\in D$ and let $\{a_n\}$ be a sequence in $ D$ converging to $a$. Let $G_n,\, G$ be the normalized Hurwitz coverings at $a_n$ and $a$ respectively. Then $G_n$ converges to $G$ uniformly on compact subsets of $\mathbb{D}$.
The proof of this requires the following lemmas.
\[L:PiPosCon\] Let $ y_0 \in {\mathbb{D}}^* $ and $ \{y_n\} $ a sequence in $ {\mathbb{D}}^* $ converging to $ y_0 $. Let $\pi_n : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ and $\pi_0 : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ be the unique normalized coverings satisfying $\pi_n(0) = y_n$, $\pi_0(0) = y_0$ and $\pi^{\prime}_n(0) > 0$, $\pi^{\prime}_0(0) > 0$. Then $\{\pi_n \}$ converges to $\pi_0$ uniformly on compact subsets of $ {\mathbb{D}}$.
Note that the hyperbolic density $\lambda_{\mathbb{D}^{*}}$ is continuous, and hence $\lambda_{\mathbb{D}^{*}}(y_n) \to \lambda_{\mathbb{D}^{*}}(y_0)$. Also $\lambda_{\mathbb{D}^{*}} > 0 $ and it satisfies $$\lambda_{\mathbb{D}^{*}}(y_n) = 1 \big/ \pi^{\prime}_n(0), \quad \lambda_{\mathbb{D}^{*}}(y_0) = 1\big/\pi^{\prime}_0(0).$$ The family $\{\pi_n\}$ is normal; let $ \pi_{\infty}$ be one of its limit points. Then $ \pi_{\infty} : {\mathbb{D}}\longrightarrow \overline{{\mathbb{D}}}$ and $ \pi_{\infty } (0) =y_0 \in {\mathbb{D}}^* $. If the image $ \pi_{\infty}({\mathbb{D}}) $ intersects $ \partial {\mathbb{D}}$, then by the maximum modulus principle $ \pi_{\infty} $ must be constant, i.e., $ \pi_{\infty}(z) \equiv e^{i \theta_0} $ for some $ \theta_0 $. This cannot happen since $ \pi_{\infty}(0) = y_0 \in {\mathbb{D}}^* $. If the image $ \pi_{\infty}({\mathbb{D}}) $ contains the origin, then Hurwitz's theorem shows that the image of $ \pi_n $ must also contain the origin for $ n $ large. Again this is not possible since $ \pi_n({\mathbb{D}}) = {\mathbb{D}}^* $. It follows that $ \pi_{\infty} : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ with $ \pi_{\infty}(0) = y_0 $.
Let $ \tilde\pi_{\infty} : {\mathbb{D}}\longrightarrow {\mathbb{D}}$ be the lift of $ \pi_{\infty} $ under $ \pi_0 $ such that $ \tilde\pi_{\infty}(0) = 0 $. By differentiating the identity $$\pi_0 \circ \tilde\pi_{\infty} = \pi_{\infty},$$ we obtain $$\pi_0^{\prime}(\tilde\pi_{\infty}(z)) \tilde\pi_{\infty}^{\prime}(z) = \pi_{\infty}^{\prime}(z)$$ and evaluating at $ z = 0 $ gives $$\pi_0^{\prime}(0) \tilde \pi_{\infty}^{\prime}(0) = \pi_{\infty}^{\prime}(0).$$ Now observe that $ \pi_n^{\prime}(0) \to \pi_{\infty}^{\prime}(0) $ and, since $1 \big/ \pi_{n}^{\prime} (0) = \lambda_{\mathbb{D}^{*}}(y_n) \to \lambda_{\mathbb{D}^{*}}(y_0) $, it follows that $ \pi_{\infty}^{\prime}(0) = 1 \big/ \lambda_{\mathbb{D}^{*}}(y_0) = \pi_0^{\prime}(0) > 0 $. Therefore $ \tilde \pi_{\infty}^{\prime}(0) = 1 $ and hence $ \tilde \pi_{\infty}(z) = z $ by the Schwarz lemma. As a result, $ \pi_{\infty} = \pi_0 $ on $ {\mathbb{D}}$. It follows that $ \{ \pi_n \} $ has a unique limit point, namely $ \pi_0 $, and hence converges to $ \pi_0 $ uniformly on compact subsets of $ {\mathbb{D}}$.
\[L:UniformConvergenceOfPunCoveringMpas\] Let $\pi_n : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ be a sequence of holomorphic coverings. Let $\pi$ be a non-constant limit point of the family $\{\pi_n\}$. Then $\pi : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ is a covering.
Suppose that $ \pi_{n_k} \to \pi $ uniformly on compact subsets of $ {\mathbb{D}}$. As in the previous lemma, $ \pi : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ since $ \pi $ is assumed to be non-constant. In particular, $ \pi_{n_k}(0) \to \pi(0) \in {\mathbb{D}}^*$. Let $$\phi_k(z) = e^{i \theta_k}z$$ where $ \theta_k = -{\rm Arg}(\pi_{n_k}^{\prime}(0)) $, and consider the compositions $$\tilde \pi_{n_k} = \pi_{n_k} \circ \phi_k,$$ which are holomorphic coverings of $ {\mathbb{D}}^* $ satisfying $ \tilde \pi_{n_k}^{\prime}(0) > 0$ and $ \tilde \pi_{n_k}(0) = \pi_{n_k}(0) \to \pi(0) \in {\mathbb{D}}^* $. By Lemma \[L:PiPosCon\], $ \tilde \pi_{n_k} \to \tilde \pi $, where $ \tilde \pi : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ is a holomorphic covering with $ \tilde \pi(0) = \pi(0) $ and $ \tilde \pi^{\prime}(0) > 0 $. By passing to a further subsequence, $ \phi_k(z) \to \phi(z) $ where $ \phi(z) = e^{i\theta_0}z $ for some $ \theta_0 $. As a result, $$\tilde \pi = \pi \circ \phi,$$ and since $ \phi $ is a rotation of $ {\mathbb{D}}$, $ \pi = \tilde \pi \circ \phi^{-1} : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ is a holomorphic covering.
\[L:HyperbolicCoveringConvergenc\] Let $D \subset {\mathbb{C}}$ be a bounded domain. Fix $ a \in D $ and let $\{a_n\}$ be a sequence in $ D $ converging to $a$. Set $ D_n = D\setminus \{a_n\}$ and $ D_0 = D\setminus \{a\} $. Fix a base point $p \in D \setminus \{a, a_1, a_2, \dots \}$. Let $\pi_n : \mathbb{D} \longrightarrow D_n $ and $\pi_0 : \mathbb{D} \longrightarrow D_0 $ be the unique normalized coverings such that $\pi_n(0) = \pi_0(0) = p $ and $\pi^{\prime}_n(0), \, \pi^{\prime}_0(0) > 0$. Then $\pi_n \to \pi_{0}$ uniformly on compact subsets of $\mathbb{D}$.
Move $ p $ to $ \infty $ by $ T(z) = 1\big/(z - p) $ and let ${\tilde D_n}= T(D_n)$, $\tilde D_0 = T(D_0)$. Then $${\tilde \pi_n}= T \circ {\pi_n}: {\mathbb{D}}\longrightarrow {\tilde D_n}$$ and $${\tilde \pi_0}= T \circ \pi_0 : {\mathbb{D}}\longrightarrow \tilde D_0$$ are coverings that satisfy $$\lim_{z \to 0} z{\tilde \pi_n}(z) = \lim_{z \to 0} \frac{z}{{\pi_n}(z) - {\pi_n}(0)} = \frac{1}{{\pi_n}^{\prime}(0)} > 0$$ and $$\lim_{z \to 0} z{\tilde \pi_0}(z) = \lim_{z \to 0} \frac{z}{\pi_0(z) - \pi_0(0)} = \frac{1}{\pi_0^{\prime}(0)} > 0.$$ It is evident that the domains $ {\tilde D_n}$ converge to $ \tilde D_0 $ in the Carathéodory kernel sense, and Hejhal's result [@Hejhal] shows that $${\tilde \pi}_n \to \tilde \pi_0$$ uniformly on compact subsets of $ {\mathbb{D}}$. Since $ \pi_n = T^{-1} \circ \tilde \pi_n $ and $ \pi_0 = T^{-1} \circ \tilde \pi_0 $, it follows that $ \pi_n \to \pi_0 $ uniformly on compact subsets of $ {\mathbb{D}}$. This completes the proof.
Fix a point $a \in D$ and let $ a_n \to a $. Let $ G_n, \, G$ be the normalized Hurwitz coverings at $ a_n$ and $a $ respectively. Since $ D $ is bounded, $ \{ G_n \} $ is a normal family. After passing to a subsequence, assume that $ G_n \to \tilde G $ uniformly on compact subsets of $ {\mathbb{D}}$.
By Theorem 6.4 of [@TheHurwitzMetric],
$$\label{Eq:bilipchitzCondition}
1\big/8 \delta_D(z) \leq \eta_D(z) \leq 2\big/ \delta_D(z)$$
where $ \delta_D(z) = {\rm dist}(z, \partial D) $. By definition, $ \eta_D(a_n) = 1\big/G_n^{\prime}(0) $, which gives $$\delta_D(a_n)\big/2 \leq G_n^{\prime}(0) \leq 8 \delta_D(a_n)$$
for all $ n $ and hence $$\delta_D(a)\big/2 \leq \tilde G^{\prime}(0) \leq 8 \delta_D(a).$$ Since $ \delta_D(a_n),\, \delta_D(a) $ have uniform positive lower and upper bounds, it follows that $ G_n^{\prime}(0) $ and $ \tilde G^{\prime}(0) $ admit uniform positive lower and upper bounds as well. We will now use the following fact, which is a consequence of the inverse function theorem.
[***Claim***]{}: Let $ f : \Omega \longrightarrow {\mathbb{C}}$ be holomorphic and suppose that $ f^{\prime}(z_0) \neq 0 $ for some $ z_0 \in \Omega $. Then there exists $ \delta > 0 $ such that $ f : B(z_0, \delta) \longrightarrow f(B(z_0, \delta)) $ is biholomorphic and $ B(f(z_0), \delta \vert f^{\prime}(z_0) \vert \big/2) \subset f(B(z_0, \delta))$.
To indicate a short proof of this claim, let $ \delta > 0 $ be such that $$\label{Eq:DerIneq}
\vert f^{\prime}(z) - f^{\prime}(z_0) \vert < \vert f^{\prime}(z_0) \vert\big/2$$ for all $ \vert z - z_0 \vert < \delta $. Then $ g(z) = f(z) - f^{\prime}(z_0)z $ satisfies $$\vert g^{\prime}(z) \vert \leq \vert f^{\prime}(z_0) \vert\big/2$$ on $ B(z_0, \delta) $ and hence $$\label{Eq:LipCon}
\vert f(z_2) - f(z_1) - f^{\prime}(z_0)(z_2 - z_1) \vert =
\vert g(z_2) - g(z_1) \vert \leq \vert f^{\prime}(z_0)\vert \vert z_2 - z_1 \vert \big/2$$ for $ z_1, z_2 \in B(z_0, \delta) $. This shows that $ f $ is injective on $ B(z_0, \delta) $. Finally, if $ w \in B(f(z_0), \delta \vert f^{\prime}(z_0)\vert \big/2) $, then $$z_k = z_{k - 1} - \frac{f(z_{k - 1}) - w}{f^{\prime}(z_0)}$$ (with $ z_0 $ as the initial point) defines a Cauchy sequence which is compactly contained in $ B(z_0, \delta) $. It converges to $ \tilde z \in B(z_0, \delta) $ such that $ f(\tilde z) = w $. This shows that $ B(f(z_0), \delta \vert f^{\prime}(z_0) \vert \big/2) \subset f(B(z_0, \delta))$.
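The iteration in the claim is a Newton-type scheme with the derivative frozen at $z_0$; a minimal numerical sketch, with a hypothetical map $f(z) = z + 0.1z^2$ chosen only for illustration, is:

```python
def solve_local(f, fprime_z0, z0, w, tol=1e-13, max_iter=500):
    # modified Newton iteration with the derivative frozen at z0:
    #   z_k = z_{k-1} - (f(z_{k-1}) - w) / f'(z0)
    z = z0
    for _ in range(max_iter):
        z_next = z - (f(z) - w) / fprime_z0
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# hypothetical sample map (not from the text): f(z) = z + 0.1 z^2, z0 = 0, f'(0) = 1
f = lambda z: z + 0.1 * z * z
w = 0.05 + 0.02j                       # target value close to f(0)
z_tilde = solve_local(f, 1.0, 0.0, w)
assert abs(f(z_tilde) - w) < 1e-10
```

The Lipschitz estimate (\[Eq:LipCon\]) is exactly what makes the iteration map a contraction, which is why the sequence stays in the ball and converges.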
Let $ m, \delta > 0 $ be such that $$\vert \tilde G^{\prime}(z) - \tilde G^{\prime}(0) \vert < m \big/2 < m \leq \frac{\vert \tilde G^{\prime}(0) \vert}{2}$$ for $ \vert z \vert < \delta $. Since $ G_n $ converges to $ \tilde G $ uniformly on compact subsets of $ {\mathbb{D}}$, $$\vert G_n^{\prime}(0) \vert\big/2 \geq \vert \tilde G^{\prime}(0) \vert\big/2 - \vert G_n^{\prime}(0) - \tilde G^{\prime}(0) \vert\big/2 \geq m - \tau$$ for $ n $ large, where $ 0 < \tau < m $. On the other hand, for any $ \epsilon > 0 $, $$\vert G_n^{\prime}(z) - G_n^{\prime}(0) \vert
\leq
\vert \tilde G^{\prime}(z) - \tilde G^{\prime}(0) \vert
+ \vert G_n^{\prime}(z) - \tilde G^{\prime}(z) \vert
+ \vert G_n^{\prime}(0) - \tilde G^{\prime}(0) \vert \leq m \big/2 + \epsilon + \epsilon$$ for $ \vert z \vert < \delta $ and $ n $ large enough. Therefore, if $ 2 \epsilon + \tau < m\big/2 $, then $$\left\vert G_n^{\prime}(z) - G_n^{\prime}(0) \right \vert \leq m \big/2 + 2 \epsilon < m - \tau \leq \vert G_n^{\prime}(0) \vert \big/2$$ for $ \vert z \vert < \delta $ and $ n $ large enough. It follows from the claim that there is a ball of uniform radius, say $ \eta > 0 $, around $ G_n(0) = a_n $ which is contained in the image $ G_n(B(0, \delta)) $ for all $ n $ large. Since $ a_n \to a $, we may choose a point $ p \in B(a, \eta\big/2) \setminus \{a, a_1, a_2, \ldots\} $; then $ p \in G_n(B(0, \delta)) $ for all $ n $ large, and $ p $ will serve as a base point in the following way. Let $ \pi_n : {\mathbb{D}}\longrightarrow D_n = D \setminus \{a_n \} $ be holomorphic coverings such that $ \pi_n(0) = p $. Then there exist holomorphic coverings $ \tilde \pi_n : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ such that $$\begin{tikzcd}
\mathbb{D} \arrow{r}{\tilde \pi_n} \arrow[swap]{dr}{\pi_n} & \mathbb{D} ^{*} \arrow{d}{G_n} \\
& D_n
\end{tikzcd}$$ commutes, i.e., $ G_n \circ \tilde \pi_n = \pi_n $. By locally inverting $ G_n $ on $ B(0, \delta) $, $$\tilde \pi_n(0) = G_n^{-1} \circ \pi_n(0) = G_n^{-1}(p).$$ The family $ \{\tilde \pi_n\} $ is normal and admits a convergent subsequence. Let $ \tilde \pi_0 $ be a limit of $ \tilde \pi_{k_n} $. The image $ \tilde \pi_0({\mathbb{D}}) $ cannot intersect $ \partial {\mathbb{D}}$, as otherwise $ \tilde \pi_0(z) \equiv e^{i \theta_0} $ for some $ \theta_0 $, which contradicts the fact that the points $ \tilde \pi_{k_n}(0) = G_{k_n}^{-1}(p)$ lie in a compact subset of $ {\mathbb{D}}$. If $ \tilde \pi_0({\mathbb{D}}) $ were to contain the origin, then $ \tilde \pi_0(z) \equiv 0 $, as otherwise $ \tilde \pi_{k_n}({\mathbb{D}}) $ would also contain the origin by Hurwitz's theorem; and $ \tilde \pi_0 \equiv 0 $ is ruled out since, by the Lipschitz estimate (\[Eq:LipCon\]) applied to $ G_{k_n} $, the points $ G_{k_n}^{-1}(p) $ stay away from the origin. The conclusion of all this is that $ \tilde \pi_0 : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ is non-constant and hence a covering by Lemma \[L:UniformConvergenceOfPunCoveringMpas\]. By Lemma \[L:HyperbolicCoveringConvergenc\], $ \pi_n \to \pi_0 $ where $ \pi_0 : {\mathbb{D}}\longrightarrow D_a = D \setminus \{a \}$ is a covering, and hence by passing to the limit in $$G_n \circ \tilde \pi_n = \pi_n$$ along the subsequence, we get $$\tilde G \circ \tilde \pi_0 = \pi_0.$$ This shows that $ \tilde G : {\mathbb{D}}^* \longrightarrow D_a$ is a covering. As noted earlier, $ \tilde G^{\prime}(0) > 0 $, and this means that $ \tilde G = G $, the normalized Hurwitz covering at $ a $.
\[L:HurCoverScalingConvergence\] Let $ D_j $ be the sequence of domains that converges to $ {\mathcal{H}}$ as in Proposition \[Prop:ConvergenceAhlforsSzegoGarabeidian\]. Let $ z_j $ be a sequence converging to $ a \in {\mathcal{H}}$ satisfying $ z_j \in D_j $ for all $ j $ and let $ G_j $ be the normalized Hurwitz covering of $ D_j$ at the point $ z_j $ and $ G $ be the normalized Hurwitz covering of $ {\mathcal{H}}$ at $ a $. Then $ G_j $ converges to $ G $ uniformly on compact subsets of $ {\mathbb{D}}$.
Since $ D_j $ converges to $ {\mathcal{H}}$ in the Hausdorff sense and $ z_j \to a $, there exist $ r > 0 $ and a point $ p \in {\mathcal{H}}$ such that $p \in B(z_j, r) \subset D_j $ for all large $ j $ – for simplicity we may assume for all $ j $ – and $p \in B(a, r) \subset {\mathcal{H}}$ satisfying $ p \neq z_j $ and $ p \neq a $, and $ G_j $ is locally invertible in a common neighborhood containing $ p $ for all $ j $. Let $ \pi_j : {\mathbb{D}}\longrightarrow D_j \setminus \{z_j\}$ be the holomorphic coverings such that $ \pi_j(0) = p$ and $ \pi_j^{\prime}(0) > 0 $. Let $ \pi_{0 j} : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ be holomorphic coverings such that $ \pi_j = G_j \circ \pi_{0 j} $. Since $ D_j \setminus \{z_j\} $ converges to $ {\mathcal{H}}\setminus \{a\} $ in the Carathéodory kernel sense, by Hejhal’s result [@Hejhal], $ \pi_j $ converges to $ \pi $ uniformly on compact subsets of $ {\mathbb{D}}$, where $ \pi : {\mathbb{D}}\longrightarrow {\mathcal{H}}\setminus \{a\} $ is the holomorphic covering such that $ \pi(0) = p $ and $ \pi^{\prime}(0) > 0 $.
Since $ D_j \to {\mathcal{H}}$ in the Hausdorff sense, any compact set $ K $ with empty intersection with $ \overline {\mathcal{H}}$ has no intersection with $ D_j $ for $ j $ large, and as a result the family $ \{G_j\} $ is normal. Also note that $ \pi_{0 j}({\mathbb{D}}) = {\mathbb{D}}^* $ for all $ j $, and this implies that $ \{\pi_{0 j}\}$ is a normal family. Let $ G_0 $ and $ \pi_0 $ be limits of $ \{G_j\} $ and $ \{\pi_{0 j}\} $ respectively. Now, together with the fact that $ \pi_0 $ is non-constant – guaranteed by its construction – and the identity $ \pi_j = G_j \circ \pi_{0 j}$, we have $ \pi = G_0 \circ \pi_0 $. This implies that $ G_0 : {\mathbb{D}}^* \longrightarrow {\mathcal{H}}_a = {\mathcal{H}}\setminus \{a \} $ is a covering, since $ \pi_0 $ is a covering by Lemma \[L:UniformConvergenceOfPunCoveringMpas\]. Since $ G_j(0) = z_j $ and $ G_j^{\prime}(0) > m > 0 $ – for some constant $ m > 0 $ which can be obtained using the bilipschitz condition of the Hurwitz metric, for instance, see inequality (\[Eq:bilipchitzCondition\]) – it follows that $ G_0(0) = a $ and $ G_0^{\prime}(0) \geq m $. This shows that $ G_0 $ is the normalized Hurwitz covering, in other words $ G_0 = G $. Thus any limit of $ G_j $ equals $ G $, and this proves that $ G_j $ converges to $ G $ uniformly on compact subsets of $ {\mathbb{D}}$.
\[L:ConvergenceOfTheHurwitzMetricScaling\] Let $\{D_j\}$ be the sequence of domains that converges to the half-space ${\mathcal{H}}$ as in Proposition \[Prop:ConvergenceAhlforsSzegoGarabeidian\]. Then $\eta_{D_j}$ converges to $\eta_{{\mathcal{H}}}$ uniformly on compact subsets of ${\mathcal{H}}$.
Let $K \subset \mathcal{H}$ be a compact subset. Without loss of generality, we may assume that $K \subset D_j$ for all $j$.
If possible, assume that $\eta_{D_j}$ does not converge to $\eta_{\mathcal{H} }$ uniformly on $K$. Then there exist $\epsilon_{0} >0 $, a sequence of integers $\{ k_j \}$ and a sequence of points $\{ z_{k_j}\}\subset K \subset D_{k_j}$ such that $$\vert \eta_{D_{k_j}}(z_{k_j})- \eta_{\mathcal{H}}(z_{k_j}) \vert >\epsilon_0.$$ Since $K$ is compact, for simplicity, we may assume $z_{k_j}$ converges to a point $z_0 \in K$. Using the continuity of the Hurwitz metric, we have $$\vert \eta_{\mathcal{H}}(z_{k_j})- \eta_{\mathcal{H}}(z_0) \vert < \epsilon_0\big/2$$ for $j$ large. By the triangle inequality, $$\vert \eta_{D_{k_j}}(z_{k_j})- \eta_{\mathcal{H}}(z_0) \vert > \epsilon_0\big/2.$$ Since the domains $D_{k_j}$ are bounded, there exist normalized Hurwitz coverings $ G_{k_j} $ of $ D_{k_j} $ at $ z_{k_j} $. Using the convergence of $ D_{k_j}\setminus \{z_{k_j}\} $ to $ {\mathcal{H}}\setminus \{z_0\} $ in the Carathéodory kernel sense, we have by Lemma \[L:HurCoverScalingConvergence\] that $ G_{k_j} $ converges to the normalized Hurwitz covering map $ G $ of $ {\mathcal{H}}$ at the point $ z_0 $ uniformly on every compact subset of $ {\mathbb{D}}$ as $ j \to \infty $. But we have $$\eta_{{\mathcal{H}}}(z_0) = 1\big/G^{\prime}(0).$$ From the above we obtain that $ \eta_{D_{k_j}}(z_{k_j}) $ converges to $ \eta_{{\mathcal{H}}}(z_0) $ as $ j \to \infty $. But from our assumption, we have $$\vert \eta_{D_{k_j}}(z_{k_j})- \eta_{\mathcal{H}}(z_0) \vert > \epsilon_0\big/2,$$ and this is a contradiction. Therefore, $ \eta_{D_j}$ converges to $\eta_{\mathcal{H}}$ uniformly on compact subsets of $\mathcal{H}$.
\[C:AsymptoticHurwitzMetric\] Let $ D \subset {\mathbb{C}}$ be a bounded domain. Suppose $ p \in \partial D $ is a $ C^2 $-smooth boundary point. Then $$\lim_{z \to p}\eta_D(z)(-\psi(z)) = \eta_{{\mathcal{H}}}(0)= \vert \omega \vert \big/2.$$
By Proposition \[Pr:ConvergenceOfTheSugawaMetricUnderScaling\], if $ D_j $ is the sequence of scaled domains of $ D $ under the affine transformations $ T_j(z) = (z - p_j)\big/(-\psi(p_j)) $, for $ z \in D $, where $ \psi $ is a local defining function of $ \partial D $ at $ z = p $ and $ p_j $ is a sequence of points in $ D $ converging to $ p $, then the corresponding sequence of Hurwitz metrics $ \eta_{D_j} $ of $ D_j $ converges to $ \eta_{{\mathcal{H}}} $ uniformly on every compact subset of $ {\mathcal{H}}$ as $ j \to \infty $. In particular, $\eta_{D_j}(0) $ converges to $ \eta_{{\mathcal{H}}}(0) $ as $ j \to \infty $. Again, we know that $ \eta_{D_j}(z) = \eta_{D}(T_j^{-1}(z)) \vert (T_j^{-1})^{\prime}(z)\vert $. From this it follows that $ \eta_{D_j}(0) = \eta_{D}(p_j) (-\psi(p_j)) $, and consequently, $ \eta_{D}(p_j) (-\psi(p_j)) $ converges to $ \eta_{{\mathcal{H}}}(0) $ as $ j \to \infty $. Since $ p_j $ is an arbitrary sequence converging to $ p $, we obtain $$\lim_{z \to p}\eta_D(z)(-\psi(z)) = \eta_{{\mathcal{H}}}(0) = \vert \omega \vert \big/2.$$
From Theorem \[T:contHurM\], the Hurwitz metric is continuous, and as a consequence of Corollary \[C:AsymptoticHurwitzMetric\], we have $$\eta_D(z) \approx 1\big/{\rm dist}(z, \partial D).$$
A curvature calculation
=======================
For a smooth conformal metric $ \rho(z)\vert dz \vert $ on a planar domain $ D $, the curvature $$K_{\rho} = -\rho^{-2} \Delta \log \rho$$ is a well defined conformal invariant. If $ \rho $ is only continuous, Heins [@Heins] introduced the notion of generalized curvatures as follows:
For $ a \in D $ and $ r > 0 $, let $$T(\rho, a, r) = \left. -\frac{4}{r^2}\left\{\frac{1}{2 \pi}\int_0^{2\pi} \log \rho(a + r e^{i \theta})d\theta - \log \rho(a)\right\}\right/ \rho^2(a).$$ The quantities $ \liminf_{r \to 0} T(\rho, a , r)$ and $ \limsup_{r \to 0} T(\rho, a , r)$ are called the generalized lower and upper curvatures of $ \rho(z)\vert dz \vert $, respectively. Our aim is to give some estimates for the quantities $ T(\rho, a , r) $ for $ r > 0 $ and $ \rho = q_D $ or $ \eta_D $.
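As a quick sanity check — not part of the paper — the quantity $ T(\rho, a, r) $ can be evaluated numerically. The Python sketch below uses the metric $ \rho(z) = 1/(1 - \vert z \vert^2) $ on the unit disc, which has constant curvature $ -4 $ in the normalization above, and approximates the circle mean by a Riemann sum; the function name `T` and the sampling size `n` are our own choices.

```python
import math

def T(rho, a, r, n=4096):
    """Heins' quotient T(rho, a, r); the circle mean is approximated
    by a Riemann sum over n equispaced angles."""
    mean = sum(math.log(rho(a + r * complex(math.cos(2 * math.pi * k / n),
                                            math.sin(2 * math.pi * k / n))))
               for k in range(n)) / n
    return -4.0 / r ** 2 * (mean - math.log(rho(a))) / rho(a) ** 2

# A metric of constant curvature -4 on the unit disc (for illustration).
rho = lambda z: 1.0 / (1.0 - abs(z) ** 2)

for r in (0.5, 0.1, 0.01):
    print(r, T(rho, 0.0, r))    # tends to -4 as r -> 0
```

For this particular metric the circle mean can even be computed in closed form, $ -\log(1-r^2) $, so the quotient is $ -4\log(1-r^2)/r^2 \to -4 $ as $ r \to 0 $, matching the numerics.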
Let $ T_j : D \longrightarrow D_j $ be given by $$T_j(z) = \frac{z - p_j}{- \psi(p_j)}$$ where $ D $, $ \psi $, and the sequence $ p_j \to p $ are as before. Then $$T\left(\rho, p_j , r\vert \psi(p_j) \vert \right) = T\left((T_j)_*\rho, 0 , r\right)$$ for $ r > 0 $ small enough. Here $ (T_j)_*\rho $ is the push-forward of the metric $ \rho $.
Computing, $$\begin{aligned}
&T\left((T_j)_*\rho, 0 , r\right)\\
&= \left. -\frac{4}{r^2} \left\{\frac{1}{2 \pi}\int_0^{2\pi} \log((T_j)_*\rho)( re^{i\theta}) d\theta - \log((T_j)_*\rho)(0)\right\}\right/((T_j)_*\rho)^2(0)\\
&= \left.-\frac{4}{r^2} \left\{\frac{1}{2 \pi}\int_0^{2\pi} \log\rho (T_j^{-1}(re^{i\theta}))\vert (T_j^{-1})^{\prime}(re^{i\theta}) \vert d\theta - \log\rho ((T_j^{-1})(0))\vert (T_j^{-1})^{\prime}(0) \vert\right\}\right/\rho ((T_j^{-1})(0))^2\vert (T_j^{-1})^{\prime}(0) \vert^2.
\end{aligned}$$ Since $ (T_j^{-1})^{\prime}(z) = -\psi(p_j) $ for all $ z \in D $, we have $$T\left((T_j)_*\rho, 0 , r\right) = \left. -\frac{4}{r^2\vert \psi(p_j) \vert^2} \left\{\frac{1}{2 \pi}\int_0^{2\pi} \log\rho (p_j + r\vert \psi(p_j) \vert e^{i\theta}) d\theta - \log\rho(p_j )\right\}\right/\rho^2(p_j ).$$ This implies $$T\left(\rho, p_j , r\vert \psi(p_j) \vert \right) = T\left((T_j)_*\rho, 0 , r\right).$$
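The identity just proved is a change of variables and can be confirmed numerically. In the illustrative Python sketch below (names and sample values are ours), `push` is the push-forward $ (T_j)_*\rho $ of the disc metric under an affine map $ T_j(z) = (z - p)/c $, with $ c $ standing in for $ -\psi(p_j) $:

```python
import math

def T(rho, a, r, n=4096):
    # Heins' quotient with the circle mean taken as a Riemann sum
    mean = sum(math.log(rho(a + r * complex(math.cos(2 * math.pi * k / n),
                                            math.sin(2 * math.pi * k / n))))
               for k in range(n)) / n
    return -4.0 / r ** 2 * (mean - math.log(rho(a))) / rho(a) ** 2

rho = lambda z: 1.0 / (1.0 - abs(z) ** 2)   # metric on the unit disc

p, c = 0.3, 0.5                  # T_j(z) = (z - p)/c, so T_j^{-1}(w) = c*w + p
push = lambda w: rho(c * w + p) * abs(c)    # the push-forward (T_j)_* rho

r = 0.05
lhs = T(rho, p, r * abs(c))
rhs = T(push, 0.0, r)
print(lhs, rhs)                  # the two values agree up to rounding
```

Since $ c $ here is real and positive, both sides sample the metric at exactly the same points, so the agreement is limited only by floating-point rounding.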
Fix $ r_0 $ small enough and let $ \epsilon > 0 $ be arbitrary. Then $$-4 - \epsilon < T\left(\rho, p_j , r_0\vert \psi(p_j) \vert \right) < -4 + \epsilon$$ for $ j $ large. Consequently, for a fixed $ r $ with $ 0 < r < r_0 $, $$\lim_{j \to \infty}T(\rho, p_j, \vert \psi(p_j) \vert r) = -4.$$
From the lemma above, it is enough to show the inequality for $ T\left((T_j)_*\rho, 0 , r_0\right) $. Recall that the scaled Sugawa and Hurwitz metrics $ q_{D_j} $ and $ \eta_{D_j} $ converge to $ q_{{\mathcal{H}}} $ and $ \eta_{{\mathcal{H}}} $ respectively. The convergence is uniform on compact subsets of $ {\mathcal{H}}$, and note that $ q_{{\mathcal{H}}} = \eta_{{\mathcal{H}}} $. Writing $ \rho_j $ for $ q_{D_j} $ or $ \eta_{D_j} $, and $ \rho_{{\mathcal{H}}} $ for $ q_{{\mathcal{H}}} = \eta_{{\mathcal{H}}} $, we get $$T(\rho_j, 0, r) \to T(\rho_{{\mathcal{H}}}, 0, r)$$ for a fixed $ r $ as $ j \to \infty $. That is, $$T(\rho_{{\mathcal{H}}}, 0, r) - \epsilon\big/2 < T(\rho_j, 0, r) < T(\rho_{{\mathcal{H}}}, 0, r) + \epsilon\big/2$$ for $ j $ large. Also recall that since the curvature of $ \rho_{{\mathcal{H}}} $ is equal to $ -4 $, there exists $ r_1 > 0 $ such that $$-4 - \epsilon\big/2 < T(\rho_{{\mathcal{H}}}, 0, r) < -4 + \epsilon\big/2$$ whenever $ r < r_1 $. Now, choose $ r_0 = r_1/2 $; then there exists $ j_0 $ depending on $ r_0 $ such that $$-4 - \epsilon < T\left(\rho_j, 0 , r_0\right) < -4 + \epsilon$$ for all $ j \geq j_0 $. This completes the proof.
---
abstract: 'We continue our study of Cartan schemes and their Weyl groupoids. The results in this paper provide an algorithm to determine connected simply connected Cartan schemes of rank three, where the real roots form a finite irreducible root system. The algorithm terminates: Up to equivalence there are exactly 55 such Cartan schemes, and the number of corresponding real roots varies between $6$ and $37$. We identify those Weyl groupoids which appear in the classification of Nichols algebras of diagonal type.'
address:
- 'Michael Cuntz, Fachbereich Mathematik, Universität Kaiserslautern, Postfach 3049, D-67653 Kaiserslautern, Germany'
- 'István Heckenberger, Philipps-Universität Marburg, Fachbereich Mathematik und Informatik, Hans-Meerwein-Straße, D-35032 Marburg, Germany'
author:
- 'M. Cuntz'
- 'I. Heckenberger'
bibliography:
- 'quantum.bib'
title: Finite Weyl groupoids of rank three
---
Introduction
============
Root systems associated with Cartan matrices are widely studied structures in many areas of mathematics, see [@b-BourLie4-6] for the fundaments. The origins of the theory of root systems go back at least to the study of Lie groups by Lie, Killing and Cartan. The symmetry of the root system is commonly known as its Weyl group. Root systems associated with a family of Cartan matrices appeared first in connection with Lie superalgebras [@a-Kac77 Prop.2.5.6] and with Nichols algebras [@a-Heck06a], [@a-Heck08a]. The corresponding symmetry is not a group but a groupoid, and is called the Weyl groupoid of the root system.
Weyl groupoids of root systems properly generalize Weyl groups. The nice properties of this more general structure have been the main motivation to develop an axiomatic approach to the theory, see [@a-HeckYam08], [@a-CH09a]. In particular, Weyl groupoids are generated by reflections and Coxeter relations, and they satisfy a Matsumoto type theorem [@a-HeckYam08]. To see more clearly the extent of generality it would be desirable to have a classification of finite Weyl groupoids.[^1] However, already the appearance of a large family of examples of Lie superalgebras and Nichols algebras of diagonal type indicated that a classification of finite Weyl groupoids is probably much more complicated than the classification of finite Weyl groups. Additionally, many of the usual classification tools are not available in this context because of the lack of the adjoint action and a positive definite bilinear form.
In previous work, see [@a-CH09b] and [@p-CH09a], we have been able to determine all finite Weyl groupoids of rank two. The result of this classification is surprisingly nice: We found a close relationship to the theory of continued fractions and to cluster algebras of type $A$. The structure of finite rank two Weyl groupoids and the associated root systems has a natural characterization in terms of triangulations of convex polygons by non-intersecting diagonals. In particular, there are infinitely many such groupoids.
At first view there is no reason to assume that the situation for finite Weyl groupoids of rank three would be much different from the rank two case. In this paper we give some theoretical indications which strengthen the opposite point of view. For example in Theorem \[cartan\_6\] we show that the entries of the Cartan matrices in a finite Weyl groupoid cannot be smaller than $-7$. Recall that for Weyl groupoids there is no lower bound for the possible entries of generalized Cartan matrices. Our main achievement in this paper is to provide an algorithm to classify finite Weyl groupoids of rank three. Our algorithm terminates within a short time, and produces a finite list of examples. In the appendix we list the root systems characterizing the Weyl groupoids of the classification: There are $55$ of them which correspond to pairwise non-isomorphic Weyl groupoids. The number of positive roots in these root systems varies between $6$ and $37$. Among our root systems are the usual root systems of type $A_3$, $B_3$, and $C_3$, but for most of the other examples we do not yet have an explanation.
It is remarkable that the number $37$ has a particular meaning for simplicial arrangements in the real projective plane. An arrangement is the complex generated by a family of straight lines not forming a pencil. The vertices of the complex are the intersection points of the lines, the edges are the segments of the lines between two vertices, and the faces are the connected components of the complement of the set of lines generating the arrangement. An arrangement is called simplicial, if all faces are triangles. Simplicial arrangements have been introduced in [@a-Melchi41]. The classification of simplicial arrangements in the real projective plane is an open problem. The largest known exceptional example is generated by $37$ lines. Grünbaum conjectures that the list given in [@a-Gruenb09] is complete. In our appendix we provide some data of our root systems which can be used to compare Grünbaum’s list with Weyl groupoids. There is an astonishing analogy between the two lists, but more work has to be done to be able to explain the precise relationship. This would be desirable in particular since our classification of finite Weyl groupoids of rank three does not give any evidence for the range of solutions besides the explicit computer calculation.
In order to ensure the termination of our algorithm, besides Theorem \[cartan\_6\] we use a weak convexity property of certain affine hyperplanes, see Theorem \[convex\_diff2\]: We can show that any positive root in an affine hyperplane “next to the origin” is either simple or can be written as the sum of a simple root and another positive root. Our algorithm finally becomes practicable by the use of Proposition \[pr:suminR\], which can be interpreted as another weak convexity property for affine hyperplanes. It is hard to say which of these theorems are the most valuable because avoiding any of them makes the algorithm impracticable (unless one has some replacement).
The paper is organized as follows. We start with two sections proving the necessary theorems to formulate the algorithm: The results which do not require that the rank is three are in Section \[gen\_res\], the obstructions for rank three in Section \[rk3\_obst\]. We then describe the algorithm in the next section. Finally we summarize the resulting data and make some observations in the last section.
**Acknowledgement.** We would like to thank B. Mühlherr for pointing out to us the importance of the number $37$ for simplicial arrangements in the real projective plane.
Cartan schemes and Weyl groupoids {#gen_res}
=================================
We mainly follow the notation in [@a-CH09a; @a-CH09b]. The fundaments of the general theory have been developed in [@a-HeckYam08] using a somewhat different terminology. Let us start by recalling the main definitions.
Let $I$ be a non-empty finite set and $\{{\alpha }_i\,|\,i\in I\}$ the standard basis of ${\mathbb{Z}}^I$. By [@b-Kac90 §1.1] a generalized Cartan matrix ${C}=({c}_{ij})_{i,j\in I}$ is a matrix in ${\mathbb{Z}}^{I\times I}$ such that
1. ${c}_{ii}=2$ and ${c}_{jk}\le 0$ for all $i,j,k\in I$ with $j\not=k$,
2. if $i,j\in I$ and ${c}_{ij}=0$, then ${c}_{ji}=0$.
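The two axioms can be checked mechanically. A minimal Python sketch (the function name and the sample matrices are ours, chosen for illustration):

```python
def is_generalized_cartan(C):
    """Check the two axioms for a generalized Cartan matrix
    (a square integer matrix, given as a list of rows)."""
    n = len(C)
    for i in range(n):
        if C[i][i] != 2:                      # diagonal entries equal 2
            return False
        for j in range(n):
            if i != j and C[i][j] > 0:        # off-diagonal entries are <= 0
                return False
            if i != j and (C[i][j] == 0) != (C[j][i] == 0):
                return False                  # c_ij = 0 forces c_ji = 0
    return True

print(is_generalized_cartan([[2, -1], [-3, 2]]))   # True
print(is_generalized_cartan([[2, -1], [0, 2]]))    # False: c_12 != 0 but c_21 = 0
```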
Let $A$ be a non-empty set, ${\rho }_i : A \to A$ a map for all $i\in I$, and ${C}^a=({c}^a_{jk})_{j,k \in I}$ a generalized Cartan matrix in ${\mathbb{Z}}^{I \times I}$ for all $a\in A$. The quadruple $${\mathcal{C}}= {\mathcal{C}}(I,A,({\rho }_i)_{i \in I}, ({C}^a)_{a \in A})$$ is called a *Cartan scheme* if
1. ${\rho }_i^2 = \id$ for all $i \in I$,
2. ${c}^a_{ij} = {c}^{{\rho }_i(a)}_{ij}$ for all $a\in A$ and $i,j\in I$.
Let ${\mathcal{C}}= {\mathcal{C}}(I,A,({\rho }_i)_{i \in I}, ({C}^a)_{a \in A})$ be a Cartan scheme. For all $i \in I$ and $a \in A$ define ${\sigma }_i^a \in
\operatorname{Aut}({\mathbb{Z}}^I)$ by $$\begin{aligned}
{\sigma }_i^a ({\alpha }_j) = {\alpha }_j - {c}_{ij}^a {\alpha }_i \qquad
\text{for all $j \in I$.}
\label{eq:sia}
\end{aligned}$$ The *Weyl groupoid of* ${\mathcal{C}}$ is the category ${\mathcal{W}}({\mathcal{C}})$ such that ${\mathrm{Ob}}({\mathcal{W}}({\mathcal{C}}))=A$ and the morphisms are compositions of maps ${\sigma }_i^a$ with $i\in I$ and $a\in A$, where ${\sigma }_i^a$ is considered as an element in $\operatorname{Hom}(a,{\rho }_i(a))$. The category ${\mathcal{W}}({\mathcal{C}})$ is a groupoid in the sense that all morphisms are isomorphisms. The set of morphisms of ${\mathcal{W}}({\mathcal{C}})$ is denoted by $\operatorname{Hom}({\mathcal{W}}({\mathcal{C}}))$, and we use the notation $$\Homsfrom{a}=\mathop{\cup }_{b\in A}\operatorname{Hom}(a,b) \quad
\text{(disjoint union)}.$$ For notational convenience we will often neglect upper indices referring to elements of $A$ if they are uniquely determined by the context. For example, the morphism ${\sigma }_{i_1}^{{\rho }_{i_2}\cdots {\rho }_{i_k}(a)}
\cdots \s_{i_{k-1}}^{{\rho }_{i_k}(a)}{\sigma }_{i_k}^a\in \operatorname{Hom}(a,b)$, where $k\in {\mathbb{N}}$, $i_1,\dots,i_k\in I$, and $b={\rho }_{i_1}\cdots {\rho }_{i_k}(a)$, will be denoted by ${\sigma }_{i_1}\cdots {\sigma }_{i_k}^a$ or by ${\mathrm{id}}_b{\sigma }_{i_1}\cdots \s_{i_k}$. The cardinality of $I$ is termed the *rank of* ${\mathcal{W}}({\mathcal{C}})$. A Cartan scheme is called *connected* if its Weyl groupoid is connected, that is, if for all $a,b\in A$ there exists $w\in \operatorname{Hom}(a,b)$. The Cartan scheme is called *simply connected*, if $\operatorname{Hom}(a,a)=\{{\mathrm{id}}_a\}$ for all $a\in A$.
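With respect to the standard basis, $ {\sigma }_i^a $ is given by an integer matrix determined by the $ i $-th row of $ {C}^a $, and (C2) implies that $ {\sigma }_i^{{\rho }_i(a)} $ and $ {\sigma }_i^a $ have the same matrix, so the composite $ {\sigma }_i^{{\rho }_i(a)}{\sigma }_i^a $ is the square of a single matrix. A short Python sketch (the sample Cartan matrix is an arbitrary choice of ours):

```python
def sigma(C, i):
    """Matrix of sigma_i^a: column j is the image of alpha_j,
    i.e. alpha_j - c_ij alpha_i, read off the i-th row of C^a."""
    n = len(C)
    return [[(1 if r == j else 0) - (C[i][j] if r == i else 0)
             for j in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][j] for k in range(n)) for j in range(n)]
            for r in range(n)]

C = [[2, -1, 0], [-2, 2, -1], [0, -1, 2]]     # a sample Cartan matrix
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for i in range(3):
    s = sigma(C, i)
    assert matmul(s, s) == I3                  # each sigma_i is an involution
print(sigma(C, 0))    # [[-1, 1, 0], [0, 1, 0], [0, 0, 1]]
```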
Let ${\mathcal{C}}$ be a Cartan scheme. For all $a\in A$ let $${(R\re)^{a}}=\{ {\mathrm{id}}_a {\sigma }_{i_1}\cdots \s_{i_k}({\alpha }_j)\,|\,
k\in {\mathbb{N}}_0,\,i_1,\dots,i_k,j\in I\}\subset {\mathbb{Z}}^I.$$ The elements of the set ${(R\re)^{a}}$ are called *real roots* (at $a$). The pair $({\mathcal{C}},({(R\re)^{a}})_{a\in A})$ is denoted by ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$. A real root ${\alpha }\in {(R\re)^{a}}$, where $a\in A$, is called positive (resp. negative) if ${\alpha }\in {\mathbb{N}}_0^I$ (resp. ${\alpha }\in -{\mathbb{N}}_0^I$). In contrast to real roots associated to a single generalized Cartan matrix, ${(R\re)^{a}}$ may contain elements which are neither positive nor negative. A good general theory, which is relevant for example for the study of Nichols algebras, can be obtained if ${(R\re)^{a}}$ satisfies additional properties.
Let ${\mathcal{C}}={\mathcal{C}}(I,A,({\rho }_i)_{i\in I},({C}^a)_{a\in A})$ be a Cartan scheme. For all $a\in A$ let $R^a\subset {\mathbb{Z}}^I$, and define $m_{i,j}^a= |R^a \cap (\ndN_0 {\alpha }_i + \ndN_0 {\alpha }_j)|$ for all $i,j\in
I$ and $a\in A$. We say that $${\mathcal{R}}= {\mathcal{R}}({\mathcal{C}}, (R^a)_{a\in A})$$ is a *root system of type* ${\mathcal{C}}$, if it satisfies the following axioms.
1. $R^a=R^a_+\cup - R^a_+$, where $R^a_+=R^a\cap \ndN_0^I$, for all $a\in A$.
2. $R^a\cap \ndZ{\alpha }_i=\{{\alpha }_i,-{\alpha }_i\}$ for all $i\in I$, $a\in A$.
3. ${\sigma }_i^a(R^a) = R^{{\rho }_i(a)}$ for all $i\in I$, $a\in A$.
4. If $i,j\in I$ and $a\in A$ such that $i\not=j$ and $m_{i,j}^a$ is finite, then $({\rho }_i{\rho }_j)^{m_{i,j}^a}(a)=a$.
The axioms (R2) and (R3) are always fulfilled for ${\mathcal{R}}{^\mathrm{re}}$. The root system ${\mathcal{R}}$ is called *finite* if for all $a\in A$ the set $R^a$ is finite. By [@a-CH09a Prop.2.12], if ${\mathcal{R}}$ is a finite root system of type ${\mathcal{C}}$, then ${\mathcal{R}}={\mathcal{R}}{^\mathrm{re}}$, and hence ${\mathcal{R}}{^\mathrm{re}}$ is a root system of type ${\mathcal{C}}$ in that case.
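For a Cartan scheme with a single object — an ordinary Weyl group — the real roots are simply the orbit of the simple roots under the reflections, and axiom (R1) can then be observed directly. A hypothetical Python sketch for a Cartan matrix of type $B_2$:

```python
def real_roots(C):
    """Orbit of the simple roots under the reflections sigma_i for a
    Cartan scheme with a single object (an ordinary Weyl group)."""
    n = len(C)
    simple = [tuple(1 if k == j else 0 for k in range(n)) for j in range(n)]
    roots, frontier = set(simple), list(simple)
    while frontier:
        v = frontier.pop()
        for i in range(n):
            w = list(v)
            # sigma_i(v) = v - (sum_j c_ij v_j) alpha_i
            w[i] -= sum(C[i][j] * v[j] for j in range(n))
            w = tuple(w)
            if w not in roots:
                roots.add(w)
                frontier.append(w)
    return roots

C_B2 = [[2, -2], [-1, 2]]          # Cartan matrix of type B2
R = real_roots(C_B2)
pos = sorted(r for r in R if min(r) >= 0)
print(pos)                          # [(0, 1), (1, 0), (1, 1), (2, 1)]
assert R == set(pos) | {tuple(-x for x in r) for r in pos}   # axiom (R1)
```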
In [@a-CH09a Def.4.3] the concept of an *irreducible* root system of type ${\mathcal{C}}$ was defined. By [@a-CH09a Prop.4.6], if ${\mathcal{C}}$ is a Cartan scheme and ${\mathcal{R}}$ is a finite root system of type ${\mathcal{C}}$, then ${\mathcal{R}}$ is irreducible if and only if for all $a\in A$ the generalized Cartan matrix $C^a$ is indecomposable. If ${\mathcal{C}}$ is also connected, then it suffices to require that there exists $a\in A$ such that $C^a$ is indecomposable.
Let ${\mathcal{C}}={\mathcal{C}}(I,A,({\rho }_i)_{i\in I},({C}^a)_{a\in A})$ be a Cartan scheme. Let $\Gamma $ be a nondirected graph, such that the vertices of $\Gamma $ correspond to the elements of $A$. Assume that for all $i\in I$ and $a\in A$ with ${\rho }_i(a)\not=a$ there is precisely one edge between the vertices $a$ and ${\rho }_i(a)$ with label $i$, and all edges of $\Gamma $ are given in this way. The graph $\Gamma $ is called the *object change diagram* of ${\mathcal{C}}$.
![The object change diagram of a Cartan scheme of rank three (nr. 15 in Table 1)[]{data-label="fig:14posroots"}](wg14){width="6cm"}
In the rest of this section let $\cC={\mathcal{C}}(I,A,({\rho }_i)_{i\in I}, (C^a)_{a\in A})$ be a Cartan scheme such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system. For brevity we will write $R^a$ instead of ${(R\re)^{a}}$ for all $a\in A$. We say that a subgroup $H\subset {\mathbb{Z}}^I$ is a *hyperplane* if ${\mathbb{Z}}^I/H\cong {\mathbb{Z}}$. Then ${\mathrm{rk}}\,H=\#I-1$ is the rank of $H$. Sometimes we will identify ${\mathbb{Z}}^I$ with its image under the canonical embedding ${\mathbb{Z}}^I\to {\mathbb{Q}}\otimes _{\mathbb{Z}}{\mathbb{Z}}^I\cong {\mathbb{Q}}^I$.
\[le:hyperplane\] Let $a\in A$ and let $H\subset {\mathbb{Z}}^I$ be a hyperplane. Suppose that $H$ contains ${\mathrm{rk}}\,H$ linearly independent elements of $R^a$. Let $\mathfrak{n}_H$ be a normal vector of $H$ in ${\mathbb{Q}}^I$ with respect to a scalar product $(\cdot ,\cdot )$ on ${\mathbb{Q}}^I$. If $(\mathfrak{n}_H,{\alpha })\ge 0$ for all ${\alpha }\in R^a_+$, then $H$ contains ${\mathrm{rk}}\,H$ simple roots, and all roots contained in $H$ are linear combinations of these simple roots.
The assumptions imply that any positive root in $H$ is a linear combination of simple roots contained in $H$. Since $R^a=R^a_+\cup -R^a_+$, this implies the claim.
Let $a\in A$ and let $H\subset {\mathbb{Z}}^I$ be a hyperplane. Suppose that $H$ contains ${\mathrm{rk}}\,H$ linearly independent elements of $R^a$. Then there exist $b\in A$ and $w\in \operatorname{Hom}(a,b)$ such that $w(H)$ contains ${\mathrm{rk}}\,H$ simple roots. \[le:hyperplane2\]
Let $(\cdot ,\cdot )$ be a scalar product on ${\mathbb{Q}}^I$. Choose a normal vector $\mathfrak{n}_H$ of $H$ in ${\mathbb{Q}}^I$ with respect to $(\cdot ,\cdot )$. Let $m=\# \{{\alpha }\in R^a_+\,|\,(\mathfrak{n}_H,{\alpha })<0\}$. Since ${\mathcal{R}}\re
({\mathcal{C}})$ is finite, $m$ is a nonnegative integer. We proceed by induction on $m$. If $m=0$, then $H$ contains ${\mathrm{rk}}\,H$ simple roots by Lemma \[le:hyperplane\]. Otherwise let $j\in I$ with $(\mathfrak{n}_H,{\alpha }_j)<0$. Let $a'={\rho }_j(a)$ and $H'=\s_j^a(H)$. Then $\s_j^a(\mathfrak{n}_H)$ is a normal vector of $H'$ with respect to the scalar product $(\cdot ,\cdot )'=
(\s_j^{{\rho }_j(a)}(\cdot ),\s_j^{{\rho }_j(a)}(\cdot ))$. Since $\s_j^a:R^a_+\setminus \{{\alpha }_j\}\to R^{a'}_+\setminus \{{\alpha }_j\}$ is a bijection and $\s_j^a({\alpha }_j)=-{\alpha }_j$, we conclude that $$\begin{aligned}
\# \{\beta \in R^{a'}_+\,|\,(\s^a_j(\mathfrak{n}_H),\beta )'<0\}=
\# \{{\alpha }\in R^a_+\,|\,(\mathfrak{n}_H,{\alpha })<0\}-1.
\end{aligned}$$ By induction hypothesis there exists $b\in A$ and $w'\in \operatorname{Hom}(a',b)$ such that $w'(H')$ contains ${\mathrm{rk}}\,H'={\mathrm{rk}}\,H$ simple roots. Then the claim of the lemma holds for $w=w'\s_j^a$.
The following “volume” functions will be useful for our analysis. Let $k\in {\mathbb{N}}$ with $k\le \#I$. By the Smith normal form there is a unique left ${\mathrm{GL}}({\mathbb{Z}}^I)$-invariant right ${\mathrm{GL}}({\mathbb{Z}}^k)$-invariant function ${\mathrm{Vol}}_k:({\mathbb{Z}}^I)^k\to {\mathbb{Z}}$ such that $$\begin{aligned}
{\mathrm{Vol}}_k(a_1{\alpha }_1,\dots,a_k{\alpha }_k)=|a_1\cdots a_k| \quad
\text{for all $a_1,\dots,a_k\in {\mathbb{Z}}$,}\end{aligned}$$ where $|\cdot |$ denotes absolute value. In particular, if $k=1$ and $\beta \in {\mathbb{Z}}^I\setminus \{0\}$, then ${\mathrm{Vol}}_1(\beta )$ is the largest integer $v$ such that $\beta =v\beta '$ for some $\beta '\in {\mathbb{Z}}^I$. Further, if $k=\#I$ and $\beta _1,\dots,\beta _k\in {\mathbb{Z}}^I$, then ${\mathrm{Vol}}_k(\beta _1,\dots,\beta _k)$ is the absolute value of the determinant of the matrix with columns $\beta _1,\dots,\beta _k$.
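Since the gcd of all $k\times k$ minors of an integer matrix (its $k$-th determinantal divisor) equals the product of the first $k$ elementary divisors in the Smith normal form, $ {\mathrm{Vol}}_k $ can be computed directly from minors. A Python sketch, not part of the paper:

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant via cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def vol(*betas):
    """Vol_k(beta_1, ..., beta_k) as the gcd of all k x k minors of the
    matrix whose columns are beta_1, ..., beta_k."""
    k, rows = len(betas), len(betas[0])
    g = 0
    for sel in combinations(range(rows), k):
        minor = det([[betas[c][r] for c in range(k)] for r in sel])
        g = gcd(g, minor)
    return g

print(vol((2, 4, 6)))              # 2: the largest v with beta = v * beta'
print(vol((1, 2), (3, 4)))         # 2: |det| of the 2 x 2 matrix
print(vol((1, 0, 0), (0, 2, 0)))   # 2
```

Both invariance properties hold because row and column operations over $ {\mathbb{Z}}$ preserve the gcd of the $k\times k$ minors.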
Let $a\in A$, $k\in \{1,2,\dots,\#I\}$, and let $\beta _1,\dots,\beta _k\in R^a$ be linearly independent elements. We write $V^a(\beta _1,\dots,\beta _k)$ for the unique maximal subgroup $V\subseteq {\mathbb{Z}}^I$ of rank $k$ which contains $\beta _1,\dots,\beta _k$. Then ${\mathbb{Z}}^I/V^a(\beta _1,\dots,\beta _k)$ is free. In particular, $V^a(\beta _1,\dots,\beta _{\#I})={\mathbb{Z}}^I$ for all $a\in A$ and any linearly independent subset $\{\beta _1,\dots,\beta _{\#I}\}$ of $R^a$.
\[de:base\] Let $W\subseteq {\mathbb{Z}}^I$ be a cofree subgroup (that is, ${\mathbb{Z}}^I/W$ is free) of rank $k$. We say that $\{\beta _1,\dots,\beta _k\}$ is a *base for $W$ at $a$*, if $\beta _i\in W$ for all $i\in \{1,\dots,k\}$ and $W\cap R^a\subseteq \sum _{i=1}^k{\mathbb{N}}_0\beta _i\cup
-\sum _{i=1}^k{\mathbb{N}}_0\beta _i$.
Now we discuss the relationship of linearly independent roots in a root system. Recall that ${\mathcal{C}}$ is a Cartan scheme such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system of type ${\mathcal{C}}$.
\[th:genposk\] Let $a\in A$, $k\in \{1,\dots,\#I\}$, and let $\beta _1,\dots,\beta _k\in R^a$ be linearly independent roots. Then there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, and a permutation $\tau $ of $I$ such that $$w(\beta _i)\in {\mathrm{span}}_{\mathbb{Z}}\{{\alpha }_{\tau (1)},
\ldots ,{\alpha }_{\tau (i)}\} \cap R^b_+$$ for all $i\in \{1,\dots,k\}$.
Let $r=\#I$. Since $R^a$ contains $r$ simple roots, any linearly independent subset of $R^a$ can be enlarged to a linearly independent subset of $r$ elements. Hence it suffices to prove the theorem for $k=r$. We proceed by induction on $r$. If $r=1$, then the claim holds.
Assume that $r>1$. Lemma \[le:hyperplane2\] with $H=V^a(\beta _1,\dots,\beta _{r-1})$ tells that there exist $d\in A$ and $v\in \operatorname{Hom}(a,d)$ such that $v(H)$ is spanned by simple roots. By multiplying $v$ from the left with the longest element of ${\mathcal{W}}({\mathcal{C}})$ in the case that $v(\beta _r)\in -{\mathbb{N}}_0^I$, we may even assume that $v(\beta _r)\in {\mathbb{N}}_0^I$. Now let $J$ be the subset of $I$ such that $\#J=r-1$ and ${\alpha }_i\in v(H)$ for all $i\in J$. Consider the restriction ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})|_{J}$ of ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ to the index set $J$, see [@a-CH09a Def. 4.1]. Since $v(\beta _i)\in H$ for all $i\in \{1,\dots,r-1\}$, induction hypothesis provides us with $b\in A$, $u\in \operatorname{Hom}(d,b)$, and a permutation $\tau '$ of $J$ such that $u$ is a product of simple reflections $\s_i$, where $i\in J$, and $$uv(\beta _n)\in
{\mathrm{span}}_{\mathbb{Z}}\{{\alpha }_{\tau '(j_1)}, \ldots ,{\alpha }_{\tau '(j_n)}\}
\cap R^b_+$$ for all $n\in \{1,2,\dots,r-1\}$, where $J=\{j_1,\dots,j_{r-1}\}$. Since $v(\beta _r)\notin v(H)$ and $v(\beta _r)\in {\mathbb{N}}_0^I$, the $i$-th entry of $v(\beta _r)$, where $i\in I\setminus J$, is positive. This entry does not change if we apply $u$. Therefore $uv(\beta _r)\in {\mathbb{N}}_0^I$, and hence the theorem holds with $w=uv\in \operatorname{Hom}(a,b)$ and with $\tau $ the unique permutation with $\tau (n)=\tau '(j_n)$ for all $n\in \{1,\dots,r-1\}$.
\[simple\_rkk\] Let $a\in A$, $k\in \{1,\dots,\#I\}$, and let $\beta _1,\dots,\beta _k\in R^a$ be linearly independent elements. Then $\{\beta _1,\dots,\beta _k\}$ is a base for $V^a(\beta _1,\dots,\beta _k)$ at $a$ if and only if there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, and a permutation $\tau $ of $I$ such that $w(\beta _i)={\alpha }_{\tau (i)}$ for all $i\in \{1,\dots,k\}$. In this case ${\mathrm{Vol}}_k(\beta _1,\dots,\beta _k)=1$.
The if part of the claim holds by definition of a base and by the axioms for root systems.
Let $b,w$ and $\tau $ be as in Theorem \[th:genposk\]. Let $i\in
\{1,\dots,k\}$. The elements $w(\beta _1),\dots,w(\beta _i)$ are linearly independent and are contained in $V^b({\alpha }_{\tau (1)}, \dots ,
{\alpha }_{\tau (i)})$. Thus ${\alpha }_{\tau (i)}$ is a rational linear combination of $w(\beta _1),\dots,w(\beta _i)$. Now by assumption, $\{w(\beta _1),\dots,w(\beta _k)\}$ is a base for $V^b(w(\beta _1),\dots,w(\beta _k))$ at $b$. Hence ${\alpha }_{\tau
(i)}$ is a linear combination of the positive roots $w(\beta _1),\dots,w(\beta _i)$ with nonnegative integer coefficients. This is possible only if $\{w(\beta _1),\dots,w(\beta _i)\}$ contains ${\alpha }_{\tau (i)}$. By induction on $i$ we obtain that ${\alpha }_{\tau (i)}=w(\beta _i)$.
In the special case $k=\#I$ the above corollary tells that the bases of ${\mathbb{Z}}^I$ at an object $a\in A$ are precisely those subsets which can be obtained as the image, up to a permutation, of the standard basis of ${\mathbb{Z}}^I$ under the action of an element of ${\mathcal{W}}({\mathcal{C}})$.
In [@p-CH09a] the notion of an ${\mathcal{F}}$-sequence was given, and it was used to explain the structure of root systems of rank two. Consider on ${\mathbb{N}}_0^2$ the total ordering $\le _{\mathbb{Q}}$, where $(a_1,a_2)\le _{\mathbb{Q}}(b_1,b_2)$ if and only if $a_1 b_2\le a_2 b_1$. A finite sequence $(v_1,\dots ,v_n)$ of vectors in ${\mathbb{N}}_0^2$ is an ${\mathcal{F}}$-sequence if and only if $v_1<_{\mathbb{Q}}v_2
<_{\mathbb{Q}}\cdots <_{\mathbb{Q}}v_n$ and one of the following holds.
- $n=2$, $v_1=(0,1)$, and $v_2=(1,0)$.
- $n>2$ and there exists $i\in \{2,3,\dots,n-1\}$ such that $v_i=v_{i-1}+v_{i+1}$ and $(v_1,\dots,v_{i-1},v_{i+1},\dots,v_n)$ is an ${\mathcal{F}}$-sequence.
In particular, any ${\mathcal{F}}$-sequence of length $\ge 3$ contains $(1,1)$.
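The recursion above is easy to implement. The following sketch (with hypothetical helper names, not taken from [@p-CH09a]) decides the ${\mathcal{F}}$-sequence property for lists of pairs in ${\mathbb{N}}_0^2$:

```python
def lt_Q(v, w):
    # (a1, a2) <_Q (b1, b2)  iff  a1*b2 < a2*b1  (comparison of slopes)
    return v[0] * w[1] < v[1] * w[0]

def is_F_sequence(seq):
    """Decide the recursive F-sequence property for a list of pairs."""
    n = len(seq)
    # the sequence must be strictly increasing with respect to <_Q
    if any(not lt_Q(seq[i], seq[i + 1]) for i in range(n - 1)):
        return False
    if n == 2:
        return seq[0] == (0, 1) and seq[1] == (1, 0)
    if n < 2:
        return False
    # some inner entry must be the sum of its two neighbours, and
    # deleting it must again yield an F-sequence
    return any(
        seq[i] == (seq[i - 1][0] + seq[i + 1][0],
                   seq[i - 1][1] + seq[i + 1][1])
        and is_F_sequence(seq[:i] + seq[i + 1:])
        for i in range(1, n - 1)
    )
```

For example, the positive roots of the classical rank two root systems of types $A_1\times A_1$, $A_2$, and $B_2$, ordered by $\le _{\mathbb{Q}}$, pass the check, while $((0,1),(2,1),(1,0))$ fails, consistent with the remark that any ${\mathcal{F}}$-sequence of length $\ge 3$ contains $(1,1)$.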
\[pr:R=Fseq\] [@p-CH09a Prop. 3.7] Let ${\mathcal{C}}$ be a Cartan scheme of rank two. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system. Then for any $a\in A$ the set $R^a_+$ ordered by $\le _{\mathbb{Q}}$ is an ${\mathcal{F}}$-sequence.
\[pr:sumoftwo\] [@p-CH09a Cor. 3.8] Let ${\mathcal{C}}$ be a Cartan scheme of rank two. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system. Let $a\in A$ and let $\beta \in R^a_+$. Then either $\beta $ is simple or it is the sum of two positive roots.
\[co:r2conv\] Let $a\in A$, $n\in {\mathbb{N}}$, and let ${\alpha },\beta \in R^a$ such that $\beta -n{\alpha }\in R^a$. Assume that $\{{\alpha },\beta -n{\alpha }\}$ is a base for $V^a({\alpha },\beta )$ at $a$. Then $\beta -k{\alpha }\in R^a$ for all $k\in \{1,2,\dots ,n\}$.
By Corollary \[simple\_rkk\] there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, and $i,j\in I$ such that $w({\alpha })={\alpha }_i$, $w(\beta -n{\alpha })={\alpha }_j$. Then $n{\alpha }_i+{\alpha }_j=w(\beta )\in R^b_+$. Hence $(n-k){\alpha }_i+{\alpha }_j\in R^b$ for all $k\in \{1,2,\dots,n\}$ by Proposition \[pr:sumoftwo\] and (R2). This yields the claim of the corollary.
\[co:cij\] Let $a\in A$, $k\in {\mathbb{Z}}$, and $i,j\in I$ such that $i\not=j$. Then ${\alpha }_j+k{\alpha }_i\in R^a$ if and only if $0\le k\le -c^a_{i j}$.
Axiom (R1) tells that ${\alpha }_j+k{\alpha }_i\notin R^a$ if $k<0$. Since $c^{{\rho }_i(a)}_{i j}=c^a_{i j}$ by (C2), Axiom (R3) gives that ${\alpha }_j-c^a_{i j}{\alpha }_i=\sigma _i^{{\rho }_i(a)}({\alpha }_j)\in R^a$ and that ${\alpha }_j+k{\alpha }_i\notin R^a$ if $k>-c^a_{i j}$. Finally, if $0<k<-c^a_{i j}$ then ${\alpha }_j+k{\alpha }_i\in R^a$ by Corollary \[co:r2conv\] for ${\alpha }={\alpha }_i$, $\beta ={\alpha }_j-c^a_{i j}{\alpha }_i$, and $n=-c^a_{i j}$.
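In the classical one-object case Corollary \[co:cij\] is the usual root string statement. A brute-force sketch (the helper names are hypothetical, not from the paper): generate a rank two root system from an ordinary Cartan matrix by closing the simple roots under the reflections $\sigma _i(v)=v-(\sum _j c_{ij}v_j){\alpha }_i$, then read off the string through ${\alpha }_2$ in direction ${\alpha }_1$.

```python
def reflect(i, v, C):
    # simple reflection in the basis of simple roots:
    # sigma_i(v) = v - (sum_j c_{ij} v_j) * alpha_i
    coeff = sum(C[i][j] * v[j] for j in range(len(v)))
    w = list(v)
    w[i] -= coeff
    return tuple(w)

def rank2_roots(C):
    """Close {alpha_1, alpha_2} under the simple reflections.

    Terminates only for Cartan matrices of finite type.
    """
    roots = {(1, 0), (0, 1)}
    while True:
        new = {reflect(i, v, C) for v in roots for i in (0, 1)} - roots
        if not new:
            return roots
        roots |= new

# Cartan matrix of type B2 with c_12 = -2, c_21 = -1
C = [[2, -2], [-1, 2]]
R = rank2_roots(C)
# the root string alpha_2 + k*alpha_1 should appear for 0 <= k <= -c_12
string = [k for k in range(-1, 5) if (k, 1) in R]
```

Here `R` has the eight roots of $B_2$ and `string` is $[0,1,2]$, matching $0\le k\le -c_{12}=2$.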
Proposition \[pr:sumoftwo\] implies another important fact.
\[root\_is\_sum\] Let ${\mathcal{C}}$ be a Cartan scheme. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system of type ${\mathcal{C}}$. Let $a\in A$ and ${\alpha }\in R^a_+$. Then either ${\alpha }$ is simple, or it is the sum of two positive roots.
Assume that ${\alpha }$ is not simple. Let $i\in I$, $b\in A$, and $w\in \operatorname{Hom}(b,a)$ such that ${\alpha }=w({\alpha }_i)$. Then $\ell(w)>0$. We may assume that for all $j\in I$, $b'\in A$, and $w'\in \operatorname{Hom}(b',a)$ with $w'(\alpha_j)={\alpha }$ we have $\ell(w')\ge\ell(w)$. Since $w(\alpha_i)\in {\mathbb{N}}_0^I$, we obtain that $\ell(w{\sigma }_i)>\ell(w)$ [@a-HeckYam08 Cor. 3]. Therefore, there is a $j\in I\setminus \{i\}$ with $\ell(w{\sigma }_j)<\ell(w)$. Let $w=w_1w_2$ such that $\ell(w)=\ell(w_1)+\ell(w_2)$, $\ell(w_1)$ is minimal, and $w_2=\ldots {\sigma }_i{\sigma }_j{\sigma }_i{\sigma }_j^b$. Assume that $w_2={\sigma }_i\cdots {\sigma }_i{\sigma }_j^b$; the case $w_2={\sigma }_j\cdots {\sigma }_i{\sigma }_j^b$ can be treated similarly. The length of $w_1$ is minimal, thus $\ell(w_1{\sigma }_j)>\ell(w_1)$, and $\ell(w)=\ell(w_1)+\ell(w_2)$ yields that $\ell(w_1{\sigma }_i)>\ell(w_1)$. Using once more [@a-HeckYam08 Cor. 3] we conclude that $$\begin{aligned}
\label{eq:twopos}
w_1(\alpha_i)\in {\mathbb{N}}_0^I,\quad w_1(\alpha_j)\in {\mathbb{N}}_0^I.\end{aligned}$$ Let $\beta=w_2(\alpha_i)$. Then $\beta \in {\mathbb{N}}_0\alpha_i+{\mathbb{N}}_0\alpha_j$, since $\ell (w_2{\sigma }_i)>\ell (w_2)$. Moreover, $\beta $ is not simple: otherwise $\alpha =w_1(\beta )$ with $\ell (w_1)<\ell (w)$ would contradict the minimality of $\ell (w)$. By Proposition \[pr:sumoftwo\] we conclude that $\beta$ is the sum of two positive roots $\beta_1$, $\beta_2\in {\mathbb{N}}_0{\alpha }_i+{\mathbb{N}}_0{\alpha }_j$. It remains to check that $w_1(\beta_1)$, $w_1(\beta_2)$ are positive. But this follows from \[eq:twopos\].
Obstructions for Weyl groupoids of rank three {#rk3_obst}
=============================================
In this section we analyze the structure of finite Weyl groupoids of rank three. Let ${\mathcal{C}}$ be a Cartan scheme of rank three, and assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type ${\mathcal{C}}$. In this case a hyperplane in ${\mathbb{Z}}^I$ is the same as a cofree subgroup of rank two, which will be called a *plane* in the sequel. For simplicity we will take $I=\{1,2,3\}$, and we write $R^a$ for the set of real roots at $a\in A$.
Recall the definition of the functions ${\mathrm{Vol}}_k$, where $k\in \{1,2,3\}$, from the previous section. As noted, for three elements ${\alpha },\beta ,\gamma \in {\mathbb{Z}}^3$ we have ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$ if and only if $\{{\alpha },\beta ,\gamma \}$ is a basis of ${\mathbb{Z}}^3$. Also, we will heavily use the notion of a base, see Definition \[de:base\].
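The excerpt does not repeat the definition of ${\mathrm{Vol}}_k$; the invariance ${\mathrm{Vol}}_2({\alpha },\beta )={\mathrm{Vol}}_2({\alpha },\beta -k{\alpha })$ used below is consistent with taking the gcd of the maximal minors, and the following sketch assumes exactly that, so it is an illustration rather than the paper's official definition:

```python
from functools import reduce
from itertools import combinations
from math import gcd

def det(m):
    # determinant of a small square integer matrix (Laplace expansion)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def vol(vectors):
    """gcd of the absolute values of all maximal minors.

    For three vectors in Z^3 this is |det|, so vol == 1 exactly when
    they form a basis of Z^3; for two vectors it is invariant under
    adding an integer multiple of one vector to the other.
    """
    k, n = len(vectors), len(vectors[0])
    minors = [det([[v[j] for j in cols] for v in vectors])
              for cols in combinations(range(n), k)]
    return reduce(gcd, (abs(m) for m in minors))
```

For instance, `vol` of the standard basis of ${\mathbb{Z}}^3$ is $1$, and replacing one basis vector by a vector with a doubled last coordinate yields $2$.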
\[rootmultiple\] Let $a\in A$ and $\alpha,\beta \in R^a$. Assume that ${\alpha }\not=\pm \beta $ and that $\{{\alpha },\beta\}$ is not a base for $V^a({\alpha },\beta )$ at $a$. Then there exist $k,l\in {\mathbb{N}}$ and $\delta \in R^a$ such that $\beta -k{\alpha }=l\delta $ and $\{{\alpha },\delta \}$ is a base for $V^a({\alpha },\beta )$ at $a$.
The claim without the relation $k>0$ is a special case of Theorem \[th:genposk\]. The relation $\beta \not=\delta $ follows from the assumption that $\{{\alpha },\beta \}$ is not a base for $V^a({\alpha },\beta )$ at $a$.
Let $a\in A$ and $\alpha,\beta \in R^a$ such that ${\alpha }\not=\pm \beta $. Then $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ if and only if ${\mathrm{Vol}}_2({\alpha },\beta )=1$ and ${\alpha }-\beta \notin R^a$. \[le:base2\]
Assume first that $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$. By Corollary \[simple\_rkk\] we may assume that ${\alpha }$ and $\beta $ are simple roots. Therefore ${\mathrm{Vol}}_2({\alpha },\beta )=1$ and ${\alpha }-\beta \notin R^a$.
Conversely, assume that ${\mathrm{Vol}}_2({\alpha },\beta )=1$, ${\alpha }-\beta \notin R^a$, and that $\{{\alpha },\beta \}$ is not a base for $V^a({\alpha },\beta )$ at $a$. Let $k,l,\delta $ be as in Lemma \[rootmultiple\]. Then $$1={\mathrm{Vol}}_2({\alpha },\beta )={\mathrm{Vol}}_2({\alpha },\beta -k{\alpha })=l{\mathrm{Vol}}_2({\alpha },\delta ).$$ Hence $l=1$, and $\{{\alpha },\delta \}=\{{\alpha },\beta -k{\alpha }\}$ is a base for $V^a({\alpha },\beta )$ at $a$. Then $\beta -{\alpha }\in R^a$ by Corollary \[co:r2conv\] and since $k>0$. This gives the desired contradiction to the assumption ${\alpha }-\beta \notin R^a$.
Recall that a semigroup ordering $<$ on a commutative semigroup $(S,+)$ is a total ordering such that for all $s,t,u\in S$ with $s<t$ the relation $s+u<t+u$ holds. For example, the lexicographic ordering on ${\mathbb{Z}}^I$ induced by any total ordering on $I$ is a semigroup ordering.
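The translation invariance of the lexicographic ordering can be spot-checked by brute force on a finite box (a demonstration, not a proof):

```python
from itertools import product

def lex_lt(s, t):
    # Python compares tuples lexicographically
    return s < t

# spot-check translation invariance:  s < t  implies  s + u < t + u
box = list(product(range(-3, 4), repeat=2))
for s, t, u in product(box, repeat=3):
    if lex_lt(s, t):
        s_u = (s[0] + u[0], s[1] + u[1])
        t_u = (t[0] + u[0], t[1] + u[1])
        assert lex_lt(s_u, t_u)
```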
\[posrootssemigroup\] Let $a\in A$, and let $V\subset {\mathbb{Z}}^I$ be a plane containing at least two positive roots of $R^a$. Let $<$ be a semigroup ordering on ${\mathbb{Z}}^I$ such that $0<\gamma $ for all $\gamma \in R^a_+$, and let ${\alpha },\beta $ denote the two smallest elements in $V\cap R^a_+$ with respect to $<$. Then $\{{\alpha },\beta \}$ is a base for $V$ at $a$.
Let ${\alpha }$ be the smallest element of $V\cap R^a_+$ with respect to $<$, and let $\beta $ be the smallest element of $V\cap (R^a_+\setminus \{{\alpha }\})$. Then $V=V^a({\alpha },\beta )$ by (R2). By Lemma \[rootmultiple\] there exists $\delta \in V\cap
R^a$ such that $\{{\alpha },\delta \}$ is a base for $V$ at $a$. First suppose that $\delta <0$. Let $m\in {\mathbb{N}}_0$ be the smallest integer with $\delta +(m+1){\alpha }\notin R^a$. Then $\delta +n{\alpha }<0$ for all $n\in {\mathbb{N}}_0$ with $n\le m$. Indeed, this holds for $n=0$ by assumption. By induction on $n$ we obtain from $\delta +n{\alpha }<0$ and the choice of ${\alpha }$ that $\delta +n{\alpha }<-{\alpha }$, since $\delta $ and ${\alpha }$ are not collinear. Hence $\delta +(n+1){\alpha }<0$. We conclude that $-(\delta +m{\alpha })>0$. Moreover, $\{{\alpha },-(\delta +m{\alpha })\}$ is a base for $V$ at $a$ by Lemma \[le:base2\] and the choice of $m$. Therefore, by replacing $\{{\alpha },\delta \}$ by $\{{\alpha },-(\delta +m{\alpha })\}$, we may assume that $\delta >0$. Since $\beta >0$, we conclude that $\beta =k{\alpha }+l\delta $ for some $k,l\in
{\mathbb{N}}_0$. Since $\beta $ is not a multiple of ${\alpha }$, this implies that $\beta =\delta $ or $\beta >\delta $. Then the choice of $\beta $ and the positivity of $\delta $ yield that $\delta =\beta $, that is, $\{{\alpha },\beta \}$ is a base for $V$ at $a$.
\[le:badroots\] Let $k\in {\mathbb{N}}_{\ge 2}$, $a\in A$, ${\alpha }\in R^a_+$, and $\beta \in {\mathbb{Z}}^I$ such that ${\alpha }$ and $\beta $ are not collinear and ${\alpha }+k\beta \in R^a$. Assume that ${\mathrm{Vol}}_2({\alpha },\beta )=1$ and that $(-{\mathbb{N}}{\alpha }+{\mathbb{Z}}\beta )
\cap {\mathbb{N}}_0^I=\emptyset $. Then $\beta \in R^a$ and ${\alpha }+l\beta \in R^a$ for all $l\in \{1,2,\dots,k\}$.
We prove the claim indirectly. Assume that $\beta \notin R^a$. By Lemma \[posrootssemigroup\] there exists a base $\{\gamma _1,\gamma
_2\}$ for $V^a({\alpha },\beta )$ at $a$ such that $\gamma _1,\gamma _2\in R^a_+$. The assumptions of the lemma imply that there exist $m_1,l_1\in {\mathbb{N}}_0$ and $m_2,l_2\in {\mathbb{Z}}$ such that $\gamma _1=m_1{\alpha }+m_2\beta $, $\gamma _2=l_1{\alpha }+l_2\beta $. Since $\beta \notin R^a$, we obtain that $m_1\ge 1$ and $m_2\ge 1$. Therefore relations ${\alpha },{\alpha }+k\beta \in R^a_+$ imply that $\{{\alpha },{\alpha }+k\beta \}=\{\gamma _1,\gamma _2\}$. The latter is a contradiction to ${\mathrm{Vol}}_2(\gamma _1,\gamma _2)=1$ and ${\mathrm{Vol}}_2({\alpha },{\alpha }+k\beta )=k>1$. Thus $\beta \in R^a$. By Lemma \[rootmultiple\] we obtain that $\{\beta ,{\alpha }-m\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$ for some $m\in {\mathbb{N}}_0$. Then Corollary \[co:r2conv\] and the assumption that ${\alpha }+k\beta \in R^a$ imply the last claim of the lemma.
We say that a subset $S$ of ${\mathbb{Z}}^3$ is *convex* if any rational convex linear combination of elements of $S$ is either in $S$ or not in ${\mathbb{Z}}^3$. We start with a simple example.
\[le:square\] Let $a\in A$. Assume that $c^a_{12}=0$.
\(1) Let $k_1,k_2\in {\mathbb{Z}}$. Then ${\alpha }_3+k_1{\alpha }_1+k_2{\alpha }_2\in R^a$ if and only if $0\le k_1\le -c^a_{13}$ and $0\le k_2\le -c^a_{23}$.
\(2) Let $\gamma \in ({\alpha }_3+{\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)\cap R^a$. Then $\gamma -{\alpha }_1\in R^a$ or $\gamma +{\alpha }_1\in R^a$. Similarly $\gamma -{\alpha }_2\in R^a$ or $\gamma +{\alpha }_2\in R^a$.
\(1) The assumption $c^a_{12}=0$ implies that $c^{{\rho }_1(a)}_{23}=c^a_{23}$, see [@a-CH09a Lemma 4.5]. Applying ${\sigma }_1^{{\rho }_1(a)}$, ${\sigma }_2^{{\rho }_2(a)}$, and ${\sigma }_1{\sigma }_2^{{\rho }_2{\rho }_1(a)}$ to ${\alpha }_3$ we conclude that ${\alpha }_3-c^a_{13}{\alpha }_1$, ${\alpha }_3-c^a_{23}{\alpha }_2$, ${\alpha }_3-c^a_{13}{\alpha }_1-c^a_{23}{\alpha }_2\in R^a_+$. Thus Lemma \[le:badroots\] implies that ${\alpha }_3+m_1{\alpha }_1+m_2{\alpha }_2\in R^a$ for all $m_1,m_2\in {\mathbb{Z}}$ with $0\le m_1\le -c^a_{13}$ and $0\le m_2\le -c^a_{23}$. Further, (R1) gives that ${\alpha }_3+k_1{\alpha }_1+k_2{\alpha }_2\notin R^a$ if $k_1<0$ or $k_2<0$. Applying again the simple reflections ${\sigma }_1$ and ${\sigma }_2$, a similar argument proves the remaining part of the claim. Observe that the proof does not use the fact that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is irreducible.
\(2) Since $c^a_{12}=0$, the irreducibility of ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ yields that $c^a_{13},c^a_{23}<0$ by [@a-CH09a Def. 4.5, Prop. 4.6]. Hence the claim follows from (1).
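Convexity in the above sense can be spot-checked along segments: if $S$ is convex, then every lattice point on a segment between two elements of $S$ lies in $S$. The following sketch (hypothetical function names) tests this necessary condition; for instance, the box of roots ${\alpha }_3+k_1{\alpha }_1+k_2{\alpha }_2$ described in Lemma \[le:square\](1) passes it.

```python
from math import gcd

def segment_lattice_points(p, q):
    """All integer points on the closed segment from p to q in Z^3."""
    d = [b - a for a, b in zip(p, q)]
    g = gcd(gcd(abs(d[0]), abs(d[1])), abs(d[2]))
    if g == 0:          # p == q
        return [tuple(p)]
    step = [x // g for x in d]
    return [tuple(a + t * s for a, s in zip(p, step))
            for t in range(g + 1)]

def is_segment_convex(S):
    """Necessary condition for convexity: every lattice point on a
    segment between two elements of S again lies in S."""
    S = set(S)
    return all(pt in S
               for p in S for q in S
               for pt in segment_lattice_points(p, q))
```

As a sanity check, $\{(0,0,0),(2,0,0)\}$ fails the test because the midpoint $(1,0,0)$ is a lattice point of the segment.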
\[pr:suminR\] Let $a\in A$ and let $\gamma _1,\gamma _2,\gamma _3\in R^a$. Assume that ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,\gamma _3)=1$ and that $\gamma _1-\gamma _2,\gamma _1-\gamma _3\notin R^a$. Then $\gamma _1+\gamma _2\in R^a$ or $\gamma _1+\gamma _3\in R^a$.
Since $\gamma _1-\gamma _2\notin R^a$ and ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,\gamma _3)=1$, Theorem \[th:genposk\] and Lemma \[le:base2\] imply that there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, and $i_1,i_2,i_3\in I$ such that $w(\gamma _1)={\alpha }_{i_1}$, $w(\gamma _2)={\alpha }_{i_2}$, and $w(\gamma _3)={\alpha }_{i_3}+k_1{\alpha }_{i_1}+k_2{\alpha }_{i_2}$ for some $k_1,k_2\in {\mathbb{N}}_0$. Assume that $\gamma _1+\gamma _2\notin R^a$. Then $c^b_{i_1i_2}=0$. Since $\gamma _3-\gamma _1\notin R^a$, Lemma \[le:square\](2) with $\gamma =w(\gamma _3)$ gives that $\gamma _3+\gamma _1\in R^a$. This proves the claim.
\[le:root\_diffs1\] Assume that $R^a\cap ({\mathbb{N}}_0{\alpha }_1+{\mathbb{N}}_0{\alpha }_2)$ contains at most $4$ positive roots.
\(1) The set $S_3:=({\alpha }_3+{\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)\cap R^a$ is convex.
\(2) Let $\gamma \in S_3$. Then $\gamma ={\alpha }_3$ or $\gamma -{\alpha }_1\in R^a$ or $\gamma -{\alpha }_2\in R^a$.
Consider the roots of the form $w^{-1}({\alpha }_3)\in R^a$, where $w\in \Homsfrom{a}$ is a product of reflections ${\sigma }_1^b$, ${\sigma }_2^b$ with $b\in A$. All of these roots belong to $S_3$. Using Lemma \[le:badroots\] the claims of the lemma can be checked case by case, similarly to the proof of Lemma \[le:square\].
The lemma can be proven by elementary calculations, since all nonsimple positive roots in $({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)\cap R^a$ are of the form, say, ${\alpha }_1+k{\alpha }_2$ with $k\in {\mathbb{N}}$. We will see in Theorem \[th:class\] that the classification of connected Cartan schemes of rank three admitting a finite irreducible root system has a finite set of solutions. Thus it is possible to check the claim of the lemma for any such Cartan scheme. Using computer calculations one obtains that the lemma holds without any restriction on the (finite) cardinality of $R^a\cap ({\mathbb{N}}_0{\alpha }_1+{\mathbb{N}}_0{\alpha }_2)$.
\[le:root\_diffs2\] Let ${\alpha },\beta ,\gamma \in R^a$ such that ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$. Assume that ${\alpha }-\beta $, $\beta -\gamma $, ${\alpha }-\gamma \notin R^a$ and that $\{{\alpha },\beta ,\gamma \}$ is not a base for ${\mathbb{Z}}^I$ at $a$. Then the following hold.
\(1) There exist $w\in \Homsfrom{a}$ and $n_1,n_2\in {\mathbb{N}}$ such that $w({\alpha })$, $w(\beta )$, and $w(\gamma -n_1{\alpha }-n_2\beta )$ are simple roots.
\(2) None of the vectors ${\alpha }-k\beta $, ${\alpha }-k\gamma $, $\beta -k{\alpha }$, $\beta -k\gamma $, $\gamma -k{\alpha }$, $\gamma -k\beta $, where $k\in {\mathbb{N}}$, is contained in $R^a$.
\(3) ${\alpha }+\beta $, ${\alpha }+\gamma $, $\beta +\gamma \in R^a$.
\(4) One of the sets $\{{\alpha }+2\beta ,\beta +2\gamma ,\gamma +2{\alpha }\}$ and $\{2{\alpha }+\beta ,2\beta +\gamma ,2\gamma +{\alpha }\}$ is contained in $R^a$, the other one has trivial intersection with $R^a$.
\(5) None of the vectors $\gamma -{\alpha }-k\beta $, $\gamma -k{\alpha }-\beta $, $\beta -\gamma -k{\alpha }$, $\beta -k\gamma -{\alpha }$, ${\alpha }-\beta -k\gamma $, ${\alpha }-k\beta -\gamma $, where $k\in {\mathbb{N}}_0$, is contained in $R^a$.
\(6) Assume that ${\alpha }+2\beta \in R^a$. Let $k\in {\mathbb{N}}$ such that ${\alpha }+k\beta \in R^a$, ${\alpha }+(k+1)\beta \notin R^a$. Let ${\alpha }'={\alpha }+k\beta $, $\beta '=-\beta $, $\gamma '=\gamma +\beta $. Then ${\mathrm{Vol}}_3({\alpha }',\beta ',\gamma ')=1$, $\{{\alpha }',\beta ',\gamma '\}$ is not a base for ${\mathbb{Z}}^I$ at $a$, and none of ${\alpha }'-\beta '$, ${\alpha }'-\gamma '$, $\beta '-\gamma '$ is contained in $R^a$.
\(7) None of the vectors ${\alpha }+3\beta $, $\beta +3\gamma $, $\gamma +3{\alpha }$, $3{\alpha }+\beta $, $3\beta +\gamma $, $3\gamma +{\alpha }$ is contained in $R^a$. In particular, $k=2$ holds in (6).
\(1) By Theorem \[th:genposk\] there exist $m_1,m_2,n_1,n_2,n_3\in {\mathbb{N}}_0$, $i_1,i_2,i_3\in I$, and $w\in \Homsfrom{a}$, such that $w({\alpha })={\alpha }_{i_1}$, $w(\beta )=m_1{\alpha }_{i_1}+m_2{\alpha }_{i_2}$, and $w(\gamma )=n_1{\alpha }_{i_1}+n_2{\alpha }_{i_2}+n_3{\alpha }_{i_3}$. Since $\det w\in
\{\pm 1\}$ and ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$, this implies that $m_2=n_3=1$. Further, $\beta -{\alpha }\notin R^a$, and hence $w(\beta )={\alpha }_{i_2}$ by Corollary \[co:cij\]. Since $\{{\alpha },\beta ,\gamma \}$ is not a base for ${\mathbb{Z}}^I$ at $a$, we conclude that $w(\gamma )\not={\alpha }_{i_3}$. Then Corollary \[co:cij\] and the assumptions $\gamma -{\alpha }$, $\gamma -\beta
\notin R^a$ imply that $w(\gamma )\notin {\alpha }_{i_3}+{\mathbb{N}}_0{\alpha }_{i_1}$ and $w(\gamma )\notin {\alpha }_{i_3}+{\mathbb{N}}_0{\alpha }_{i_2}$. Thus the claim is proven.
\(2) By (1), $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$. Thus ${\alpha }-k\beta \notin R^a$ for all $k\in {\mathbb{N}}$. The remaining claims follow by symmetry.
\(3) Suppose that ${\alpha }+\beta \notin R^a$. By (1) there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, $i_1,i_2,i_3\in I$ and $n_1,n_2\in {\mathbb{N}}$ such that $w({\alpha })={\alpha }_{i_1}$, $w(\beta )={\alpha }_{i_2}$, and $w(\gamma )={\alpha }_{i_3}+n_1{\alpha }_{i_1}+n_2{\alpha }_{i_2}\in R^b_+$. By Theorem \[root\_is\_sum\] there exist $n'_1,n'_2\in {\mathbb{N}}_0$ such that $n'_1\le n_1$, $n'_2\le n_2$, $n'_1+n'_2<n_1+n_2$, and $${\alpha }_{i_3}+n'_1{\alpha }_{i_1}+n'_2{\alpha }_{i_2}\in R^b_+,\quad
(n_1-n'_1){\alpha }_{i_1}+(n_2-n'_2){\alpha }_{i_2}\in R^b_+.$$ Since ${\alpha }+\beta \notin R^a$, Proposition \[pr:R=Fseq\] yields that $R^b_+\cap
{\mathrm{span}}_{\mathbb{Z}}\{{\alpha }_{i_1},{\alpha }_{i_2}\}=\{{\alpha }_{i_1},{\alpha }_{i_2}\}$. Thus $\gamma -{\alpha }\in R^a$ or $\gamma -\beta \in R^a$. This is a contradiction to the assumption of the lemma. Hence ${\alpha }+\beta \in R^a$. By symmetry we obtain that ${\alpha }+\gamma $, $\beta +\gamma \in R^a$.
\(4) Suppose that ${\alpha }+2\beta $, $2{\alpha }+\beta \notin R^a$. By (1) the set $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$, and ${\alpha }+\beta \in R^a$ by (3). Then Proposition \[pr:R=Fseq\] implies that $R^a\cap {\mathrm{span}}_{\mathbb{Z}}\{{\alpha },\beta \}=\{\pm {\alpha },\pm \beta ,\pm ({\alpha }+\beta
)\}$. Thus (1) and Lemma \[le:root\_diffs1\](2) give that $\gamma -{\alpha }\in
R^a$ or $\gamma -\beta \in R^a$, a contradiction to the initial assumption of the lemma. Hence by symmetry each of the sets $\{{\alpha }+2\beta ,2{\alpha }+\beta \}$, $\{{\alpha }+2\gamma ,2{\alpha }+\gamma \}$, $\{\beta +2\gamma ,2\beta +\gamma \}$ contains at least one element of $R^a$.
Assume now that $\gamma +2{\alpha }$, $\gamma +2\beta \in R^a$. By changing the object via (1) we may assume that ${\alpha }$, $\beta $, and $\gamma -n_1{\alpha }-n_2\beta $ are simple roots for some $n_1,n_2\in {\mathbb{N}}$. Then Lemma \[le:badroots\] applies to $\gamma +2{\alpha }\in R^a_+$ and $\beta -{\alpha }$, and tells that $\beta
-{\alpha }\in R^a$. This gives a contradiction.
By the previous two paragraphs we conclude that if $\gamma +2{\alpha }\in R^a$, then $\gamma +2\beta \notin R^a$, and hence $\beta +2\gamma \in R^a$. Similarly, we also obtain that ${\alpha }+2\beta \in R^a$. By symmetry this implies (4).
\(5) By symmetry it suffices to prove that $\gamma -({\alpha }+k\beta )\notin R^a$ for all $k\in {\mathbb{N}}_0$. For $k=0$ the claim holds by assumption.
First we prove that $\gamma -({\alpha }+2\beta )\notin R^a$. By (3) we know that $\gamma +{\alpha }$, ${\alpha }+\beta \in R^a$, and $\gamma -\beta \notin R^a$ by assumption. Since ${\mathrm{Vol}}_2(\gamma +{\alpha },{\alpha }+\beta )=1$, Lemma \[le:base2\] gives that $\{\gamma +{\alpha }, {\alpha }+\beta \}$ is a base for $V^a(\gamma +{\alpha },{\alpha }+\beta )$ at $a$. Since $\gamma -({\alpha }+2\beta )=(\gamma +{\alpha })
-2({\alpha }+\beta )$, we conclude that $\gamma -({\alpha }+2\beta )\notin R^a$.
Now let $k\in {\mathbb{N}}$. Assume that $\gamma -({\alpha }+k\beta )\in R^a$ and that $k$ is minimal with this property. Let ${\alpha }'=-{\alpha }$, $\beta '=-\beta $, $\gamma '=\gamma -({\alpha }+k\beta )$. Then ${\alpha }',\beta ',\gamma '\in R^a$ with ${\mathrm{Vol}}_3({\alpha }',\beta ',\gamma ')=1$. Moreover, ${\alpha }'-\beta '\notin
R^a$ by assumption, ${\alpha }'-\gamma '=-(\gamma -k\beta )\notin R^a$ by (2), and $\beta '-\gamma '=-(\gamma -{\alpha }-(k-1)\beta )\notin R^a$ by the minimality of $k$. Further, $\{{\alpha }',\beta ',\gamma '\}$ is not a base for ${\mathbb{Z}}^I$ at $a$, since $\gamma =\gamma '-{\alpha }'-k\beta '$. Hence Claim (3) holds for ${\alpha }',\beta ',\gamma '$. In particular, $$\gamma '+\beta '=\gamma -({\alpha }+(k+1)\beta )\in R^a.$$ This and the previous paragraph imply that $k\ge 3$.
We distinguish two cases depending on the parity of $k$. First assume that $k$ is even. Let ${\alpha }'=\gamma +{\alpha }$ and $\beta '=-({\alpha }+k/2\beta )$. Then ${\mathrm{Vol}}_2({\alpha }',\beta ')=1$ and ${\alpha }'+2\beta '=\gamma -({\alpha }+k\beta )\in R^a$. Lemma \[le:badroots\] applied to ${\alpha }',\beta '$ gives that $\gamma
-k/2\beta ={\alpha }'+\beta '\in R^a$, which contradicts (2).
Finally, the case of odd $k$ can be excluded similarly by considering $V^a(\gamma +{\alpha },\gamma -({\alpha }+(k+1)\beta ))$.
\(6) We get ${\mathrm{Vol}}_3({\alpha }',\beta ',\gamma ')=1$ since ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$ and ${\mathrm{Vol}}_3$ is invariant under the right action of ${\mathrm{GL}}({\mathbb{Z}}^3)$. Moreover, ${\alpha }'-\beta '={\alpha }+(k+1)\beta \notin R^a$ by the choice of $k$. Further, $\beta '-\gamma '=-(2\beta +\gamma )\notin R^a$ by (4), and ${\alpha }'-\gamma '\notin R^a$ by (5). Finally, $\{{\alpha }',\beta ',\gamma '\}$ is not a base for ${\mathbb{Z}}^I$ at $a$, since $R^a\ni \gamma -n_1{\alpha }-n_2\beta =\gamma '-n_1{\alpha }'+(1+n_2-kn_1)\beta '$, where $n_1,n_2\in {\mathbb{N}}$ are as in (1).
\(7) We prove that $\gamma +3{\alpha }\notin R^a$. The rest follows by symmetry. If $2{\alpha }+\beta \in R^a$, then $\gamma +2{\alpha }\notin R^a$ by (4), and hence $\gamma +3{\alpha }\notin R^a$. Otherwise ${\alpha }+2\beta ,\gamma +2{\alpha }\in R^a$ by (4). Let $k$, ${\alpha }'$, $\beta '$, $\gamma '$ be as in (6). Then (6) and (3) give that $R^a\ni \gamma '+{\alpha }'=\gamma +{\alpha }+(k+1)\beta $. Since $\gamma +{\alpha }\in
R^a$, Lemma \[le:badroots\] implies that $\gamma +{\alpha }+2\beta \in R^a$. Let $w$ be as in (1). If $\gamma +3{\alpha }\in R^a$, then Lemma \[le:badroots\] for the vectors $w(\gamma +{\alpha }+2\beta )$ and $w({\alpha }-\beta )$ implies that $w({\alpha }-\beta )\in R^a$, a contradiction. Thus $\gamma +3{\alpha }\notin R^a$.
Recall that ${\mathcal{C}}$ is a Cartan scheme of rank three and ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type ${\mathcal{C}}$.
\[root\_diffs\] Let $a\in A$ and ${\alpha },\beta ,\gamma \in R^a$. If ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$ and none of $\alpha -\beta $, $\alpha -\gamma $, $\beta -\gamma $ is contained in $R^a$, then $\{{\alpha },\beta ,\gamma \}$ is a base for ${\mathbb{Z}}^I$ at $a$.
Assume to the contrary that $\{{\alpha },\beta ,\gamma \}$ is not a base for ${\mathbb{Z}}^I$ at $a$. Exchanging ${\alpha }$ and $\beta $ if necessary, by Lemma \[le:root\_diffs2\](4) we may assume that ${\alpha }+2\beta \in R^a$. By Lemma \[le:root\_diffs2\](6),(7) the triple $({\alpha }+2\beta
,-\beta , \gamma +\beta )$ satisfies the assumptions of Lemma \[le:root\_diffs2\], and $({\alpha }+2\beta )+2(-\beta )={\alpha }\in R^a$. Hence $2{\alpha }+3\beta =2({\alpha }+2\beta )+(-\beta )\notin R^a$ by Lemma \[le:root\_diffs2\](4). Thus $V^a({\alpha },\beta )\cap R^a=\{\pm {\alpha },
\pm ({\alpha }+\beta ),\pm ({\alpha }+2\beta ), \pm \beta \}$ by Proposition \[pr:R=Fseq\], and hence, using Lemma \[le:root\_diffs2\](1), we obtain from Lemma \[le:root\_diffs1\](2) that $\gamma -{\alpha }\in R^a$ or $\gamma -\beta \in R^a$. This is a contradiction to our initial assumption, and hence $\{{\alpha },\beta ,\gamma \}$ is a base for ${\mathbb{Z}}^I$ at $a$.
\[convex\_diff2\] Let $a\in A$ and $\gamma _1,\gamma _2,{\alpha }\in R^a$. Assume that $\{\gamma _1,\gamma _2\}$ is a base for $V^a(\gamma _1,\gamma _2)$ at $a$ and that ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,{\alpha })=1$. Then either $\{\gamma _1,\gamma _2,{\alpha }\}$ is a base for ${\mathbb{Z}}^I$ at $a$, or one of ${\alpha }-\gamma _1$, ${\alpha }-\gamma _2$ is contained in $R^a$.
For the proof of Theorem \[th:class\] we need a bound for the entries of the Cartan matrices of ${\mathcal{C}}$. To get this bound we use the following.
\[le:someroots\] Let $a\in A$.
\(1) At most one of $c^a_{12}$, $c^a_{13}$, $c^a_{23}$ is zero.
\(2) ${\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$.
\(3) Let $k\in {\mathbb{Z}}$. Then $k{\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$ if and only if $k_1\le k\le k_2$, where $$\begin{aligned}
k_1=
\begin{cases}
0 & \text{if $c^a_{23}<0$,}\\
1 & \text{if $c^a_{23}=0$,}
\end{cases}
\quad
k_2=
\begin{cases}
-c^a_{12}-c^a_{13} & \text{if $c^{{\rho }_1(a)}_{23}<0$,}\\
-c^a_{12}-c^a_{13}-1 & \text{if $c^{{\rho }_1(a)}_{23}=0$.}
\end{cases}
\end{aligned}$$
\(4) We have $2{\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$ if and only if either $c^a_{12}+c^a_{13}\le -3$ or $c^a_{12}+c^a_{13}=-2$, $c^{{\rho }_1(a)}_{23}<0$.
\(5) Assume that $$\begin{aligned}
\#(R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2))\ge 5.
\label{eq:Rbig}
\end{aligned}$$ Then there exists $k\in {\mathbb{N}}_0$ such that $k{\alpha }_1+2{\alpha }_2+{\alpha }_3\in R^a$. Let $k_0$ be the smallest among all such $k$. Then $k_0$ is given by the following. $$\begin{aligned}
\begin{cases}
0 & \text{if $c^a_{23}\le -2$,}\\
1 & \text{if $-1\le c^a_{23}\le 0$,
$c^a_{21}+c^a_{23}\le -2$, $c^{{\rho }_2(a)}_{13}<0$,}\\
1 & \text{if $-1\le c^a_{23}\le 0$,
$c^a_{21}+c^a_{23}\le -3$, $c^{{\rho }_2(a)}_{13}=0$,}\\
2 & \text{if $c^a_{21}=c^a_{23}=-1$, $c^{{\rho }_2(a)}_{13}=0$,}\\
2 & \text{if $c^a_{21}=-1$, $c^a_{23}=0$, $c^{{\rho }_2(a)}_{13}\le -2$,}\\
3 & \text{if $c^a_{21}=-1$, $c^a_{23}=0$, $c^{{\rho }_2(a)}_{13}=-1$,
$c^{{\rho }_2(a)}_{12}\le -3$,}\\
3 & \text{if $c^a_{21}=-1$, $c^a_{23}=0$, $c^{{\rho }_2(a)}_{13}=-1$,
$c^{{\rho }_2(a)}_{12}=-2$, $c^{{\rho }_1{\rho }_2(a)}_{23}<0$,}\\
4 & \text{otherwise.}
\end{cases}
\end{aligned}$$ Further, if $c^a_{13}=0$ then $k_0\le 2$.
We may assume that ${\mathcal{C}}$ is connected. Then, since ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is irreducible, Claim (1) holds by [@a-CH09a Def. 4.5, Prop. 4.6].
\(2) The claim is invariant under permutation of $I$. Thus by (1) we may assume that $c^a_{23}\not=0$. Hence ${\alpha }_2+{\alpha }_3\in
R^a$. Assume first that $c^a_{13}=0$. Then $c^{{\rho }_1(a)}_{13}=0$ by (C2), $c^{{\rho }_1(a)}_{23}\not=0$ by (1), and ${\alpha }_2+{\alpha }_3\in R^{{\rho }_1(a)}_+$. Hence ${\sigma }^{{\rho }_1(a)}_1({\alpha }_2+{\alpha }_3)=-c^a_{12}{\alpha }_1+{\alpha }_2+{\alpha }_3\in
R^a$. Therefore (2) holds by Lemma \[le:badroots\] for ${\alpha }={\alpha }_2+{\alpha }_3$ and $\beta ={\alpha }_1$.
Assume now that $c^a_{13}\not=0$. By symmetry and the previous paragraph we may also assume that $c^a_{12},c^a_{23}\not=0$. Let $b={\rho }_1(a)$. If $c^b_{23}=0$ then ${\alpha }_1+{\alpha }_2+{\alpha }_3\in R^b$ by the previous paragraph. Then $$R^a\ni {\sigma }^b_1({\alpha }_1+{\alpha }_2+{\alpha }_3)
=(-c^a_{12}-c^a_{13}-1){\alpha }_1+{\alpha }_2+{\alpha }_3,$$ and the coefficient of ${\alpha }_1$ is positive. Further, ${\alpha }_2+{\alpha }_3\in R^a$, and hence (2) holds in this case by Lemma \[le:badroots\]. Finally, if $c^b_{23}\not=0$, then ${\alpha }_2+{\alpha }_3\in R^b_+$, and hence $(-c^a_{12}-c^a_{13}){\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$. Since $-c^a_{12}-c^a_{13}>0$, (2) follows again from Lemma \[le:badroots\].
\(3) If $c^a_{23}<0$, then ${\alpha }_2+{\alpha }_3\in R^a$ and $-k{\alpha }_1+{\alpha }_2+{\alpha }_3\notin R^a$ for all $k\in {\mathbb{N}}$. If $c^a_{23}=0$, then ${\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$ by (2), and $-k{\alpha }_1+{\alpha }_2+{\alpha }_3\notin R^a$ for all $k\in {\mathbb{N}}_0$. Applying the same argument to $R^{{\rho }_1(a)}$ and using the reflection ${\sigma }^{{\rho }_1(a)}_1$ and Lemma \[le:badroots\] gives the claim.
\(4) This follows immediately from (3).
\(5) The first case follows from Corollary \[co:cij\] and the second and third cases are obtained from (4) by interchanging the elements $1$ and $2$ of $I$. We also obtain that if $k_0$ exists then $k_0\ge 2$ in all other cases. By \[eq:Rbig\] and Proposition \[pr:R=Fseq\] we conclude that ${\alpha }_1+{\alpha }_2\in R^a$. Then $c^a_{21}<0$ by Corollary \[co:cij\], and hence we are left with calculating $k_0$ if $-1\le c^a_{23}\le 0$, $c^a_{21}+c^a_{23}=-2$, $c^{{\rho }_2(a)}_{13}=0$, or $c^a_{21}=-1$, $c^a_{23}=0$. By (1), if $c^{{\rho }_2(a)}_{13}=0$ then $c^{{\rho }_2(a)}_{23}\not=0$, and hence $c^a_{23}<0$ by (C2). Thus we have to consider the elements $k{\alpha }_1+2{\alpha }_2+{\alpha }_3$, where $k\ge 2$, under the assumption that $$\begin{aligned}
c^a_{21}=c^a_{23}=-1, \, c^{{\rho }_2(a)}_{13}=0 \quad \text{or}\quad
c^a_{21}=-1,\, c^a_{23}=0.
\label{eq:ccond1}
\end{aligned}$$ Since $c^a_{21}=-1$, Condition \[eq:Rbig\] gives that $$c^{{\rho }_2(a)}_{12}\le -2,$$ see [@a-CH09a Lemma 4.8]. Further, the first set of equations in \[eq:ccond1\] implies that $c^{{\rho }_1{\rho }_2(a)}_{13}=0$, and hence $c^{{\rho }_1{\rho }_2(a)}_{23}<0$ by (1). Since ${\sigma }_2^a(2{\alpha }_1+2{\alpha }_2+{\alpha }_3)=2{\alpha }_1+{\alpha }_3-c^a_{23}{\alpha }_2$, the first set of equations in \[eq:ccond1\] and (4) imply that $k_0=2$. Similarly, Corollary \[co:cij\] tells that $k_0=2$ under the second set of conditions in \[eq:ccond1\] if and only if $c^{{\rho }_2(a)}_{13}\le -2$.
It remains to consider the situation for $$\begin{aligned}
c^a_{21}=-1,\,c^a_{23}=0,\,c^{{\rho }_2(a)}_{13}=-1.
\label{eq:ccond2}
\end{aligned}$$ Indeed, equation $c^a_{23}=0$ implies that $c^{{\rho }_2(a)}_{23}=0$ by (C2), and hence $c^{{\rho }_2(a)}_{13}<0$ by (1). Assuming \[eq:ccond2\] we obtain that ${\sigma }_2^a(3{\alpha }_1+2{\alpha }_2+{\alpha }_3)=3{\alpha }_1+{\alpha }_2+{\alpha }_3$, and hence (3) implies that $k_0=3$ if and only if the corresponding conditions in (5) are valid.
The rest follows by looking at $\sigma _1\sigma _2^a(4{\alpha }_1+2{\alpha }_2+{\alpha }_3)$ and is left to the reader. The last claim holds since $c^a_{13}=0$ implies that $c^a_{23}\not=0$ by (1). The assumption $\#(R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2))\ge 5$ is needed to exclude the case $c^a_{21}=-1$, $c^{{\rho }_2(a)}_{12}=-2$, $c^{{\rho }_1{\rho }_2(a)}_{21}=-1$, where $R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)=\{{\alpha }_2,{\alpha }_1+{\alpha }_2,2{\alpha }_1+{\alpha }_2,{\alpha }_1\}$, by using Proposition \[pr:R=Fseq\] and Corollary \[co:cij\], see also the proof of [@a-CH09a Lemma 4.8].
\[cartan\_6\] Let ${\mathcal{C}}$ be a Cartan scheme of rank three. Assume that ${\mathcal{R}}{^\mathrm{re}}$ is a finite irreducible root system of type ${\mathcal{C}}$. Then all entries of the Cartan matrices of ${\mathcal{C}}$ are greater than or equal to $-7$.
It can be assumed that ${\mathcal{C}}$ is connected. We prove the theorem indirectly: suppose that there is an object $a\in A$ with $c^a_{12}\le -8$. Then Proposition \[pr:R=Fseq\] implies that $\# (R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2))\ge 5$. By Lemma \[le:someroots\] there exists $k_0\in \{0,1,2,3,4\}$ such that ${\alpha }:=k_0{\alpha }_1+2{\alpha }_2+{\alpha }_3\in R^a_+$ and ${\alpha }-{\alpha }_1\notin R^a$. By Lemma \[le:base2\] and the choice of $k_0$ the set $\{{\alpha },{\alpha }_1\}$ is a base for $V^a({\alpha },{\alpha }_1)$ at $a$. Corollary \[simple\_rkk\] implies that there exists a root $\gamma \in R^a$ such that $\{{\alpha },{\alpha }_1,\gamma \}$ is a base for ${\mathbb{Z}}^I$ at $a$. Let $d\in A$, $w\in \operatorname{Hom}(a,d)$, and $i_1,i_2,i_3\in I$ such that $w({\alpha })={\alpha }_{i_1}$, $w({\alpha }_1)={\alpha }_{i_2}$, $w(\gamma )={\alpha }_{i_3}$.
Let $b={\rho }_1(a)$. Again by Lemma \[le:someroots\] there exists $k_1\in \{0,1,2,3,4\}$ such that $\beta :=k_1{\alpha }_1+2{\alpha }_2+{\alpha }_3\in R^b_+$. Thus $$R^a_+\ni {\sigma }_1^b(\beta )=(-k_1-2c^a_{12}-c^a_{13}){\alpha }_1+2{\alpha }_2+{\alpha }_3.$$ Further, $$-k_1-2c^a_{12}-c^a_{13}-k_0>-c^a_{12}$$ since $k_0\le 2$ if $c^a_{13}=0$. Hence ${\alpha }_{i_1}+(1-c^a_{12}){\alpha }_{i_2}\in R^d$, that is, $c^d_{i_2 i_1}<c^a_{1 2}\le -8$. We conclude that there exists no lower bound for the entries of the Cartan matrices of ${\mathcal{C}}$, which is a contradiction to the finiteness of ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$. This proves the theorem.
The bound in the theorem is not sharp. After completing the classification one can check that the entries of the Cartan matrices of ${\mathcal{C}}$ are always at least $-6$. The entry $-6$ appears for example in the Cartan scheme corresponding to the root system with number $53$, see Corollary \[co:cij\].
\[Euler\_char\] Let ${\mathcal{C}}$ be an irreducible connected simply connected Cartan scheme of rank three. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system of type ${\mathcal{C}}$. Let $e$ be the number of vertices, $k$ the number of edges, and $f$ the number of ($2$-dimensional) faces of the object change diagram of ${\mathcal{C}}$. Then $e-k+f=2$.
Vertices of the object change diagram correspond to elements of $A$. Since ${\mathcal{C}}$ is connected and ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is finite, the set $A$ is finite. Consider the equivalence relation on $I\times A$, where $(i,a)$ is equivalent to $(j,b)$ for some $i,j\in I$ and $a,b\in A$, if and only if $i=j$ and $b\in \{a,{\rho }_i(a)\}$. (This is also known as the pushout of $I\times A$ along the bijections ${\mathrm{id}}:I\times A\to I\times A$ and ${\rho }:I\times A\to I\times A$, $(i,a)\mapsto (i,{\rho }_i(a))$.) Since ${\mathcal{C}}$ is simply connected, ${\rho }_i(a)\not=a$ for all $i\in I$ and $a\in A$. Then edges of the object change diagram correspond to equivalence classes in $I\times A$. Faces of the object change diagram can be defined as equivalence classes of triples $(i,j,a)\in I\times I\times A\setminus
\{(i,i,a)\,|\,i\in I,a\in A\}$, where $(i,j,a)$ and $(i',j',b)$ are equivalent for some $i,j,i',j'\in I$ and $a,b\in A$ if and only if $\{i,j\}=\{i',j'\}$ and $b\in \{({\rho }_j{\rho }_i)^m(a),
{\rho }_i({\rho }_j{\rho }_i)^m(a)\,|\,m\in {\mathbb{N}}_0\}$. Since ${\mathcal{C}}$ is simply connected, (R4) implies that the face corresponding to a triple $(i,j,a)$ is a polygon with $2m_{i,j}^a$ vertices.
For each face choose a triangulation by non-intersecting diagonals. Let $d$ be the total number of diagonals arising this way. Now consider the following two-dimensional simplicial complex $C$: The $0$-simplices are the objects. The $1$-simplices are the edges and the chosen diagonals of the faces of the object change diagram. The $2$-simplices are the $f+d$ triangles. Clearly, each edge is contained in precisely two triangles. By [@b-tomDieck91 Ch.III, (3.3), 2,3] the geometric realization $X$ of $C$ is a closed $2$-dimensional surface without boundary. The space $X$ is connected and compact.
Any two morphisms of ${\mathcal{W}}({\mathcal{C}})$ with same source and target are equal because ${\mathcal{C}}$ is simply connected. By [@a-CH09a Thm.2.6] this equality follows from the Coxeter relations. A Coxeter relation means for the object change diagram that for the corresponding face and vertex the two paths along the sides of the face towards the opposite vertex yield the same morphism. Hence diagonals can be interpreted as representatives of paths in a face between two vertices, and then all loops in $C$ become homotopic to the trivial loop. Hence $X$ is simply connected and therefore homeomorphic to a two-dimensional sphere by [@b-tomDieck91 Ch.III, Satz 6.9]. Its Euler characteristic is $2=e-(k+d)+(f+d)=e-k+f$.
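The relation $e-k+f=2$ can be checked numerically against the data of Table 1. The following Python sketch (the paper's actual computations were done in C and Magma; the helper name is ours) uses the counts for root system nr. 1 of Table 1: $24$ objects, three edges meeting each object, and one pair of opposite faces per plane.

```python
def euler_characteristic(e: int, k: int, f: int) -> int:
    """Euler characteristic e - k + f of the object change diagram."""
    return e - k + f

# Root system nr. 1 of Table 1 (type A_3): 24 objects, planes 2^3 3^4.
e = 24                 # vertices: the objects of the Cartan scheme
k = 3 * e // 2         # three edges per vertex, each edge has two endpoints
f = 2 * (3 + 4)        # each of the 3 + 4 planes yields two opposite faces
assert euler_characteristic(e, k, f) == 2
```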
\[re:planesandfaces\] Assume that ${\mathcal{C}}$ is connected and simply connected, and let $a\in A$. Then any pair of opposite $2$-dimensional faces of the object change diagram can be interpreted as a plane in ${\mathbb{Z}}^I$ containing at least two positive roots ${\alpha },\beta \in R^a_+$. Indeed, let $b\in A$ and $i_1,i_2\in I$ with $i_1\not=i_2$. Since ${\mathcal{C}}$ is connected and simply connected, there exists a unique $w\in \operatorname{Hom}(a,b)$. Then $V^a(w^{-1}({\alpha }_{i_1}),w^{-1}({\alpha }_{i_2}))$ is a plane in ${\mathbb{Z}}^I$ containing at least two positive roots. One can easily check that this plane is independent of the choice of the representative of the face determined by $(i_1,i_2,b)\in I\times I\times A$. Further, let $w_0\in \operatorname{Hom}(b,d)$, where $d\in A$, be the longest element in ${\mathrm{Hom}(b,{\mathcal{W}}({\mathcal{C}}))}$. Let $j_1,j_2\in I$ such that $w_0({\alpha }_{i_n})=-{\alpha }_{j_n}$ for $n=1,2$. Then $(j_1,j_2,d)$ determines the plane $$V^a( (w_0w)^{-1}({\alpha }_{j_1}),(w_0w)^{-1}({\alpha }_{j_2}))=
V^a(w^{-1}({\alpha }_{i_1}),w^{-1}({\alpha }_{i_2})).$$ This way we attached to any pair of ($2$-dimensional) opposite faces of the object change diagram a plane containing at least two positive roots.
![The object change diagram of the last root system of rank three[]{data-label="fig:37posroots"}](wg37){width="9cm"}
Let $<$ be a semigroup ordering on ${\mathbb{Z}}^I$ such that $0<\gamma $ for all $\gamma \in R^a_+$. Let ${\alpha },\beta \in R^a_+$ with ${\alpha }\not=\beta $, and assume that ${\alpha }$ and $\beta $ are the smallest elements in $R^a_+\cap
V^a({\alpha },\beta )$ with respect to $<$. Then $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$ by Lemma \[posrootssemigroup\]. By Corollary \[simple\_rkk\] there exists $b\in A$ and $w\in \operatorname{Hom}(a,b)$ such that $w({\alpha }),w(\beta )\in R^b_+$ are simple roots. Hence any plane in ${\mathbb{Z}}^I$ containing at least two elements of $R^a_+$ can be obtained by the construction in the previous paragraph.
It remains to show that different pairs of opposite faces give rise to different planes. This follows from the fact that for any $b\in A$ and $i_1,i_2\in I$ with $i_1\not=i_2$ the conditions $$d\in A,\ u\in \operatorname{Hom}(b,d),\ j_1,j_2\in I,\
u({\alpha }_{i_1})={\alpha }_{j_1},\ u({\alpha }_{i_2})={\alpha }_{j_2}$$ have precisely two solutions: $u={\mathrm{id}}_b$ on the one side, and $u=w_0w_{i_1i_2}$ on the other side, where $w_{i_1i_2}=\cdots \s_{i_1}\s_{i_2}\s_{i_1}{\mathrm{id}}_b\in {\mathrm{Hom}(b,{\mathcal{W}}({\mathcal{C}}))}$ is the longest product of reflections ${\sigma }_{i_1}$, ${\sigma }_{i_2}$, and $w_0$ is an appropriate longest element of ${\mathcal{W}}({\mathcal{C}})$. The latter follows from the fact that $u$ has to map the base $\{{\alpha }_{i_1},{\alpha }_{i_2},{\alpha }_{i_3}\}$ for ${\mathbb{Z}}^I$ at $b$, where $I=\{i_1,i_2,i_3\}$, to another base, and any base consisting of two simple roots can be extended precisely in two ways to a base of ${\mathbb{Z}}^I$: by adding the third simple root or by adding a uniquely determined negative root.
It follows from the construction and by [@a-CH09b Lemma 6.4] that the faces corresponding to a plane $V^a({\alpha },\beta )$, where ${\alpha },\beta \in R^a_+$ with ${\alpha }\not=\beta $, have as many edges as the cardinality of $V^a({\alpha },\beta )\cap R^a$ (or twice the cardinality of $V^a({\alpha },\beta )\cap R^a_+$).
\[sum\_rank2\] Let $\cC$ be a connected simply connected Cartan scheme of rank three. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. Let $a\in A$ and let $M$ be the set of planes containing at least two elements of $R^a_+$. Then $$\sum_{V\in M} \#(V\cap R^a_+) = 3(\# M-1).$$
Let $e,k,f$ be as in Proposition \[Euler\_char\]. Then $\#M=f/2$ by Remark \[re:planesandfaces\]. For any vertex $b\in A$ there are three edges starting at $b$, and any edge is bounded by two vertices. Hence counting the vertex-edge incidences in two different ways one obtains that $3e=2k$. Proposition \[Euler\_char\] gives that $e-k+f=2$. Hence $2k = 3e = 3(2-f+k)$, that is, $k=3f-6$.
Any plane $V$ corresponds to a pair of opposite faces, each of which is a polygon consisting of $2\# (V\cap R^a_+)$ edges, see Remark \[re:planesandfaces\]. Summing the numbers of edges over all faces of the object change diagram (that is, twice over all planes), each edge is counted twice. Hence $$2 \sum_{V\in M} 2\#(V\cap R^a_+) = 2k = 2(3f-6),$$ which is the formula claimed in the theorem.
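As a sanity check, the formula of Theorem \[sum\_rank2\] can be evaluated directly from the plane multiplicities listed in the last column of Table 1. A short Python sketch (the function name is ours; the paper's computations were done in C and Magma):

```python
def check_sum_formula(plane_sizes):
    """Verify sum over V in M of #(V cap R_+^a) == 3(#M - 1),
    where plane_sizes lists #(V cap R_+^a) for each plane V."""
    M = len(plane_sizes)
    return sum(plane_sizes) == 3 * (M - 1)

# Root system nr. 1 (planes 2^3 3^4) and nr. 3 (planes 2^4 3^6 4^1) of Table 1.
assert check_sum_formula([2] * 3 + [3] * 4)
assert check_sum_formula([2] * 4 + [3] * 6 + [4])
```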
\[ex\_square\] Let $\cC$ be a connected simply connected Cartan scheme of rank three. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. Then there exists an object $a\in A$ and $\alpha,\beta,\gamma\in R^a_+$ such that $\{\alpha,\beta,\gamma\}=\{\al_1,\al_2,\al_3\}$ and $$\label{square_hexagon}
\#(V^a(\alpha,\beta)\cap R^a_+)=2, \quad
\#(V^a(\alpha,\gamma)\cap R^a_+)=3.$$ Further $\alpha+\gamma, \beta +\gamma , \alpha+\beta+\gamma\in R^a_+$.
Let $M$ be as in Thm. \[sum\_rank2\]. Let $a$ be any object. If $\#(V\cap R^a_+)>2$ for all $V\in M$, then $\sum_{V\in M} \#(V\cap R^a_+) \ge 3\# M$, contradicting Thm. \[sum\_rank2\]. Hence for all objects $a$ there exists a plane $V$ with $\#(V\cap R^a_+)=2$. Now consider the object change diagram and count the number of faces: Let $2q_i$ be the number of faces with $2i$ edges. Then Thm. \[sum\_rank2\] translates to $$\label{thm_trans}
\sum_{i\ge 2} i q_i = -3+3\sum_{i\ge 2} q_i.$$ Assume that there exists no object adjacent to a square and a hexagon. Since ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is irreducible, no two squares have a common edge, see Lemma \[le:someroots\](1). Look at the edges ending in vertices of squares, and count each edge once for both polygons adjacent to it. One checks that there are at least twice as many edges adjacent to a polygon with at least $8$ vertices as edges of squares. This gives that $$\sum_{i\ge 4} 2i \cdot 2q_i \ge 2\cdot 4\cdot 2q_2.$$ By Equation \[thm\_trans\] we then have $-3+3\sum_{i\ge 2}q_i\ge 4q_2+2q_2+3q_3$, that is, $q_2 < \sum_{i\ge 4}q_i$. But then on average each face has more than $6$ edges, which contradicts Thm. \[sum\_rank2\]. Hence there is an object $a$ such that there exist $\alpha,\beta,\gamma\in R^a_+$ as above satisfying Equation \[square\_hexagon\]. We have $\alpha+\gamma, \beta +\gamma , \alpha+\beta+\gamma\in R^a_+$ by Lemma \[le:someroots\](1),(2) and Corollary \[co:cij\].
The classification {#sec:class}
==================
In this section we explain the classification of connected simply connected Cartan schemes of rank three such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system. We formulate the main result in Theorem \[th:class\]. The proof of Theorem \[th:class\] is performed using computer calculations based on results of the previous sections. Our algorithm described below is sufficiently powerful: the implementation in C terminates within a few hours on a usual computer. Without any one of the theorems above, the calculations would take at least several weeks.
\(1) Let $\cC$ be a connected Cartan scheme of rank three with $I=\{1,2,3\}$. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. Then there exists an object $a\in A$ and a linear map $\tau \in \operatorname{Aut}({\mathbb{Z}}^I)$ such that $\tau ({\alpha }_i)\in \{{\alpha }_1,{\alpha }_2,{\alpha }_3\}$ for all $i\in I$ and $\tau (R^a_+)$ is one of the sets listed in Appendix \[ap:rs\]. Moreover, $\tau (R^a_+)$ with this property is uniquely determined.
\(2) Let $R$ be one of the $55$ subsets of ${\mathbb{Z}}^3$ appearing in Appendix \[ap:rs\]. There exists up to equivalence a unique connected simply connected Cartan scheme ${\mathcal{C}}(I,A,({\rho }_i)_{i\in I},(C^a)_{a\in A})$ such that $R\cup -R$ is the set of real roots $R^a$ in an object $a\in A$. Moreover ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. \[th:class\]
Let $<$ be the lexicographic ordering on ${\mathbb{Z}}^3$ such that $\al_3<\al_2<\al_1$. Then ${\alpha }>0$ for any ${\alpha }\in {\mathbb{N}}_0^3\setminus \{0\}$.
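For illustration, this ordering can be realized on coefficient tuples: writing a root $c_1{\alpha }_1+c_2{\alpha }_2+c_3{\alpha }_3$ as $(c_1,c_2,c_3)$, the standard tuple comparison compares the ${\alpha }_1$-coefficient first, so that ${\alpha }_3<{\alpha }_2<{\alpha }_1$. A minimal Python sketch (our own encoding, not the paper's C code):

```python
def lex_positive(root):
    """alpha > 0 iff the leading nonzero coefficient of alpha is positive,
    where root = (c1, c2, c3) w.r.t. (alpha_1, alpha_2, alpha_3)."""
    for c in root:
        if c != 0:
            return c > 0
    return False

a1, a2, a3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
# Plain tuple comparison realizes alpha_3 < alpha_2 < alpha_1 ...
assert sorted([a1, a2, a3]) == [a3, a2, a1]
# ... and every element of N_0^3 \ {0} is positive, as required.
assert lex_positive((0, 2, 1)) and not lex_positive((0, -1, 5))
```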
Let ${\mathcal{C}}$ be a connected Cartan scheme with $I=\{1,2,3\}$. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type ${\mathcal{C}}$. Let $a\in A$. By Theorem \[root\_is\_sum\] we may construct $R^a_+$ inductively by starting with $R^a_+=\{{\alpha }_3,{\alpha }_2,{\alpha }_1\}$, and appending in each step a sum of a pair of positive roots which is greater than all roots in $R^a_+$ we already have. During this process, we keep track of all planes containing at least two positive roots, and the positive roots on them. Lemma \[posrootssemigroup\] implies that for any known root ${\alpha }$ and new root $\beta $ either $V^a({\alpha },\beta )$ contains no other known positive roots, or $\beta $ is not part of the unique base for $V^a({\alpha },\beta )$ at $a$ consisting of positive roots. In the first case the roots ${\alpha },\beta $ generate a new plane. It can happen that ${\mathrm{Vol}}_2({\alpha },\beta )>1$, and then $\{{\alpha },\beta \}$ is not a base for $V^a({\alpha },\beta )$ at $a$, but we don’t care about that. In the second case the known roots in $V^a({\alpha },\beta )\cap R^a_+$ together with $\beta $ have to form an $\cF$-sequence by Proposition \[pr:R=Fseq\].
Sometimes one of the theorems above (see the details below) tells us that it is not possible to add more positive roots to a plane. Then we can mark the plane as “finished”.
Note that to obtain a finite number of root systems as output, we have to ensure that we compute only irreducible systems, since there are infinitely many inequivalent reducible root systems of rank two. Hence starting with $\{{\alpha }_3,{\alpha }_2,{\alpha }_1\}$ will not work. However, by Corollary \[ex\_square\], starting with $\{{\alpha }_3,{\alpha }_2,\al_2+\al_3,{\alpha }_1,\al_1+\al_2,\al_1+\al_2+\al_3\}$ will still yield at least one root system for each desired Cartan scheme (notice that any roots one would want to add are lexicographically greater).
In this section, we will call [*root system fragment*]{} (or [*rsf*]{}) the following set of data associated to a set of positive roots $R$ in construction:
- normal vectors for the planes with at least two positive roots
- labels of positive roots on these planes
- Cartan entries corresponding to the root systems of the planes
- an array of flags for finished planes
- the sum $s_R$ of $\#(V\cap R)$ over all planes $V$ with at least two positive roots, see Theorem \[sum\_rank2\]
- for each root $r\in R$ the list of planes it belongs to.
These data can be obtained directly from $R$, but the calculation is faster if we continuously update them.
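A hypothetical Python mirror of this bookkeeping structure may look as follows; all field names are ours, and the C implementation mentioned above will differ in detail:

```python
from dataclasses import dataclass, field

@dataclass
class RootSystemFragment:
    """Sketch of the rsf data kept alongside a partial set R of positive roots."""
    normals: list        # one integer normal vector per plane with >= 2 roots
    sequences: list      # the F-sequence of positive roots on each plane
    cartans: list        # Cartan entries of the rank-two root system of each plane
    finished: list       # flags: no further root may be added to this plane
    s_R: int = 0         # running sum of #(V cap R) over all planes
    membership: dict = field(default_factory=dict)  # root -> indices of its planes

# All data could be recomputed from R, but updating incrementally is faster.
frag = RootSystemFragment([], [], [], [])
assert frag.s_R == 0 and frag.membership == {}
```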
We divide the algorithm into three parts. The main part is Algorithm 4.4, see below.
The first part updates a root system fragment to a new root and uses Theorems \[root\_diffs\] and \[cartan\_6\] to possibly refuse doing so:
[**Algorithm 4.2.**]{} [**AppendRoot**]{}($\alpha$,$B$,$\tilde B$,$\hat\alpha$)\
[*Append a root to an rsf*]{}.\
[**Input:**]{} a root $\alpha$, an rsf $B$, an empty rsf $\tilde B$, a root $\hat\alpha$.\
[**Output:**]{} $\begin{cases}
0: & \mbox{if } \alpha \mbox{ may be appended, new rsf is then in } \tilde B, \\
1: & \mbox{if } \alpha \mbox{ may not be appended}, \\
2: & \mbox{if $\alpha \in R^a_+$ implies the existence of
$\beta \in R^a_+$}\\
& \mbox{with $\hat\alpha<\beta <\alpha $.}
\end{cases}$\
[**.**]{}
Let $r$ be the number of planes containing at least two elements of $R$. For documentation purposes let $V_1,\dots,V_r$ denote these planes. For any $i\in \{1,\dots,r\}$ let $v_i$ be a normal vector for $V_i$, and let $R_i$ be the $\cF$-sequence of $V_i\cap R$. Set $i \leftarrow 1$, $g \leftarrow 1$, $c\leftarrow [\:]$, $p \leftarrow [\:]$, $d\leftarrow\{\:\}$. During the algorithm $c$ will be an ordered subset of $\{1,\dots,r\}$, $p$ a corresponding list of “positions”, and $d$ a subset of $R$.
\[A1\_2\] If $i\le r$ and $g\ne 0$, then compute the scalar product $g:=(\alpha , v_i)$. (Then $g=\det ({\alpha },\gamma _1,\gamma _2)=\pm {\mathrm{Vol}}_3({\alpha },\gamma _1,\gamma _2)$, where $\{\gamma _1,\gamma _2\}$ is the basis of $V_i$ consisting of positive roots.) Otherwise go to Step \[A1\_6\].
\[A1\_3\] If $g=0$ then do the following: If $V_i$ is not finished yet, then check if ${\alpha }$ extends $R_i$ to a new ${\mathcal{F}}$-sequence. If yes, add the roots of $R_i$ to $d$, append $i$ to $c$, append the position of the insertion of ${\alpha }$ in $R_i$ to $p$, let $g \leftarrow 1$, and go to Step 5.
If $g^2=1$, then use Corollary \[convex\_diff2\]: Let $\gamma_1$ and $\gamma_2$ be the beginning and the end of the $\cF$-sequence $R_i$, respectively. (Then $\{\gamma _1,\gamma _2\}$ is a base for $V_i$ at $a$.) Let $\delta_1 \leftarrow \alpha - \gamma_1$, $\delta_2 \leftarrow \alpha - \gamma_2$. If $\delta_1,\delta _2\notin R$, then return $1$ if $\delta _1,\delta _2 \le \hat {\alpha }$ and return $2$ otherwise.
Set $i \leftarrow i+1$ and go to Step \[A1\_2\].
\[A1\_6\] If there is no objection to appending $\alpha$ so far, i.e. $g \ne 0$, then copy $B$ to $\tilde B$ and include $\alpha$ in $\tilde B$: use $c,p$ to extend existing ${\mathcal{F}}$-sequences, and use (the complement of) $d$ to create new planes. Finally, apply Theorem \[cartan\_6\]: If there is a Cartan entry less than $-7$ then return 1, else return 0. If $g=0$ then return 2.
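The scalar product in Step \[A1\_2\] can be implemented with an integer cross product: if $v_i=\gamma_1\times\gamma_2$, then $({\alpha },v_i)=\det({\alpha },\gamma_1,\gamma_2)$, so $g=0$ detects that ${\alpha }$ lies on $V_i$ and $g^2=1$ detects that $\{{\alpha },\gamma_1,\gamma_2\}$ is a basis of ${\mathbb{Z}}^3$. A Python sketch (function names ours):

```python
def cross(u, v):
    """Integer cross product: a normal vector of the plane spanned by u and v."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def vol3(a, g1, g2):
    """Vol_3(a, g1, g2) = |det(a, g1, g2)| = |(a, g1 x g2)|."""
    n = cross(g1, g2)
    return abs(sum(x * y for x, y in zip(a, n)))

g1, g2 = (1, 0, 0), (0, 1, 0)        # a base of the plane V_i
assert vol3((2, 3, 0), g1, g2) == 0  # alpha lies on V_i (the case g = 0)
assert vol3((1, 1, 1), g1, g2) == 1  # {alpha, g1, g2} is a basis of Z^3
```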
The second part looks for small roots which we must include in any case. The function is based on Proposition \[pr:suminR\]. This is a strong restriction during the process.
[**Algorithm 4.3.**]{} [**RequiredRoot**]{}($R$,$B$,$\hat\alpha$)\
[*Find a smallest required root under the assumption that all roots $\le \hat{\alpha }$ are known*]{}.\
[**Input:**]{} $R$ a set of roots, $B$ an rsf for $R$, $\hat \alpha $ a root.\
[**Output:**]{} $\begin{cases}
0 & \mbox{if we cannot determine such a root}, \\
1,\varepsilon & \mbox{if we have found a small missing root $\varepsilon $
with $\varepsilon >\hat \alpha $},\\
2 & \mbox{if the given configuration is impossible}.
\end{cases}$\
[**.**]{}
Initialize the return value $f \leftarrow 0$.
\[A2\_0\] We use the same notation as in Algo. 4.2, step 1. For all $\gamma _1$ in $R$ and all $(j,k)\in \{1,\dots,r\}\times \{1,\dots,r\}$ such that $j\not=k$, $\gamma _1\in R_j\cap R_k$, and both $R_j,R_k$ contain two elements, let $\gamma _2,\gamma _3\in R$ such that $R_j=\{\gamma _1,\gamma _2\}$, $R_k=\{\gamma _1,\gamma _3\}$. If ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,\gamma _3) = 1$, then do Steps \[A2\_a\] to \[A2\_b\].
\[A2\_a\] $\xi_2 \leftarrow \gamma_1+\gamma_2$, $\xi_3 \leftarrow \gamma_1+\gamma_3$.
If $\hat\alpha \ge \xi_2$: If $\hat\alpha \ge \xi_3$ or plane $V_k$ is already finished, then return 2. If $f=0$ or $\varepsilon > \xi_3$, then $\varepsilon \leftarrow \xi_3$, $f\leftarrow 1$. Go to Step \[A2\_0\] and continue loop.
If $\hat\alpha \ge \xi_3$: If plane $V_j$ is already finished, then return 2. If $f=0$ or $\varepsilon > \xi_2$, then $\varepsilon \leftarrow \xi_2$, $f\leftarrow 1$.
\[A2\_b\] Go to Step \[A2\_0\] and continue loop.
Return $f,\varepsilon$.
Finally, we recursively add roots to a set, update the rsf, and include required roots:
[**Algorithm 4.4.**]{} [**CompleteRootSystem**]{}($R$,$B$,$\hat\alpha$,$u$,$\beta$)\
[*\[mainalg\]Collects potential new roots, appends them and calls itself again*]{}.\
[**Input:**]{} $R$ a set of roots, $B$ an rsf for $R$, $\hat\alpha$ a lower bound for new roots, $u$ a flag, $\beta$ a vector which is necessarily a root if $u=$ True.\
[**Output:**]{} Root systems containing $R$.\
[**.**]{} \[A3\]
Check Theorem \[sum\_rank2\]: If $s_R = 3(r-1)$, where $r$ is the number of planes containing at least two positive roots, then output $R$ (and continue). We have found a potential root system.
If we have no required root yet, i.e. $u=$ False, then\
$f,\varepsilon:=$RequiredRoot$(R,B,\hat{\alpha })$. If $f=1$, then we have found a required root; we call CompleteRootSystem($R,B,\hat\alpha, True, \varepsilon$) and terminate. If $f=2$, then terminate.
Potential new roots will be collected in $Y\leftarrow \{\:\}$; $\tilde B$ will be the new rsf.
For all planes $V_i$ of $B$ which are not finished, do Steps \[A3\_a\] to \[A3\_b\].
\[A3\_a\] $\nu \leftarrow 0$.
For $\zeta$ in the set of roots that may be added to the plane $V_i$ such that $\zeta> \hat\alpha$, do the following:
- set $\nu \leftarrow \nu+1$.
- If $\zeta \notin Y$, then $Y \leftarrow Y \cup \{\zeta\}$. If moreover $u=$ False or $\beta > \zeta$, then
- $y \leftarrow$ AppendRoot($\zeta,B,\tilde B,\hat\alpha$);
- if $y = 0$ then CompleteRootSystem($R\cup\{\zeta\},\tilde B,\zeta ,
u, \beta$).
- if $y = 1$ then $\nu \leftarrow \nu-1$.
\[A3\_b\] If $\nu = 0$, then mark $V_i$ as finished in $\tilde B$.
If $u =$ True and AppendRoot($\beta,B,\tilde B,\hat\alpha$) = 0, then call CompleteRootSystem($R\cup\{\beta\},\tilde B,\beta, \textrm{False}, \beta$).\
Terminate the function call.
Note that we only used necessary conditions for root systems, so after the computation we still need to check which of the sets are indeed root systems. A short program in [Magma]{} confirms that Algorithm 4.4 yields only root systems, for instance using this algorithm:
[**Algorithm 4.5.**]{} [**RootSystemsForAllObjects**]{}($R$)\
[*Returns the root systems for all objects if $R=R^a_+$ determines a Cartan scheme ${\mathcal{C}}$ such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is an irreducible root system*]{}.\
[**Input:**]{} $R$ the set of positive roots at one object.\
[**Output:**]{} the set of root systems at all objects, or $\{\}$ if $R$ does not yield a Cartan scheme as desired.\
[**.**]{} \[A4\]
$N \leftarrow [R]$, $M \leftarrow \{\}$.
While $|N| > 0$, do steps \[begwhile\] to \[endwhile\].
Let $F$ be the last element of $N$. Remove $F$ from $N$ and include it to $M$.\[begwhile\]
Let $C$ be the Cartan matrix of $F$. Compute the three simple reflections given by $C$.
For each simple reflection $s$, do:\[endwhile\]
- Compute $G:=\{s(v)\mid v\in F\}$. If an element of $G$ has positive and negative coefficients, then return $\{\}$. Otherwise multiply the negative roots of $G$ by $-1$.
- If $G\notin M$, then append $G$ to $N$.
Return $M$.
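The reflection step can be sketched in coordinates: since $\sigma_i({\alpha }_j)={\alpha }_j-c_{ij}{\alpha }_i$, only the $i$-th coordinate of a root changes. A Python sketch (not the Magma program mentioned above), illustrated with a Cartan matrix of type $A_3$:

```python
def simple_reflection(C, i, v):
    """Coordinates of sigma_i(v) w.r.t. the simple roots, for the Cartan
    matrix C = (c_{jk}); only the i-th coordinate of v changes."""
    w = list(v)
    w[i] = -v[i] - sum(C[i][j] * v[j] for j in range(len(v)) if j != i)
    return tuple(w)

# Cartan matrix of type A_3 and its six positive roots, as an illustration.
C = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
pos = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (0,1,1), (1,1,1)]
image = {simple_reflection(C, 0, v) for v in pos}
# sigma_1 negates alpha_1 and permutes the remaining positive roots.
assert (-1, 0, 0) in image
assert all(v in image for v in pos if v != (1, 0, 0))
```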
We list all $55$ root systems in Appendix \[ap:rs\]. It is also interesting to summarize some of the invariants, which is done in Table 1. Let ${\mathcal O}=\{R^a \mid a \in A\}$ denote the set of different root systems. By identifying objects with the same root system one obtains a quotient Cartan scheme of the simply connected Cartan scheme of the classification. In the fifth column we give the automorphism group of one (equivalently, any) object of this quotient. The last column gives the multiplicities of planes; for example $3^7$ means that there are $7$ different planes containing precisely $3$ positive roots.
Nr. $|R_+^a|$ $|{\mathcal O}|$ $|A|$ $\operatorname{Hom}(a)$ planes
------ ----------- ------------------ ------- --------------------------- --------------------------------------------------------
$1$ $6$ $1$ $24$ $A_3$ $2^{3}, 3^{4}, $
$2$ $7$ $4$ $32$ $A_1\times A_1\times A_1$ $2^{3}, 3^{6}, $
$3$ $8$ $5$ $40$ $B_2$ $2^{4}, 3^{6}, 4^{1}, $
$4$ $9$ $1$ $48$ $B_3$ $2^{6}, 3^{4}, 4^{3}, $
$5$ $9$ $1$ $48$ $B_3$ $2^{6}, 3^{4}, 4^{3}, $
$6$ $10$ $5$ $60$ $A_1\times A_2$ $2^{6}, 3^{7}, 4^{3}, $
$7$ $10$ $10$ $60$ $A_2$ $2^{6}, 3^{7}, 4^{3}, $
$8$ $11$ $9$ $72$ $A_1\times A_1\times A_1$ $2^{7}, 3^{8}, 4^{4}, $
$9$ $12$ $21$ $84$ $A_1\times A_1$ $2^{8}, 3^{10}, 4^{3}, 5^{1}, $
$10$ $12$ $14$ $84$ $A_2$ $2^{9}, 3^{7}, 4^{6}, $
$11$ $13$ $4$ $96$ $G_2\times A_1$ $2^{9}, 3^{12}, 4^{3}, 6^{1}, $
$12$ $13$ $12$ $96$ $A_1\times A_1\times A_1$ $2^{10}, 3^{10}, 4^{3}, 5^{2}, $
$13$ $13$ $2$ $96$ $B_3$ $2^{12}, 3^{4}, 4^{9}, $
$14$ $13$ $2$ $96$ $B_3$ $2^{12}, 3^{4}, 4^{9}, $
$15$ $14$ $56$ $112$ $A_1$ $2^{11}, 3^{12}, 4^{4}, 5^{2}, $
$16$ $15$ $16$ $128$ $A_1\times A_1\times A_1$ $2^{13}, 3^{12}, 4^{6}, 5^{2}, $
$17$ $16$ $36$ $144$ $A_1\times A_1$ $2^{14}, 3^{15}, 4^{6}, 5^{1}, 6^{1}, $
$18$ $16$ $24$ $144$ $A_2$ $2^{15}, 3^{13}, 4^{6}, 5^{3}, $
$19$ $17$ $10$ $160$ $B_2\times A_1$ $2^{16}, 3^{16}, 4^{7}, 6^{2}, $
$20$ $17$ $10$ $160$ $B_2\times A_1$ $2^{16}, 3^{16}, 4^{7}, 6^{2}, $
$21$ $17$ $10$ $160$ $B_2\times A_1$ $2^{18}, 3^{12}, 4^{7}, 5^{4}, $
$22$ $18$ $30$ $180$ $A_2$ $2^{18}, 3^{18}, 4^{6}, 5^{3}, 6^{1}, $
$23$ $18$ $90$ $180$ $A_1$ $2^{19}, 3^{16}, 4^{6}, 5^{5}, $
$24$ $19$ $25$ $200$ $A_1\times A_1\times A_1$ $2^{20}, 3^{20}, 4^{6}, 5^{4}, 6^{1}, $
$25$ $19$ $8$ $192$ $G_2\times A_1$ $2^{21}, 3^{18}, 4^{6}, 6^{4}, $
$26$ $19$ $50$ $200$ $A_1\times A_1$ $2^{20}, 3^{20}, 4^{6}, 5^{4}, 6^{1}, $
$27$ $19$ $25$ $200$ $A_1\times A_1\times A_1$ $2^{20}, 3^{20}, 4^{6}, 5^{4}, 6^{1}, $
$28$ $19$ $8$ $192$ $G_2\times A_1$ $2^{24}, 3^{12}, 4^{6}, 5^{6}, 6^{1}, $
$29$ $20$ $27$ $216$ $B_2$ $2^{20}, 3^{26}, 4^{4}, 5^{4}, 8^{1}, $
$30$ $20$ $110$ $220$ $A_1$ $2^{21}, 3^{24}, 4^{6}, 5^{4}, 7^{1}, $
$31$ $20$ $110$ $220$ $A_1$ $2^{23}, 3^{20}, 4^{7}, 5^{5}, 6^{1}, $
$32$ $21$ $15$ $240$ $B_2\times A_1$ $2^{22}, 3^{28}, 4^{6}, 5^{4}, 8^{1}, $
$33$ $21$ $30$ $240$ $A_1\times A_1\times A_1$ $2^{26}, 3^{20}, 4^{9}, 5^{4}, 6^{2}, $
$34$ $21$ $5$ $240$ $B_3$ $2^{24}, 3^{24}, 4^{9}, 6^{4}, $
$35$ $22$ $44$ $264$ $A_2$ $2^{27}, 3^{25}, 4^{9}, 5^{3}, 6^{3}, $
$36$ $25$ $42$ $336$ $A_1\times A_1\times A_1$ $2^{33}, 3^{34}, 4^{12}, 5^{2}, 6^{3}, 8^{1}, $
$37$ $25$ $14$ $336$ $G_2\times A_1$ $2^{36}, 3^{30}, 4^{9}, 5^{6}, 6^{4}, $
$38$ $25$ $28$ $336$ $A_1\times A_2$ $2^{36}, 3^{30}, 4^{9}, 5^{6}, 6^{4}, $
$39$ $25$ $7$ $336$ $B_3$ $2^{36}, 3^{28}, 4^{15}, 6^{6}, $
$40$ $26$ $182$ $364$ $A_1$ $2^{35}, 3^{39}, 4^{10}, 5^{4}, 6^{3}, 8^{1}, $
$41$ $26$ $182$ $364$ $A_1$ $2^{37}, 3^{36}, 4^{9}, 5^{6}, 6^{3}, 7^{1}, $
$42$ $27$ $49$ $392$ $A_1\times A_1\times A_1$ $2^{38}, 3^{42}, 4^{9}, 5^{6}, 6^{3}, 8^{1}, $
$43$ $27$ $98$ $392$ $A_1\times A_1$ $2^{39}, 3^{40}, 4^{10}, 5^{6}, 6^{2}, 7^{2}, $
$44$ $27$ $98$ $392$ $A_1\times A_1$ $2^{39}, 3^{40}, 4^{10}, 5^{6}, 6^{2}, 7^{2}, $
$45$ $28$ $420$ $420$ $1$ $2^{41}, 3^{44}, 4^{11}, 5^{6}, 6^{2}, 7^{1}, 8^{1}, $
$46$ $28$ $210$ $420$ $A_1$ $2^{42}, 3^{42}, 4^{12}, 5^{6}, 6^{1}, 7^{3}, $
$47$ $28$ $70$ $420$ $A_2$ $2^{42}, 3^{42}, 4^{12}, 5^{6}, 6^{1}, 7^{3}, $
$48$ $29$ $56$ $448$ $A_1\times A_1\times A_1$ $2^{44}, 3^{46}, 4^{13}, 5^{6}, 6^{2}, 8^{2}, $
$49$ $29$ $112$ $448$ $A_1\times A_1$ $2^{45}, 3^{44}, 4^{14}, 5^{6}, 6^{1}, 7^{2}, 8^{1}, $
$50$ $29$ $112$ $448$ $A_1\times A_1$ $2^{45}, 3^{44}, 4^{14}, 5^{6}, 6^{1}, 7^{2}, 8^{1}, $
$51$ $30$ $238$ $476$ $A_1$ $2^{49}, 3^{44}, 4^{17}, 5^{6}, 6^{1}, 7^{1}, 8^{2}, $
$52$ $31$ $21$ $504$ $G_2\times A_1$ $2^{54}, 3^{42}, 4^{21}, 5^{6}, 6^{1}, 8^{3}, $
$53$ $31$ $21$ $504$ $G_2\times A_1$ $2^{54}, 3^{42}, 4^{21}, 5^{6}, 6^{1}, 8^{3}, $
$54$ $34$ $102$ $612$ $A_2$ $2^{60}, 3^{63}, 4^{18}, 5^{6}, 6^{4}, 8^{3}, $
$55$ $37$ $15$ $720$ $B_3$ $2^{72}, 3^{72}, 4^{24}, 6^{10}, 8^{3}, $
[Table 1: Invariants of irreducible root systems of rank three]{}
At first sight, one is tempted to look for a formula for the number of objects in the universal covering depending on the number of roots. There is an obvious one: consider the coefficients of $4/((1-x)^2(1-x^4))$. However, there are exceptions, for example nr. 29 with $20$ positive roots and $216$ objects (instead of $220$).
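A Python sketch computing these coefficients follows; the index shift (reading the number of objects of the Cartan scheme with $n$ positive roots as the coefficient of $x^{n-2}$) is our reading of Table 1, not stated explicitly above:

```python
def objects_from_series(n_pos_roots):
    """Coefficient of x^(n_pos_roots - 2) in 4/((1-x)^2 (1-x^4)),
    using 4/((1-x)^2(1-x^4)) = 4 * sum_k (k+1) x^k * sum_m x^{4m}."""
    n = n_pos_roots - 2
    return 4 * sum(n - 4 * m + 1 for m in range(n // 4 + 1))

assert objects_from_series(6) == 24    # nr. 1
assert objects_from_series(7) == 32    # nr. 2
assert objects_from_series(12) == 84   # nr. 9 and 10
# The exception: nr. 29 has 20 positive roots but 216 objects, not 220.
assert objects_from_series(20) == 220
```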
Rank 3 Nichols algebras of diagonal type with finite irreducible arithmetic root system are classified in [@a-Heck05b Table 2]. In Table 2 we identify the Weyl groupoids of these Nichols algebras.
----------------------------- ---- ---- ---- ---- ---- ---- ---- ---- ----
row in [@a-Heck05b Table 2] 1 2 3 4 5 6 7 8 9
Weyl groupoid 1 5 4 1 5 3 11 1 2
row in [@a-Heck05b Table 2] 10 11 12 13 14 15 16 17 18
Weyl groupoid 2 2 5 13 5 6 7 8 14
----------------------------- ---- ---- ---- ---- ---- ---- ---- ---- ----
Irreducible root systems of rank three {#ap:rs}
======================================
We give the roots in a multiplicative notation to save space: The word $1^x2^y3^z$ corresponds to $x\alpha_3+y\alpha_2+z\alpha_1$.
Notice that we have chosen a “canonical” object for each groupoid. Write $\pi(R^a_+)$ for the set $R^a_+$ where the coordinates are permuted via $\pi\in S_3$. Then the set listed below is the minimum of $\{\pi(R^a_+)\mid a\in A,\:\: \pi\in S_3\}$ with respect to the lexicographical ordering on the sorted sequences of roots.
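A small Python helper (ours) that parses this multiplicative notation back into exponent triples may be convenient when processing the lists below:

```python
import re

def decode_word(word):
    """Parse a word such as '1^{2}23' from the lists below into the
    exponent triple (x, y, z) of the letters 1, 2, 3."""
    exp = {'1': 0, '2': 0, '3': 0}
    for letter, power in re.findall(r'([123])(?:\^\{(\d+)\})?', word):
        exp[letter] = int(power) if power else 1
    return exp['1'], exp['2'], exp['3']

assert decode_word('123') == (1, 1, 1)
assert decode_word('1^{2}23') == (2, 1, 1)
assert decode_word('1^{7}2^{3}3^{2}') == (7, 3, 2)
```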
Nr. $1$ with $6$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $123$\
Nr. $2$ with $7$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $23$, $123$\
Nr. $3$ with $8$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{2}23$\
Nr. $4$ with $9$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{2}23$, $1^{2}23^{2}$\
Nr. $5$ with $9$ positive roots:\
$1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{2}23$, $1^{2}2^{2}3$\
Nr. $6$ with $10$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$\
Nr. $7$ with $10$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $123$, $1^{2}23$, $1^{2}2^{2}3$\
Nr. $8$ with $11$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$, $1^{3}2^{2}3$\
Nr. $9$ with $12$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\
Nr. $10$ with $12$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$\
Nr. $11$ with $13$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\
Nr. $12$ with $13$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{4}23$, $1^{4}2^{2}3$\
Nr. $13$ with $13$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\
Nr. $14$ with $13$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $13^{2}$, $1^{2}23$, $123^{2}$, $1^{2}23^{2}$, $1^{3}23^{2}$, $1^{3}2^{2}3^{2}$\
Nr. $15$ with $14$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\
Nr. $16$ with $15$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$\
Nr. $17$ with $16$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$\
Nr. $18$ with $16$ positive roots:\
$1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\
Nr. $19$ with $17$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{4}23$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$\
Nr. $20$ with $17$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{2}3^{2}$\
Nr. $21$ with $17$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{3}3$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$\
Nr. $22$ with $18$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3$\
Nr. $23$ with $18$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\
Nr. $24$ with $19$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{3}3^{2}$\
Nr. $25$ with $19$ positive roots:\
$1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{6}2^{3}3^{2}$\
Nr. $26$ with $19$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $12^{2}$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\
Nr. $27$ with $19$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\
Nr. $28$ with $19$ positive roots:\
$1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3$, $1^{6}2^{4}3$\
Nr. $29$ with $20$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$\
Nr. $30$ with $20$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{3}3^{2}$\
Nr. $31$ with $20$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3^{2}$\
Nr. $32$ with $21$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{3}3^{2}$\
Nr. $33$ with $21$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3$, $1^{6}2^{3}3^{2}$\
Nr. $34$ with $21$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{3}3$, $1^{5}2^{2}3^{2}$, $1^{6}2^{3}3$, $1^{6}2^{3}3^{2}$, $1^{7}2^{3}3^{2}$\
Nr. $35$ with $22$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{5}2^{2}3^{2}$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$\
Nr. $36$ with $25$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{8}2^{3}3^{2}$\
Nr. $37$ with $25$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\
Nr. $38$ with $25$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $12^{2}$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{3}2^{2}3^{2}$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{4}2^{3}3^{2}$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$, $1^{7}2^{4}3^{2}$\
Nr. $39$ with $25$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{5}2^{2}3^{2}$, $1^{6}2^{3}3$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$, $1^{7}2^{3}3^{2}$, $1^{7}2^{4}3^{2}$\
Nr. $40$ with $26$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$\
Nr. $41$ with $26$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\
Nr. $42$ with $27$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\
Nr. $43$ with $27$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\
Nr. $44$ with $27$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\
Nr. $45$ with $28$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\
Nr. $46$ with $28$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$\
Nr. $47$ with $28$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{11}2^{4}3^{2}$\
Nr. $48$ with $29$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\
Nr. $49$ with $29$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$\
Nr. $50$ with $29$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{11}2^{4}3^{2}$\
Nr. $51$ with $30$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$\
Nr. $52$ with $31$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{5}2$, $1^{4}23$, $1^{6}2$, $1^{5}23$, $1^{4}2^{2}3$, $1^{6}23$, $1^{5}2^{2}3$, $1^{7}23$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{8}2^{2}3$, $1^{9}2^{2}3$, $1^{10}2^{2}3$, $1^{9}2^{3}3$, $1^{10}2^{3}3$, $1^{11}2^{3}3$, $1^{10}2^{3}3^{2}$, $1^{11}2^{3}3^{2}$, $1^{12}2^{3}3^{2}$\
Nr. $53$ with $31$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$, $1^{11}2^{4}3^{2}$\
Nr. $54$ with $34$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{3}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{4}3$, $1^{8}2^{3}3^{2}$, $1^{9}2^{4}3$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$, $1^{11}2^{4}3^{2}$, $1^{11}2^{5}3^{2}$, $1^{12}2^{5}3^{2}$\
Nr. $55$ with $37$ positive roots:\
$1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{3}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{9}2^{3}3$, $1^{8}2^{4}3$, $1^{8}2^{3}3^{2}$, $1^{9}2^{4}3$, $1^{9}2^{3}3^{2}$, $1^{10}2^{4}3$, $1^{9}2^{4}3^{2}$, $1^{11}2^{4}3^{2}$, $1^{11}2^{5}3^{2}$, $1^{12}2^{5}3^{2}$, $1^{13}2^{5}3^{2}$
[^1]: In this introduction by a Weyl groupoid we will mean the Weyl groupoid of a connected Cartan scheme, and we assume that the real roots associated to the Weyl groupoid form an irreducible root system in the sense of [@a-CH09a].
---
abstract: 'Fragmentation functions for eta mesons are extracted at next-to-leading order accuracy of QCD in a global analysis of data taken in electron-positron annihilation and proton-proton scattering experiments. The obtained parametrization is in good agreement with all data sets analyzed and can be utilized, for instance, in future studies of double-spin asymmetries for single-inclusive eta production. The Lagrange multiplier technique is used to estimate the uncertainties of the fragmentation functions and to assess the role of the different data sets in constraining them.'
author:
- 'Christine A. Aidala'
- Frank Ellinghaus
- Rodolfo Sassot
- 'Joseph P. Seele'
- Marco Stratmann
title: Global Analysis of Fragmentation Functions for Eta Mesons
---
[^1]
Introduction
============
Fragmentation functions (FFs) are a key ingredient in the perturbative QCD (pQCD) description of processes with an observed hadron in the final state. Similar to parton distribution functions (PDFs), which account for the universal partonic structure of the interacting hadrons, FFs encode the non-perturbative details of the hadronization process [@ref:ffdef]. When combined with the perturbatively calculable hard scattering cross sections, FFs extend the ideas of factorization to a much wider class of processes, ranging from hadron production in electron-positron annihilation to semi-inclusive deep-inelastic scattering (SIDIS) and hadron-hadron collisions [@ref:fact].
Over the last years, our knowledge of FFs has improved dramatically [@ref:ff-overview], from first rough models of quark and gluon hadronization probabilities [@ref:feynman] to rather precise global analyses at next-to-leading order (NLO) accuracy of QCD, including estimates of uncertainties [@ref:dsspion; @ref:dssproton; @ref:akk; @ref:hirai]. While the most accurate and clean information used to determine FFs comes from single-inclusive electron-positron annihilation (SIA) into hadrons, such data do not allow disentanglement of quark from anti-quark fragmentation and constrain the gluon fragmentation only weakly, through scaling violations and sub-leading NLO corrections. Modern global QCD analyses [@ref:dsspion; @ref:dssproton] utilize to the extent possible complementary measurements of hadron spectra obtained in SIDIS and hadron-hadron collisions to circumvent these shortcomings and to constrain FFs for all parton flavors individually.
Besides the remarkable success of the pQCD approach in describing all the available data simultaneously, the picture emerging from such comprehensive studies reveals interesting and sometimes unexpected patterns between the FFs for different final-state hadrons. For instance, the strangeness-to-kaon fragmentation function obtained in Ref. [@ref:dsspion] is considerably larger than those assumed previously in analyses of SIA data alone [@ref:kretzer]. This has a considerable impact on the extraction of the amount of strangeness polarization in the nucleon [@ref:dssv] from SIDIS data, which in turn is linked to the fundamental question of how the spin of the nucleon is composed of intrinsic spins and orbital angular momenta of quarks and gluons.
Current analyses of FFs comprise pions, kaons, protons [@ref:dsspion; @ref:dssproton; @ref:akk; @ref:hirai], and lambdas [@ref:dsv; @ref:akk] as final-state hadrons. In this respect, FFs are a much more versatile tool to explore non-perturbative aspects of QCD than PDFs, where studies are mainly restricted to protons [@ref:cteq; @ref:otherpdfs]. In the following, we extend the global QCD analyses of FFs at NLO accuracy as described in Refs. [@ref:dsspion; @ref:dssproton] to eta mesons and estimate the respective uncertainties with the Lagrange multiplier method [@ref:lagrange; @ref:dsspion; @ref:dssv]. We obtain a parametrization from experimental data for single-inclusive eta meson production in SIA at various center-of-mass system (c.m.s.) energies $\sqrt{S}$ and proton-proton collisions at BNL-RHIC in a wide range of transverse momenta $p_T$. We note two earlier determinations of eta FFs in Refs. [@ref:greco] and [@ref:indumathi], which are based on normalizations taken from a Monte Carlo event generator and $SU(3)$ model estimates, respectively. In both cases, parametrizations are not available.
The newly obtained FFs provide fresh insight into the hadronization process by comparing to FFs for other hadrons. In particular, the peculiar wave function of the eta, $|\eta\rangle\simeq |u\bar{u}+d\bar{d}-2s\bar{s}\rangle$, with all light quarks and anti-quarks being present, may reveal new patterns between FFs for different partons and hadrons. The similar mass range of kaons and etas, $m_{K^0}\simeq 497.6\,\mathrm{MeV}$ and $m_{\eta}\simeq 547.9\,\mathrm{MeV}$, respectively, and the presence of strange quarks in both wave functions make comparisons between the FFs for these mesons especially relevant. Of specific interest is also the apparently universal ratio of eta to neutral pion yields for $p_T\gtrsim 2\,\mathrm{GeV}$ in hadron-hadron collisions across a wide range of c.m.s. energies, see, e.g., Ref. [@ref:phenix2006], and how this is compatible with the extracted eta and pion FFs.
In addition, the availability of eta FFs permits for the first time NLO pQCD calculations of double-spin asymmetries for single-inclusive eta meson production at high $p_T$ which have been measured at RHIC [@ref:ellinghaus] recently. Such calculations are of topical interest for global QCD analyses of the spin structure of the nucleon [@ref:dssv]. Finally, the set of eta FFs also provides the baseline for studies of possible modifications in a nuclear medium [@ref:nuclreview; @ref:nffs], for instance, in deuteron-gold collisions at RHIC [@ref:phenix2006].
The remainder of the paper is organized as follows: next, we give a brief outline of the analysis. In Sec. \[sec:results\] we present the results for the eta FFs, compare to data, and discuss our estimates of uncertainties. We conclude in Sec. \[sec:conclusions\].
Outline of the Analysis\[sec:outline\]
======================================
Technical framework and parametrization \[subsec:outline\]
----------------------------------------------------------
The pQCD framework at NLO accuracy for the scale evolution of FFs [@ref:evol] and single-inclusive hadron production cross sections in SIA [@ref:eenlo] and hadron-hadron collisions [@ref:ppnlo] has been in place for quite some time and does not need to be repeated here. Likewise, the global QCD analysis of the eta FFs itself follows closely the methods outlined in a corresponding fit of pion and kaon FFs in Ref. [@ref:dsspion], where all the details can be found. As in [@ref:dsspion; @ref:dssproton] we use the Mellin technique as described in [@ref:mellin; @ref:dssv] to implement all NLO expressions. Here, we highlight the differences to similar analyses of pion and kaon FFs and discuss their consequences for our choice of the functional form parameterizing the FFs of the eta meson.
As compared to lighter hadrons, in particular pions, data with identified eta mesons are less abundant and less precise. Most noticeable is the lack of any experimental information from SIDIS so far, which provided the most important constraints on the separation of contributions from $u$, $d$, and $s$ (anti-)quarks fragmenting into pions and kaons [@ref:dsspion]. Since no flavor-tagged data exist for SIA either, it is inevitable that a fit for eta FFs has considerably less discriminating power. Hence, instead of extracting the FFs for the light quarks and anti-quarks individually, we parametrize the flavor singlet combination at an input scale of $\mu_0=1\,\mathrm{GeV}$, assuming that all FFs are equal, i.e., $D^{\eta}_u=D^{\eta}_{\bar{u}}=D^{\eta}_d=D^{\eta}_{\bar{d}}=
D^{\eta}_s=D^{\eta}_{\bar{s}}$. We use the same flexible functional form as in Ref. [@ref:dsspion] with five fit parameters, $$\begin{aligned}
\label{eq:ansatz}
D_{i}^{\eta}(z,\mu_0) =
\frac{N_{i}\,z^{\alpha_{i}}(1-z)^{\beta_{i}}\left[1+\gamma_{i}(1-z)^{\delta_{i}}\right]}{B[2+\alpha_{i},\beta_{i}+1]+\gamma_{i} B[2+\alpha_{i},\beta_{i}+\delta_{i}+1]}\;,\end{aligned}$$ where $z$ is the fraction of the four-momentum of the parton taken by the eta meson and $i=u,\bar{u},d,\bar{d},s,\bar{s}$. $B[a,b]$ denotes the Euler beta function with $a$ and $b$ chosen such that $N_i$ is normalized to the second moment $\int_0^1 zD_i^{\eta}(z,\mu_0)\, dz$ of the FFs.
Although the assumption of equal light quark FFs seems rather restrictive at first, such an ansatz can be anticipated in view of the wave function of the eta meson. One might still expect a difference between strange and non-strange FFs due to the larger mass of strange quarks, i.e., that the hadronization of $u$ or $d$ quarks is somewhat less likely as they need to pick up an $s\bar{s}$ pair from the vacuum to form the eta. Indeed, a “strangeness suppression” is found for kaon FFs [@ref:dsspion], leading, for instance, to $D_s^{K^{-}}>D_{\bar{u}}^{K^{-}}$. In the case of the eta wave function one can argue, however, that a fragmenting $s$ quark also needs to pick up an $s\bar{s}$ pair from the vacuum. Nevertheless, we have explicitly checked that introducing a second independent parametrization as in (\[eq:ansatz\]) to discriminate between the strange and non-strange FFs does not improve the quality of the fit to the currently available data. Clearly, SIDIS data would be required to further refine our assumptions in the light quark sector in the future.
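To make the ansatz concrete, the following Python sketch (our own illustration, not part of the original analysis code) evaluates Eq. (\[eq:ansatz\]) with the fitted light-quark parameters quoted in Tab. \[tab:para\] below and verifies numerically that $N_i$ indeed equals the second moment $\int_0^1 z D_i^{\eta}(z,\mu_0)\,dz$:

```python
import math

def beta_fn(a, b):
    """Euler beta function B[a, b] expressed through gamma functions."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def ff_input(z, N, alpha, beta, gamma_, delta):
    """Input FF of Eq. (1): N z^alpha (1-z)^beta [1 + gamma (1-z)^delta],
    divided by the beta-function combination so that N is the second moment."""
    norm = beta_fn(2 + alpha, beta + 1) + gamma_ * beta_fn(2 + alpha, beta + delta + 1)
    return N * z**alpha * (1 - z)**beta * (1 + gamma_ * (1 - z)**delta) / norm

# Fitted light-quark parameters (u = ubar = d = dbar = s = sbar at mu_0 = 1 GeV)
pars = dict(N=0.038, alpha=1.372, beta=1.487, gamma_=2000.0, delta=34.03)

# Midpoint-rule check: int_0^1 z D(z) dz should reproduce N
n = 100_000
second_moment = sum((k + 0.5) / n * ff_input((k + 0.5) / n, **pars) for k in range(n)) / n
print(f"second moment = {second_moment:.4f}")  # -> second moment = 0.0380
```

The normalization convention makes the fitted $N_i$ directly interpretable as the average momentum fraction carried by eta mesons from parton $i$.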
The gluon-to-eta fragmentation $D_g^{\eta}$ is mainly constrained by data from RHIC rather than scaling violations in SIA. As for pion and kaon FFs in [@ref:dsspion], we find that a simplified functional form with $\gamma_g=0$ in Eq. (\[eq:ansatz\]) provides enough flexibility to accommodate all data.
Turning to the fragmentation of heavy charm and bottom quarks into eta mesons, we face the problem that none of the available data sets constrains their contributions significantly. Here, the lack of any flavor-tagged data from SIA hurts most, as hadron-hadron cross sections at RHIC energies do not receive any noticeable contributions from heavy quark fragmentation. Introducing independent FFs for charm and bottom at their respective mass thresholds improves the overall quality of the fit, but their parameters are essentially unconstrained. For this reason, we checked that taking the shape of the much better constrained charm and bottom FFs for pions, kaons, protons, and residual charged hadrons from [@ref:dsspion; @ref:dssproton], but allowing for different normalizations, leads to fits of comparable quality with only two additional free parameters.
The best fit is obtained for the charm and bottom FFs from an analysis of residual charged hadrons [@ref:dssproton], i.e., hadrons other than pions, kaons, and protons, and hence we use $$\begin{aligned}
\label{eq:ansatz-hq}
D_{c}^{\eta}(z,m_c) &=& D_{\bar{c}}^{\eta}(z,m_c) = N_c \,D_{c}^{res}(z,m_c)\;, \nonumber \\
D_{b}^{\eta}(z,m_b) &=& D_{\bar{b}}^{\eta}(z,m_b) = N_b \,D_{b}^{res}(z,m_b)\;.\end{aligned}$$ $N_c$ and $N_b$ denote the normalizations for the charm and bottom fragmentation probabilities at their respective initial scales, to be constrained by the fit to data. The parameters specifying the $D_{c,b}^{res}$ can be found in Tab. III of Ref. [@ref:dssproton]. The FFs in Eq. (\[eq:ansatz-hq\]) are included discontinuously as massless partons in the scale evolution of the FFs above their $\overline{\mathrm{MS}}$ thresholds $\mu=m_{c,b}$ with $m_{c}=1.43\,\mathrm{GeV}$ and $m_{b}=4.3\,\mathrm{GeV}$ denoting the mass of the charm and bottom quark, respectively.
In total, the parameters introduced in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) to describe the FFs of quarks and gluons into eta mesons add up to 10. They are determined by a standard $\chi^2$ minimization for $N=140$ data points, where $$\label{eq:chi2}
\chi^2=\sum_{j=1}^N \frac{(T_j-E_j)^2}{\delta E_j^2}\;.$$ $E_j$ represents the experimentally measured value of a given observable, $\delta E_j$ its associated uncertainty, and $T_j$ is the corresponding theoretical estimate calculated at NLO accuracy for a given set of parameters in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]). For the experimental uncertainties $\delta E_i$ we take the statistical and systematic errors in quadrature for the time being.
Data sets included in the fit\[subsec:data\]
--------------------------------------------
A total of 15 data sets is included in our analysis. We use all SIA data with $\sqrt{S}>10\,\mathrm{GeV}$: HRS [@ref:hrs] and MARK II [@ref:mark2] at $\sqrt{S}=29\,\mathrm{GeV}$, JADE [@ref:jade1; @ref:jade2] and CELLO [@ref:cello] at $\sqrt{S}=34-35\,\mathrm{GeV}$, and ALEPH [@ref:aleph1; @ref:aleph2; @ref:aleph3], L3 [@ref:l31; @ref:l32], and OPAL [@ref:opal] at $\sqrt{S} = M_Z = 91.2\,\mathrm{GeV}$. Preliminary results from BABAR [@ref:babar] at $\sqrt{S}=10.54\,\mathrm{GeV}$ are also taken into account.
The availability of $e^+e^-$ data in approximately three different energy regions of $\sqrt{S} \simeq 10,$ 30, and $90\,\mathrm{GeV}$ helps to constrain the gluon fragmentation function from scaling violations. Also, the appropriate electroweak charges in the inclusive process $e^+e^-\to (\gamma,Z)\rightarrow \eta X$ vary with energy, see, e.g., App. A of Ref. [@ref:dsv] for details, and hence control which combinations of quark FFs are probed. Only the CERN-LEP data taken on the $Z$ resonance receive significant contributions from charm and bottom FFs.
Given that the range of applicability for FFs is limited to medium-to-large values of the energy fraction $z$, as discussed, e.g., in Ref. [@ref:dsspion], data points with $z<0.1$ are excluded from the fit. Whenever the data set is expressed in terms of the scaled three-momentum of the eta meson, i.e., $x_p\equiv 2p_{\eta}/\sqrt{S}$, we convert it to the usual scaling variable $z=x_p/\beta$, where $\beta=p_{\eta}/E_{\eta}=\sqrt{1-m_{\eta}^2/E_{\eta}^2}$. In addition to the cut $z>0.1$, we also impose that $\beta>0.9$ in order to avoid kinematic regions where mass effects become increasingly relevant. The cut on $\beta$ mainly affects the data at low $z$ from BABAR [@ref:babar].
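The variable change and cuts described above can be sketched as follows (Python; the function names are our own, not taken from the analysis code):

```python
import math

M_ETA = 0.5479  # eta mass in GeV

def xp_to_z(xp, sqrt_s):
    """Convert the scaled three-momentum x_p = 2 p_eta / sqrt(S) into the
    scaling variable z = x_p / beta, with beta = p_eta / E_eta."""
    p = 0.5 * xp * sqrt_s
    energy = math.hypot(p, M_ETA)   # E = sqrt(p^2 + m^2)
    beta = p / energy
    return xp / beta, beta

def passes_cuts(xp, sqrt_s):
    """Fit cuts applied to the SIA data: z > 0.1 and beta > 0.9."""
    z, beta = xp_to_z(xp, sqrt_s)
    return z > 0.1 and beta > 0.9

# At BABAR energies (sqrt(S) = 10.54 GeV) the beta cut removes the low-x_p region
print(passes_cuts(0.05, 10.54), passes_cuts(0.3, 10.54))  # -> False True
```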
In case of single-inclusive eta meson production in hadron-hadron collisions, we include data sets from PHENIX at $\sqrt{S}=200\,\mathrm{GeV}$ at mid-rapidity [@ref:phenix2006; @ref:phenix-run6] in our global analysis. The overall scale uncertainty of $9.7\%$ in the PHENIX measurement is not included in $\delta E_j$ in Eq. (\[eq:chi2\]). All data points have a transverse momentum $p_T$ of at least $2\,\mathrm{GeV}$. As we shall demonstrate below, these data provide an invaluable constraint on the quark and gluon-to-eta fragmentation probabilities. In general, hadron collision data probe FFs at fairly large momentum fractions $z\gtrsim 0.5$, see, e.g., Fig. 6 in Ref. [@ref:lhcpaper], complementing the information available from SIA. The large range of $p_T$ values covered by the recent PHENIX data [@ref:phenix-run6], $2\le p_T\le 20\,\mathrm{GeV}$, also helps to constrain FFs through scaling violations.
As in other analyses of FFs [@ref:dsspion; @ref:dssproton] we do not include eta meson production data from hadron-hadron collision experiments at much lower c.m.s. energies, like Fermilab-E706 [@ref:e706]. It is known that theoretical calculations at NLO accuracy do not reproduce such data very well without invoking resummations of threshold logarithms to all orders in pQCD [@ref:resum].
Results\[sec:results\]
======================
In this Section we discuss in detail the results of our global analysis of FFs for eta mesons at NLO accuracy of QCD. First, we shall present the parameters of the optimum fits describing the $D_i^{\eta}$ at the input scale. Next, we compare our fits to the data used in the analysis and give $\chi^2$ values for each individual set of data. Finally, we estimate the uncertainties in the extraction of the $D_i^{\eta}$ using the Lagrange multiplier technique and discuss the role of the different data sets in constraining the FFs.
Optimum fit to data \[subsec:fit\]
----------------------------------
In Tab. \[tab:para\] we list the set of parameters specifying the optimum fit of eta FFs at NLO accuracy in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) at our input scale $\mu_0=1\,\mathrm{GeV}$ for the light quark flavors and the gluon. Charm and bottom FFs are included at their mass threshold $\mu_0=m_c$ and $\mu_0=m_b$, respectively [@ref:fortran].
The data sets included in our global analysis, as discussed in Sec. \[subsec:data\], and the individual $\chi^2$ values are presented in Tab. \[tab:data\].
Flavor $i$ $N_i$ $\alpha_i$ $\beta_i$ $\gamma_i$ $\delta_i$
--------------------------------- ------- ------------ ----------- ------------ ------------
$u,\bar{u},d,\bar{d},s,\bar{s}$ 0.038 1.372 1.487 2000.0 34.03
$g$ 0.070 10.00 9.260 0 0
$c,\bar{c}$ 1.051 - - - -
$b,\bar{b}$ 0.664 - - - -
: \[tab:para\]Parameters describing the NLO FFs for eta mesons, $D_i^{\eta}(z,\mu_0)$, in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) at the input scale $\mu_0=1\,\mathrm{GeV}$. Inputs for the charm and bottom FFs refer to $\mu_0=m_c$ and $\mu_0=m_b$, respectively.
We note that the quoted number of points and $\chi^2$ values are based only on fitted data, i.e., $z>0.1$ and $\beta>0.9$ in SIA.
As can be seen, for most sets of data their partial contribution to the $\chi^2$ of the fit is typically of the order of the number of data points or even smaller. The most notable exceptions are the HRS [@ref:hrs] and ALEPH ’02 [@ref:aleph3] data, where a relatively small number of points have a significant $\chi^2$, which in turn leads to a total $\chi^2$ per degree of freedom (d.o.f.) of about 1.6 for the fit. We have checked that these more problematic sets of data could be removed from the fit without reducing its constraining power or changing the obtained $D_i^{\eta}$ significantly. The resulting fairly large $\chi^2/d.o.f.$ due to a few isolated data points is a common characteristic of all extractions of FFs made so far [@ref:dsspion; @ref:dssproton; @ref:akk; @ref:hirai; @ref:kretzer] for other hadron species.
The overall excellent agreement of our fit with experimental results for inclusive eta meson production in SIA and the tension with the HRS and ALEPH ’02 data is also illustrated in Fig. \[fig:sia-eta\]. It is worth pointing out that both ALEPH ’00 [@ref:aleph2] and BABAR [@ref:babar] data are well reproduced for all momentum fractions $z$ in spite of being at opposite ends of the c.m.s. energy range covered by experiments.
------------------------------------- ------------- ----------
Experiment data points $\chi^2$
fitted
BABAR [@ref:babar] 18 8.1
HRS [@ref:hrs] 13 51.6
MARK II [@ref:mark2] 7 3.8
JADE ’85 [@ref:jade1] 1 9.6
JADE ’90 [@ref:jade2] 3 1.2
CELLO [@ref:cello] 4 1.1
ALEPH ’92 [@ref:aleph1] 8 2.0
ALEPH ’00 [@ref:aleph2] 18 22.0
ALEPH ’02 [@ref:aleph3] 5 61.6
L3 ’92 [@ref:l31] 3 5.1
L3 ’94 [@ref:l32] 8 10.5
OPAL [@ref:opal] 9 9.0
PHENIX $2 \gamma$ [@ref:phenix2006] 12 4.1
PHENIX $3 \pi$ [@ref:phenix2006] 6 2.9
PHENIX ’06 [@ref:phenix-run6] 25 13.3
[**TOTAL:**]{} 140 205.9
------------------------------------- ------------- ----------
: \[tab:data\]Data used in the global analysis of eta FFs, the individual $\chi^2$ values for each set, and the total $\chi^2$ of the fit.
Our fit compares very well with all data on high-$p_T$ eta meson production in proton-proton collisions from RHIC [@ref:phenix2006; @ref:phenix-run6]. The latest set of PHENIX data [@ref:phenix-run6] significantly extends the range in $p_T$ at much reduced uncertainties and provides stringent constraints on the FFs as we shall demonstrate below. The normalization and trend of the data is nicely reproduced over a wide kinematical range as can be inferred from Figs. \[fig:hadronic2g\]-\[fig:hadronic06\]. In each case, the invariant cross section for $pp\rightarrow \eta X$ at $\sqrt{S}=200\,\mathrm{GeV}$ is computed at NLO accuracy, averaged over the pseudorapidity range of PHENIX, $|\eta|\le 0.35$, and using the NLO set of PDFs from CTEQ [@ref:cteq] along with the corresponding value of $\alpha_s$. Throughout our analysis we choose the transverse momentum of the produced eta as both the factorization and the renormalization scale, i.e., $\mu_f=\mu_r=p_T$.
Since the cross sections drop over several orders of magnitude in the given range of $p_T$, we also show the ratio (data-theory)/theory in the lower panels of Figs. \[fig:hadronic2g\]-\[fig:hadronic06\] to facilitate the comparison between data and our fit. One notices the trend of the theoretical estimates to overshoot the data near the lowest values of transverse momenta, $p_T\simeq 2\,\mathrm{GeV}$, which indicates that the factorized pQCD approach starts to fail. Compared to pion production at central pseudorapidities, see Fig. 6 in Ref. [@ref:dsspion], the breakdown of pQCD sets in at somewhat higher $p_T$, as is expected due to the larger mass of the eta meson.
The shaded bands in Figs. \[fig:hadronic2g\]-\[fig:hadronic06\] are obtained with the Lagrange multiplier method, see Sec. \[subsec:uncert\] below, applied to each data point. They correspond to the maximum variation of the invariant cross section computed with alternative sets of eta FFs consistent with an increase of $\Delta \chi^2=1$ or $\Delta \chi^2=2\%$ in the total $\chi^2$ of the best global fit to all SIA and $pp$ data.
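The idea behind these Lagrange multiplier bands can be illustrated in a deliberately simple one-parameter toy model (hypothetical numbers; the actual analysis minimizes the full NLO $\chi^2$ over all FF parameters): for a range of multipliers $\lambda$ one minimizes $\chi^2 + \lambda\, O$ for an observable $O$ and records how far $O$ can move for a given increase $\Delta\chi^2$ above the minimum.

```python
def chi2(theta):
    """Toy chi^2 with its minimum at theta = 1 and an uncertainty of 0.1."""
    return ((theta - 1.0) / 0.1) ** 2

def observable(theta):
    """Toy observable, standing in for, e.g., a cross section at one p_T point."""
    return theta ** 2

grid = [0.5 + 0.0005 * i for i in range(2000)]   # scan theta in [0.5, 1.5)
profile = []
for lam in [0.2 * x for x in range(-100, 101)]:  # multipliers lambda in [-20, 20]
    # constrained minimum of chi^2 + lambda * O for this multiplier
    theta_min = min(grid, key=lambda t: chi2(t) + lam * observable(t))
    profile.append((observable(theta_min), chi2(theta_min)))

# Band of the observable consistent with Delta chi^2 <= 1 above the minimum
allowed = [obs for obs, c2 in profile if c2 <= 1.0]
print(f"O in [{min(allowed):.2f}, {max(allowed):.2f}]")  # -> O in [0.81, 1.21]
```

In this quadratic toy case the $\Delta\chi^2=1$ band reproduces the familiar one-standard-deviation range, but the method also works when $\chi^2$ depends on the observable in a complicated, non-Gaussian way, which is why it is used here.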
In addition to the experimental uncertainties propagated to the extracted $D_i^{\eta}$, a large theoretical ambiguity is associated with the choice of the factorization and renormalization scales used in the calculation of the $pp\to \eta X$ cross sections. These errors are much more sizable than experimental ones and very similar to those estimated for $pp\to\pi X$ in Fig. 6 of Ref. [@ref:dsspion]. As in the DSS analysis for pion and kaon FFs [@ref:dsspion] the choice $\mu_f=\mu_r=p_T$ and $\mu_f=\mu_r=S$ in $pp$ collisions and SIA, respectively, leads to a nice global description of all data sets with a common universal set of eta FFs.
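The size of such a scale ambiguity can be illustrated with a toy estimate that keeps only the explicit $\alpha_s(\mu_r)^2$ dependence of a LO partonic cross section while ignoring the compensating factorization-scale evolution of the PDFs and FFs; the one-loop running coupling and the values of $\Lambda$ and $n_f$ below are purely illustrative, not those of the actual analysis:

```python
import math

def alpha_s(mu, lam=0.2, nf=5):
    """One-loop running strong coupling (toy values of Lambda and nf)."""
    return 12.0 * math.pi / ((33.0 - 2.0 * nf) * math.log(mu**2 / lam**2))

def scale_ratio(pt, k):
    """Ratio sigma(mu=k*pt)/sigma(mu=pt) for a toy LO cross section ~ alpha_s(mu)^2.

    Only the explicit renormalization-scale dependence of the coupling is kept;
    the compensating scale evolution of PDFs and FFs is ignored.
    """
    return (alpha_s(k * pt) / alpha_s(pt)) ** 2

pt = 10.0                    # GeV, a typical transverse momentum at RHIC
up = scale_ratio(pt, 0.5)    # mu = pT/2: the toy cross section increases
down = scale_ratio(pt, 2.0)  # mu = 2*pT: the toy cross section decreases
```

Even this crude estimate yields variations of a few tens of percent between $\mu=p_T/2$ and $\mu=2p_T$, of the same order as the scale bands quoted for $pp\to\pi X$.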
Next, we shall present an overview of the obtained FFs $D_i^{\eta}(z,Q)$ for different parton flavors $i$ and compare them to FFs for other hadrons. The upper row of panels in Fig. \[fig:eta-ff-comp\] shows the dependence of the FFs on the energy fraction $z$ taken by the eta meson at a scale $Q$ equal to the mass of the $Z$ boson, i.e., $Q =M_Z$. Recall that at our input scale $Q=\mu_0=1\,\mathrm{GeV}$ we assume that $D^{\eta}_u=D^{\eta}_{\bar{u}}=D^{\eta}_d=D^{\eta}_{\bar{d}}=
D^{\eta}_s=D^{\eta}_{\bar{s}}$, which is preserved under scale evolution. At such a large scale $Q=M_Z$ the heavy quark FFs are of similar size, which is not too surprising as mass effects are negligible, i.e., $m_{c,b}\ll M_Z$. The gluon-to-eta fragmentation function $D_g^{\eta}$ is slightly smaller but rises towards smaller values of $z$. Overall, both the shape and the hierarchy of the different FFs $D_i^{\eta}$ are similar to those found, for instance, for pions, see Fig. 18 in [@ref:dsspion], with the exception of the “unfavored” strangeness-to-pion fragmentation function, which is suppressed. In order to make the comparison to FFs for other hadrons more explicit, we show in the lower three rows of Fig. \[fig:eta-ff-comp\] the ratios of the obtained $D_i^{\eta}(z,M_Z)$ to the FFs for pions, kaons, and protons from the DSS analysis [@ref:dsspion; @ref:dssproton].
The eta and pion production yields are known to be consistent with a constant ratio of about a half in a wide range of c.m.s. energies in hadronic collisions for $p_T\gtrsim 2\,\mathrm{GeV}$, but the ratio varies from approximately 0.2 at $z\simeq 0.1$ to about 0.5 for $z\gtrsim 0.4$ in SIA [@ref:phenix2006]. It is interesting to see how these findings are reflected in the ratios of the eta and neutral pion FFs for the individual parton flavors. We find that $D^{\eta}_{u+\bar{u}}/D^{\pi^{0}}_{u+\bar{u}}$ follows closely the trend of the SIA data, as is expected since gluon fragmentation enters only at NLO in the cross section calculations. For strangeness the ratio of eta to pion FFs increases towards larger $z$ because of the absence of strange quarks in the pion wave functions.
Inclusive hadron production at small-to-medium values of $p_T$ is known to be dominated by gluon fragmentation at relatively large values of momentum fraction $z$ [@ref:dsspion; @ref:lhcpaper] largely independent of the c.m.s. energy $\sqrt{S}$. In the relevant range of $z$, $0.4\lesssim z \lesssim 0.6$, the ratio $D_g^{\eta}/D_g^{\pi^{0}}$ resembles the constant ratio of roughly 0.5 found in the eta-to-pion production yields. Both at larger and smaller values of $z$ the $D^{\eta}_g$ is suppressed with respect to $D_g^{\pi^{0}}$. In general, one should keep in mind that FFs always appear in complicated convolution integrals in theoretical cross section calculations [@ref:eenlo; @ref:ppnlo] which complicates any comparison of cross section and fragmentation function ratios for different hadrons.
The comparison to the DSS kaon FFs [@ref:dsspion] is shown in the panels in the third row of Fig. \[fig:eta-ff-comp\]. Most remarkable is the ratio of the gluon FFs, which is approximately constant, $D_g^{\eta}/D_g^K \simeq 2$, over a wide region in $z$ but drops below one for $z\gtrsim 0.6$. At large $z$, $D^{\eta}_{u+\bar{u}}$ tends to be almost identical to $D^{K}_{u+\bar{u}}$, while $D^{\eta}_{s+\bar{s}}$ resembles $D^{K}_{s+\bar{s}}$ only at low $z$. The latter result might be understood from the absence of strangeness suppression for $D^{K}_{s+\bar{s}}$, whereas a fragmenting $s$ quark needs to pick up an $\bar{s}$ quark from the vacuum to form the eta meson. It should be noted, however, that kaon FFs have considerably larger uncertainties than pion FFs [@ref:dsspion], which makes the comparisons less conclusive.
This is even more true for the proton FFs [@ref:dssproton]. Nevertheless, it is interesting to compare our $D_i^{\eta}$ to those for protons which is done in the lower panels of Fig. \[fig:eta-ff-comp\]. As for kaons, we observe a rather flat behavior of the ratio $D_g^{\eta}/D_g^{p}$, which drops below one at larger values of $z$. The corresponding rates for light quark FFs show the opposite trend and rise towards $z\to 1$.
Regarding the relative sizes of the fragmentation probabilities for light quarks and gluons into the different hadron species, we find that eta FFs are suppressed w.r.t. pion FFs (except for strangeness), are roughly similar to those for kaons, and larger than the proton FFs. This can be qualitatively understood from the hierarchy of the respective hadron masses. For $z\gtrsim 0.6$, the lack of decisive constraints from data prevents one from drawing any conclusions in this kinematic region.
As we have already discussed in Sec. \[subsec:outline\], due to the lack of any flavor tagged SIA data sensitive to the hadronization of charm and bottom quarks into eta mesons, we adopted the same functional form as for the fragmentation into residual charged hadrons [@ref:dssproton], i.e., hadrons other than pions, kaons, and protons. The fit favors a charm fragmentation almost identical to that for the residual hadrons ($N_c=1.058$) and a somewhat reduced distribution for bottom fragmentation ($N_b=0.664$). At variance with what is found for light quarks and gluons, after evolution, $D_{c+\bar{c}}^{\eta}$ and $D_{b+\bar{b}}^{\eta}$ differ significantly in size and shape from their counterparts for pions, kaons, and protons, as can also be inferred from Fig. \[fig:eta-ff-comp\]. Future data are clearly needed here for any meaningful comparison.
Estimates of uncertainties \[subsec:uncert\]
--------------------------------------------
Given the relatively small number of data points available for the determination of the $D_i^{\eta}$ as compared to global fits of pion, kaon, and proton FFs [@ref:dsspion; @ref:dssproton], we refrain from performing a full-fledged error analysis. However, in order to get some idea of the uncertainties of the $D_i^{\eta}$ associated with experimental errors, how they propagate into observables, and the role of the different data sets in constraining the $D_i^{\eta}$, we perform a brief study based on Lagrange multipliers [@ref:lagrange; @ref:dsspion; @ref:dssv].
This method relates the range of variation of a physical observable ${\cal{O}}$ dependent on FFs to the variation in the $\chi^2$ function used to judge the goodness of the fit. To this end, one minimizes the function $$\label{eq:lm}
\Phi(\lambda,\{a_i\}) = \chi^2(\{a_i\}) + \lambda\, {\cal{O}}(\{a_i\})$$ with respect to the set of parameters $\{a_i\}$ describing the FFs in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) for fixed values of $\lambda$. Each of the Lagrange multipliers $\lambda$ is related to an observable ${\cal{O}}(\{a_i\})$, and the choice $\lambda=0$ corresponds to the optimum global fit. From a series of fits for different values of $\lambda$ one can map out the $\chi^2$ profile for any observable ${\cal{O}}(\{a_i\})$ free of the assumptions made in the traditional Hessian approach [@ref:hessian].
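The scan over $\lambda$ can be sketched with a one-parameter toy model; the functions `chi2` and `observable` below are invented purely for illustration and bear no relation to the actual fit:

```python
# Toy illustration of the Lagrange-multiplier scan: for a series of fixed
# lambda values, minimize Phi = chi2 + lambda*O over the fit parameter and
# record the resulting (O, delta chi2) pairs, which map out the chi2 profile.

def chi2(a):
    return ((a - 1.0) / 0.1) ** 2  # toy chi^2 function, minimum at a = 1

def observable(a):
    return a ** 2                   # toy observable O(a)

def minimize_phi(lam, grid):
    """Brute-force minimization of Phi = chi2 + lam*O over a parameter grid."""
    return min(grid, key=lambda a: chi2(a) + lam * observable(a))

grid = [0.5 + 1e-4 * i for i in range(10001)]  # a in [0.5, 1.5]
best = minimize_phi(0.0, grid)                 # lambda = 0: optimum global fit
chi2_min = chi2(best)

profile = []                                   # (O, delta chi2) pairs
for lam in (-40, -20, -10, 0, 10, 20, 40):
    a_lam = minimize_phi(lam, grid)
    profile.append((observable(a_lam), chi2(a_lam) - chi2_min))
```

By construction every scanned point has $\Delta\chi^2\ge 0$, and interpolating the recorded pairs gives the profile of $\chi^2$ as a function of the observable without any Gaussian (Hessian) assumption.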
As a first example and following the DSS analyses [@ref:dsspion; @ref:dssproton], we discuss the range of variation of the truncated second moments of the eta FFs, $$\label{eq:truncmom}
\xi^{\eta}_i(z_{\min},Q) \equiv \int_{z_{\min}}^1 z D_i^{\eta}(z,Q)\, dz,$$ for $z_{\min}=0.2$ and $Q=5\,\mathrm{GeV}$ around the values obtained in the optimum fit to data, $\xi^{\eta}_{i\,0}$. In a LO approximation, the second moments $\int_0^1 zD_i^{\eta}(z,Q)dz$ represent the energy fraction of the parent parton of flavor $i$ taken by the eta meson at a scale $Q$. The truncated moments in Eq. (\[eq:truncmom\]) discard the low-$z$ contributions, which are not constrained by data and, more importantly, where the framework of FFs does not apply. In general, FFs enter calculations of cross sections as convolutions over a wide range of $z$, and, consequently, the $\xi^{\eta}_i(z_{\min},Q)$ give a first, rough idea of how uncertainties in the FFs will propagate to observables.
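Numerically, Eq. (\[eq:truncmom\]) is a simple one-dimensional integral. The sketch below evaluates it by the trapezoidal rule for a toy FF of the generic form $N z^{\alpha}(1-z)^{\beta}$; the coefficients are invented for illustration and are not the fitted values:

```python
def D_eta(z, N=0.4, alpha=-0.5, beta=1.8):
    """Toy fragmentation function of the generic form N * z^alpha * (1-z)^beta.

    The coefficients are illustrative only, not the fitted parameters.
    """
    return N * z**alpha * (1.0 - z)**beta

def truncated_moment(D, z_min, n=2000):
    """xi(z_min) = int_{z_min}^1 z * D(z) dz via the trapezoidal rule."""
    h = (1.0 - z_min) / n
    zs = [z_min + i * h for i in range(n + 1)]
    ys = [z * D(z) for z in zs]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

xi_02 = truncated_moment(D_eta, 0.2)  # z_min = 0.2, as used in the text
xi_04 = truncated_moment(D_eta, 0.4)  # discarding more of the low-z region
```

Since the integrand is positive, raising $z_{\min}$ necessarily lowers the truncated moment, which is why the choice of $z_{\min}$ changes the relative weight of the SIA and $pp$ constraints discussed below.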
The solid lines in Fig. \[fig:profiles-ffs\] show the $\xi^{\eta}_i(z_{\min},Q)$ defined in Eq. (\[eq:truncmom\]) for $i=u+\bar{u}$, $g$, $c+\bar{c}$, and $b+\bar{b}$ against the corresponding increase $\Delta \chi^2$ in the total $\chi^2$ of the fit. The two horizontal lines indicate a $\Delta \chi^2$ of one unit and an increase by $2\%$ which amounts to about 4 units in $\chi^2$, see Tab. \[tab:data\]. The latter $\Delta \chi^2$ should give a more faithful estimate of the relevant uncertainties in global QCD analyses [@ref:dsspion; @ref:dssproton; @ref:cteq; @ref:dssv] than an increase by one unit.
As can be seen, the truncated moment $\xi^{\eta}_{u+\overline{u}}$, associated with light quark FFs $D^{\eta}_{u+\overline{u}}=
D^{\eta}_{d+\overline{d}}=D^{\eta}_{s+\overline{s}}$, is constrained within a range of variation of approximately $^{+30\%}_{-20\%}$ around the value computed with the best fit, assuming a conservative increase in $\chi^2$ by $2\%$. The estimated uncertainties are considerably larger than the corresponding ones found for pion and kaon FFs, which are typically of the order of $\pm$3% and $\pm$10% for the light quark flavors [@ref:dsspion], respectively, but closer to the $\pm 20\%$ observed for proton and anti-proton FFs [@ref:dssproton]. For the truncated moment $\xi_g^{\eta}$ of gluons shown in the upper right panel of Fig. \[fig:profiles-ffs\], the range of uncertainty is slightly smaller than the one found for light quarks and amounts to about $\pm 15\%$. The allowed variations are larger for charm and bottom FFs, as can be inferred from the lower row of plots in Fig. \[fig:profiles-ffs\].
Apart from larger experimental uncertainties and the much smaller amount of SIA data for identified eta mesons, the lack of any information from SIDIS is particularly responsible for the large range of variations found for the light quarks in Fig. \[fig:profiles-ffs\]. We recall that the missing SIDIS data for produced eta mesons also forced us to assume that all light quark FFs are the same in Eq. (\[eq:ansatz\]). The additional ambiguities due to this assumption are not reflected in the $\chi^2$ profiles shown in Fig. \[fig:profiles-ffs\]. The FFs for charm and bottom quarks into eta mesons suffer most from the lack of flavor tagged data in SIA.
To further illuminate the role of the different data sets in constraining the $D_{i}^{\eta}$ we give also the partial contributions to $\Delta \chi^2$ of the individual data sets from $pp$ collisions and the combined SIA data in all panels of Fig. \[fig:profiles-ffs\]. Surprisingly, the light quark FFs are constrained best by the PHENIX $pp$ data from run ’06 and not by SIA data. SIA data alone would prefer a smaller value for $\xi^{\eta}_{u+\bar{u}}$ by about $10\%$, strongly correlated to larger moments for charm and bottom fragmentation, but the minimum in the $\chi^2$ profile is much less pronounced and very shallow, resulting in rather sizable uncertainties.
This unexpected result is most likely due to the fact that the SIA data from LEP experiments constrain mainly the flavor singlet combination, i.e., the sum of all quark flavors, including charm and bottom. Since there are no flavor tagged data available from SIA for eta mesons, the separation into contributions from light and heavy quark FFs is largely unconstrained by SIA data. Only the fairly precise data from BABAR at $\sqrt{S}\simeq 10\,\mathrm{GeV}$ provide some guidance as they constrain a different combination of the light $u$, $d$, and $s$ quark FFs weighted by the respective electric charges. Altogether, this seems to have a negative impact on the constraining power of the SIA data. For not too large values of $p_T$, data obtained in $pp$ collisions are in turn mainly sensitive to $D_g^{\eta}$ but in a limited range of $z$, $0.4\lesssim z \lesssim 0.6$, as mentioned above. Through the scale evolution, which couples quark and gluon FFs, these data provide a constraint on $\xi^{\eta}_{u+\bar{u}}$. In addition, the latest PHENIX data extend to a region of $p_T$ where quark fragmentation becomes important as well. To illustrate this quantitatively, Fig. \[fig:fractions\] shows the relative fractions of quarks and gluons fragmenting into the observed eta meson as a function of $p_T$ in $pp$ collisions for PHENIX kinematics. As can be seen, quark-to-eta FFs become dominant for $p_T\gtrsim 10\,\mathrm{GeV}$.
The $\chi^2$ profile for the truncated moment of the gluon, $\xi^{\eta}_g$, is the result of an interplay between the PHENIX run ’06 $pp$ data and the SIA data sets which constrain the moment $\xi^{\eta}_g$ towards smaller and larger values, respectively. This highlights the complementarity of the $pp$ and SIA data. SIA data have an impact on $\xi^{\eta}_g$ mainly through the scale evolution in the energy range from LEP to BABAR. In addition, SIA data provide information in the entire range of $z$, whereas the $pp$ data constrain only the large $z$ part of the truncated moment $\xi^{\eta}_g$. Consequently, the corresponding $\chi^2$ profile for $z_{\min}=0.4$ or $0.5$ would be much more dominated by $pp$ data. In general, the other data sets from PHENIX [@ref:phenix2006] do not have a significant impact on any of the truncated moments shown in Fig. \[fig:profiles-ffs\] due to their limited precision and covered kinematic range.
Compared to pion and kaon FFs [@ref:dsspion], all $\chi^2$ profiles in Fig. \[fig:profiles-ffs\] are significantly less parabolic, which prevents one from using the Hessian method [@ref:hessian] for estimating uncertainties. More importantly, the shapes of the $\chi^2$ profiles reflect the very limited experimental information presently available to extract eta FFs for all flavors reliably. Another indication in that direction is the different preferred minima for the values of the $\xi_i^{\eta}$ by the SIA and $pp$ data, although tolerable within the large uncertainties. Our fit is still partially driven by the set of assumptions on the functional form of and relations among different FFs, which we are forced to impose in order to keep the number of free fit parameters at a level such that they can actually be determined by data. Future measurements of eta production in SIA, $pp$ collisions, and, in particular, SIDIS are clearly needed to test the assumptions made in our analysis and to further constrain the $D_i^{\eta}$.
The large variations found for the individual FFs in Fig. \[fig:profiles-ffs\] are strongly correlated, and, therefore, their impact on uncertainty estimates might be significantly reduced for certain observables. If, in addition, the observable of interest is only sensitive to a limited range of hadron momentum fractions $z$, then the corresponding $\chi^2$ profile may assume a more parabolic shape.
In order to illustrate this for a specific example, we compute the $\chi^2$ profiles related to variations in the theoretical estimates of the single-inclusive production of eta mesons in $pp$ collisions at PHENIX kinematics [@ref:phenix-run6]. The results are shown in Fig. \[fig:profiles-pp\] for four different values of $p_T$ along with the individual contributions to $\Delta \chi^2$ from the SIA and $pp$ data sets. As anticipated, we find a rather different picture as compared to Fig. \[fig:profiles-ffs\], with variations only ranging from $5$ to $10\%$ depending on the $p_T$ value and tolerating $\Delta \chi^2/\chi^2=2\%$. The corresponding uncertainty bands are also plotted in Fig. \[fig:hadronic06\] above for both $\Delta \chi^2=1$ and $\Delta \chi^2/\chi^2=2 \%$ and have been obtained for the other $pp$ data from PHENIX [@ref:phenix2006] shown in Figs. \[fig:hadronic2g\] and \[fig:hadronic3pi\] as well.
The uncertainties for $pp \to \eta X$ are smallest for intermediate $p_T$ values, where the latest PHENIX measurement [@ref:phenix-run6] is most precise and the three data sets [@ref:phenix2006; @ref:phenix-run6] have maximum overlap, and increase towards either end of the $p_T$ range of the run ’06 data. In particular at intermediate $p_T$ values, the main constraint comes from the PHENIX run ’06 data, whereas SIA data become increasingly relevant at low $p_T$. The previous $pp$ measurements from PHENIX [@ref:phenix2006] are limited to $p_T\lesssim 11\,\mathrm{GeV}$ and have considerably larger uncertainties and, hence, less impact on the fit.
Conclusions\[sec:conclusions\]
==============================
A first global QCD analysis of eta fragmentation functions at NLO accuracy has been presented based on the world data from electron-positron annihilation experiments and latest results from proton-proton collisions. The obtained parameterizations [@ref:fortran] reproduce all data sets very well over a wide kinematic range.
Even though the constraints imposed on the eta meson fragmentation functions by presently available data are significantly weaker than those for pions or kaons, the availability of eta FFs extends the applicability of the pQCD framework to new observables of topical interest. Among them are the double-spin asymmetry for eta production in longitudinally polarized proton-proton collisions at RHIC, eta meson production at the LHC, possible medium modifications in the hadronization in the presence of a heavy nucleus, and predictions for future semi-inclusive lepton-nucleon scattering experiments.
The obtained FFs still depend on certain assumptions, like $SU(3)$ symmetry for the light quarks, dictated by the lack of data constraining the flavor separation sufficiently well. Compared to FFs for other hadrons they show interesting patterns of similarities and differences which can be further tested with future data.
We are grateful to David R. Muller for help with the BABAR data. CAA gratefully acknowledges the support of the U.S. Department of Energy for this work through the LANL/LDRD Program. The work of FE and JPS was supported by grants no. DE-FG02-04ER41301 and no. DE-FG02-94ER40818, respectively. This work was supported in part by CONICET, ANPCyT, UBACyT, BMBF, and the Helmholtz Foundation.
[99]{} J. C. Collins and D. E. Soper, Nucl. Phys. B [**193**]{}, 381 (1981); B [**213**]{}, 545(E) (1983); B [**194**]{}, 445 (1982). See, e.g., J. C. Collins, D. E. Soper, and G. Sterman, “Perturbative QCD”, A. H. Mueller (ed.), Adv. Ser. Direct. High Energy Phys. [**5**]{}, 1 (1988) and references therein. See, e.g., S. Albino [*et al.*]{}, [arXiv:0804.2021]{} and references therein. R. D. Field and R. P. Feynman, Nucl. Phys. B [**136**]{}, 1 (1978). D. de Florian, R. Sassot, and M. Stratmann, Phys. Rev. D [**75**]{}, 114010 (2007). D. de Florian, R. Sassot, and M. Stratmann, Phys. Rev. D [**76**]{}, 074033 (2007). S. Albino, B. A. Kniehl, and G. Kramer, Nucl. Phys. B [**803**]{}, 42 (2008). M. Hirai, S. Kumano, and T. H. Nagai, Phys. Rev. C [**76**]{}, 065207 (2007). S. Kretzer, Phys. Rev. D [**62**]{}, 054001 (2000). D. de Florian, R. Sassot, M. Stratmann, and W. Vogelsang, Phys. Rev. Lett. [**101**]{}, 072001 (2008); Phys. Rev. D [**80**]{}, 034030 (2009). D. de Florian, M. Stratmann, and W. Vogelsang, Phys. Rev. D [**57**]{}, 5811 (1998). P. M. Nadolsky [*et al.*]{}, Phys. Rev. D [**78**]{}, 013004 (2008). A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Eur. Phys. J. C [**63**]{}, 189 (2009); R. D. Ball [*et al.*]{}, Nucl. Phys. B [**838**]{}, 136 (2010). D. Stump [*et al.*]{}, Phys. Rev. D [**65**]{}, 014012 (2001). M. Greco and S. Rolli, Z. Phys. C [**60**]{}, 169 (1993). D. Indumathi, H. S. Mani, and A. Rastogi, Phys. Rev. D [**58**]{}, 094014 (1998); D. Indumathi and B. Misra, [arXiv:0901.0228]{}. S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. C [**75**]{}, 024909 (2007). F. Ellinghaus \[PHENIX Collaboration\], [arXiv:0808.4124]{}. See, e.g., F. Arleo, Eur. Phys. J. C [**61**]{}, 603 (2009); A. Accardi, F. Arleo, W. K. Brooks, D. D’Enterria, and V. Muccifora, Riv. Nuovo Cim. [**032**]{}, 439 (2010). R. Sassot, M. Stratmann, and P. Zurita, Phys. Rev. D [**81**]{}, 054001 (2010). G. Curci, W. Furmanski, and R. Petronzio, Nucl. Phys. 
B [**[175]{}**]{}, 27 (1980); W. Furmanski and R. Petronzio, Phys. Lett. [**[97B]{}**]{}, 437 (1980); L. Beaulieu, E. G. Floratos, and C. Kounnas, Nucl. Phys. B [**166**]{}, 321 (1980); P. J. Rijken and W. L. van Neerven, Nucl. Phys. B [**487**]{}, 233 (1997); M. Stratmann and W. Vogelsang, Nucl. Phys. B [**496**]{}, 41 (1997); A. Mitov and S. Moch, Nucl. Phys. B [**751**]{}, 18 (2006); A. Mitov, S. Moch, and A. Vogt, Phys. Lett. B [**638**]{}, 61 (2006); S. Moch and A. Vogt, Phys. Lett. B [**659**]{}, 290 (2008). G. Altarelli, R. K. Ellis, G. Martinelli, and S. Y. Pi, Nucl. Phys. B [**160**]{}, 301 (1979); W. Furmanski and R. Petronzio, Z. Phys. C [**11**]{}, 293 (1982); P. Nason and B. R. Webber, Nucl. Phys. B [**421**]{}, 473 (1994) \[Erratum-ibid. B [**480**]{}, 755 (1996)\]. F. Aversa, P. Chiappetta, M. Greco, and J. P. Guillet, Nucl. Phys. B [**327**]{}, 105 (1989); D. de Florian, Phys. Rev. D [**67**]{}, 054004 (2003); B. Jäger, A. Schäfer, M. Stratmann, and W. Vogelsang, Phys. Rev. D [**67**]{}, 054005 (2003). M. Stratmann and W. Vogelsang, Phys. Rev. D [**64**]{}, 114007 (2001). S. Abachi [*et al.*]{} \[HRS Collaboration\], Phys. Lett. B [**205**]{}, 111 (1988). G. Wormser [*et al.*]{} \[MARK-II Collaboration\], Phys. Rev. Lett. [**61**]{}, 1057 (1988). W. Bartel [*et al.*]{} \[JADE Collaboration\], Z. Phys. C [**28**]{}, 343 (1985). D. Pitzl [*et al.*]{} \[JADE Collaboration\], Z. Phys. C [**46**]{}, 1 (1990) \[Erratum-ibid. C [**47**]{}, 676 (1990)\]. H. J. Behrend [*et al.*]{} \[CELLO Collaboration\], Z. Phys. C [**47**]{}, 1 (1990). D. Buskulic [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett. B [**292**]{}, 210 (1992). R. Barate [*et al.*]{} \[ALEPH Collaboration\], Eur. Phys. J. C [**16**]{}, 613 (2000). A. Heister [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett. B [**528**]{}, 19 (2002). O. Adriani [*et al.*]{} \[L3 Collaboration\], Phys. Lett. B [**286**]{}, 403 (1992). M. Acciarri [*et al.*]{} \[L3 Collaboration\], Phys. Lett. 
B [**328**]{}, 223 (1994). K. Ackerstaff [*et al.*]{} \[OPAL Collaboration\], Eur. Phys. J. C [**5**]{}, 411 (1998). F. Anulli \[BABAR Collaboration\], [arXiv:hep-ex/0406017]{}. A. Adare [*et al.*]{} \[PHENIX Collaboration\], [arXiv:1009.6224]{}. R. Sassot, M. Stratmann, and P. Zurita, [arXiv:1008.0540]{} (to appear in Phys. Rev. D). L. Apanasevich [*et al.*]{} \[Fermilab E706 Collaboration\], Phys. Rev. D [**68**]{}, 052001 (2003). D. de Florian and W. Vogelsang, Phys. Rev. D [**71**]{}, 114004 (2005). A [Fortran]{} package containing our NLO set of eta FFs can be obtained upon request from the authors. P. M. Nadolsky [*et al.*]{}, Phys. Rev. D [**78**]{}, 013004 (2008).
[^1]: address after Oct. $1^{\mathrm{st}}$: Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA
Those words (choice profanity included) woke me with a start the other night. What was I thinking, organizing this trip to Vietnam to connect sons and daughters who lost fathers on both sides of the Vietnam War?
I have a lot of fears about this journey. There are the mundane ones about getting sick, or being bitten by something slimy. Maybe I'll become separated from the group because something in a shop caught my eye (this, given my nature, is the most likely scenario). But the deeper fears are right under the surface. What’s going to happen when we come face to face with the Vietnamese sons and daughters? Will they be angry? Worse, will I?
It was easy to push past these bigger fears earlier this year when I first formed the 2 Sides Project. Now the trip is getting closer—we leave four weeks from today—and they’re keeping me up at night.
I’m going to have to remember what I know in the daylight: there have been moments in my life when I’ve found people who shared my experience, who spoke the same language as me, who felt the same way I did about things. These moments are profound. They make me feel connected, anchored in the world. They are often turning points that lead me to a better place.
That was the case when I met other sons and daughters in the U.S. who lost fathers in the war. So, I’ll keep my focus on them. And on the amazing experience we have in store. Six of us will be meeting Vietnamese sons and daughters and visiting the sites where our fathers died. I’ll profile them all -- Mike, Ron, Margaret, Susan and Patty -- here in the coming weeks as we get ready. Come with us virtually. It’s going to be quite a journey, and we’re looking forward to sharing it with you.
Rentz RVs Inc. (RRV)
1. A stock is expected to pay a year-end dividend of $2.00, i.e., D1 = $2.00. The dividend is expected to decline at a rate of 5% a year forever (g = -5%). If the company’s expected and required rate of return is 15%, which of the following statements is CORRECT?
a. The company’s current stock price is $20.
b. The company’s dividend yield 5 years from now is expected to be 10%.
c. The constant growth model cannot be used because the growth rate is negative.
d. The company’s expected capital gains yield is 5%.
e. The company’s stock price next year is expected to be $9.50.
2. A share of common stock has just paid a dividend of $2.00. If the expected long-run growth rate for this stock is 2.0%, and if investors' required rate of return is 10.5%, what is the stock’s intrinsic value?
3. E. M. Roussakis Inc.'s stock currently sells for $50 per share. The stock’s dividend is projected to increase at a constant rate of 4% per year. The required rate of return on the stock, rs, is 15.50%. What is Roussakis' expected price 5 years from now?
4. Carter's preferred stock pays a dividend of $2.00 per quarter. If the price of the stock is $60.00, what is its nominal (not effective) annual expected rate of return?
5. Schnusenberg Corporation just paid a dividend of $1.25 per share, and that dividend is expected to grow at a constant rate of 7.00% per year in the future. The company's beta is 1.35, the required return on the market is 10.50%, and the risk-free rate is 4.00%. What is the intrinsic value for Schnusenberg’s stock?
6. Rentz RVs Inc. (RRV) is presently enjoying relatively high growth because of a surge in the demand for recreational vehicles. Management expects earnings and dividends to grow at a rate of 30% for the next 4 years, after which high gas prices will probably reduce the growth rate in earnings and dividends to zero, i.e., g = 0. The company’s last dividend, D0, was $1.25. RRV’s beta is 1.20, the market risk premium is 5.25%, and the risk-free rate is 3.00%. What is the intrinsic value of RRV’s common stock?
7. Using the information on Rentz RVs Inc. from problem 6, what is the dividend yield expected for the next year?
8. The Wei Company's last paid dividend was $2.75. The dividend growth rate is expected to be constant at 2.50% for 2 years, after which dividends are expected to grow at a rate of 8.00% forever. Wei’s required return (rs) is 12.00%. What is the intrinsic value of Wei's stock?
9. Using the information on Wei Company from problem 8, what should be the price of Wei’s stock at the end of Year 5?
10. You are an analyst studying Beranek Technologies, which was founded 10 years ago. It has been profitable for the last 5 years, but it has needed all of its earnings to support growth and thus has never paid a dividend. Management has indicated that it plans to pay a $0.50 dividend 3 years from today, then to increase it at a relatively rapid rate for 2 years with 50% dividend growth in year 4 and 25% dividend growth in year 5, and then to increase its dividend at a constant growth rate of 6.00% per year thereafter. Assuming a required return of 15.00%, what is your estimate of the intrinsic value of Beranek's stock?
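Several of these problems reduce to the constant-growth (Gordon) dividend discount model, $P_0 = D_1/(r_s - g)$, or to a multi-stage variant that discounts the explicitly forecast dividends and adds a Gordon terminal value. A minimal sketch (the function names are my own; only the closed-form answers to problems 1 and 2 are checked here):

```python
def gordon_price(d1, r, g):
    """Constant-growth DDM: P0 = D1 / (r - g). Requires r > g."""
    return d1 / (r - g)

def multistage_price(d0, growth_rates, g_terminal, r):
    """Discount explicitly forecast dividends, then a Gordon terminal value."""
    price, d = 0.0, d0
    for t, g in enumerate(growth_rates, start=1):
        d *= 1.0 + g
        price += d / (1.0 + r) ** t
    terminal = gordon_price(d * (1.0 + g_terminal), r, g_terminal)
    return price + terminal / (1.0 + r) ** len(growth_rates)

# Problem 1: D1 = $2.00, g = -5%, r = 15%  ->  P0 = 2.00 / 0.20 = $10.00,
# so next year's expected price is P1 = P0 * (1 + g) = $9.50 (choice e).
p0_prob1 = gordon_price(2.00, 0.15, -0.05)
p1_prob1 = p0_prob1 * (1 - 0.05)

# Problem 2: D0 = $2.00, g = 2.0%, r = 10.5%  ->  P0 = 2.04 / 0.085 = $24.00.
p0_prob2 = gordon_price(2.00 * 1.02, 0.105, 0.02)
```

The same `multistage_price` helper covers problems 6, 8 and 10, by listing the supernormal growth rates for the explicit forecast years and supplying the long-run rate as the terminal growth.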
Exclusive Collection Of Rear View Cameras From TVC-Mall.com
A car rear view camera is a special type of video camera produced specifically to be attached to the rear of a vehicle to aid in backing up and to alleviate the rear blind spot. TVC-Mall.com’s rear view cameras are well known for their powerful functions and premium-quality materials. Recently, TVC-Mall.com released new models and launched a rear view camera promotion. Anyone who wants to buy wholesale rear view cameras can visit TVC-Mall.com for more details.
TVC-Mall.com is a leader in cell phone accessories and other electronic accessories. The new collection consists of many different designs: from IR night vision rear view cameras to 2.4G wireless car rear view camera systems, TVC-Mall.com has everything to ensure customer satisfaction.
“We are excited to launch this promotion, and we encourage wholesalers and retailers to keep coming back to our store to see what new products are available. Those who want to buy wireless car rear view camera systems should visit our online store as soon as possible, because the promotion is for a limited time only,” says a sales manager of the company. “We have the global reach, expertise and infrastructure necessary to guarantee our customers that their data is secure.”
In addition, TVC-Mall.com’s online store features attractively low prices on a hundred thousand different styles of electronics and related accessories. Superior customer service, high quality, speedy delivery, and affordable prices are the reasons to choose TVC-Mall.com.
About The TVC Mall (TVC-Mall.com)
Launched in 2008, TVC Mall has a keen marketing sense and has established strong relationships with many original manufacturers of Apple products (iPhone, iPad, iWatch, etc.). Some Apple accessories have even been sold at TVC-Mall.com before their official launch, and the business has been widely reported on by top media outlets (such as BusinessInsider.com, AppleInsider.com, CNET.com, etc.).
Please visit http://www.tvc-mall.com or subscribe its newsletter for the best deals, special prices, rebate savings, exclusive bundles and more.
---
author:
- |
\
Royal Society University Research Fellow\
School of Physics & Astronomy\
The University of Birmingham\
BIRMINGHAM B15 2TT, UK\
E-mail:
title: Experimental Tests of the Standard Model
---
BHAM-HEP/01-02\
31 October 2001
Introduction
============
The field of precise experimental tests of the electroweak sector of the Standard Model encompasses a wide range of experiments. The current status of these is reviewed in this report, with emphasis placed on new developments in the year preceding summer 2001. A theme common to many measurements is that theoretical and experimental uncertainties are comparable. The theoretical uncertainties, usually coming from the lack of higher-order calculations, can be at least as hard to estimate reliably as the experimental errors.
At low energies, new hadronic cross-section results in electron-positron collisions are discussed. The new measurement of the muon anomalous magnetic moment at Brookhaven is reported and compared with recent Standard Model calculations. Results from the now complete LEP data sample are reviewed, together with recent results from the Tevatron, HERA and SLD. The synthesis of many of these results into a global test of the Standard Model via a comprehensive fit is summarised. Finally, prospects for the next few years are considered.
Many results presented here are preliminary: they are not labelled explicitly for lack of space. References should be consulted for details.
R and $\mathbf{\alpha(M_Z^2)}$
==============================
The BES-II detector at the BEPC electron-positron collider in Beijing, China, has been operating since 1997. Many measurements have been made in the centre-of-mass energy range $2<\sqrt{s}<5$ GeV, but of relevance to electroweak physics are those of the ratio $$R = \frac{\sigma(\Mepem\to\mathrm{hadrons})}{\sigma_0(\Mepem\to\Mmpmm)}$$ where the denominator, $\sigma_0(\Mepem\to\Mmpmm)=4\pi\alpha^2(0)/(3s)$, is the lowest-order QED prediction. The BES measurements [@bib:besr] of R are presented in Figure \[fig:besr\], where the improvement in quality over previous, often very early, measurements is clear. Around 1000 hadronic events are used at each energy, and an average precision of 6.6% is obtained at each of the 85 energy points. The point-to-point correlated error is estimated to be 3.3%, providing a factor of 2 to 3 improvement over earlier measurements.
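The normalising denominator in the definition of R, the lowest-order QED muon-pair cross-section, is easy to evaluate numerically. A minimal sketch (the constants are standard values, not taken from this report):

```python
import math

ALPHA0 = 1 / 137.03599976    # fine-structure constant at q^2 = 0
HBARC2_NB = 0.3893793721e6   # (hbar*c)^2 in nb * GeV^2, for unit conversion

def sigma0_mumu_nb(sqrt_s):
    """Lowest-order QED cross-section sigma_0(e+e- -> mu+mu-)
    = 4*pi*alpha(0)^2 / (3*s), converted to nanobarns."""
    s = sqrt_s ** 2
    return 4 * math.pi * ALPHA0 ** 2 / (3 * s) * HBARC2_NB

# At sqrt(s) = 3 GeV the point cross-section is about 9.7 nb, so R ~ 2
# there corresponds to roughly 20 nb of hadron production.
print(sigma0_mumu_nb(3.0))
```

The product $\sigma_0\cdot s$ is a constant, about 86.9 nb GeV$^2$, which is a convenient cross-check of the conversion factor.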
In order to achieve such an improvement, detailed studies of the detector acceptance for hadronic events at low $\sqrt{s}$ were made, in collaboration with the Lund Monte Carlo team. The experimental acceptance for hadronic events varies in the range 50 to 87% from 2 to 4.8 GeV respectively, so the modelling at low $\sqrt{s}$ is of most concern. Good descriptions of the hadronic event data were obtained from a tuned version of the [LUARLW]{} generator, and the hadronic model-dependent uncertainty is estimated to be as low as 2-3%.
At even lower energies, analysis continues of the large data sample from CMD-2 [@bib:cmd2] at the VEPP-2M collider at Novosibirsk taken over $0.36<\sqrt{s}<1.4$ GeV. Many exclusive final-states are studied, with the main contribution to the overall cross-section arising from $\pi^+\pi^-$ production.
A key application of the low energy R measurements is in the prediction of the value of the electromagnetic coupling at the Z mass scale. This is modified from its zero-momentum value, $\alpha(0)=1/137.03599976(50)$, by vacuum polarisation loop corrections: $$\alpha(M_Z^2)=\frac{\alpha(0)}{1-\Delta\alpha_{e\mu\tau}(M_Z^2)-
\dahad(M_Z^2)-\Delta\alpha_{top}(M_Z^2)} .$$ The contributions from leptonic and top quark loops ($\Delta\alpha_{e\mu\tau}$ and $\Delta\alpha_{top}$, respectively) are sufficiently well calculated knowing only the particle masses. The $\dahad$ term contains low-energy hadronic loops, and must be calculated via a dispersion integral: $$\dahad(M_Z^2) = - \frac{\alpha
M_Z^2}{3\pi} \Re\int_{4m_{\pi}^2}^\infty ds
\frac{R(s)}{s(s-M_Z^2-i\epsilon)} .$$ The R data points must, at least, be interpolated to evaluate this integral. More sophisticated methods are employed by different authors, and use may also be made of $\tau$ decay spectral function data via isospin symmetry. A recent calculation [@bib:pietrzyk] using minimal assumptions has obtained $\dahad(M_Z^2)=0.02761\pm0.00036$, approximately a factor two more precise than a previous similar estimate which did not use the new BES-II data. With extra theory-driven assumptions, an error as low as $\pm0.00020$ may be obtained [@bib:martin].
Prospects for further improvements in measurements of the hadronic cross-section at low energies are good: an upgraded accelerator in Beijing should give substantially increased luminosity; CLEO proposes to run at lower centre-of-mass energies than before to examine the region from 3 to 5 GeV; DA$\Phi$NE may be able to access the low energy range with radiative events; and finally the concept of a very low energy ring to work together with the present PEP-II LER could give access to the poorly covered region between 1.4 and 2 GeV.
The Muon Anomalous Magnetic Moment g-2
======================================
The Brookhaven E821 experiment has recently reported [@bib:e821] a new measurement of the muon anomalous magnetic moment, $a_{\mu}$, by measuring the spin-precession frequency, $\Mwa$, of polarised muons in a magnetic field: $$a_{\mu} \equiv \frac{g-2}{2} = \frac{\Mwa m_{\mu}c}{e\langle B\rangle}$$ The muons circulate in a special-purpose storage ring constructed to have an extremely uniform magnetic field across its aperture. The spin-precession frequency $\Mwa$ is measured by observing the time variation of production of decay electrons above a fixed energy cut-off (2 GeV), as shown in Figure \[fig:e821\]. The mean bending field is measured using two sets of NMR probes: one fixed set mounted around the ring and used for continuous monitoring, and another set placed on a trolley which can be pulled right around the evacuated beam chamber. In practice, the magnetic field is re-expressed in terms of the mean proton NMR frequency, $\omega_p$, and $a_{\mu}$ extracted from: $$a_{\mu} = \frac{R}{\lambda-R}$$ where $R=\omega_a/\omega_p$ and $\lambda$ is the ratio of muon to proton magnetic moments.
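The extraction of $a_\mu$ from the measured frequency ratio is a one-line relation, invertible for cross-checks. A sketch, where the muon-to-proton magnetic moment ratio $\lambda$ is an assumed external (CODATA-style) input, not a number quoted in this report:

```python
LAMBDA = 3.18334539   # lambda = mu_mu / mu_p (assumed external input)

def a_mu_from_ratio(R):
    """a_mu = R / (lambda - R), with R = omega_a / omega_p."""
    return R / (LAMBDA - R)

def ratio_from_a_mu(a):
    """Inverse relation, R = lambda * a / (1 + a), useful for cross-checking."""
    return LAMBDA * a / (1 + a)

R = ratio_from_a_mu(11_659_202e-10)   # frequency ratio implied by the E821 central value
print(a_mu_from_ratio(R))             # recovers a_mu = 0.0011659202
```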
The latest E821 result, obtained using $0.95\times10^9$ $\mu^+$ decays is [@bib:e821]: $$a_{\mu^+} = (11\,659\,202\pm14\pm6)\times 10^{-10}$$ The overall relative precision obtained is 1.3 parts per million: 1.2 ppm from statistics and 0.5 ppm from systematic errors. Data from a further $4\times10^9$ $\mu^+$ and $3\times10^9$ $\mu^-$ are in hand, and should result in a factor two improvement in the near future.
Interpretation of this result in terms of the Standard Model and possible new physics requires detailed calculations of loop corrections to the simple QED $\mu\mu\gamma$ vertex, which gives the original $g=2$ at lowest order. The corrections may be subdivided into electromagnetic (QED), weak and hadronic parts according to the type of loops. The QED and weak terms are respectively calculated to be $a_{\mu}(QED)
= (11\, 657\,470.57 \pm 0.29) \times 10^{-10}$, and $a_{\mu}(weak)
= (15.2 \pm 0.4) \times 10^{-10}$. The hadronic corrections, although much smaller than the QED correction, provide the main source of uncertainty on the predicted $a_{\mu}$. To $\mathcal{O}(\alpha^3)$, the dominant corrections may be subdivided into the lowest and higher-order vacuum polarisation terms and higher-order “light-by-light” terms. The lowest-order (vacuum polarisation) term is numerically much the largest. It can be calculated using a dispersion relation: $$a_{\mu}(had;LO) = \frac{\alpha^2(0)}{3\pi^2}\int_{4m_{\pi}^2}^{\infty}
ds \frac{R(s)\hat{K}(s)}{s^2}$$ where $\hat{K}(s)$ is a known bounded function. As for $\alpha(M_Z^2)$, optional additional theory-driven assumptions may be made. Recent estimates of the lowest-order vacuum polarisation term are shown in Table \[tab:lovp\]. There is some ambiguity at the level of $\sim$5$\times10^{-10}$ about the treatment of further photon radiation in some of these calculations, as it may be included either here or as a higher-order correction, depending also on whether the input experimental data includes final-states with extra photons. The estimates agree with each other within the overall errors, which is not surprising since the data employed is mostly in common. It is notable that the best value available at the time of the E821 publication was that of Davier and Höcker (“DH(98/2)”), which is numerically the lowest of the calculations.
The summed corrections are shown in figure \[fig:gminus2\] and compared with the new and previous measurements [@bib:oldgm2]. The major experimental improvement from the new E821 measurement is striking. At the time of publication, the most precise available calculation of $a_{\mu}$ led to a difference between data and theory of around 2.6 standard deviations [@bib:e821]. More recent calculations reduce that difference, in some cases to the one standard deviation level, thus also suggesting that the error on the prediction may have been too optimistic. At present there is therefore no reason to consider $a_{\mu}$ as giving evidence of physics beyond the Standard Model. The accuracy of the theoretical predictions will be even more severely challenged by an experimental measurement with a factor two smaller error, as expected in the near future. Theoretical progress is essential to obtain a maximum physics return from such a precise measurement.
Recent News from the Z$^{\mathbf{0}}$ Pole {#sec:asy}
==========================================
Measurements of the cross-section, width, and asymmetries have been available for many years from LEP and SLD data, and most results have now been finalised [@bib:lepls; @bib:alr]. Recently new results [@bib:afbbnew] have become available on the b quark forward-backward asymmetry ($\afbb$) from ALEPH and DELPHI, using inclusive lifetime-based b-tagging techniques and various quark charge indicators. Substantial improvements are obtained over earlier lifetime-tag measurements, so that this type of asymmetry measurement now has a comparable precision to that using a traditional lepton tag. The lepton and lifetime results are compatible, and together give a LEP average Z pole asymmetry of $$A_{\mathrm{FB}}^{\mathrm{0,b}} = 0.0990 \pm 0.0017 .$$
This result may be compared with other asymmetry measurements from LEP and SLD by interpreting $\afbb$ in terms of $\sintwl$. In doing this, it is effectively assumed that the b quark couplings are given by their Standard Model values. The result is shown in figure \[fig:sstw\], comparing to $\sintwl$ values derived from the leptonic forward-backward asymmetry from LEP ($A_{\mathrm{fb}}^{\mathrm{0,l}}$) [@bib:lepls]; that from the $\tau$ polarisation measurements ($P_{\tau}$) [@bib:ptau]; from the left-right polarisation asymmetry at SLD [@bib:alr]; from the charm forward-backward asymmetry [@bib:afbc]; and from inclusive hadronic event forward-backward asymmetry measurements ($Q_{\mathrm{fb}}$) [@bib:qfb]. The two most precise determinations of $\sintwl$, from $A_{\mathrm{LR}}$ and $\afbb$, differ at the level of 3.2 standard deviations. This might suggest that the b quark couplings to the Z differ from the Standard Model expectations, but such an interpretation is not compelling at present, and direct measurements via the left-right polarised forward-backward b quark asymmetry at SLD are not precise enough to help. Future improvements in b quark asymmetry measurements using the existing LEP data samples may help elucidate this issue, but scope for such improvement is limited.
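The size of such a discrepancy, in standard deviations, is just the difference divided by the errors added in quadrature, assuming the two determinations are uncorrelated. A sketch with illustrative $\sintwl$ inputs of roughly the reported size and precision (the exact numbers are in the cited references, not reproduced here):

```python
import math

def n_sigma(x1, e1, x2, e2):
    """Discrepancy between two uncorrelated measurements, in standard deviations."""
    return abs(x1 - x2) / math.hypot(e1, e2)

# Illustrative values of about the right size for the A_LR and AFB^b
# determinations of sin^2(theta_eff):
print(n_sigma(0.23098, 0.00026, 0.23226, 0.00029))   # ~3.3 sigma
```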
LEP-2 and Fermion-Pair Production
=================================
With the completion of LEP-2 data-taking at the end of 2000, the integrated luminosity collected at energies of 161 GeV and above has reached 700 pb$^{-1}$ per experiment, in total giving each experiment about 1 fb$^{-1}$ from the entire LEP programme. Following on from the measurements of the LEP-1 Z lineshape and forward-backward asymmetries, studies of fermion-pair production have continued at LEP-2. At these higher energies, fermion-pair events may be subdivided into those where the pair invariant mass has “radiatively returned” to the Z region or below, and non-radiative events with close to the full centre-of-mass energy. The cross-sections and forward-backward asymmetries for non-radiative events at the full range of LEP-2 energies are shown in Figures \[fig:fflep1\] and \[fig:fflep2\] for hadronic, muon and tau pair final states, averaged between all four LEP experiments [@bib:ffmeas]. Analogous measurements have been made for electrons, b and c quarks [@bib:ffmeas]. The Standard Model expectations describe the data well. Limits can be placed on new physics from these data [@bib:ffmeas]. As an example, limits may be placed on new Z$^\prime$ bosons which do not mix with the Z, as indicated in Table \[tab:fflim\].
Z’s and W’s at Hadron Colliders
=====================================
Electroweak fermion-pair production has also been studied at the Tevatron, in the Drell-Yan process. Updated results on high mass electron pairs were presented at this conference [@bib:cdfdy; @bib:gerber]: both cross-sections and asymmetries are well described by the Standard Model expectations, and extend beyond the LEP-2 mass reach to around 500 GeV (see Figure \[fig:drellyan\]). As indicated in the figure, there is some sensitivity to new physics models, and improvements on that of LEP should come with the Run 2 data.
W production in p$\overline{\mathrm{p}}$ collisions provided, before LEP-2, the only direct measurements of the W mass, using reconstructed electron and muon momenta and inferred missing momentum information. The main results from CDF and D0 from Run 1 data have been available for some time [@bib:mwtev]. D0 have recently updated their Run 1 results with a new analysis making use of electrons close to calorimeter cell edges [@bib:gerber]. The main importance of the extra data is to allow a better calorimeter calibration from Z events. Measurements of the W mass from the Tevatron are summarised in Table \[tab:mwhad\].
The high tail of the distribution of the transverse mass of the lepton-missing momentum system provides information about the W width. CDF finalised their Run 1 result ($\Gamma_{\mathrm{W}}=$ 2.05$\pm0.13$ GeV) [@bib:gwcdf] some time ago. D0 presented a new measurement using all the Run 1 data, of $\Gamma_{\mathrm{W}}=$ 2.23$\pm0.17$ GeV, at this conference [@bib:gerber].
The presence of the W and Z bosons is primarily probed at HERA via t-channel exchange. The charged and neutral current differential cross-sections as a function of $Q^2$ are shown in Figures \[fig:heracc\] and \[fig:heranc\] respectively. The charged current process proceeds only by W exchange, and is sensitive to the W mass via the propagator term (and also, indirectly, via the overall normalisation). The effect of Z exchange can be seen in the high-$Q^2$ neutral current region where it gives rise to a difference between the e$^-$p and e$^+$p cross-sections.
Real W production may also have been observed at HERA, by looking for events with high transverse momentum electrons or muons, missing transverse momentum, and a recoiling hadronic system. For transverse momenta of the recoiling hadronic system above 40 GeV, H1 and ZEUS together observe 6 events compared to an expectation of 2.0$\pm$0.3, which is 90% composed of W production and decay [@bib:whera]. These events have been interpreted as possible evidence of new physics, but within the framework of the Standard Model their natural interpretation is as W production.
W Physics at LEP-2
==================
Each LEP experiment now has a sample of around 12000 W-pair events from the full LEP-2 data sample. Event selections are well established, and have needed only minor optimisations for the highest energy data. Typical selection performances give efficiencies and purities in the 80-90% range for almost all channels – channels with $\tau$ decays being the most challenging. The measured W-pair cross-section [@bib:sigww] is shown in Figure \[fig:sigww\], and compared to the predictions of the RacoonWW [@bib:racoon] and YFSWW [@bib:yfsww] Monte Carlo programs. These programs incorporate full $\Oalpha$ corrections to the doubly-resonant W-pair production diagrams, and give a cross-section approximately 2% lower than earlier predictions. The agreement can be tested by comparing the experimental and predicted cross-sections as a function of centre-of-mass energy. The new calculations describe the normalisation of the data well, the old ones over-estimate it by between two and three standard deviations of the experimental error [@bib:sigww].
The selected W-pair events are also used to measure the W decay branching ratios. The combined LEP results [@bib:sigww] are shown in Table \[tab:wbr\]. The leptonic results are consistent with lepton universality, and so are combined to measure the average leptonic branching ratio, corrected to massless charged leptons. This measurement now has a better than 1% relative error, and is consistent with the Standard Model expectation of 10.83%. It is significantly more precise than a value extracted from the Tevatron W and Z cross-section data, assuming Standard Model production of W’s, which is Br(W$\to\ell\nu)=10.43\pm0.25$% [@bib:wbrtev].
The W mass and width are measured above the W-pair threshold at LEP-2 by direct reconstruction of the W decay products [@bib:mwlepex], using measured lepton momenta and jet momenta and energies. Events with two hadronically decaying W’s (“$\Wqqqq$”), or where one W decays hadronically and the other leptonically (“$\Wqqlv$”), are used by all experiments. A kinematic fit is made to the reconstructed event quantities, constraining the total energy and momentum to be that of the colliding beam particles, thus reconstructing the unobserved neutrino in mixed hadronic-leptonic decay events. This fit significantly improves the resolution on the W mass. The reconstructed mass distributions can be fitted to obtain the W mass, or the W mass and width together. Other, more complicated, techniques to extract the most W mass information from the fitted events are used by some experiments. ALEPH and OPAL also use the small amount of information contained in $\Wlvlv$ events, which has been included in the $\Wqqlv$ results quoted.
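The gain from the kinematic fit can be illustrated with a much-simplified toy: if the measured jet energies carry a common mis-measurement, rescaling the di-jet system to the beam energy restores the invariant mass. This is only a caricature of the constrained fit actually used; the jets and numbers below are illustrative:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors p = (E, px, py, pz)."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(e**2 - px**2 - py**2 - pz**2)

def rescaled_mass(p1, p2, e_beam):
    """Toy 'kinematic fit': scale the di-jet system so its energy equals the
    beam energy, cancelling a common jet-energy mis-measurement."""
    return invariant_mass(p1, p2) * e_beam / (p1[0] + p2[0])

# Two massless 'jets' whose true energies sum to 90 GeV with a true pair
# mass of 80 GeV, but both measured 10% low:
j1 = (36.0, 36.0, 0.0, 0.0)
j2 = (45.0, -27.0, 36.0, 0.0)
print(invariant_mass(j1, j2))        # 72 GeV raw
print(rescaled_mass(j1, j2, 90.0))   # 80 GeV after the energy constraint
```

The real fit constrains the full event energy and momentum and handles the neutrino in $\Wqqlv$ events, but the resolution gain comes from the same mechanism: the beam energy is known far better than the jet energies.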
After the kinematic fit, the W mass statistical sensitivity is very similar for the two event types. The systematic error sources are largely different between the two channels: the main correlated systematics come from the knowledge of the LEP beam energy, and hadronisation modelling. The W mass measurements obtained by the four LEP experiments, and averaged by channel, are shown in table \[tab:lepmw\]. There is good consistency between all the measurements, and the overall precision [@bib:mwlep] now improves significantly on the 60 MeV from hadron colliders. If the W width is also fitted, the W mass measurement is essentially unchanged, and a LEP combined value of $\Gamma_{\mathrm{W}}=2.150\pm0.091$ GeV is found.
The 39 MeV error on the combined LEP result includes 26 MeV statistical and 30 MeV systematic contributions. Systematic errors are larger in the $\Wqqqq$ channel (see Table \[tab:lepmw\]), having the effect of deweighting that channel, to just 27%, in the average. With no systematic errors this deweighting would not occur, and the statistical error would be 22 MeV. The main systematic errors on the combined result are as follows [@bib:mwlep]: The LEP beam energy measurement contributes a highly correlated 17 MeV to all channels; hadronisation modelling uncertainties contribute another 17 MeV; “final-state interactions” (FSI) between the hadronic decay products of two W’s contribute 13 MeV; detector-related uncertainties – different for the different experiments – contribute 10 MeV; and uncertainties on photonic corrections contribute 8 MeV. The main improvements that are expected before the results are finalised lie in the areas of the LEP beam energy, where a concerted programme is in progress to reduce the error, and the final-state interactions.
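The channel deweighting described here follows directly from a covariance-weighted ("BLUE") average: a systematic shared between channels, plus a larger uncorrelated error in one channel, pushes weight onto the cleaner channel. A generic two-measurement sketch; the central values and error sizes below are illustrative, not the actual LEP inputs:

```python
import numpy as np

def blue_combine(values, cov):
    """Best linear unbiased estimate: weights w = C^-1 1 / (1^T C^-1 1)."""
    values = np.asarray(values, dtype=float)
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    ones = np.ones(len(values))
    w = cinv @ ones / (ones @ cinv @ ones)
    return w @ values, (ones @ cinv @ ones) ** -0.5, w

# Illustrative: a qqlv-like channel with 40 MeV total error, a qqqq-like
# channel with larger FSI-driven systematics, and a ~20 MeV correlated
# piece (beam energy, hadronisation) shared between them.
cov = [[0.040**2, 0.020**2],
       [0.020**2, 0.070**2]]
mean, err, w = blue_combine([80.448, 80.420], cov)
print(w)   # most of the weight goes to the first (cleaner) channel
```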
The basic physical problem which gives rise to the uncertainty over final-state interactions is that when two W’s in the same event both decay hadronically, the decay distance is smaller than typical hadronisation scales. The hadronisation of the two systems may therefore not be independent, and so hadronisation models tuned to Z$\to\mathrm{q}\overline{\mathrm{q}}$ decays may not properly describe them. Phenomenological models are used to study possible effects, subdividing them into “colour reconnection” in the parton-shower phase of the Monte Carlo models, and possible Bose-Einstein correlations between identical particles formed in the hadronisation process.
A substantial effort has been spent in understanding the possible effects of FSI models. Recent work, in a collaborative effort between all four LEP experiments, has focused on determining the common sensitivity to different models between different experiments, and on developing ways to measure visible effects predicted by the models.
Sensitivity to the effect of colour reconnection models has been obtained by studying the particle flow between jets in $\Wqqqq$ events [@bib:cr]. This is illustrated in Figure \[fig:cr\]. The data show some sensitivity to the effects as predicted in the colour reconnection models, and work continues to combine results from the four LEP experiments to improve the sensitivity.
Bose-Einstein correlations are also being studied in data [@bib:bec], in this case by comparing the two-particle correlation functions, $\rho$, for single hadronically decaying W’s in $\Wqqlv$ events ($\rho^\mathrm{W}$), and for $\Wqqqq$ events ($\rho^{\mathrm{W}\mathrm{W}}$). This may be expressed as [@bib:chekanov]: $$\rho^{\mathrm{W}\mathrm{W}}(Q) = 2 \rho^{\mathrm{W}}(Q) +
\rho_{mix}^{\mathrm{W}\mathrm{W}}(Q) + \Delta\rho(Q)$$ where $\rho_{mix}^{\mathrm{W}\mathrm{W}}$ is evaluated from mixing hadronic W decays from $\Wqqlv$ decays, and $\Delta\rho$ is any extra part arising from correlations between particles from different W decays in $\Wqqqq$ events. Alternatively the ratio $D(Q)$ may be examined: $$D(Q) \equiv \frac{\rho^{\mathrm{W}\mathrm{W}}(Q)}{2 \rho^{\mathrm{W}}(Q) +
\rho_{mix}^{\mathrm{W}\mathrm{W}}(Q)} .$$ An observed $D(Q)$ distribution is shown in Figure \[fig:bec\]: a deviation from unity at low $Q$ would most clearly signal the effect of Bose-Einstein correlations between particles from different W’s. As illustrated in this figure, no evidence is observed of such an effect. As for colour reconnection, work is in progress to derive combined LEP results in order better to constrain the possible effect on the W mass measurement.
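The $D(Q)$ estimator is simple to form from binned two-particle correlation densities. A sketch with placeholder histogram arrays (the binning and input values are hypothetical):

```python
import numpy as np

def d_ratio(rho_ww, rho_w, rho_mix):
    """D(Q) = rho^WW / (2*rho^W + rho_mix), bin by bin in Q.
    D(Q) = 1 everywhere means no inter-W correlations (Delta-rho = 0)."""
    rho_ww, rho_w, rho_mix = map(np.asarray, (rho_ww, rho_w, rho_mix))
    return rho_ww / (2.0 * rho_w + rho_mix)

# If rho^WW is built exactly as 2*rho^W + rho_mix (no extra correlations
# between particles from different W's), D(Q) is identically 1 in every bin:
rho_w = np.array([5.0, 4.0, 3.0, 2.0])
rho_mix = np.array([2.0, 1.5, 1.0, 0.5])
print(d_ratio(2 * rho_w + rho_mix, rho_w, rho_mix))
```

A Bose-Einstein enhancement between particles from different W's would show up as $D(Q)>1$ in the lowest-$Q$ bins.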
When the LEP measurement of $\mw$, given in Table \[tab:lepmw\], is combined with that from hadron colliders as given in Table \[tab:mwhad\], a world average W mass of $80.451\pm0.033$ GeV is obtained. A similar combination of W width results gives $\Gamma_{\mathrm{W}}=2.134\pm0.069$ GeV.
Tests of the Gauge Couplings of Vector Bosons
=============================================
The gauge group of the Standard Model dictates the self-couplings of the vector bosons, both in form and strength. The direct measurement of these couplings therefore provides a fundamental test of the Standard Model gauge structure. Electroweak gauge couplings have been measured directly at both LEP and the Tevatron: at present constraints from LEP are more stringent.
W-pair production at LEP-2 involves the triple gauge coupling vertex in two of the three lowest-order doubly-resonant diagrams. Sensitivity to possible anomalous couplings is found in the W-pair cross-section, and the W production and decay angle distributions. Measurements have been reported at previous conferences [@bib:cctgcosaka], but no combined LEP results have been released recently because [@bib:racoon; @bib:kandy] higher-order corrections, previously neglected, are thought to be comparable to the current experimental precision [@bib:villa].
Other measurements of triple gauge boson couplings are made at LEP-2 [@bib:nctgc] in the neutral vector boson processes of Z$\gamma$ and ZZ production. The cross-section measured for the latter process is shown in Figure \[fig:sigzz\] and is well-described by Standard Model predictions. Measurements of quartic gauge couplings have also been made at LEP-2, and were discussed in detail in other contributions to this conference [@bib:qgchere].
Global Electroweak Tests
========================
Many of the individual results reported in preceding sections may be used together to provide a global test of consistency with the Standard Model. If consistency with the model is observed, it is justifiable to go on to deduce, in the framework of the Standard Model, the unknown remaining parameter, the mass of the Higgs boson, $\mh$. The LEP electroweak working group has, for a number of years, carried out such global tests via a combined fit to a large number of measurements sensitive to Standard Model parameters. These results are reported here for the data available at this conference. These global fits use the electroweak libraries ZFITTER version 6.36 [@bib:zfitter] and TOPAZ0 version 4.4 [@bib:topaz0] to provide the Standard Model predictions. Theoretical uncertainties are included following detailed studies of missing higher order electroweak corrections and their interplay with QCD corrections [@bib:precew]. The precise LEP, SLD and Tevatron electroweak data are included, as are $\sin^2\theta_W$ as measured in neutrino-nucleon (“$\nu$N”) scattering[^1] [@bib:nuN] and, new this year, atomic parity violation (“APV”) measurements in caesium [@bib:apv].
Before making the full fit, the precise electroweak data from LEP and SLD can be used together with $\alpha{(M_{\mathrm{Z}}^2)}$, the $\nu$N and APV results to predict the masses of the top quark, $\mtop$, and of the W, $\mw$. The result obtained is shown in Figure \[fig:mtopmw\] by the solid (red) contour. Also shown are the direct measurements (dotted/green contour) of $\mtop=174.3\pm5.1$ GeV from the Tevatron [@bib:mtop] and $\mw=80.451\pm0.033$ GeV obtained by combining LEP and p$\overline{\mathrm{p}}$ results; and the expected relationship between $m_{\mathrm{W}}$ and $m_{\mathrm{top}}$ in the Standard Model for different $\mh$ (shaded/yellow). It can be seen that the precise input data predict values of $\mtop$ and $\mw$ consistent with those observed – in both cases within two standard deviations – demonstrating that the electroweak corrections can correctly predict the mass of heavy particles. For the W, the precision of the prediction via the Standard Model fit is similar to that of the direct measurement. For the top mass, the measurement is twice as precise as the prediction. It is observed in addition that both the precise input data and the direct $\mw$/$\mtop$ measurements favour a light Higgs boson rather than a heavy one.
Going further, the full fit is made including also the $\mtop$ and $\mw$ measurements. The overall $\chi^2$ of the fit is 22.9 for 15 degrees of freedom, corresponding to an 8.6% probability. To provide an impression of the contributions to this $\chi^2$, the best-fit value of each input datum is compared with the actual measurement, and the pull calculated as the difference between observation and best-fit divided by the measurement error. The results are shown in Figure \[fig:pulls\]. The poorest description is of $\afbb$, which is a reflection of the same disagreement discussed earlier in Section \[sec:asy\]. The best fit value of the Higgs mass is $\mh=88_{-35}^{+53}$ GeV, where the error is asymmetric because the leading corrections depend on $\log\mh$. The variation above the minimum value of the $\chi^2$ as a function of the mass of the Higgs boson, $m_{\mathrm{H}}$, is shown in Figure \[fig:blueband\]. The darker shaded/blue band enclosing the $\chi^2$ curve provides an estimate of the theoretical uncertainty on the shape of the curve. This band is a little broader than previously estimated because of the inclusion of a new higher-order (fermionic two-loop) calculation of $\mw$ [@bib:weiglein]. This has little effect via $\mw$ but does have an impact via $\sintwl=\kappa_W(1-\mw^2/\mz^2)$. This latter effect is controversial, and may well overestimate the true theoretical uncertainty, but it is currently included as equivalent two-loop calculations for Z widths and the effective mixing angle are not available. The $\chi^2$ curve may be used to derive a constraint on the Standard Model Higgs boson mass, namely $m_{\mathrm{H}} < 196$ GeV at 95% C.L. Also shown in the Figure is the effect of using an alternative theory-driven estimate of the hadronic corrections to $\dahad(M_{\mathrm{Z}}^2)$ [@bib:martin] (dashed curve). The effect on the $\mh$ prediction is sizable compared to the theoretical uncertainty: for example, the 95% C.L. upper limit on $\mh$ moves to 222 GeV with this $\dahad$ estimate.
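Because the leading corrections depend on $\log\mh$, the $\chi^2$ curve is roughly parabolic in $\log\mh$, and the quoted limit can be approximately reproduced from the fit result alone. A back-of-envelope sketch; the parabolic approximation and the one-sided 95% C.L. convention $\Delta\chi^2=2.71$ are assumptions here, whereas the LEP group uses the full curve:

```python
import math

def upper_limit_log_parabola(mh_best, err_up, delta_chi2=2.71):
    """Approximate chi^2(mH) as a parabola in log(mH), with curvature set
    by the +1-sigma point, then solve Delta-chi^2 = 2.71 (one-sided 95%)."""
    sigma_log = math.log((mh_best + err_up) / mh_best)
    return mh_best * math.exp(sigma_log * math.sqrt(delta_chi2))

# mh = 88 +53 GeV from the fit gives a limit near the quoted 196 GeV:
print(upper_limit_log_parabola(88.0, 53.0))
```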
A Forward Look, and Conclusions
===============================
The eleven years of data-taking by the LEP experiments, plus the contributions of SLD, have established that Standard Model radiative corrections describe precision electroweak measurements. Data analysis is close to complete on the LEP-1 data, taken from 1989-1995. Work continues to finish LEP-2 analyses, and final results can be expected over the next couple of years. Improvements can still be expected in the W mass measurement, from better understanding of final-state interaction effects in particular, and in gauge-coupling measurements where the full data sample is not yet included.
At the Tevatron, Run 2 data-taking has recently begun. Although luminosities are so far low, the expectation remains of accumulating 2 fb$^{-1}$ in the next couple of years, which should allow a W mass measurement with 30 MeV precision from each experiment [@bib:tevprospects], and a top mass measured to $\pm$3 GeV. Combining the former result with the final $\mw$ results from LEP-2 should provide a world average W mass measurement error close to 20 MeV. The effect such improvements could have, for example on the global fit $\Delta\chi^2$ as a function of $\mh$, are shown in Figure \[fig:forward\] (the central value of $\mh$ employed for the future is, of course, arbitrarily selected).
Further substantial improvements in precision will have to wait for the LHC and a future linear collider. The LHC should improve the W and top mass precisions by a further factor two. The main improvement would, of course, come from a discovery of the Higgs boson, and a direct indication of whether it is the simplest Standard Model particle.
In summary, precise tests of the electroweak sector of the Standard Model have been made by a wide range of experiments, from the g-2 measurement in muon decays to LEP and the Tevatron. Many of these tests have a high sensitivity to radiative corrections, and the radiative correction structure is now rather well-established. Two and three-loop calculations are essential in making sufficiently precise predictions for some processes, and more progress is still needed. A small number of measurements, for example the measurement of $\sintwl$ from the b forward-backward asymmetry at LEP, show two or three standard deviation differences from expectation which might point to possible cracks in the Standard Model description, but none are compelling at present. Further improvements in the quality of tests will arrive slowly over the next few years: in particular further elucidation of the electroweak symmetry-breaking mechanism will likely have to await an improved discovery reach for a Higgs boson.
Acknowledgments {#acknowledgments .unnumbered}
===============
The preparation of this talk was greatly eased by the work of the LEP electroweak working group, and cross-LEP working groups on the W mass, gauge coupling and fermion-pair measurements. In particular, I thank Martin Grünewald for his unstinting help, and Chris Hawkes for comments on this manuscript. I also benefitted from the assistance of P. Antilogus, E. Barberio, A. Bodek, D. Cavalli, G. Chiarelli, G. Cvetic, Y.S. Chung, M. Elsing, C. Gerber, F. Gianotti, R. Hawkings, G.S. Hi, J. Holt, F. Jegerlehner, M. Kuze, I. Logashenko, K. Long, W. Menges, K. Mönig, A. Moutoussi, C. Parkes, B. Pietrzyk, R. Tenchini, J. Timmermans, A. Valassi, W. Venus, H. Voss, P. Wells, F. Yndurain and Z.G. Zhao.
[99]{} J.Z.Bai , BES Collaboration, ; J.C.Chen, these proceedings. See, for example, R.R.Akhmetshin , CMD-2 Collaboration, . H.Burkhardt and B.Pietrzyk, [preprint LAPP-EXP 2001-03](http://wwwlapp.in2p3.fr/preplapp/LAPP_EX2001_03.pdf), to appear in Physics Letters. A.D.Martin, J.Outhwaite and M.G.Ryskin, . H.N.Brown , Muon g-2 Collaboration, ; I.Logashenko, these proceedings. D.H.Brown and W.A.Worstell, . R.Alemany, M.Davier and A.Höcker, . M.Davier and A.Höcker, . M.Davier and A.Höcker, . S.Narison, . F.Jegerlehner, . J.F.de Troconiz and F.J.Yndurain, . G.Cvetic, T.Lee and I.Schmidt, . J.Bailey ., ;\
R.M.Carey , Muon g-2 Collaboration, ; H.N.Brown , Muon g-2 Collaboration, . R.Barate , ALEPH Collaboration, ;\
P.Abreu , DELPHI Collaboration, ;\
M.Acciarri , L3 Collaboration, ;\
G.Abbiendi , OPAL Collaboration, ;\
M.Paganoni, these proceedings. K.Abe , SLD Collaboration, ; K.Abe , SLD Collaboration, ; V.Serbo, these proceedings. ALEPH Collaboration, ALEPH 2001-026 CONF 2001-020;\
DELPHI Collaboration, DELPHI 2001-027 CONF 468;\
P.Hansen, these proceedings. D.Buskulic , ALEPH Collaboration, ; ALEPH Collaboration, ALEPH 96-097 CONF 98-037;\
P.Abreu , DELPHI Collaboration, ;\
M.Acciarri , L3 Collaboration, ;\
G.Abbiendi , OPAL Collaboration, ;\
M.Casado, these proceedings. See, for example, P.Hansen, these proceedings. D.Buskulic , ALEPH Collaboration, ;\
P.Abreu , DELPHI Collaboration, ;\
M.Acciarri , L3 Collaboration, ;\
P.D.Acton , OPAL Collaboration, . J.Holt, these proceedings; LEPEWWG f$\overline{\mathrm{f}}$ subgroup, [note LEP2FF/01-02](http://lepewwg.web.cern.ch/LEPEWWG/lep2/summer2001/summer2001.ps). T.Affolder , CDF Collaboration, . C.Gerber, these proceedings. T.Affolder , CDF Collaboration, ;\
S.Abachi , D0 Collaboration, . T.Affolder , CDF Collaboration, . C.Adloff , H1 Collaboration, ; H1 Collaboration, paper EPS-2001-787 submitted to this conference;\
ZEUS Collaboration, papers EPS-2001-631 and EPS-2001-633 submitted to this conference. C.Adloff , H1 Collaboration, ; H1 Collaboration, paper EPS-2001-787 submitted to this conference;\
ZEUS Collaboration, papers EPS-2001-630 and EPS-2001-632 submitted to this conference. H1 Collaboration, paper EPS-2001-802 submitted to this conference. The LEP Collaborations and the LEP WW Working Group, [note LEPEWWG/XSEC/2001-03](http://lepewwg.web.cern.ch/LEPEWWG/lepww/4f/Summer01/4f_s01_main.ps.gz); R. Chierici, these proceedings. A.Denner, S.Dittmaier, M.Roth and D.Wackeroth, ; M.Roth, these proceedings. S.Jadach , . S. Eno , note CDF/ANAL/ELECTROWEAK/CDFR/5139, D0note 3693. ALEPH Collaboration, ALEPH 2001-020 CONF 2001-017;\
DELPHI Collaboration, DELPHI 2001-103 CONF 531;\
L3 Collaboration, L3 Note 2637;\
OPAL Collaboration, OPAL Physics Notes PN422 and PN480;\
H.Ruiz, these proceedings. The LEP Collaborations and the LEP W Working Group, [note LEPEWWG/MASS/2001-02](http://lepewwg.web.cern.ch/LEPEWWG/lepww/mw/Summer01/mw_main.ps.gz); H.Ruiz, these proceedings. L3 Collaboration, L3 Note 2683. D.Duchesneau, these proceedings. DELPHI Collaboration, DELPHI 2001-060 CONF 488. O.Pooth, these proceedings. S.V.Chekanov, E.A.de Wolf and W.Kittel, . See, for example, S.Jezequel, in [*30th International Conference on High Energy Physics*]{}, Ed. by C.Lim and T.Yamanaka. S.Jadach , . S.Villa, these proceedings. The LEP Collaborations and the LEP WW Working Group, [note LEPEWWG/XSEC/2001-03](http://lepewwg.web.cern.ch/LEPEWWG/lepww/4f/Summer01/4f_s01_main.ps.gz); H.Rick, these proceedings. S.Jadach, W.Placzek and B.F.L.Ward, ;\
G.Passarino, in . A.Oh, these proceedings. F.Piccinini, these proceedings; M.Biglietti, these proceedings. D.Y.Bardin . . G.Montagna , . [CERN Yellow Report 95-03](http://www-spires.dur.ac.uk/cgi-bin/spiface/find/hep/www?rawcmd=find+rn+cern-95-03), eds. D.Bardin, W.Hollik and G.Passarino; D.Bardin, M.Grünewald and G.Passarino, . K.McFarland , CCFR/NuTeV Collaboration, ; K.McFarland for the NuTeV Collaboration, . G.P.Zeller , . C.S.Wood , ;\
S.C.Bennett and C.E.Wieman, ;\
A.Derevianko, ;\
M.G.Kozlov, S.G.Porsev and I.I.Tupitsyn,. L.Demortier , The Top Averaging Group for the CDF and D0 Collaborations, preprint FERMILAB-TM-2084. A.Freitas, W.Hollik, W.Walter, G.Weiglein, . See, for example, G.Chiarelli, these proceedings.
[^1]: A new $\nu$N scattering result was reported by the NuTeV Collaboration [@bib:newnutev] during the final stage of preparation of this contribution. The $\sin^2\theta_{\mathrm{W}}$ result obtained differs from the expected value by three standard deviations.
Sony Bravia KDL-40HX803 review
Summary
Our Score
8/10
User Score
Review Price £898.95
Sony’s first 3D TV is finally here, in the 40in shape of the KDL-40HX803. And to be honest, we’re not expecting very much. For whenever we’ve seen Sony 3D TVs in action at big shows, they just haven’t looked as good as those of some rivals. So let’s hope the Japanese brand has managed to cram in plenty of last minute improvements!
Rather surprisingly, the 40HX803 doesn’t wear Sony’s new and rather stylish Monolith design. Instead you get a straightforward but sleek black bezel for the top, right and left sides, with a slightly proud metallic strip along the bottom edge. The set still looks nice, though, for all its non-Monolithic approach.
It doesn’t do the 40HX803‘s aesthetic impact any harm, either, that it employs edge LED lighting to deliver a reasonably slender profile. Though it’s nothing like as slim as Samsung’s edge LED icons. What’s more, its edge LED system is a dynamic one, meaning that sections of the edge lighting can be independently controlled for a hopefully more impressive contrast performance than you usually get with a standard edge LED-lit LCD TV.
Slightly surprisingly for such a slim screen, Sony has left most of its connections facing straight out of the TV’s rear, rather than using the side access approach that would suit wall hanging. But at least the number and variety of these connections is pretty prodigious.
For instance, it has four HDMIs, all built to the v1.4 specification, so that they’re compatible with 3D sources. Also of note are a USB input, an Ethernet port, and a 3D Sync terminal, which we’ll look at in turn.
The USB can play music, video and photo files directly into the TV, but also allows you to add Wi-Fi to the 40HX803 via an optional USB dongle. It’s a touch disappointing that the 40HX803 doesn’t carry built-in Wi-Fi for its money, but it’s hardly alone in preferring the optional upgrade route.
The Ethernet socket, meanwhile, has three uses. First, it supports the set’s built-in Freeview HD tuner, to deliver potential future interactive services like the BBC iPlayer. Second, it provides a wired means of importing files stored on a DLNA PC. Finally, it allows you to take the TV online to experience Sony’s Bravia Internet Video platform, which we’ll return to in a minute.
But first we’ve got to discuss the 3D Sync terminal. This is there because the 40HX803 doesn’t have a built-in 3D transmitter, unlike the Samsung and Panasonic 3D TVs we’ve tested. In fact, the 40HX803 doesn’t have 3D facilities at all in its standard form. You have to add an optional extra transmitter and optional pairs of active shutter glasses, with the transmitter costing £50 and the glasses setting you back £99 per pair. This effectively makes the 40HX803 £1,887 if you want 3D with two pairs of glasses.
We do understand Sony’s idea with this, to be fair. For it helps keep the 40HX803’s up-front price down, allowing people to add 3D later as their finances allow. But there’s no getting round the fact that once you’ve 3Ded it up, the 40HX803 hits a similar price level to Samsung’s 40C8000 integrated 3D TV. In other words, 3D continues to be very much a premium technology.
Deposits in your Bank of Internet savings account are fully FDIC insured, so your money is absolutely safe when you invest your funds in a Bank of Internet account.
The Bank of Internet online savings account has no monthly maintenance fees, no minimum balance requirements, and no direct deposit requirements, so it's a great opportunity to earn a high interest rate with a free online bank account.
There is a $100 minimum opening deposit requirement, but once you open your account, you are not required to maintain a minimum balance thereafter to avoid fees or to earn the high APY.
The Bank of Internet High Yield Savings Account provides free online statements, and an ATM card is also available if needed.
You can also open this online savings account in conjunction with a free High Interest Checking Account from Bank of Internet for easy transfers between Bank of Internet accounts.
Check out our Bank of Internet Review for more details on Bank of Internet online banking services including money market accounts and CDs as well as home equity loans and home mortgage refinancing.
Then compare the Bank of Internet savings account with other High APY Online Bank Rates before opening this fee-free online bank account.
Open a High Yield Savings Account from Bank of Internet today to take advantage of the high interest rate with no fees for online banking.
Your inner Chimp can be your best friend or your worst enemy...this is the Chimp Paradox
Do you sabotage your own happiness and success? Are you struggling to make sense of yourself? Do your emotions sometimes dictate your life?
Dr. Steve Peters explains that we all have a being within our minds that can wreak havoc on every aspect of our lives—be it business or personal. He calls this being "the chimp," and it can work either for you or against you. The challenge comes when we try to tame the chimp, and persuade it to do our bidding.
The Chimp Paradox contains an incredibly powerful mind management model that can help you be happier and healthier, increase your confidence, and become a more successful person. This book will help you to:
—Recognize how your mind is working
—Understand and manage your emotions and thoughts
—Manage yourself and become the person you would like to be
Dr. Peters explains the struggle that takes place within your mind and then shows you how to apply this understanding. Once you're armed with this new knowledge, you will be able to utilize your chimp for good, rather than letting your chimp run rampant with its own agenda.
It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on the web works, but you have to simulate multi-touch for table moving and that can be a bit confusing.
There’s a lot I’d like to talk about. I’ll go through every topic, instead of making the typical what-went-right/wrong list.
Concept
Working over the theme was probably one of the hardest tasks I had to face.
Originally, I had an idea of what kind of game I wanted to develop, gameplay wise – something with lots of enemies/actors, simple graphics, maybe set in space, controlled from a top-down view. I was confident I could fit any theme around it.
In the end, the problem with a theme like “Evolution” in a game is that evolution is unassisted. It happens through several seemingly random mutations over time, with the most apt permutation surviving. This genetic car simulator is, in my opinion, a great example of actual evolution of a species facing a challenge. But is it a game?
In a game, you need to control something to reach an objective. That control goes against what evolution is supposed to be like. If you allow the user to pick how to evolve something, it’s not evolution anymore – it’s the equivalent of intelligent design, the fable invented by creationists to combat the very idea of evolution. Being agnostic and a Pastafarian, that’s not something that rubbed me the right way.
Hence, my biggest dilemma when deciding what to create was not with what I wanted to create, but with what I did not. I didn’t want to create an “intelligent design” simulator and wrongly call it evolution.
This is a problem that, of course, every other contestant also had to face. And judging by the entries submitted, not many managed to work around it. I’d say the only real solution was through the use of artificial selection, somehow. So far, I haven’t seen any entry using this as its core gameplay.
Alas, this is just a fun competition and after a while I decided not to be as strict with the game idea, and allowed myself to pick whatever I thought would work out.
My initial idea was to create something where humanity tried to evolve to a next level but had some kind of foe trying to stop them from doing so. I kind of had this image of human souls flying in space towards a monolith or a space baby (all based in 2001: A Space Odyssey of course) but I couldn’t think of compelling (read: serious) mechanics for that.
The Borg were my next inspiration, as their whole premise fit pretty well into the evolution theme. But how to make it work? Are you the Borg, or fighting the Borg?
The third and final idea came to me through my girlfriend, who somehow gave me the idea of making something about the evolution of Pasta. The more I thought about it the more it sounded like it would work, so I decided to go with it.
Conversations with my inspiring co-worker Roushey (who also created the “Mechanical Underdogs” signature logo for my intros) further matured the concept, as it evolved into the idea of having individual pieces of pasta flying around and trying to evolve until they became all-powerful. A secondary idea here was that the game would work to explain how the Flying Spaghetti Monster came to exist – by evolving from a normal dinner table.
So the idea evolved more or less into this: you are sitting at a table. You have your own plate, which is your “base”. There are 5 other guests at the table, each with their own plate.
Your plate can spawn little pieces of pasta. You do so by “ordering” them through a menu. Some pastas are better than others; some are faster, some are stronger. They have varying costs, which are debited from your credits (you start with a number of credits).
Once spawned, your pastas start flying around. Their instinct is to fly to other plates, in order to conquer them (the objective of the game is having your pasta conquer all the plates on the table). But they are really autonomous, so after being spawned, you have no control over your pasta (think DotA or LoL creeps).
Your pasta doesn’t like other people’s pasta, so if they meet, they shoot sauce at each other until one dies. You get credits for every enemy pasta your own pastas kill.
Once a pasta is in the vicinity of a plate, it starts conquering it for its team. It takes around 10 seconds for a plate to be conquered; less if more pasta from the same team are around. If pasta from another team are around, though, they get locked down in their attempt, unable to conquer the plate, until one of them dies (think Battlefield’s standard “Conquest” mode).
You get points every second for every plate you own.
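The capture rule described above can be sketched as follows. This is a hypothetical reconstruction in Python (the game itself was written in ActionScript); the names and the exact "more pasta captures faster" scaling are my assumptions, while the ~10-second base time and the contested-lockdown behavior come from the post.

```python
# Hypothetical sketch of the plate-capture rule described above.
# From the post: ~10 s base capture time, faster with more friendly
# pasta nearby, locked while enemy pasta is present.

CAPTURE_SECONDS = 10.0

def capture_progress(progress, friendly, enemy, dt):
    """Advance a plate's capture progress (0.0..1.0) by dt seconds.

    - No attackers: progress stays where it is.
    - Attackers and defenders both present: contested, no progress.
    - Attackers only: each extra attacker speeds up the capture
      (assumed linear scaling).
    """
    if friendly == 0 or enemy > 0:
        return progress                 # idle, or locked in a standoff
    rate = friendly / CAPTURE_SECONDS   # 1 pasta -> 10 s, 2 -> 5 s, ...
    return min(1.0, progress + rate * dt)
```

With one attacking pasta the plate flips after ten one-second ticks; with two, after five.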
Over time, the concept also evolved to use an Italian bistro as its main scenario.
Carlos, Carlos’ Bistro’s founder and owner
Setup
No major changes were made from my work setup. I used FDT and Starling to create an Adobe AIR (ActionScript) project, all tools and frameworks I already had some experience with.
One big change for me was that I livestreamed my work through a twitch.tv account. This was a new thing for me. As recommended by Roushey, I used a program called XSplit and I got to say, it is pretty amazing. It made the livestream pretty effortless and the features are awesome, even for the free version. It was great to have some of my friends watch me, and then interact with them and random people through chat. It was also good knowing that I was also recording a local version of the files, so I could make a timelapse video later.
Knowing the video was being recorded also made me a lot more self-conscious about my computer use, as if someone was watching over my shoulder. It made me realize that sometimes I spend too much time on seemingly inane tasks (I ended up wasting the longest time just to get some text alignment the way I wanted – it’ll probably drive someone crazy if they watch it) and that I make way too many typos when writing code. I pretty much spend half of the time writing a line and the other half fixing the crazy characters in it.
My own stream was probably boring to watch since I was coding most of the time. But livestreaming is one of the cool things to do as a spectator too. It was great seeing other people working – I had a few tabs opened on my second monitor all the time. It’s actually a bit sad, because if I could, I would have spent the whole weekend just watching other people working! But I had to do my own work, so I’d only do it once in a while, when resting for a bit.
Design
Although I wanted some simple, low-fi, high-contrast kind of design, I ended up going with somewhat realistic (vector) art. I think it worked very well, fitting the mood of the game, but I also went overboard.
For example: to know the state of a plate (who owns it, who’s conquering it and how much time they have left before conquering it, which pasta units are in the queue, etc), you have to look at the plate’s bill.
The problem I realized when doing some tests is that people never look at the bill! They think it’s some kind of prop, so they never actually read its details.
Plus, if you’re zoomed out too much, you can’t actually read it, so it’s hard to know what’s going on with the game until you zoom in to the area of a specific plate.
One other solution that didn’t turn out to be as perfect as I thought was how to indicate who a plate base belongs to. In the game, that’s indicated by the plate’s decoration – its color denotes the team owner. But it’s something that fits so well into the design that people never realized it, until they were told about it.
In the end, the idea of going with a full physical metaphor is one that should be done with care. Things that are very important risk becoming background noise, unless the player knows its importance.
Originally, I wanted to avoid any kind of heads-up display in my game. In the end, I ended up adding it at the bottom to indicate your credits and bases owned, as well as the hideous out-of-place-and-still-not-obvious “Call Waiter” button. But in hindsight, I should have gone with a simple HUD from the start, especially one that indicated each team’s colors and general state of the game without the need for zooming in and out.
Development
Development went fast. But not fast enough.
Even though I worked around 32+ hours for this Ludum Dare, the biggest problem I had to face in the end was overscoping. I had too much planned, and couldn’t get it all done.
Content-wise, I had several kinds of pasta planned (Wikipedia is just amazing in that regard), split into several different groups, from small Pastina to huge Pasta al forno. But because of time constraints, I ended up scrapping most of them, and ended up with 5 different types of very small pasta – barely a start when talking about the evolution of pasta.
Pastas used in the game. Unfortunately, the macs were never used
Which is one of the saddest things about the project, really. It had the framework and the features to allow an endless number of elements in there, but I just didn’t have time to draw the rest of the assets needed (something I loved to do, by the way).
Other non-obvious features had to be dropped, too. For example, when ordering some pasta, you were supposed to select what kind of sauce you’d like with your pasta, each with different attributes. Bolognese, for example, is very strong, but inaccurate; Pesto is very accurate and has great range, but it’s weaker; and my favorite, Vodka, would trigger a 10% loss of speed on the pasta hit by it.
The code for that is mostly in there. But in the end, I didn’t have time to implement the sauce selection interface; all pasta ended up using bolognese sauce.
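The unshipped sauce system could have been data-driven, roughly like the sketch below. This is a hypothetical Python illustration (the game was ActionScript): only the qualitative strong/accurate trade-offs and Vodka's 10% speed loss come from the post; every numeric value here is invented.

```python
# Hypothetical data-driven sauce table for the unshipped feature
# described above. Only Vodka's 10% slow-on-hit and the qualitative
# trade-offs are from the post; the exact numbers are illustrative.

SAUCES = {
    "bolognese": {"damage": 3.0, "accuracy": 0.5, "range": 1.0, "slow": 0.00},
    "pesto":     {"damage": 1.5, "accuracy": 0.9, "range": 2.0, "slow": 0.00},
    "vodka":     {"damage": 2.0, "accuracy": 0.7, "range": 1.5, "slow": 0.10},
}

def apply_hit(target_speed, sauce_name):
    """Return the target pasta's new speed after being hit by a sauce."""
    slow = SAUCES[sauce_name]["slow"]
    return target_speed * (1.0 - slow)
```

Keeping the attributes in a table like this is what would have made the dropped sauce-selection interface cheap to add later: the combat code never needs to know which sauces exist.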
To-do list: lots of things were not done
Actual programming also took a toll on the development time. Having been programming for a while, I like to believe I got to a point where I know how to make things right, but at the expense of forgetting how to do things wrong in a seemingly good way. What I mean is that I had to take a lot of shortcuts in my code to save time (e.g. a lot of singleton references for cross-communication rather than events or observers, and all-encompassing check loops that were not fast enough) that left a very sour taste in my mouth. While I know I used to do those a few years ago and survive, I almost cannot accept the state my code is in right now.
At the same time, I do know it was the right thing to do given the timeframe.
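The trade-off mentioned above, singleton cross-references versus events or observers, can be illustrated with a small hypothetical sketch (Python for brevity; the actual game was ActionScript, and all names here are invented):

```python
# Hypothetical illustration of the jam-time shortcut described above:
# a global singleton reference versus the observer/event wiring it replaced.

# Jam shortcut: everything reaches into one global game object.
class Game:
    instance = None

    def __init__(self):
        Game.instance = self
        self.credits = 0

def on_pasta_killed_shortcut(reward):
    Game.instance.credits += reward  # tight coupling, but quick to write

# Cleaner event-based version: emitters don't know who is listening.
class EventBus:
    def __init__(self):
        self._listeners = {}

    def on(self, event, fn):
        self._listeners.setdefault(event, []).append(fn)

    def emit(self, event, *args):
        for fn in self._listeners.get(event, []):
            fn(*args)
```

The singleton version is faster to type under a 48-hour deadline; the event bus keeps the combat code decoupled from whoever awards credits, which is what makes it easier to live with after the jam.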
One small thing that had some impact was using a somewhat new platform for me. That’s Starling, the accelerated graphics framework I used in Flash. I had tested it before and I knew how to use it well – the API is very similar to Flash itself. However, there were some small details that had some impact during development, making me feel somewhat uneasy the whole time I was writing the game. It was, again, the right thing to do, but I should have used Starling more deeply before (which is the conundrum: I used it for Ludum Dare just so I could learn more about it).
Argument and user experience
One final thing I learned is that making the game obvious for your players goes a long way toward making it fun. If you have to spend a long time explaining things, your game is doing something wrong.
And that’s exactly the problem Survival of the Tastiest ultimately faced. It’s very hard for people to understand what’s going on with the game, why, and how. I did have some introductory text at the beginning, but that was a last-minute thing. More importantly, I should have had a better interface or simplified the whole concept so it would be easier for people to understand.
That doesn’t mean the game itself should be simple. It just means that the experience and interface should be approachable and understandable.
Conclusion
I’m extremely happy with what I’ve done, especially given that this was my first Ludum Dare. However, I also feel like I’ve learned a lot about what not to do.
The biggest problem is overscoping. Like Eric Decker said, the biggest lesson we can learn from this is probably about scoping – deciding what to do beforehand in a way you can complete it without having to rush and do something half-assed.
I’m sure I will do more Ludum Dares in the future. But if there are any lessons I can take from it, they are to make it simple, to use frameworks and platforms you already have solid experience with (otherwise you’ll spend too much time trying to solve easy questions), and to scope for a game that you can complete in one day only (that way, you can actually take two days and make it cool).
This entry was posted
on Monday, August 27th, 2012 at 10:54 am and is filed under LD #24.
You can follow any responses to this entry through the RSS 2.0 feed.
You can skip to the end and leave a response. Pinging is currently not allowed.
3 Responses to ““Survival of the Tastiest” Post-mortem”
darn it , knowing that I missed your livestream makes me a sad panda ;( but more to the point, the game is … well for a startup its original to say the least ;D it has some really neat ideas and more importantly its designed arround touch screens whitch by the looks of the submission is something rare ;o or that could be just me and my short memory -_-! awesum game, love et <3
All Studio Posts
The upcoming AES 54th International Conference, focusing on audio forensics, is set to take place June 12-14, 2014, at the Holiday Inn Bloomsbury in London. Dedicated to exploring techniques, technologies and advancements in the field of audio forensics, the conference will provide a platform for sharing research related to the forensic application of speech/signal processing, acoustical analyses, audio authentication and the examination of methodologies and best practices. Chairpersons for this conference are Mark Huckvale and Jeff M. Smith. This marks…
View this post
From the archives of the late, great Recording Engineer/Producer (RE/P) magazine, enjoy this in-depth discussion with engineer/ producer Val Garay, conducted by Robert Carr. This article dates back to the October 1983 issue. As a natural extension to his career as a musician during the early Sixties, Val Garay’s love for music lead him to pursue the art and science of audio engineering. Starting in 1969, he apprenticed at the Sound Factory, Hollywood, under rock-recording legend Dave Hassinger (Rolling Stones,…
View this post
Studio Technologies recently became Audinate’s 100th Dante licensee and is embracing the audio-over-Ethernet movement by developing a line of Dante-enabled products. “Studio Technologies prides itself on developing specialized solutions for its customers,” says Studio Technologies president Gordon Kapes. “Our users rely on us to deliver products that will enhance their workflow in both fixed and mobile broadcast applications. Dante has proven its technological excellence, and we are convinced that it is the correct, progressive solution for adding networking technology to…
View this post
Software company Plugin Alliance has announced the availability of bx_refinement and bx_saturator V2, two new native plug-ins from German software developer Brainworx. bx_refinement is the brainchild of mastering engineer Gebre Waddell of Stonebridge Mastering, who designed the original prototype as a tool to remove harshness, a problem he was encountering more and more in his work due to the transition to digital and the prevalence of over-compressed mixes. “Harsh recordings are one of the most common problems mixing and mastering…
View this post
Located outside Dallas, Cool Pony Media is a record label and artist development company that works with various music genres, as well as score-to-picture work. Brothers and co-founders, Mark and Mike Stitts, recently did an upgrade in part of their studio with help from API, and as a result, the team now uses THE BOX console on a daily basis for writing, tracking, creating stems, and mixing. “We’re amazed,” says Mark Stitts. “We have quite a bit of other API…
View this post
Article provided by Home Studio Corner. If you’ve been mixing for any length of time, you know how valuable the high-pass filter (HPF) can be. It removes excess low end from your non-bass-heavy tracks, allowing you to clean up the low frequencies, making room for the kick and bass. But then there’s this thing called a low frequency shelf. What’s that all about? In the picture below you can see both a high-pass filter and a low-frequency shelf. A…
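The distinction the excerpt draws can be sketched with two toy first-order filters. This is a minimal Python illustration with made-up coefficients, not the article's own example: a high-pass filter removes the low end entirely, whereas a low shelf only turns it down (or up) by a fixed gain.

```python
# Toy first-order filters illustrating HPF vs. low shelf.
# Coefficient values are illustrative, not a production EQ design.

def one_pole_lowpass(x, a=0.1):
    """Simple one-pole low-pass: y += a * (input - y)."""
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

def high_pass(x, a=0.1):
    """HPF built as 'input minus its low-passed copy'."""
    lp = one_pole_lowpass(x, a)
    return [s - l for s, l in zip(x, lp)]

def low_shelf(x, gain=0.5, a=0.1):
    """Low shelf: scale only the low-passed portion, keep the rest intact."""
    lp = one_pole_lowpass(x, a)
    return [(s - l) + gain * l for s, l in zip(x, lp)]
```

Feed both a DC signal (all ones): the high-pass output settles toward zero, while the shelf output settles toward the shelf gain. That is the practical difference when cleaning up a mix: the HPF removes the lows for good, the shelf just rebalances them.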
View this post
Radial Engineering has announced that it has taken on the global sales, marketing and distribution of the Jensen Iso-Max range of products. Iso-Max is a range of isolators that provide ground isolation and noise abatement for audio and video in broadcast, home theater and commercial AV integration. Radial has a long history with Jensen. According to company president Peter Janis: “When Radial was founded in 1992, we started life as a distributor. One of our first product lines was Jensen.…
View this post
DPA Microphones has announced the appointment of Direct Imports as its distributor in New Zealand, signaling the company’s continued commitment toward growth and customer service in the country. From its headquarters in Hastings, Hawkes Bay, Direct Imports will carry a full stock of DPA products for live, recording and broadcast applications. “We are delighted to have been appointed the New Zealand distributor for DPA Microphones and honored to have this outstanding brand join our portfolio and complement our current range…
View this post
Record Factory Music Academy, a music production education facility in downtown Seoul, South Korea, delivers real-world recording experience to students, which is now aided with the addition of a Solid State Logic AWS 924 hybrid console/controller in its newly built studios. More than 1,000 students have gained an education since Record Factory Music Academy was established. Through hands-on workshops covering everything from MIDI production to in-studio engineering and music video creation, the facility is gaining a reputation for its advanced…
View this post
Jake Jones
James Murrell "Jake" Jones (November 23, 1920 – December 13, 2000) was a first baseman in Major League Baseball who played between 1941 and 1948 for the Chicago White Sox (1941–42, 1946–47) and the Boston Red Sox (1947–48). Listed at 6'3", 197 lb., Jones batted and threw right-handed. He was born in Epps, Louisiana.
Career
Jones was a highly decorated World War II veteran. He played 10 games in the American League for Chicago, in parts of two seasons, before enlisting in the United States Navy right after the attack on Pearl Harbor. He joined the service on June 30, 1942, becoming an aviator. In November 1943 he was assigned to a unit aboard the USS Yorktown (CV-10), flying Grumman F6F Hellcat fighters.
Between November and December 1944, Jones destroyed two Japanese A6M Zeros and damaged another. On February 1, 1945, he shot down three more Zeros on a mission northeast of Tokyo, giving him five confirmed victories. A day later, he destroyed another Zero and a Nakajima Ki-43. Then, on February 25, he received a half-share of a probable Ki-43.
For his heroic actions, Jones was awarded the Silver Star, two Distinguished Flying Crosses, and four Air Medals.
Following his service discharge, Jones returned to play for Chicago in 1946. During the 1947 midseason he was dealt to the Boston Red Sox in exchange for Rudy York, batting a combined .237 with 19 home runs and 96 RBI that season. He hit .200 in 36 games for Boston in 1948, his last major league season, and finished his baseball career in 1949, dividing his playing time between the Texas League and American Association.
Jones died in his hometown of Epps, Louisiana at age 80.
References
Baseball in Wartime
Baseball Reference
BR Bullpen
Category:Boston Red Sox players
Category:Chicago White Sox players
Category:Major League Baseball first basemen
Category:Recipients of the Distinguished Flying Cross (United States)
Category:Recipients of the Silver Star
Category:Baseball players from Louisiana
Category:People from West Carroll Parish, Louisiana
Category:1920 births
Category:2000 deaths
Category:United States Navy pilots of World War II
Category:United States Navy officers
Category:Recipients of the Air Medal
---
abstract: 'Online social networks (OSNs) are ubiquitous, attracting millions of users all over the world. Being popular communication media, OSNs are exploited in a variety of cyber-attacks. In this article, we discuss the [*chameleon* ]{}attack technique, a new type of OSN-based trickery where malicious posts and profiles change the way they are displayed to OSN users in order to conceal themselves before the attack or to avoid detection. Using this technique, adversaries can, for example, avoid censorship by concealing true content when it is about to be inspected; acquire social capital to promote new content while piggybacking on a trending one; or cause embarrassment and serious reputation damage by tricking a victim into liking, retweeting, or commenting on a message that he would not normally endorse, with no indication of the trickery within the OSN. An experiment performed with closed Facebook groups of sports fans shows that (1) [*chameleon* ]{}pages can get past the moderation filters by changing the way their posts are displayed and (2) moderators do not distinguish between regular and [*chameleon* ]{}pages. We list the OSN weaknesses that facilitate the [*chameleon* ]{}attack and propose a set of mitigation guidelines.'
author:
- 'Aviad Elyashar, Sagi Uziel, Abigail Paradise, and Rami Puzis'
bibliography:
- 'references.bib'
title: 'The Chameleon Attack: Manipulating Content Display in Online Social Media'
---
Introduction {#sec:introduction}
============
The following scenario is not a conventional introduction. Rather, it’s a brief example to stress the importance and potential impact of the disclosed weakness, unless the countermeasures described in this article are applied.
\[ex:teaser\] Imagine a controversial Facebook post shared by a friend of yours. You have a lot to say about the post, but you would rather discuss it in person to avoid unnecessary attention online. A few days later when you talk with your friend about the shared post, the friend does not understand what you’re referring to. Both of you scan through his/her timeline and nothing looks like that post. The next day you open Facebook and discover that in the last six months you have joined three Facebook groups of Satanists; you actively posted on a page supporting an extreme political group (although your posts are not directly related to the topics discussed there), and you liked several websites leading to video clips with child abuse. A terrible situation that could hurt your good name especially if you are a respected government employee!
At the time of submission of this article, the nightmare described in Example \[ex:teaser\] is still possible in major online social networks (OSNs) (see Section \[sec:susceptibility\]) due to a conceptual design flaw.
Today, OSNs are an integral part of our lives [@boyd2007social]. They are powerful tools for disseminating, sharing and consuming information, opinions, and news [@kwak2010twitter]; and for expanding connections [@gilbert2009predicting], etc. However, OSNs are also constantly abused by cybercriminals who exploit them for malicious purposes, including spam and malware distribution [@lee2010uncovering], harvesting personal information [@boshmaf2011socialbot], infiltration [@elyashar2013homing], and spreading misinformation [@ferrara2015manipulation]. Bots, fake profiles, and fake information are all well-known scourges being tackled by OSN providers, academic researchers, and organizations around the world with various levels of success. It is extremely important to constantly maintain social platforms, both content- and service-wise, in order to limit abuse as much as possible.
To provide the best possible service to their users, OSNs allow users to edit or delete published content [@facebook_help_edit_post], edit user profiles, and update previews of linked resources, etc. These features are important to keep social content up to date, to correct grammatical or factual errors in published content, and eliminate abusive content. Unfortunately, they also open an opportunity for a scam where OSN users are tricked into engaging with seemingly appealing content that is later modified. This type of scam is trivial to execute and is out of the scope of this article.
Facebook partially mitigates the problem of modifications made to posts after their publication by displaying an indication that a post was edited. Other OSNs, such as Twitter or Instagram, do not allow published posts to be edited. Nevertheless, the major OSNs (Facebook, Twitter, and LinkedIn) allow publishing redirect links, and they support link preview updates. This allows changing the way a post is displayed without any indication that the target content of the URLs has been changed.
In this article, we present a novel type of OSN attack termed the [*chameleon* ]{}attack, where the content (or the way it is displayed) is modified over time to create social traction before executing the attack (see Section \[sec:chameleon\_attack\]). We discuss the OSN misuse cases stemming from this attack and their potential impacts in Section \[sec:misuse\]. We review the susceptibility of seven major OSN platforms to the [*chameleon* ]{}attack in Section \[sec:susceptibility\] and present the results of an intrusion into closed Facebook groups facilitated by it in Section \[sec:group\_infiltration\_experiments\]. A set of suggested countermeasures that should be applied to reduce the impact of similar attacks in the future is suggested in Section \[sec:mitigation\].
The contribution of this study is three-fold:
- We present a new OSN attack termed the [*chameleon* ]{}attack, including an end-to-end demonstration on major OSNs (Facebook, Twitter, and LinkedIn).
- We present a social experiment on Facebook showing that chameleons facilitate infiltration into closed communities.
- We discuss multiple misuse cases and mitigations, from which we derive a recommended course of action for OSNs.
Background on redirection and link preview
==========================================
#### Redirection
Redirection is a common practice on the web that helps Internet users find relocated resources, use multiple aliases for the same resource, and shorten long and cumbersome URLs. Thus, the use of URL shortening services is very common within OSNs.
There are two types of redirect links: server redirects and client redirects. In the case of a server-side redirect, the server returns the HTTP status code 301 (Moved Permanently) with a new URL. Major OSNs follow server-side redirects up to the final destination in order to provide their users with a preview of the linked Web resource. In the case of a client-side redirect, the navigation is carried out by a JavaScript command executed in the client’s browser. Since OSNs do not render Web pages, they do not follow client redirects up to the final destination.
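To make the server-side case concrete, the following minimal sketch (using Python’s standard `http.server`; all aliases and URLs are invented for illustration) shows how a redirection service can re-point a posted alias at a new destination without ever changing the short link itself:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Mutable alias table: the service owner (or an attacker) can re-point
# an alias at any time, while the short link posted to the OSN never changes.
REDIRECTS = {"/v1": "https://example.com/benign-video"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target is None:
            self.send_error(404)
            return
        self.send_response(301)               # permanent server-side redirect
        self.send_header("Location", target)  # preview crawlers follow this
        self.end_headers()

def retarget(alias: str, new_url: str) -> None:
    """Switch the destination while the published alias stays the same."""
    REDIRECTS[alias] = new_url

# To run the service: HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

Because the OSN stores only the alias, `retarget("/v1", ...)` silently changes where every previously published copy of the link leads.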
#### Short links and brand management
There are many link redirection services across the Web that use 301 server redirects for brand management, URL shortening, click counts and various website access statistics. Some of these services that focus on brand management, such as *[rebrandly.com](rebrandly.com)*, allow their clients to change the target URL while maintaining the aliases. Some services, such as *[bitly.com](bitly.com)*, require a premium subscription to change the target URL.
The ability to change the target URL without changing the short alias is important when businesses restructure their websites or move them to a different web host. Yet, as will be discussed in Section \[sec:chameleon\_attack\], this feature may be exploited to facilitate the [*chameleon* ]{}attack.
#### DNS updates
DNS is used to resolve the IP address of a server given a domain name. The owner of the domain name may designate any target IP address for his/her domain and change it at will. The update process may take up to 24 hours to propagate. Rapid DNS update queries, known as Fast Flux, are used by adversaries to launch spam and phishing campaigns. Race conditions due to the propagation of DNS updates cause a domain name to be associated with multiple, constantly changing IP addresses at the same time.
#### Link previews
Generating and displaying link previews is an important OSN feature that streamlines social interaction within the OSN. It allows users to quickly get a first impression of a post or profile without extra clicks. Based on the meta-tags of the target page, the link preview usually includes a title, a thumbnail, and a short description of the resource targeted by the URL [@kopetzky1999visual].
When shortened URLs or other server-side redirects are used, the OSN follows the redirection path to generate a preview of the final destination. These previews are cached due to performance considerations. The major OSNs update link previews upon request (see Section \[sec:weaknesses\] for details). In the case of client redirects, some OSNs (e.g., Twitter) use the meta-tags of the first HTML page in the redirect chain. Others (e.g., Facebook) follow the client redirect up to the final destination.
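To illustrate what a preview crawler typically consumes, the sketch below extracts Open Graph meta-tags (`og:title`, `og:image`, `og:description`), the de facto fields most OSNs read when rendering a link preview. The sample page and its values are invented:

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collects og:* meta-tags -- the fields OSN crawlers commonly read
    to build a link preview (title, thumbnail, description)."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if prop.startswith("og:"):
            self.og[prop] = attrs.get("content", "")

def extract_preview(html: str) -> dict:
    parser = OpenGraphParser()
    parser.feed(html)
    return parser.og

sample_page = """<html><head>
<meta property="og:title" content="Cute Cats Compilation">
<meta property="og:image" content="https://example.com/cats.jpg">
<meta property="og:description" content="Ten minutes of cats.">
</head><body></body></html>"""
```

Since the preview is derived entirely from these tags, whoever controls the final page in the redirect chain controls how the post is displayed.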
The Chameleon Attack {#sec:chameleon_attack}
====================
The [*chameleon* ]{}attack takes advantage of link previews and redirected links to modify the way that published content is displayed within the OSN without any indication for the modifications made. As part of this attack, the adversary circumvents the content editing restrictions of an OSN by using redirect links.
![\[fig:chameleon\_attack\_phases\]Chameleon Attack Phases.](killchain.png){width="0.6\columnwidth"}
We align the phases of a typical [*chameleon* ]{}attack to a standard cyber kill chain as follows:
1. **Reconnaissance** (out of scope): The attacker collects information about the victim using standard techniques to create an appealing disguise for the [*chameleon* ]{}posts and profiles.
2. **Weaponizing** (main phase): The attacker creates one or more redirection chains to web resources (see Required Resources in Section \[sec:resources\]). The attacker creates [*chameleon* ]{}posts or profiles that contain the redirect links.
3. **Delivery** (out of scope): The attacker attracts the victim’s attention to the [*chameleon* ]{}posts and profiles, similar to phishing or spear-phishing attacks.
4. **Maturation** (main phase): The [*chameleon* ]{}content builds trust within the OSN, collects social capital, and interacts with the victims. This phase is inherent to spam and phishing attacks that employ fake OSN profiles. But since such attacks are not considered sophisticated and targeted, this phase is typically ignored in standard cyber kill chains or is referred to by the general term *social engineering*. Nevertheless, building trust within an OSN is very important for the success of both targeted and un-targeted [*chameleon* ]{}attacks.
5. **Execution** (main phase): The attacker modifies the display of the [*chameleon* ]{}posts or profiles by changing the redirect target and refreshing the cached link previews.
Since the [*chameleon* ]{}attack is executed outside the victim’s premises, there are no lateral movement or privilege escalation cycles. This attack can be used during the reconnaissance phase of a larger attack campaign or to reduce the cost of weaponizing any OSN-based attack campaign (see examples in Section \[sec:misuse\]). The most important phases in the execution flow of the [*chameleon* ]{}attack are *weaponizing*, *maturation*, and *execution*, as depicted in Figure \[fig:chameleon\_attack\_phases\]. The attacker may proceed with additional follow-up activities depending on the actual attack goal, as described in Section \[sec:misuse\].
{width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"}
{width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"} {width="0.16\linewidth"}
A Brief Showcase
----------------
To demonstrate how a [*chameleon* ]{}attack looks from the user’s perspective, we show here examples of [*chameleon* ]{}posts and profiles.[^1] The link preview in the demo post will change each time you click the video; the update may take about 20 seconds and requires refreshing the page.
#### Chameleon Post
Figures \[fig:posts\] (1,2) present the same post on Facebook with two different link previews. Both versions of the post lead to *YouTube.com* and are displayed accordingly. There is no indication of any modification made to the post in either of its versions because the actual post was not modified. Neither is there an edit history, for the same reason. Likes and comments are retained. If the post was shared, the shares will show the old link preview even after it was modified in the original post.
Similarly, Figure \[fig:posts\] (3,4) and (5,6) present two versions of the same post on Twitter and LinkedIn respectively. There is no edit indication nor edit history because Twitter tweets cannot be edited. As with Facebook, likes, comments, and retweets are retained after changing the posted video and updating the link preview. Unlike Facebook, however, the link previews of all retweets and all LinkedIn posts that contain the link will change simultaneously.
#### Chameleon Profile
Figure \[fig:pages\] presents examples of a [*chameleon* ]{}page on Facebook and a [*chameleon* ]{}profile on Twitter. Since the techniques used to build [*chameleon* ]{}profiles and [*chameleon* ]{}pages are similar, as is their look and feel, in the rest of this paper we will use the terms pages and profiles interchangeably. All OSNs allow changing the background picture and description of profiles, groups, and pages. A [*chameleon* ]{}profile differs from a regular profile in that it includes [*chameleon* ]{}posts alongside neutral or personal posts. This way, a Chelsea fan (Figure \[fig:pages\].1) can pretend to be an Arsenal fan (Figure \[fig:pages\].2) and vice versa.
Required Resources {#sec:resources}
------------------
The most important infrastructure element used to execute the [*chameleon* ]{}attack is a redirection service that allows attackers to modify the redirect target without changing the alias. This can be implemented using a link redirection service or a website controlled by the adversary.
In the former case, the link redirection service must allow modifying the target link for a previously defined alias. This is the preferred infrastructure to launch the [*chameleon* ]{}attack.
In the latter case, if the attacker has control over the redirection server then a server-side 301 redirect can be used, seamlessly utilizing the link preview feature of major OSNs. If the attacker has no control over the webserver, he/she may still use a client-side redirect. He/she will have to supply the required metadata for the OSN to create link previews.
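A client-side redirect page of the kind described above could be generated as sketched below (a hypothetical helper; all names and URLs are illustrative). The `og:*` meta-tags feed the OSN’s preview generator, while the JavaScript redirect, which crawlers that do not render pages never follow, sends human visitors to a target the attacker can change later:

```python
def chameleon_page(preview_title: str, preview_image: str, target_url: str) -> str:
    """Render an HTML page whose og:* meta-tags determine the OSN link
    preview, while a client-side (JavaScript) redirect controls where
    real visitors actually land."""
    return f"""<html><head>
<meta property="og:title" content="{preview_title}">
<meta property="og:image" content="{preview_image}">
<script>window.location.replace("{target_url}");</script>
</head><body>Redirecting...</body></html>"""
```

Regenerating this page with a different `target_url` (and, when the display should change too, different meta-tag values) is all the *execution* phase requires on the attacker’s side.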
If the attacker owns the domain name used to post the links, he/she may re-target the IP address associated with the domain name to a different web resource. Fast flux attack infrastructure can also be used; however, this is overkill for the [*chameleon* ]{}attack and may cause the attack to be detected [@holz2008measuring].
Example Instances {#sec:misuse}
-----------------
In this section, we detail several examples of misuse cases [@mcdermott2000eliciting] which extend the general [*chameleon* ]{}attack. Each misuse case provides a specific flavor of the attack execution flow, as well as the possible impact of the attack.
### Incrimination and Shaming
This flavor of the [*chameleon* ]{}attack targets specific users. Shaming is one of the major threats in OSNs [@goldman2015trending]. This misuse case can be particularly dangerous in countries like Thailand, where people face up to 32 years in prison for “liking” or re-sharing content that insults the king. Here, the impact can be greatly amplified if the adversary employs *chameleons* and the victim is careless enough to interact with content posted by a dubious profile or page.
#### Execution flow
The attacker performs the (1) *reconnaissance* and (3) *delivery* phases using standard techniques, similar to a spear-phishing attack.[^2]
\(2) During the *weaponizing* phase, the attacker creates [*chameleon* ]{}posts that endorse a topic favored by the victim, e.g., he/she may post some new music clips by the victim’s favorite band. Each post includes a redirect link that points to a YouTube video or similar web resource, but the redirection is controlled by the attacker.
\(4) During the *maturation* phase, the victim shows their appreciation of seemingly appealing content by following the [*chameleon* ]{}page, liking, retweeting, commenting, or otherwise interacting with the [*chameleon* ]{}posts. Unlike in spear-phishing, where the victim is directed to an external resource or is required to expose his/her personal information, here standard interactions that are considered safe within OSNs are sufficient to affiliate the victim with the [*chameleon* ]{}posts. This significantly lowers the attack barrier. (5) Finally, immediately after the victim’s interaction with the [*chameleon* ]{}posts, the adversary switches their display to content that opposes the victim’s agenda to cause maximal embarrassment or political damage. The new link preview will appear in the victim’s timeline. The OSN will amplify this attack by notifying the victim’s friends (Facebook) and followers (Twitter) about the offensive posts liked, commented on, or retweeted by the victim.
#### Potential impact
At the very least, such an attack can cause discomfort to the victim. It can be life-threatening in cases when the victim is a teenager. And it can have far-reaching consequences if used during political campaigns.
### Long Term Avatar Fleet Management
Adversaries maintain fleets of fake OSN profiles termed avatars to collect intelligence, infiltrate organizations, disseminate misinformation, etc. To avoid detection by machine learning algorithms and build long-term trust within the OSN, sophisticated avatars need to be operated by a human [@elyashar2016guided; @paradise2017creation]. The maturation process of such avatars may last from several months to a few years. Fortunately, the attack target and the required number of avatars are usually not known in advance, significantly reducing the cost-effectiveness of sophisticated avatars. The *chameleon* profiles exposed here facilitate efficient management of a fleet of avatars by maintaining a pool of mature avatars whose timelines are adapted to the agenda of the attack target once it is known.
#### Execution Flow
In this special misuse case, the attack phases *weaponizing* and *maturation* are performed twice; both before and after the attack target is known.
\(1) The first *weaponizing* phase starts with establishing the redirect infrastructure and building a fleet of avatars. They are created with neutral displays common within the OSN.
\(2) During the initial *maturation* process, the neutral avatars regularly publish [*chameleon* ]{}posts with neutral displays. They acquire friends while maximizing the acceptance rate of their friend requests [@paradise2017creation].
\(3) Once the attack target is known the attacker performs the required *reconnaissance*, selects some of the mature [*chameleon* ]{}profiles, and (4) *weaponizes* them with the relevant agenda by changing the profile information and the display of all past [*chameleon* ]{}posts.
During (5) *delivery* and (6) the second *maturation* phase, the refreshed [*chameleon* ]{}profiles (avatars) contact the target and build trust with it. The (7) *execution* phase in this misuse case depends on the attacker’s goals. The avatars that already engaged in an attack will likely be discarded.
#### Potential Impact
The adversary does not have to create an OSN account and build an appropriate agenda for each avatar long before executing an attack. *Chameleon* profiles and posts are created and maintained as a general resource suitable for various attack campaigns. As a result, the cost of maintaining such avatars is dramatically reduced. Moreover, if an avatar is detected and blocked during the attack campaign, its replacement can be *weaponized* and released very quickly.
### Evading Censorship
OSNs maintain millions of entities, such as pages, groups, communities, etc. For example, Facebook groups unite users based on shared interests [@casteleyn2009use]. To ensure proper language, avoid trolling and abuse, or admit only users with a very specific agenda, moderators inspect the users who ask to join the groups and review the published posts. The [*chameleon* ]{}attack can help evade such censorship, as well as shallow screening of OSN profiles. See Section \[sec:mitigation\] for specific recommendations on profile screening to detect [*chameleon* ]{}profiles.
For example, assume two Facebook groups uniting Democrat and Republican activists during U.S. elections. Assume a dishonest activist from one political extreme who would like to join the rival group; reasons may vary from trolling to spying. Assume that this activist would like to spread propaganda within the rival group. However, profiles that exhibit an agenda inappropriate for the group would not be admitted by the group owner. The following procedure allows the rival activist to bypass the group moderator’s censorship.
#### Execution Flow
During the (1) *reconnaissance* phase, the adversary learns the censorship rules of the target. (2) The *weaponizing* phase includes establishing a [*chameleon* ]{}profile with an agenda appropriate to the censorship. During the (3) *maturation* phase, the adversary publishes posts with redirect links to videos fitting the censorship rules. (4) *Delivery* in this case represents the censored act, such as requesting to join a group, sending a friend request, posting a video, etc. The censor (e.g., the group’s administrator) reviews the profile and its timeline and approves them to be presented to all group members. Finally, in the (5) *execution* phase, the adversary changes the display of its profile and posts to reflect a new agenda that would otherwise not be allowed by the censor.
#### Potential Impact
This attack allows the adversary to infiltrate a closed group and publish posts contrary to the administrator’s policy. Moreover, one-time censorship of published content is no longer sufficient. Moderators would have to invest much more effort in periodically monitoring group members and their posts to ensure that they still fit the group’s agenda. In Section \[sec:group\_infiltration\_experiments\], we demonstrate the execution of the [*chameleon* ]{}attack for penetrating closed groups, using soccer fan groups as an allegory for groups with extreme political agendas.
### Promotion
Unfortunately, the promotion of content, products, ideas, etc. using bogus and unfair methods is very common in OSNs. Spam and crowdturfing are two example techniques used for promotion. The objective of spam is to reach maximal exposure through unsolicited messages. Bots and crowdturfers are used to misrepresent the promoted content as generally popular by adding likes and comments. Crowdturfers [@lee2013crowdturfers] are human workers who promote social content for economic incentives. The [*chameleon* ]{}attack can be used to acquire likes and comments from genuine OSN users by piggybacking on popular content.
#### Execution Flow
During the (1) *reconnaissance* phase, the attacker collects information about a topic favored by the general public that is related to the unpopular content the attacker wants to promote. (2) During the *weaponizing* phase, the attacker creates a [*chameleon* ]{}page and posts that support the popular topic. For example, an adversary who is a new singer and would like to promote himself/herself can create a [*chameleon* ]{}page that supports a well-known singer. During the (3) *delivery* and (4) *maturation* phases, OSN users show their affection toward the seemingly appealing content by interacting with the [*chameleon* ]{}page: liking, retweeting, commenting, etc. As time passes, the [*chameleon* ]{}page accumulates social capital. In the final (5) *execution* phase, the [*chameleon* ]{}page’s display changes to support the artificially promoted content, retaining the unfairly collected social capital.
#### Potential Impact
The attacker can use [*chameleon* ]{}pages and posts to promote content by piggybacking popular content. The attacker enjoys the social capital provided by a genuine crowd that otherwise would not interact with the dubious content. Social capital obtained from bots or crowdturfers can be down-rated using various reputation management techniques. In contrast, social capital obtained through the [*chameleon* ]{}trickery is provided by genuine OSN users.
### Clickbait
Most of the revenues of online media come from online advertisements [@chakraborty2016stop]. This has generated significant competition among online media websites for readers’ attention and clicks. To attract users and encourage them to visit a website, website administrators pair the provided links with catchy headlines that lure users into clicking [@chakraborty2016stop]. This phenomenon is called clickbait.
#### Execution Flow
\(1) During the *weaponizing* phase, the attacker creates [*chameleon* ]{}profiles with posts containing redirect links. Consider an adversary that is a news provider seeking to increase traffic to its website. To increase its revenues, it publishes, in the *weaponizing* phase, a [*chameleon* ]{}post with a catchy headline and an attached link to an interesting article. In the *maturation* phase, users are drawn to the post by its attractive link preview and headline, increasing traffic to the linked website. Later, in the *execution* phase, the adversary changes the redirect target of the posted link while leaving the link preview unchanged. As a result, new users will click on the [*chameleon* ]{}post, whose display has not changed, but will now be navigated to the adversary’s website.
#### Potential Impact
By applying this attack, the attacker can increase traffic to his website and, eventually, his income. Luring users with an attractive link preview increases the likelihood that they will click on it and consume the attacker’s content.
Susceptibility of Social Networks to the Chameleon Attack {#sec:susceptibility}
=========================================================
Online Social Networks
----------------------
| Attacker’s ability | OSN feature | Facebook | Twitter | WhatsApp | Instagram | Reddit | Flickr | LinkedIn |
|---|---|---|---|---|---|---|---|---|
| Creating artificial timeline | Editing a post’s publication date | Y | N | N | N | N | N | N |
| | Presenting original publication date | Y | - | - | - | - | - | - |
| Changing content | Editing previously published posts | Y | N | N | Y | Y | Y | Y |
| | Presenting editing indication in published posts | Y | - | - | Y | N | N | Y |
| | Presenting editing indication in shared posts | N | - | - | Y | - | - | Y |
| | Presenting edit history | Y | - | - | N | - | - | N |
| Changing display | Publishing redirect links | Y | Y | Y | N | Y | Y | Y |
| | Displaying link preview | Y | Y | Y | - | Y | N | Y |
| | Updating link preview | Y | Y | N | - | N | - | Y |
| Switching content | Hiding posts | Y | N | N | Y | Y | Y | N |

= Facilitates the [*chameleon* ]{}attack. = Mitigates the [*chameleon* ]{}attack \[tab:OsnCompareTbl\]
### Facebook
Facebook allows users to manipulate the display of previously published posts based on several different features. These include publishing redirect links, editing a post’s publication date, hiding previously published posts, and publishing unauthorized content in a closed group.
Until 2017, when a user edited a post on Facebook, an indicator was presented to notify users that the content had been updated. A 2017 Facebook update removed this public notification; the post’s history can now be seen only via the ’View Edit History’ button.
While Facebook allows editing a post’s publication date, it displays a small indication of the original publication date. To view the original publication date, a user must hover over the clock icon shown in the post, and a bubble will appear with the original publication date.
Also, concerning Facebook pages, Facebook does not allow radical changes to a page’s original name within a single day. However, it is still possible to make limited edits to the page’s name: changes so minor that the context of the original name is preserved. As a result, we were able to rename a page in phases by editing its name with small changes in each edit action. First, we changed only two characters of the page’s name. Three days later, we changed two more characters, and so forth, until eventually we were able to rename the page entirely as we wished.
As a countermeasure, Facebook employs a mechanism called *Link Shim* to keep Facebook users safe from external malicious links [@FacebookLinkShim]. When clicking on an external link posted on Facebook, their mechanism checks whether the link is blacklisted. In case of a suspicious URL, Facebook will notify the user [@FacebookLinkShim]. Redirect links used in [*chameleon* ]{}posts lead to legitimate destinations and so are currently approved by *Link Shim*.
### Twitter
As opposed to Facebook, Twitter does not allow users to edit or hide tweets that have already been published, or to manipulate a tweet’s publication date (see Table \[tab:OsnCompareTbl\]). This makes it more difficult for an attacker to manipulate the display of previously published content. On the other hand, Twitter allows the use of client redirects. This poses the same danger as Facebook redirects, allowing attackers to manipulate the link preview of a tweet with content that is not necessarily related to the target website. Moreover, Twitter allows users to update a link preview using the *Card Validator*.[^3] In addition, Twitter makes it possible to change a user’s display name but does not allow changing the original username chosen during registration (which serves as an identifier).
### WhatsApp
WhatsApp allows messages to be published with redirect links and it displays link previews, but it does not allow updating an already published link preview. As opposed to other OSNs, WhatsApp is the only OSN that displays an indication that a message was deleted by its author.
WhatsApp is safe against most flavors of the [*chameleon* ]{}attack, except *clickbait*, where an attacker can trick others into clicking a malicious link that is displayed with the preview of a benign link.
### Instagram
Concerning redirect links, Instagram does not allow users to publish external links (see Table \[tab:OsnCompareTbl\]). Since posts are image-based, the attacker cannot change the published content via a redirect link. However, Instagram allows editing already published posts; the editing process covers both the text in the description section and the image itself. If the owner makes such a change to a post, no indication is shown to users.
### Reddit
Despite its popularity, Reddit is prone to the [*chameleon* ]{}attack: in this OSN, the attacker can edit, delete, or hide already published posts, while others will not be able to tell that the content has been modified.
### Flickr
As opposed to Facebook and WhatsApp, Flickr does not show link previews, but it allows users to update their posts, replace uploaded images, hide already published posts, and edit their account name. All of these actions can be performed without any indication to other users of the editing activity.
### LinkedIn
LinkedIn permits users to share redirect links and to update the link preview using *Post Inspector*.[^4] Users can also edit their posts; however, an edited post is marked as such.
Existing Weaknesses and Security Controls {#sec:weaknesses}
-----------------------------------------
Next, we summarize the OSN weaknesses related to the [*chameleon* ]{}attack, as well as controls deployed by the various OSNs to mitigate potential misuse. While the main focus of this article is the [*chameleon* ]{}attack facilitated by cached link previews, in this subsection we also discuss other types of the [*chameleon* ]{}attack successfully mitigated by major OSNs.
### Creating artificial timeline
Publishing posts retrospectively is the feature that is easiest to misuse. It helps an adversary create OSN accounts that look older and more reliable than they are. Luckily, no OSN other than Facebook provides such a feature. Although Facebook allows *editing a post’s publication date*, it mitigates possible misuse of this feature for creating artificial timelines by *presenting the original publication date* of the post.
### Changing content
Some OSNs provide their users with the ability to *edit previously published posts*. This feature facilitates all misuse cases detailed in Section \[sec:misuse\] without any additional resources required from the attacker. Twitter and WhatsApp do not allow editing of previously published posts. Facebook, Instagram, and LinkedIn mitigate potential misuse by *presenting editing indication in published posts*. Facebook even *presents the edit history* of a post. Unfortunately, in contrast to Instagram and LinkedIn, Facebook does not *present the edit indication in shared posts*. We urge Facebook to correct this minor yet important omission.
### Changing Display
The primary weakness of the major OSNs (Twitter, Facebook, and LinkedIn) which facilitates the [*chameleon* ]{}attack discussed in this paper is the combination of three features provided by the OSNs. First, *publishing redirect links* allows attackers to change the navigation target of the posted links without any indication of such a change. Second, OSNs *display a link preview* based on metadata provided by the website at the end of the chain of server redirects. This feature allows the attackers to control the way link previews are displayed. Finally, OSNs allow *updating link preview* following the changes in the redirect chain of the previously posted link. Such an update is performed without displaying an indication that the post was updated. Currently, there are no controls that mitigate the misuse of these features.
WhatsApp, Reddit, and LinkedIn display link previews of redirect links similar to Facebook and Twitter, but they do not provide a feature to update the link previews. On the one hand, the only applicable misuse case for the *chameleon* attack in these OSNs is *clickbait*. On the other hand, updating link previews is important for commercial brand management.
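To make the mechanics above concrete, the following self-contained Python sketch simulates the three features in combination. All URLs and pages are hypothetical, and a plain dictionary stands in for real HTTP 3xx redirects; the point is that a preview built from Open Graph metadata depends only on the *final* hop of the redirect chain, so repointing the last hop changes the preview while the posted URL (and any social capital attached to it) stays the same.

```python
from html.parser import HTMLParser

# Toy redirect table standing in for a chain of HTTP 3xx hops.
# In the attack, only the final entry changes; the posted short URL does not.
REDIRECTS = {
    "https://short.example/abc": "https://mirror.example/page",
    "https://mirror.example/page": "https://target.example/article",
}

# Hypothetical page content at the end of the chain.
PAGES = {
    "https://target.example/article": (
        '<html><head>'
        '<meta property="og:title" content="Cute kittens compilation">'
        '<meta property="og:image" content="https://target.example/kittens.jpg">'
        '</head><body>...</body></html>'
    ),
}

def resolve(url, max_hops=10):
    """Follow the redirect chain until a non-redirecting URL is reached."""
    for _ in range(max_hops):
        if url not in REDIRECTS:
            return url
        url = REDIRECTS[url]
    raise RuntimeError("redirect loop")

class OGParser(HTMLParser):
    """Collect Open Graph <meta property="og:..."> tags, as OSN crawlers do."""
    def __init__(self):
        super().__init__()
        self.og = {}
    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "meta" and d.get("property", "").startswith("og:"):
            self.og[d["property"]] = d.get("content", "")

def build_preview(posted_url):
    """Build a link preview from the metadata of the chain's final hop."""
    final = resolve(posted_url)
    parser = OGParser()
    parser.feed(PAGES[final])
    return {"final_url": final, **parser.og}

preview = build_preview("https://short.example/abc")
print(preview["og:title"])  # the preview reflects only the end of the chain
```

Editing only the `REDIRECTS` table (which the attacker controls on his own server) and re-triggering a preview update is all that the display change requires; the OSN post itself is never edited.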
### Switching Content
Facebook, Instagram, Reddit, and Flickr allow users to temporarily *hide their posts*. This feature allows a user to prepare multiple sets of posts where each set exhibits a different agenda. Later, the adversary may display the appropriate set of posts and hide the rest. The major downsides of this technique as far as the attacker is concerned are: (1) The need to maintain the sets of posts ahead of time similar to maintaining a set of regular profiles. (2) Social capital acquired by one set of posts cannot be reused by the other sets, except friends and followers.
Overall, all reviewed OSNs are well protected against timeline manipulation. The major OSNs, except Reddit and Flickr, are aware of the dangers of post-editing and provide appropriate controls to avoid misuse. Due to the real-time nature of messaging in Twitter and WhatsApp, these OSNs can disable the option of editing posts.
The major OSNs, Facebook, Twitter, and LinkedIn, care about the business of their clients and thus explicitly provide features to update link previews. The *chameleon* attack exposed in this paper misuses this feature to manipulate the display of posts and profiles. Provided that Reddit and Flickr allow editing the post content, only WhatsApp and Instagram are not susceptible to *chameleon* attacks.
Instagram stores the posted images rather than links to external resources, an approach that may not scale and may not be suitable for all premium customers. WhatsApp stores the data locally and does not allow retrieving past messages if the receiver was not a member of the group when the message was posted. WhatsApp’s approach is not suitable for bloggers, commercial pages, etc., that would like to share their portfolio with every newcomer.
Additional Required Security Controls {#sec:mitigation}
-------------------------------------
The best way to mitigate the *chameleon* attack is to disallow redirect links and to disable link preview updates in all OSNs. Nevertheless, we acknowledge that it is not possible to stop using external redirect links and short URLs. These features are very popular on social networks and important in brand management.
First and foremost, an appropriate change indication should be displayed whenever the link preview cache is updated. Since on Facebook the cache is updated by the author of the original post, it can naturally be displayed in the post’s edit history. Link preview cache updates should be treated similarly to the editing of posts.
However, edit indications on posts will not help unless users are trained to pay attention to these indications. Facebook, and other OSNs, should make it crystal clear which version of a post a user liked or commented on. To minimize the impact of the *chameleon* attack, likes, shares, and comments should, by default, be associated with a specific version of the post within the edit history. It is also important to let users know about subsequent modifications of the posts they liked, commented on, or shared. The users will be able, for example, to delete their comments or to confirm them, moving the comment back from the history to the main view.
In Twitter and LinkedIn, anyone can update the link preview. The motivation for this feature is two-fold: (1) the business owner should be able to control the look and feel of his business card within the OSN regardless of the specific user who posted it; (2) link previews should always be up to date. It will be challenging to design appropriate mitigation for the *chameleon* attack without partially giving up these objectives.
We suggest notifying a Twitter (or LinkedIn) user who posted a link to an external site whenever the link preview is updated. The user will be able to delete the post or accept the link preview update at his sole discretion. By default, the link preview should remain unchanged. This approach may increase the number of notifications the users receive, but with appropriate filters, it will not be a burden on the users. However, it may require maintaining copies of link previews for all re-posted links, which in turn significantly increases storage requirements.
Finally, OSNs should update their anomaly detection algorithms to take into account changes made to the posts’ content and link previews, as well as the reputation of the servers along the redirection path of the posted links.
It may take time to implement the measures described. Meanwhile, users should be aware that their *likes and comments are precious assets* that may be used against them if given out blindly.
Next, we suggest a few guidelines that will help average OSN users detect *chameleon* posts and profiles. Given a suspected profile, check the textual content of its posts. *Chameleon* profiles should publish general textual descriptions to easily switch agenda. The absence of opinionated textual descriptions on the topic of your mutual interest may indicate a potential *chameleon*. A large number of ambiguous posts that can be interpreted in the context of the cover image, or in the context of other posts in the timeline, should increase the suspicion. For example, “This is the best goalkeeper in the world!!!” without a name mentioned is ambiguous. Using public services, such as the one Facebook provides[^5] for watching a given post’s history, can also be useful for detecting a *chameleon* post.
Many redirect links within the profile timeline are also an indication of *chameleon* capabilities. We do not encourage users to click links in the posts of suspicious profiles to check whether they are redirected! In Facebook and LinkedIn, a simple inspection of the URL can reveal whether a redirection is involved. Right-click the post and copy-paste the link address into any URL decoder. If the domain name within the copied URL matches the domain name within the link preview and you trust this domain, you are safe. Today, most links on Facebook are redirected through Facebook’s referral service. The URL you should look at follows the “u” parameter within the query string of l.facebook.com/l.php. If the domain name appearing after “u=” differs from the domain name within the link preview, the post’s author uses redirection services. Unfortunately, links posted on Twitter today are shortened, and the second hop of the redirection cannot be inspected by just copying the URL.
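The manual check described above can also be automated with the standard library. The sketch below is illustrative only: the copied link is made up, and real l.facebook.com referral links carry additional parameters; the logic simply extracts the “u” query parameter and compares its domain against the domain shown in the preview.

```python
from urllib.parse import urlparse, parse_qs

def referral_target(link_url):
    """Return the destination hidden behind a Facebook-style referral link,
    or the URL itself if no 'u' query parameter is present."""
    params = parse_qs(urlparse(link_url).query)  # parse_qs percent-decodes values
    if "u" in params:
        return params["u"][0]
    return link_url

def domains_match(link_url, preview_domain):
    """The heuristic from the text: the real target's domain should match
    the domain shown in the link preview."""
    target = referral_target(link_url)
    return urlparse(target).netloc == preview_domain

# Hypothetical copied link: the preview claims news.example, but the 'u'
# parameter reveals a different real destination.
copied = ("https://l.facebook.com/l.php?"
          "u=https%3A%2F%2Ftracker.example%2Fbait&h=XYZ")
print(referral_target(copied))                # https://tracker.example/bait
print(domains_match(copied, "news.example"))  # False -> suspicious
```

A mismatch does not prove malice (legitimate short URLs also redirect), but it is exactly the signal that warrants a closer look.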
Group Infiltration Experiment {#sec:group_infiltration_experiments}
=============================
In this section, we present an experiment conducted on Facebook to assess the reaction of Facebook group moderators to *chameleon* pages. In this experiment, we follow the execution flow of misuse case number 4, *evading censorship*, in Section \[sec:misuse\].
Experimental Setup {#sec:experimentalSetup}
------------------
In this experiment, four pairs of rival soccer and basketball teams were selected: Arsenal vs. Chelsea, Manchester United vs. Manchester City, Lakers vs. Clippers, and Knicks vs. Nets. We used sixteen Facebook pages: one regular and one *chameleon* page for each sports team. Regular pages post YouTube videos that support the respective sports team. Their names are explicitly related to the team they support, e.g., “Arsenal - The Best Team in the World.” *Chameleon* pages post redirect links that lead to videos that support either the team or its rivals. Their names can be interpreted based on the context, e.g., “The Best Team in the World.” The icons and cover images of all pages reflect the team they (currently) support.
Next, we selected twelve Facebook groups that support each one of the eight teams (96 Facebook groups in total) according to the following three criteria: (a) the group allows pages to join it, (b) the group is sufficiently large (at least 50 members), and (c) there was at least some activity within the group in the last month.
We requested to join each group four times: (1) as a regular fan page, (2) as a regular rival page, (3) as a *chameleon* page while supporting the rivals, and (4) as the same *chameleon* page, requesting to join the group again while now pretending to be a fan page. We sent the requests to each group in a random order of the pages. We used a balanced experiment design to test all permutations of pages where the respective *chameleon* page first requests to join the group as a rival page and afterward as a fan page. We allowed at least five days between consecutive requests to each group.
A page can be *Approved* by the group admin or moderator (hereafter, admin). In this case, the page becomes a member of the group. While the admin has not yet decided, the request is *Pending*. The admin can *Decline* the request; in this case, the page is not a member of the group, but it is possible to request to join the group again. None of our pages were *Blocked* by the group admins; therefore, we ignore this status in the following results. Whenever a *chameleon* page pretending to be a rival page is *Approved* by an admin, there is no point in trying to join the same group using the same page again. We consider this status *Auto Approved*.
The first phase of the experiment started on July 20, 2019, and included only the Facebook groups supporting Chelsea and Arsenal. The relevant *chameleon* pages changed the way they are displayed on Aug. 16. The second phase started on Sept. 5, 2019, and included the rest of the Facebook groups. The relevant *chameleon* pages changed the way they are displayed on Sept. 23. The following results summarize both phases.
Results {#sec:results}
-------
During the experiment, 14 Facebook groups prevented (any) pages from joining the group. We speculate that the admins were not aware of the option of accepting pages as group members and updated the group settings after they saw our first requests. These 14 groups were *Disqualified* from the current experiment. Overall, there were 206 *Approved* requests, 87 *Declined*, and 35 *Pending*. Figure \[fig:chamVsPage\] presents the distribution of request statuses for the different types of pages.
![Request results by type of page[]{data-label="fig:chamVsPage"}](chamVsPage.JPG){width="3.5in"}
Some admins blindly approved requests. For example, 28 groups approved all requests. Other group admins meticulously checked the membership requests. Thirteen groups *Declined* or ignored the rival pages and *Approved* pages that exhibited the correct agenda. Overall, **the reaction of admins to *chameleon* pages is similar to their reaction to regular pages with the same agenda**. To check this hypothesis, we used a one-way ANOVA test to determine whether there is a significant difference between the four types of group membership requests. The test was conducted on the request status values at the end of the experiment (*Declined*, *Pending*, *Approved*). Results showed that there is no statistically significant difference between the approval of *chameleon* fan pages and regular fan pages (p-value = 0.33). There is also no statistically significant difference between the approval of *chameleon* rival pages and regular rival pages (p-value = 0.992). However, the difference between the approval of either regular or *chameleon* rival pages and the approval of both types of fan pages is statistically significant, with p-values ranging from 0.00 to 0.003. These results indicate that the reaction of admins to *chameleon* pages in our experiment is similar to their reaction to regular (non-*chameleon*) pages with a similar agenda. We conclude that **admins do not distinguish between regular and *chameleon* pages.** This conclusion is stressed by the observation that only two groups out of 82 *Declined* *chameleon* fan pages while *Approving* regular fan pages. Seven groups approved *chameleon* fan pages and rejected regular fan pages.
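The one-way ANOVA used here is a standard test; as a minimal illustration of the computation, the pure-Python sketch below derives the F statistic from between-group and within-group sums of squares. The numeric status encoding (Declined=0, Pending=1, Approved=2) and the sample values are hypothetical, not the experiment's data.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of numeric samples."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (variation of group means around grand mean)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (variation inside each group)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ssb / df_between) / (ssw / df_within)

# Hypothetical status samples for two request types,
# encoded as Declined=0, Pending=1, Approved=2:
fan_regular   = [2, 2, 2, 1, 2]
fan_chameleon = [2, 2, 1, 2, 2]
print(one_way_anova_F([fan_regular, fan_chameleon]))
```

In practice one would use a statistics package (which also supplies the p-value from the F distribution); the sketch only shows where the statistic comes from.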
The above results also indicate that, in general, **admins are selective toward the pages that they censor.** Next, we measure the selectivity of the group admins using a Likert scale [@joshi2015likert]. Relying on the conclusion that admins do not distinguish between regular and *chameleon* pages, we treat them alike to measure admins’ selectivity. Each time a group admin *Declined* a rival page or *Approved* a fan page, he/she received one point. Each time a fan page was *Declined* or a rival page was *Approved*, the selectivity was reduced by one point. A *Pending* request status added zero toward the selectivity score.
For each group, we summed up the points to calculate its selectivity score. When the selectivity score is greater than three, we consider the group as *selective*. If the selectivity score is less than or equal to three, we consider the group as *not selective*.
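The scoring scheme can be written down directly. The sketch below is a minimal interpretation of the rules as stated; the per-group decision list is hypothetical.

```python
def selectivity_score(decisions):
    """Score a group's admin from (page_kind, status) pairs, following the
    scheme in the text: +1 for Declining a rival page or Approving a fan
    page, -1 for the opposite decision, 0 for Pending."""
    score = 0
    for kind, status in decisions:
        if status == "Pending":
            continue
        correct = (kind == "rival" and status == "Declined") or \
                  (kind == "fan" and status == "Approved")
        score += 1 if correct else -1
    return score

def is_selective(decisions, threshold=3):
    """A group is considered selective when its score exceeds the threshold."""
    return selectivity_score(decisions) > threshold

# Hypothetical group that approved both fan pages and declined both rival
# pages (four requests per group, as in the experiment):
group = [("fan", "Approved"), ("rival", "Declined"),
         ("fan", "Approved"), ("rival", "Declined")]
print(selectivity_score(group))  # 4 -> selective
```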
To explain the differences in groups’ selectivity, we first tested whether there is a difference between the number of members in selective and non-selective groups using t-tests. We found that smaller groups are more selective than larger ones, with p-value = 0.00029. This result is quite intuitive: smaller groups tend to check the identity of the users who ask to join, while larger groups are less likely to examine the identity of the users who want to join. Figure \[fig:groupsScoreAvg\] presents the groups’ activity and size vs. their selectivity score. There is a weak negative correlation between a group’s selectivity score and its number of members (Pearson correlation = -0.187, p-value = 0.093).
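The reported correlation is an ordinary Pearson coefficient; a plain-Python sketch of the computation is below. The member counts and selectivity scores are made up to mimic the reported weak negative trend, not taken from the experiment.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up data points with a mild negative trend between group size
# and selectivity score:
members     = [60, 120, 300, 800, 2000]
selectivity = [4, 3, 1, 0, -2]
print(pearson_r(members, selectivity))  # negative -> larger groups less selective
```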
Related Work
============
Content Spoofing and Spoofing Identification
--------------------------------------------
Content spoofing is one of the most prevalent vulnerabilities in web applications [@grossman2017whitehat]. It is also known as content injection or virtual defacement. This attack deceives users by presenting particular content on a website as legitimate, rather than as coming from an external source [@lungu2010optimizing; @awang2013detecting; @karandel2016security]. Using this attack, an attacker can upload new, fake, or modified content as if it were legitimate. This malicious behavior can lead to malware exposure, financial fraud, or privacy violations, and can misrepresent an organization or individual [@hayati2009spammer; @benea2012anti]. The content spoofing attack leverages a code injection vulnerability in which user input is not sanitized correctly. Using this vulnerability, an attacker can supply new content to the web page, usually via a GET or POST parameter.
There are two ways to conduct a content spoofing attack: HTML injection, in which the attacker alters the content of a web page for malicious purposes by using HTML tags, and text injection, which manipulates the text data of a parameter [@hussain2019content]. Jitpukdebodin et al. [@jitpukdebodin2014novel] explored a vulnerability in WLAN communication. The proposed method crafts spoofed web content and sends it to a user before the genuine web content from a website is transmitted to the user. Hussain et al. [@hussain2019content] presented a new form of compounded SQL injection attack technique that uses SQL injection attack vectors to perform content spoofing attacks on a web application.
A few techniques have been proposed for the detection of content spoofing attacks: Benea et al. [@benea2012anti] suggested preventing content spoofing by detecting phishing attacks using fingerprint similarity. Niemela and Kesti [@niemela2018detecting] detected unauthorized changes to a website using authorized content policy sets, obtained from the web operators, for each of a multiplicity of websites.
![Average groups activity by selectivity score[]{data-label="fig:groupsScoreAvg"}](groupsScoreAvg.JPG){width="3.5in"}
Website Defacement
------------------
Website defacement is an attack that changes the visual appearance of websites [@kanti2011implementing; @borgolte2015meerkat; @romagna2017hacktivism]. Using this attack, an attacker can cause serious consequences for website owners, including interrupting website operations and damaging the owner’s reputation. More interestingly, attackers may boost their own reputation by promoting a certain ideological, religious, or political orientation [@romagna2017hacktivism; @maggi2018investigating]. In addition, web defacement is a significant threat to businesses, since it can detrimentally affect the credibility and reputation of the organization [@borgolte2015meerkat; @medvet2007detection]. Most website defacement occurs when attackers manage to find a vulnerability in the web application and then inject a remote scripting file [@kanti2011implementing].
Several lines of research deal with the monitoring and detection of website defacement, with solutions that include signature-based [@gurjwar2013approach; @shani2010system] and anomaly-based detection [@borgolte2015meerkat; @davanzo2011anomaly; @hoang2018website]. The simplest method to detect website defacement is a checksum comparison. A checksum of the website’s content is calculated using hashing algorithms. The website is then monitored: a new checksum is calculated and compared with the previous one [@kanti2011implementing; @gurjwar2013approach; @shani2010system]. This method is effective for static web pages but not for dynamic pages.
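The checksum-comparison baseline is a few lines in any language; the Python sketch below uses the standard library's SHA-256. The page content is invented, and, as noted above, any legitimate dynamic change would also trip the check.

```python
import hashlib

def page_checksum(content: bytes) -> str:
    """Hash of the page content; any byte-level change flips the digest."""
    return hashlib.sha256(content).hexdigest()

def defaced(baseline_digest: str, current_content: bytes) -> bool:
    """Simplest defacement check: compare the stored checksum of the
    known-good page against a freshly computed one."""
    return page_checksum(current_content) != baseline_digest

# Take a baseline of the known-good page, then monitor:
baseline = page_checksum(b"<html><body>Welcome to ACME</body></html>")
print(defaced(baseline, b"<html><body>Welcome to ACME</body></html>"))  # False
print(defaced(baseline, b"<html><body>Hacked by Eve</body></html>"))    # True
```

The anomaly-based and machine-learning approaches cited above exist precisely because this exact-match test cannot tolerate legitimate dynamic content.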
Several techniques have been proposed for website defacement detection based on more complex algorithms. Kim et al. [@kim2006advanced] used 2-grams to build a profile of normal web pages for monitoring and detecting page defacement. Medvet et al. [@medvet2007detection] detected website defacement automatically based on genetic programming. Their method builds an algorithm based on a sequence of readings of the remote page to be monitored and on a sample set of attacks. Several techniques use machine-learning-based methods for website defacement detection [@borgolte2015meerkat; @davanzo2011anomaly; @hoang2018website; @bartoli2006automatic]. These studies build a profile of the monitored page automatically, based on machine learning techniques. Borgolte et al. [@borgolte2015meerkat] proposed the ’MEERKAT’ detection system, which requires no prior knowledge about the website content or its structure, only its URL. ’MEERKAT’ automatically learns high-level features from screenshots (image data) of defaced websites using stacked autoencoders and deep neural networks. Its drawback is that it requires extensive computational resources for image processing and recognition. Recently, advanced research [@bergadano2019defacement] proposed an application of adversarial learning to defacement detection, making the learning process unpredictable, via a secret key, so that the adversary is unable to replicate it and predict the classifier’s behavior.
Cloaking Attack and Identification
----------------------------------
Cloaking, also known as ’bait and switch’, is a common technique used to hide the true nature of a website by delivering different semantic content to different visitors [@wang2011cloak; @invernizzi2016cloak]. Wang et al. [@wang2011cloak] presented four cloaking types: repeat cloaking, which delivers different web content based on the visitor’s visit count; user-agent cloaking, which delivers specific web content based on the visitor’s user-agent string; redirection cloaking, which moves users to another website using JavaScript; and IP cloaking, which delivers specific web content based on the visitor’s IP address. Researchers have responded to cloaking techniques with a variety of anti-cloaking techniques [@invernizzi2016cloak]. Basic techniques relied on a cross-view comparison [@wang2006detecting; @wang2007spam]: a page is classified as cloaking if the redirect chain deviates across fetches. Other approaches mainly target compromised webservers and identify clusters of URLs with trending keywords that are irrelevant to the other content hosted on the page [@john2011deseo]. Wang et al. [@wang2011cloak] identified cloaking in near real-time by examining the dynamics of cloaking over time. Invernizzi et al. [@invernizzi2016cloak] developed an anti-cloaking system that detects split-view content returned to two or more distinct browsing profiles by building a classifier that detects deviations in the content.
Manipulating Human Behavior
---------------------------
These days, cyber-attacks manipulate human weaknesses more than ever [@blunden2010manufactured]. Our susceptibility to deception, an essential human vulnerability, is a significant cause of security breaches. Attackers can exploit the human vulnerability by sending a specially crafted malicious email, tricking humans into clicking on malicious links, and thus downloading malware, (a.k.a. spear-phishing) [@goel2017got].
One of the main attack tools that exploit the human factor is social engineering, defined as the manipulation of the human aspect of technology using deception [@uebelacker2014social]. Social engineering plays on emotions such as fear, curiosity, excitement, and empathy, and exploits cognitive biases [@abraham2010overview]. Basic ’good’ human-nature characteristics make people vulnerable to the techniques used by social engineers, as they activate various psychological vulnerabilities [@bezuidenhout2010social; @conteh2016cybersecurity; @conteh2016rise; @luo2011social]. The exploitation of the human factor is used extensively in advanced persistent threats (APTs). An APT attack involves sophisticated and well-resourced adversaries targeting specific information in high-profile companies and governments [@chen2014study]. In APT attacks, social engineering techniques are aimed at manipulating humans into delivering confidential information about a targeted organization or getting an employee to take a particular action [@paradise2017creation; @gulati2003threat; @bere2015advanced].
With regard to *chameleons*, they were previously exhibited by files in content-sniffing XSS attacks [@barth2009secure], but not on OSNs. Barth et al. discussed *chameleon* documents, which are files conforming to multiple file formats (e.g., PostScript+HTML). The attack exploited the fact that browsers can parse such documents as HTML and execute any hidden script within. In contrast to *chameleon* documents, which are parsed differently by different tools without an adversarial trigger, our *chameleon* posts are controlled by the attacker and are presented differently to the same users at different times.
Recently, Stivala and Pellegrino [@Stivala2020deceptive] independently conducted a study of link previews. In their research, they analyzed the elements of link previews during the rendering process in 20 OSNs and demonstrated a misuse case by crafting benign-looking link previews that lead to malicious web pages.
Conclusions and Future Work
===========================
This article discloses a weakness in an important feature provided by three major OSNs, Facebook, Twitter, and LinkedIn: namely, *updating link previews without visible notifications while retaining social capital* (e.g., likes, comments, retweets, etc.). This weakness facilitates a new *chameleon* attack, where the link preview update can be misused to damage the good name of users, avoid censorship, and perform additional OSN scams detailed in Section \[sec:misuse\]. Of the seven reviewed OSNs, only Instagram and WhatsApp are resilient against most flavors of the *chameleon* attack.
We acknowledge the importance of the link preview update feature provided by the OSNs to support businesses that disseminate information through social networks, and we suggest several measures that should be applied by the OSNs to reduce the impact of *chameleon* attacks. The most important measure is binding social capital to the version of a post to which it was explicitly provided. We also instruct OSN users on how to identify possible *chameleons*.
We experimentally show that even the most meticulous Facebook group owners fail to identify *chameleon* pages trying to infiltrate their groups. Thus, it is extremely important to raise the awareness of OSN users of this new kind of trickery.
We encourage researchers and practitioners to identify potential *chameleon* profiles throughout the OSNs in the near future; to develop and incorporate redirect reputation mechanisms into machine learning methods for identifying OSN misuse; and to include the *chameleon* attack in security awareness programs alongside phishing and related scams.
Ethical and Legal Considerations {#sec:ethics}
================================
Our goal is to harden OSNs against misuse while respecting the needs and privacy of OSN users. We follow a strict Responsible Full Disclosure Policy, as well as the guidelines recommended by Ben-Gurion University’s Human Subject Research Committee.
In particular, we did not access or store any information about the profiles we contacted during the experiment. We only recorded the status of the requests to join their Facebook groups. The *chameleon* pages used during the experiment were deleted at the end of the study. Owners of the contacted Facebook groups could decide whether or not to accept the requests from our pages. Although we did not inform them about the study before the requests, they were provided with post-experiment written feedback regarding their participation in the trial. We contacted the relevant OSNs at least one month before the publication of the trial results and the disclosure of the related weaknesses. No rules or agreements were violated in the process of this study. In particular, we used Facebook pages in the showcase and in the experiment, rather than profiles, to adhere to the Facebook End-User Licence Agreement.
Availability
============
*Chameleon* pages, posts, and tweets are publicly available. Links can be found in the GitHub repository.[^6] Source code is not provided to reduce misuse. CWE and official responses of the major OSNs are also provided on the mentioned GitHub page.
[^1]: A demo *chameleon* post is available at <https://www.facebook.com/permalink.php?story_fbid=101149887975595&id=101089594648291&__tn__=-R>
[^2]: Here and in the rest of this section, numbers in parentheses indicate the attack phases in the order they are performed in each misuse case.
[^3]: <https://cards-dev.twitter.com/validator>
[^4]: <https://www.linkedin.com/post-inspector>
[^5]: <https://developers.facebook.com/tools/debug/sharing/batch/>
[^6]: <https://github.com/aviade5/Chameleon-Attack/>
Comment by Loreanadruid
Arguably Inferior Socket for Paladin PvE Gems for the most part, but Superior for PvP. A side-grade to t6, but an upgrade for almost anything pre-Sunwell.
Comment by mikititan
Anyone know if this schematic will be buyable from the trainer, or will it drop (raid/heroic)? -thanks
Comment by gennym
This item, and all the other engineering helm upgrades, are sunwell trash drops.
So since people need to start farming the instance to begin with, and there will be people in the raids who need it, you probably won't see this on the AH for quite some time, depending on the server.
Comment by mbg98
I wouldn't call this a side-grade at all, considering that Sunwell requires paladins to spam Holy Light a lot more than Flash of Light, making their mana regen come mostly from Illumination/holy crit (great for pallies with the 2-piece t6 crit bonus to HL) and from being in a group with a Shadow Priest. The upgrades here are where they matter most - a 20 crit increase over t6 (more than 1% - that's fantastic in a single piece!) and plus 9 healing. Armor? lol - what are you going to do, tank in holy gear in Sunwell? You'll need the stam for the boss fights considering the dots that Kalecgos does, for starters. Int doesn't matter so much since your mana regen and your +heal are really the major factors in post-BT/MH content.
Comment by natto
My server has pug groups for Sunwell trash runs. BoP/BoE recipes usually drop (from 1 to 5 per run) along with lots of epic stones. You do not have to be BT/Hyjal geared; Kara/SSC/TK gear is fine for a trash run.
It usually goes for 4k-5k on my server. It is a good source of money, and you can call ppl when your guild does not have a scheduled raid. However, you will need 5-6 mages for a trash run.
Comment by Altoid
Dropped for me tonight on Garithos-US on the last pull before the first stairs in the instance.
My guild's been running Sunwell since 2.4 release and the patterns are extremely rare. We've only had 4 patterns drop, ever, and two of them were the Sunfire Robe (both dropped the day before this Schematic).
I was seriously beginning to believe I'd never get my hands on this because of its extreme rarity. I'm reasonably certain it's the only Sunwell Schematic to have dropped on my server as of yet. I've been passing on T6 helms for months in hopes of getting this and was going to give in and pick one up this week, then this baby swooped in!
Comment by alexiel
Can only be learned by Paladins.
Comment by DELMistrzu
Can I learn this Schematic as a Rogue?
Comment by Entilzha2161
You can only learn this on a pally, and it's supposed to only drop for pallies, but it dropped for my DK today (maybe since there were no DKs when Sunwell was released?). My engineer is a druid and cannot use it, but I hope some Paladin buys it as the sell price looks pretty high.
Javier Hernández Carrera
Javier "Javi" Hernández Carrera (born 2 May 1998) is a Spanish footballer who plays for Real Madrid Castilla as either a central defender or a left back.
Club career
Born in Jerez de la Frontera, Cádiz, Andalusia, Hernández joined Real Madrid's youth setup in 2013 from Sevilla FC. On 17 July 2017, after completing his youth development, he was loaned to Segunda División B side CD El Ejido for one year.
Hernández made his senior debut on 27 August 2017, starting and scoring his team's first goal in a 3–3 home draw against FC Cartagena. He finished the campaign as an undisputed starter, contributing two goals in 33 matches.
On 13 July 2018, Hernández was loaned to Real Oviedo Vetusta, also in the third division, until the end of the season. He made his first-team debut on 11 September, starting in a 0–1 away loss against RCD Mallorca in that season's Copa del Rey.
Hernández scored his first professional goal on 7 January 2019, netting the opener in a 3–2 away win against CD Numancia for the Segunda División championship.
References
External links
Real Madrid profile
Category:1998 births
Category:Living people
Category:Sportspeople from Jerez de la Frontera
Category:Spanish footballers
Category:Andalusian footballers
Category:Association football defenders
Category:Segunda División players
Category:Segunda División B players
Category:Real Madrid Castilla footballers
Category:Real Oviedo Vetusta players
Category:Real Oviedo players
Taraboura
Taraboura (Greek: Ταραμπούρα) is a neighbourhood in the city of Patras. It is named after an Albanian who lived in the area and had his house there. Until 1990, it had a toll gate for the entry and exit of carriages and vehicles in Patras. Residential housing arrived in 1980.
Taraboura features an indoor arena where Olympiada Patras plays. It is located at 24 Tisonas Street, with the postcode 26623. Its capacity is 2,500 people.
References
''The first version of this article was translated from, and is based on, the corresponding article at the Greek Wikipedia (el:Main Page)''
Category:Neighborhoods in Patras
Working Women, Special Provision and the Debate on Equality
There has been considerable coverage in the media recently about the possibility of offering women in employment paid leave from work during their menstrual period. This has generated a broad range of responses relating to long-standing discussions about ‘equality’ and ‘difference’: is women’s equality best achieved by treating them the same as men or by making provisions that recognise their differences in terms of physiological constitution and biological functions?
If the UK introduces such an initiative, it would not be the first country in the contemporary world to do so. Many countries in Asia already make the provision and Russia debated introducing a law in 2013. The policy also has a significant historical precedent. A whole chapter of my book Women Workers in the Soviet Interwar Economy: From ‘Protection’ to ‘Equality’ (Macmillan, 1999), based on extensive research conducted for my PhD, is devoted to ‘Provision for “Menstrual Leave”’.
In the 1920s, scientific researchers and labour hygiene specialists in the Soviet Union conducted extensive investigations into the impact of menstruation on women’s capacity to work in manual and industrial jobs requiring a significant degree of physical labour. Their recommendations led to two decrees being issued that targeted specific categories of women workers:
Decree ‘On the release from work during menstruation of machinists and iron press workers working on cutting machines without mechanised gears in the garment industry’, 11 January 1922
Decree ‘On the working conditions of women tractor and lorry drivers’, 9 May 1931
These decrees arose from research that suggested, amongst other things, that inadequate seating at machines and on tractors resulted in congestion and tension in the abdomen that was exacerbated during menstruation. In practice, the decrees did not provide for regular absence from work. Women seeking to benefit from the provision had to provide a doctor’s note, similar to the usual requirements for sick leave.
The official research into the impact of menstruation on women’s capacity to work and the application of the decrees in practice raised a number of issues on both sides of the argument. I offer only a summary of the contemporary research findings and observer commentary here:
For the provision:
• employers have a responsibility to protect the health of their workers and unhealthy, poor and inadequate working environments can have a detrimental impact on women’s reproductive health
• women’s labour productivity and output would rise as a result
• it is essential to protect the professionalism of certain categories of workers: the debates here centred on performance artists and female theatrical employees engaged in highly physical and intensely emotional work
• heavy physical labour and strenuous exercise can lead to disruptions of the menstrual cycle
• women’s physical and intellectual capacities are reduced during menstruation; women lose muscular strength and powers of concentration
• women’s biological constitution and reproductive functions require specific recognition in law
Against the provision:
• employers are less likely to appoint women if they are guaranteed paid time off work during menstruation
• (often from male workers, who viewed the employment of women as competition) women should not be employed in jobs for which they lack the physical strength and mental capacity
• if necessary, women could be transferred to different tasks involving easier work during menstruation
• the provision would be open to uneven application and abuse
• women cannot expect to be considered equal with men if they are given special treatment in the law
It is worth noting also that the various research projects often revealed that the vast majority of women reported no regular problems or abnormalities with menstruation, and that men commonly reported higher levels of sickness than their female colleagues. Many of the problems experienced by women in the workplace could be mitigated by the introduction of improvements to their physical working conditions (not sitting down or standing up in the same position for long periods of time) or by the simple introduction of very short breaks that would allow women to walk around and get some exercise.
Debates in the UK, on the TV and in the press, are unlikely to reach a consensus on this issue. What do you think?
Vale de Lua – Moon Surface on Earth
The valley terrain is covered with rock formations and intricate labyrinths created by nature. In ancient times there were deposits of quartz here, but over the years the rushing San Miguel river has carved many passages through them.

Now quartz rocks hang over the water, and holes of different shapes recall those seen on the Moon. Due to the different degrees of light refraction, the water appears dark blue in some places and clear and transparent in others. The dark brown, almost black, sometimes bluish-gray rocks vary in height and shape.

Such a marvel shows that the forces of nature are capable of creating the most unusual landscapes. The unusual relief of Vale de Lua in Brazil also owes its appearance to sand: gradually, layer after layer, it was carried there by the river's current, settled on the coastal cliffs and formed numerous mounds of unusual shape.

If you look a little closer, you will notice that in some places the quartz rock has thinned to such an extent that it is no thicker than a sheet of paper. Small lakes and waterfalls complete the magnificent landscape.
Kevin Mansker
Kevin Mansker (born ) is an American male track cyclist. He competed in the sprint event at the 2012 UCI Track Cycling World Championships.
References
External links
Profile at cyclingarchives.com
Category:1989 births
Category:Living people
Category:American track cyclists
Category:American male cyclists
Category:Place of birth missing (living people)
---
abstract: |
This paper is dedicated to the study of the interaction between dynamical systems and percolation models, with views towards the study of viral infections whose viruses mutate with time. Recall that $r$-bootstrap percolation describes a deterministic process where vertices of a graph are infected once $r$ neighbors of it are infected. We generalize this by introducing [*$F(t)$-bootstrap percolation*]{}, a time-dependent process where the number of neighbouring vertices which need to be infected for a disease to be transmitted is determined by a percolation function $F(t)$ at each time $t$. After studying some of the basic properties of the model, we consider smallest percolating sets and construct a polynomial-time algorithm to find one smallest minimal percolating set on finite trees for certain $F(t)$-bootstrap percolation models.\
author:
- 'Yuyuan Luo$^{a}$ and Laura P. Schaposnik$^{b,c}$'
bibliography:
- 'Schaposnik\_Percolation.bib'
title: Minimal percolating sets for mutating infectious diseases
---
Introduction
============
The study of infectious diseases through mathematical models dates back to 1766, when Bernoulli developed a model to examine the mortality due to smallpox in England [@modeling]. Moreover, the germ theory describing the spreading of infectious diseases was first established in 1840 by Henle and was further developed in the late 19th and early 20th centuries. This laid the groundwork for mathematical models, as it explained the way that infectious diseases spread, which led to the rise of compartmental models. These models divide populations into compartments, where individuals in each compartment share the same characteristics; Ross established one such model in 1911 in [@ross] to study malaria, and later on, basic compartmental models to study infectious diseases were established in a sequence of three papers by Kermack and McKendrick [@kermack1927contribution] (see also [@epidemiology] and references therein).
In these notes we are interested in the interaction between dynamical systems and percolation models, with views towards the study of infections which mutate with time. The use of stochastic models to study infectious diseases dates back to 1978 in work of J.A.J. Metz [@epidemiology]. There are many ways to mathematically model infections, including statistical-based models such as regression models (e.g. [@imai2015time]), cumulative sum charts (e.g. [@chowell2018spatial]), hidden Markov models (e.g. [@watkins2009disease]), and spatial models (e.g. [@chowell2018spatial]), as well as mechanistic state-space models such as continuum models with differential equations (e.g. [@greenhalgh2015disease]), stochastic models (e.g. [@pipatsart2017stochastic]), complex network models (e.g. [@ahmad2018analyzing]), and agent-based simulations (e.g. [@hunter2019correction] – see also [@modeling] and references therein).
Difficulties when modeling infections include incorporating the dynamics of behavior in models, as it may be difficult to assess the extent to which behaviors should be modeled explicitly, to quantify changes in reporting behavior, and to identify the role of movement and travel [@challenges]. When using data from multiple sources, difficulties may arise when determining how the evidence should be weighted and when handling dependence between datasets [@challenges2].
In what follows we shall introduce a novel type of dynamical percolation which we call [*$F(t)$-bootstrap percolation*]{}, obtained as a generalization of classical bootstrap percolation. This approach allows one to model mutating infections, and thus we dedicate this paper to the study of some of its main features. After recalling classical $r$-bootstrap percolation in Section \[intro\], we introduce a percolating function $F(t)$ through which we add a dynamical aspect to the percolation model, as described in Definition \[fperco\].
[**Definition.**]{} Given a function $F(t): \mathbb{N}\rightarrow \mathbb{N}$, we define an [*$F(t)$-bootstrap percolation model*]{} on a graph $G$ with vertices $V$ and initially infected set $A_0$ as the process whose infected set at time $t+1$ is given by $$\begin{aligned}
A_{t+1} = A_{t} \cup \{v \in V : |N(v) \cap A_t| \geq F(t)\}, \end{aligned}$$ where $N(v)$ denotes the set of neighbouring vertices to $v$, and we let $A_\infty$ be the final set of infected vertices once the percolation process has finished.
In Section \[time\] we study some basic properties of this model, describe certain (recurrent) functions which ensure the model percolates, and study the critical probability $p_c$. Since our motivation comes partially from the study of effective vaccination programs which would allow one to contain an epidemic, we are interested both in the percolating time of the model and in minimal percolating sets. We study the former in Section \[time2\], where by considering equivalent functions to $F(t)$, we obtain bounds on the percolating time in Proposition \[propo8\].
Finally, in Section \[minimal\] and Section \[minimal2\] we introduce and study smallest minimal percolating sets for $F(t)$-bootstrap percolation on (non-regular) trees. This leads to one of our main results in Theorem \[teo1\], where we describe an algorithm for finding the smallest minimal percolating sets. Lastly, we conclude the paper in Section \[final\] with a comparison of our model and algorithm to those considered in [@percset] for classical bootstrap percolation, and analyse the effect of taking different functions within our dynamical percolation.
Background: bootstrap percolation and SIR models {#intro}
================================================
Bootstrap percolation was introduced in 1979 in the context of solid state physics in order to analyze diluted magnetic systems in which strong competition exists between exchange and crystal-field interactions [@density]. It has seen applications in the studies of fluid flow in porous areas, the orientational ordering process of magnetic alloys, as well as the failure of units in a structured collection of computer memory [@applications].
Bootstrap percolation has long been studied mathematically on finite and infinite rooted trees, including Galton-Watson trees (e.g. see [@MR3164766]). It better simulates the effects of individual behavior and the spatial aspects of epidemic spreading, and better accounts for the effects of mixing patterns of individuals. Hence, communicable diseases in which these factors have significant effects are better understood when analyzed with cellular automata models such as bootstrap percolation [@automata], which is defined as follows.
For $n\in \mathbb{Z}^+$, we define an [*$n$-bootstrap percolation model*]{} on a graph $G$ with vertices $V$ and initially infected set $A_0$ as the process whose infected set at time $t+1$ is given by $$\begin{aligned}
A_{t+1} = A_{t} \cup \{v \in V : |N(v) \cap A_t| \geq n\}. \end{aligned}$$ Here, as before, we denote by $N(v)$ the set of neighbouring vertices to $v$.
In contrast, a [*SIR Model*]{} relates at each time $t$ the number of susceptible individuals $S(t)$ with the number of infected individuals $I(t)$ and the number of recovered individuals $R(t)$, by a system of differential equations – an example of a SIR model used to simulate the spread of the dengue fever disease appears in [@dengue]. The SIR models are very useful for simulating infectious diseases; however, compared to bootstrap percolation, SIR models do not account for individual behaviors and characteristics. In these models, a fixed parameter $\beta$ denotes the average number of transmissions from an infected node in a time period.
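For concreteness, the standard SIR system $dS/dt = -\beta S I$, $dI/dt = \beta S I - \gamma I$, $dR/dt = \gamma I$ can be integrated with a simple explicit Euler scheme. The sketch below is generic, and the parameter values are illustrative choices rather than values from the paper:

```python
def sir_step(S, I, R, beta, gamma, dt=0.01):
    """One explicit-Euler step of the standard SIR system
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I,
    with the population normalised so that S + I + R = 1."""
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    return S + dS * dt, I + dI * dt, R + dR * dt

S, I, R = 0.99, 0.01, 0.0          # illustrative initial condition
for _ in range(5000):              # integrate up to t = 50
    S, I, R = sir_step(S, I, R, beta=0.5, gamma=0.1)

# The total population is conserved (up to floating-point error).
print(round(S + I + R, 6))
```

Since $dS + dI + dR = 0$ at every step, the Euler scheme preserves the total population exactly up to rounding.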
In what follows we shall present a dynamical generalization of the above model, for which it will be useful to have an example to establish the comparisons.
![Depiction of $2$-bootstrap percolation, where shaded vertices indicated infected nodes. []{data-label="first"}](Fig1.png)
Consider the (irregular) tree with three infected nodes at time $t=0$, given by $A_0=\{2,4,5\}$ as shown in Figure \[first\]. Then, through $2$-bootstrap percolation at time $t=1$, node $3$ becomes infected because its neighbors $4$ and $5$ are infected at time $t=0$. At time $t=2$, node $1$ becomes infected since its neighbors $2$ and $3$ are infected at time $t=1$. Finally, note that nodes $6,7,8$ cannot become infected because they each have only $1$ neighbor, yet two or more infected neighbors are required to become infected.
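The dynamics of this example can be simulated directly. The Python sketch below assumes a tree consistent with the description above; only the neighbourhoods of nodes $1$, $3$ and the leaves $6,7,8$ are actually stated in the text, so the remaining edges are an assumption:

```python
def bootstrap(neighbors, infected, r, max_steps=100):
    """r-bootstrap percolation: repeat A <- A ∪ {v : |N(v) ∩ A| >= r}
    until no new vertex becomes infected."""
    infected = set(infected)
    for _ in range(max_steps):
        newly = {v for v in neighbors
                 if v not in infected and len(neighbors[v] & infected) >= r}
        if not newly:
            break
        infected |= newly
    return infected

# A tree consistent with the description of Figure 1; the exact
# edge set is an assumption made for illustration.
tree = {1: {2, 3}, 2: {1, 6}, 3: {1, 4, 5},
        4: {3, 7}, 5: {3, 8}, 6: {2}, 7: {4}, 8: {5}}

print(sorted(bootstrap(tree, {2, 4, 5}, r=2)))  # [1, 2, 3, 4, 5]
```

As in the example, node $3$ is infected first, then node $1$, while the leaves $6,7,8$ are never infected because they have only one neighbor.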
Time-dependent Percolation {#time}
===========================
The motivation for time-dependent percolation models is that the rate of spread of diseases may change over time. In the SIR models mentioned before, since $\beta$ is the average number of transmissions from an infected node in a time period, $1/\beta$ is the time it takes to infect a node. If we “divide the work" among several neighbors, then $1/\beta$ is also the number of infected neighbors needed to infect the current node. Consider now an infection which evolves with time. That is, instead of taking the same number of neighbours as in $r$-bootstrap percolation, consider a percolation model where the number of neighbours required to be infected for the disease to propagate changes with time, following the behaviour of a function $F(t)$, which can be set in terms of a one-parameter family of parameters $\beta$ to be $F(t) := \ceil[bigg]{\frac{1}{\beta(t)}}$. We shall say a function is a [*percolation function*]{} if it is a function $F: I \rightarrow \mathbb{Z}^+$, where $I$ is an initial segment of $\mathbb{N}$, that we use in a time-dependent percolation process, and which specifies the number of neighbors required to percolate to a node at time $t$.
\[fperco\]Given a function $F(t): \mathbb{N}\rightarrow \mathbb{N}$, we define an [*$F(t)$-bootstrap percolation model*]{} on a graph $G$ with vertices $V$ and initially infected set $A_0$ as the process whose infected set at time $t+1$ is given by $$\begin{aligned}
A_{t+1} = A_{t} \cup \{v \in V : |N(v) \cap A_t| \geq F(t)\}. \end{aligned}$$ Here, as before, we denote by $N(v)$ the set of neighbouring vertices to $v$, and we let $A_\infty$ be the final set of infected vertices once the percolation process has finished.
One should note that $r$-bootstrap percolation can be recovered from $F(t)$-bootstrap percolation by setting the percolation function to be the constant $F(t) = r$.
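The update rule of Definition \[fperco\] admits a direct sketch. The tree below is the same hypothetical one used to illustrate $2$-bootstrap percolation (its exact edges are an assumption consistent with Figure \[first\]); a constant $F$ recovers $r$-bootstrap percolation, while a threshold that relaxes to $1$ eventually infects the whole tree:

```python
def ft_bootstrap(neighbors, infected, F, max_steps=100):
    """F(t)-bootstrap percolation: the infection threshold at step t
    is F(t).  There is no early exit: with a time-dependent threshold,
    a step with no new infections need not mean the process is over."""
    infected = set(infected)
    for t in range(max_steps):
        infected |= {v for v in neighbors
                     if len(neighbors[v] & infected) >= F(t)}
    return infected

# Hypothetical tree consistent with Figure 1 (exact edges assumed).
tree = {1: {2, 3}, 2: {1, 6}, 3: {1, 4, 5},
        4: {3, 7}, 5: {3, 8}, 6: {2}, 7: {4}, 8: {5}}

# A constant percolation function recovers 2-bootstrap percolation:
print(sorted(ft_bootstrap(tree, {2, 4, 5}, lambda t: 2)))
# A threshold that drops to 1 at t = 1 infects the whole tree:
print(sorted(ft_bootstrap(tree, {2, 4, 5}, lambda t: max(2 - t, 1))))
```

The absence of an early exit is deliberate: unlike in $r$-bootstrap percolation, an unchanged step does not imply stability, since the threshold may drop later.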
It should be noted that, unless otherwise stated, the initial set $A_0$ is chosen in the same way as in $r$-bootstrap percolation: by randomly selecting a set of initially infected vertices with probability $p$, for some fixed value of $p$ which is called the [*probability of infection*]{}. If there are multiple percolation functions and initially infected sets in question, we may use the notation $A^{F }_{t}$ to denote the set of infected nodes at time $t$ percolating under the function $F(t)$ with $A_0$ as the initially infected set. In particular, this would be the case when implementing the above dynamical model to a multi-type bootstrap percolation such as the one introduced in [@gossip]. In order to understand some basic properties of $F(t)$-bootstrap percolation, we shall first focus on a single update function $F(t)$, and consider the critical probability $p_c$ of infection for which the probability of percolation is $\frac{1}{2}$.
\[propo1\] If $F(t)$ equals its minimum for infinitely many times $t$, then the critical probability of infection $p_c$ for which the probability of percolation is 1/2, is given by the value of the critical probability in $m$-bootstrap percolation, for $m:=\min_t F(t)$.
When considering classical bootstrap percolation, note that the resulting set $A_\infty^r$ of $r$-bootstrap percolation is always contained in the resulting set $A_\infty^n$ of $n$-bootstrap percolation provided $n\leq r$. Hence, setting the value $m:=\min_t F(t)$, the resulting set $A_\infty^F$ of $F(t)$-bootstrap percolation will be contained in $A_\infty^m$. Moreover, any vertex that becomes infected under $m$-bootstrap percolation becomes infected under the $F(t)$-model at the next time for which $F(t)=m$, and since there are infinitely many times $t$ such that $F(t)=m$, we know that the final resulting set $A_\infty^m$ of $m$-bootstrap percolation is contained in the final resulting set $A_\infty^F$ of $F(t)$-bootstrap percolation. Then the resulting sets of $m$-bootstrap percolation and $F(t)$-bootstrap percolation must be identical, and hence the critical probability for $F(t)$-bootstrap percolation is that of $m$-bootstrap percolation.
As we shall see later, different choices of the one-parameter family $\beta(t)$ defining $F(t)$ will lead to very different dynamical models. A particular set up arises from [@viral], which provides data on the time-dependent rate of a specific virus spread, and through which one has that an interesting family of parameters appears by setting $$\beta(t) = \left(b_0-b_f\right)\cdot\left(1-k\right)^t+b_f,$$ where $b_0$ is the initial rate of spread, $b_f$ is the final rate of spread, and $0<k<1$. Then at time $t$, the number of infected neighbors it takes to infect a node is $$F(t):=\ceil[Bigg]{\frac{1}{\left(b_0-b_f\right)\cdot\left(1-k\right)^t+b_f}}.$$
In this case, since $\beta(t)$ tends to $b_f$ and $\frac{1}{\beta}$ tends to $\frac{1}{b_f}$, one can see that there will be infinitely many times $t$ such that $F(t) = \ceil[Bigg]{\frac{1}{b_f}}$. Hence, in this setting, from Proposition \[propo1\] the critical probability will be the same as that of $r$-bootstrap percolation with $r=\ceil[Bigg]{\frac{1}{b_f}}$.
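The threshold determined by this family is easy to evaluate numerically. In the sketch below, the parameter values $b_0 = 0.9$, $b_f = 0.35$, $k = 0.4$ are illustrative choices, not values taken from the paper:

```python
import math

def F(t, b0=0.9, bf=0.35, k=0.4):
    """F(t) = ceil(1 / beta(t)) with beta(t) = (b0 - bf)(1 - k)^t + bf.
    The parameter values here are illustrative only."""
    beta = (b0 - bf) * (1 - k) ** t + bf
    return math.ceil(1 / beta)

# beta(t) decreases towards bf, so F(t) increases and stabilises
# at ceil(1 / bf) = 3 for these parameters.
print([F(t) for t in range(8)])
```

Once $F(t)$ reaches $\ceil{1/b_f}$ it stays there, so the critical probability matches that of $r$-bootstrap percolation with $r = \ceil{1/b_f}$, as noted above.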
Percolation Time {#time2}
================
Informally, [*percolation time*]{} is the time it takes for the percolation process to terminate, with regards to a specific initially infected set of a graph. In terms of limits, recall that the final percolating set is defined as $$\begin{aligned}
A_\infty:=\lim_{t\rightarrow \infty} A_t,\label{mas}\end{aligned}$$ and thus one may think of the percolation time as the smallest time $t$ for which $A_t=A_\infty$. By considering different initial probabilities of infection $p$, which determine the initially infected set $A_0$, and different percolation functions $F(t)$, one can see that the percolation time of a model can vary drastically. To illustrate this, in Figure \[second\] we have plotted the percentage of nodes infected for two different initial probabilities and four different percolation functions. The model was run $10^3$ times for each combination on random graphs with $10^2$ nodes and $300$ edges.
![ Percentage of nodes infected at time $t$ for $F(t)$-bootstrap percolation with initial probability $p$, on graphs with $100$ nodes and $300$ edges.[]{data-label="second"}](chart2.png)
In the above settings of Figure \[second\], one can see that all the models stabilize by time $10$, implying that the percolation time is less than or equal to $10$. Generally, understanding the percolation time is useful in determining when the disease spreading has stabilized. In what follows, we find a method to generate an upper bound on the percolation time given a specific graph and function. Formally, we define the [*percolation time*]{} $t_*$ as the minimum $$t_*:=\min_t \{~t~|~A_{t+1} = A_t~\}.$$
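The percolation time $t_*$ can be estimated by direct simulation. The sketch below (reusing the hypothetical tree from the earlier examples) simulates up to a horizon `t_max`, since with a time-dependent threshold a single unchanged step does not by itself guarantee that the process has terminated:

```python
def percolation_time(neighbors, infected, F, t_max=1000):
    """Estimate t_* = min{t : A_{t+1} = A_t = A_infinity} by simulating
    the process up to a horizon t_max and locating the last change."""
    A = set(infected)
    history = [set(A)]
    for t in range(t_max):
        A = A | {v for v in neighbors if len(neighbors[v] & A) >= F(t)}
        history.append(set(A))
    changes = [t for t in range(t_max) if history[t + 1] != history[t]]
    return changes[-1] + 1 if changes else 0

# Hypothetical tree consistent with Figure 1 (exact edges assumed).
tree = {1: {2, 3}, 2: {1, 6}, 3: {1, 4, 5},
        4: {3, 7}, 5: {3, 8}, 6: {2}, 7: {4}, 8: {5}}

# With F(t) = 2 and A_0 = {2, 4, 5}, node 3 is infected at t = 1 and
# node 1 at t = 2, after which nothing changes, so t_* = 2.
print(percolation_time(tree, {2, 4, 5}, lambda t: 2))  # 2
```

The horizon `t_max` stands in for an a priori bound such as the one derived from Proposition \[propo8\] below.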
Expanding on the notation of \[mas\], we shall denote by $A_\infty^\gamma$ the set of nodes infected by percolating the set $A_0$ on the graph with percolation function $\gamma(t)$, and we shall simply write $A_\infty$ when the percolation function $\gamma(t)$ is clear from context or irrelevant. Moreover, we shall say that two percolation functions $F_1: I_1 \rightarrow \mathbb{Z}^+$ and $F_2: I_2 \rightarrow \mathbb{Z}^+$ are [*equivalent*]{} for the graph $G$ if for all initially infected sets $A_0$, one has that $$A^{F_1}_\infty=A^{F_2}_\infty.$$ This equivalence relation can be understood through the lemma below, which uses an additional function $\gamma(t)$ to relate two percolation functions $F_0$ and $F_0'$, where $F_0'$ can intuitively be “generated” by removing some values of $F_0$. This removal procedure is specified in the lemma.
Given two subsets $I_1$ and $I_2$ of $\mathbb{N}$, we say a function $\gamma: I_1 \rightarrow I_2 \cup \{-1\}$ is a [*nice function*]{} if it is surjective and
- it is injective on $\gamma^{-1}(I_2)$;
- it is increasing on $\gamma^{-1}(I_2)$;
- it satisfies $\gamma(a) \leq a$ or $\gamma(a)=-1$.
Given $I_1,I_2\subset \mathbb{N}$, let $F(t)$ be any percolation function with domain $I_1$, and define the percolation function $F'(t)$ with domain $I_2$ as $F'(t) := F(\gamma^{-1}(t))$ for $\gamma(t)$ a nice function. Then, for any fixed initially infected set $A_0$ and $t \in I_2$, one has that $$\begin{aligned}
A^{F'}_{t} \subseteq A^{F}_{\gamma^{-1}(t)}.\label{mas11}\end{aligned}$$
We first show that $F'(t)$ is well-defined. Since the domain of $F'(t)$ is $I_2$, we have that $t\in I_2$ and thus $\gamma^{-1}(t)$ is a valid expression. Moreover, $\gamma^{-1}(t)$ exists because $\gamma$ is surjective, and since $I_2$ is an initial segment of $\mathbb{N}$ one has $t \neq -1$, so $\gamma^{-1}(t)$ is unique: for any $a,b \in I_1$, if $\gamma(a) = \gamma(b) \neq -1$, then $a=b$. Since the domain of $\gamma$ is $I_1$, we have $\gamma^{-1}(t) \in I_1$. This means that $\gamma^{-1}(t)$ is in the domain of $F(t)$, and thus $F'(t)$ is defined for all $t\in I_2$.
We shall now prove the result in the lemma by induction. Since $\gamma^{-1}(0)=0$ and the initially infected sets for the models with $F(t)$ and $F'(t)$ are the same, it must be true that $A^{F' }_{0} \subseteq A^{F }_{0}$, and in particular, $A^{F' }_{0} = A^{F }_{0} = A_0.$ For the inductive step, suppose that for some $t \in I_2$ with $t+1 \in I_2$, one has $A^{F' }_{t} \subseteq A^{F }_{\gamma^{-1}(t)}$. Moreover, suppose there is a node $n$ such that $n \in A^{F' }_{t+1}$ but $n \notin A^{F }_{\gamma^{-1}(t+1)}$. This means that there exists a neighbor $n'$ of $n$ such that $n' \in A^{F' }_{t}$ but $n' \notin A^{F }_{\gamma^{-1}(t+1)-1}$: otherwise, the set of neighbors of $n$ infected prior to the specified times would be the same for both models, and since $F'(t+1) = F(\gamma^{-1}(t+1))$, the node $n$ would be infected in both models or in neither. From the above, since $t < t+1$ one must have $\gamma^{-1}(t) < \gamma^{-1}(t+1)$, and thus $$\gamma^{-1}(t) \leq \gamma^{-1}(t+1)-1.$$ Moreover, since $n' \notin A^{F }_{\gamma^{-1}(t+1)-1}$, then $n' \notin A^{F }_{\gamma^{-1}(t)}$. However, we assumed $n' \in A^{F' }_{t}$, which contradicts the inductive hypothesis $A^{F' }_{t} \subseteq A^{F }_{\gamma^{-1}(t)}$, so it must be true that $A^{F' }_{t+1} \subseteq A^{F }_{\gamma^{-1}(t+1)}$. Thus we have proven that for any initially infected set $A_0$, \[mas11\] is satisfied for all $t\in I_2$.
Through the above lemma we can further understand when an $F(t)$-percolation process finishes in the following manner.
Given a percolation function $F(t)$ and a fixed time $t \in \mathbb{N}$, let $t_p<t$ be such that $F(t_p) < F(t)$, and suppose there does not exist another time $t_i \in \mathbb{N}$ with $t_p < t_i <t$ such that $F(t_i) < F(t)$. Suppose further that we use this percolation function on a graph with $\ell$ vertices. Then, if $|\{t_i~|~F(t_i)=F(t)\}|>\ell$, there are no nodes that become infected at time $t$.
Suppose some node $n$ is infected at time $t$. We claim this would imply that all nodes are infected before time $t$. We show this by contradiction: suppose there exist $m$ nodes $n_i$ that are not infected by time $t$. Then we know that there exist at least $m$ times $t_j \in \mathbb{N}$ with $t_p < t_j < t$ for which $F(t_j) = F(t)$ and such that no node is infected at $t_j$. Matching each $n_i$ with some $t_j$ and letting $t_k \in \mathbb{N}$ be such that $t_j < t_k \leq t$, one can see that there is some node infected at $t_k$ with $F(t_k) = F(t)$. Moreover, this implies that there is no $t_x \in \mathbb{N}$ such that $t_j < t_x < t_k$, some node is infected at $t_x$, and $F(t_x) = F(t)$. We know such a $t_k$ exists because there is a node infected at time $t$.
From the above, for each $n_i$ there are two cases: either the set of nodes infected by $t_j$ is the same as the set of nodes infected by $t_k$, or there exists a node $p$ in the set of nodes infected by $t_k$ but not in the set of nodes infected by its $t_j$. The first case yields a contradiction: if it held, the set of infected nodes at $t_j$ would equal that at $t_k$, so some node would have to be infected at time $t_j$; hence the first case is not possible. So the second case must hold for all $m$ of the $n_i$’s. But the second case implies that there is a node infected between $t_j$ and $t_k$. This means that at least $m$ additional nodes are infected; adding these to the at least $\ell-m$ nodes infected at times $t_i$ such that $F(t_i) = F(t)$ and some node is infected at $t_i$, we have at least $\ell-m+m=\ell$ nodes infected before $t$. But if all $\ell$ nodes are infected before $t$, there are no nodes left to infect at time $t$, so $n$ does not exist.
Intuitively, the above lemma tells us that given a fixed time $t_0$ and some $t>t_0$, if $F(t)$ is the smallest value the function takes on after the time $t_0$, and $F(t)$ has already taken on that value more than $\ell$ times, for $\ell$ the number of nodes in the graph, then no nodes will be infected at that time and the value is safe to be “removed”. The removal process is clarified in the next proposition, where we obtain an upper bound on the percolation time for a specified tree and function $F(t)$.
\[propo8\]Let $G$ be a regular tree of degree $d$ and $\ell$ vertices. Given a percolation function $F(t)$, define the functions $F'(t)$ and $\gamma: \mathbb{N} \rightarrow \mathbb{N} \cup \{-1\}$ by setting:
- $F'(0) := F(0)$, and $\gamma(0) := 0$.
- Suppose the least time at which we have not yet considered $F$ is $a$, and let $b$ be the least time at which $F'(b)$ has not yet been defined. If $F(a)$ has not yet appeared $\ell$ times since the last time $t$ such that $F(t) < F(a)$, and $F(a) \leq d$, then set $F'(b) := F(a)$ and let $\gamma(a)=b$. Otherwise, set $\gamma(a)=-1$.
The function $F'(t)$ is equivalent to $F(t)$. \[P1\]
Intuitively, the function $\gamma$ constructed above maps the index associated to $F(t)$ to the index associated to $F'(t)$; omitted indices are mapped to $-1$ by $\gamma$. To prove the proposition, we will prove that $A_\infty^{F} = A_\infty^{F'}$. Suppose we have a node $n$ in $A_\infty^{F}$, and it is infected at time $t_0$. Suppose $F(t_0) = a$ for some $a \in \mathbb{Z}^+$, and let $t_{prev}$ be the largest integer $t_{prev} < t_0$ such that $F(t_{prev}) < a$. Suppose further that $t_0$ is the $m$th instance after $t_{prev}$ of a time $t$ such that $F(t) = a$. If $m > \ell$, there cannot be any node infected at time $t_0$ under $F(t)$, and thus it follows that $m \leq \ell$. But if $m \leq \ell$, then $\gamma(t_0) \neq -1$, and therefore all nodes that are infected under $F(t)$ became infected at some time $t_0$ where $\gamma(t_0) \neq -1$.
Recall that $A_0^{F} = A_0^{F'}$, and suppose for some $n$ such that $\gamma(n)\neq -1$, one has that $A_n^{F} = A_{\gamma(n)}^{F'}$. We know that for any $n < t < \gamma^{-1}(\gamma(n)+1), \gamma(t) = -1$, so nothing would be infected under $F(t)$ after time $n$ but before $\gamma^{-1}(\gamma(n)+1)$. This means that the set of previously infected nodes at time $\gamma^{-1}(\gamma(n)+1)-1$ is the same as the set of nodes infected before time $n$ leading to $$A_n^{F} = A_{\gamma^{-1}(\gamma(n)+1)-1}^{F'}.$$ Then, since $F(\gamma^{-1}(\gamma(n)+1)) = F'(\gamma(n)+1)$ and the set of previously infected nodes for both are $A_n^{F}$, we know that $A_{n+1}^{F} = A_{\gamma(n+1)}^{F'}$. Thus, for any time $n'$ in the domain of $F'(t)$, there exist a corresponding time $n$ for percolation under $F(t)$ such that the infected set at time $n$ under $F(t)$ and the infected set at time $n'$ under $F'(t)$ are the same, and thus $A_\infty^{F} = A_\infty^{F'}$.
From the above Proposition \[P1\] we can see two things: the upper bound on the percolation time is the largest $t$ such that $F'(t)$ is defined, and we can use this function in an algorithm to find the smallest minimal percolating set, since $F(t)$ and $F'(t)$ are equivalent. Moreover, an upper bound on the percolation time cannot be obtained without regard to the percolation function: suppose we had such an upper bound $b$ on some connected graph of degree $d$ with $1$ node initially infected and more than $1$ node not initially infected. Then, taking a percolation function $F(t)$ such that $F(t) = d+1$ for all $t \leq b$ and $F(t)=1$ otherwise, we see that there will be nodes infected at time $b+1$, leading to a contradiction.
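The construction of $F'(t)$ and $\gamma$ in Proposition \[propo8\] can be sketched for a percolation function given as a finite list of values. This is one illustrative reading of the removal rule (keep a value only if it is at most the degree and has appeared at most $\ell$ times since the last strictly smaller value), not the paper's algorithm verbatim:

```python
def reduce_percolation_function(F_vals, n_vertices, degree):
    """Sketch of the construction of F' and gamma: a value F(a) is kept
    only if F(a) <= degree and it has appeared at most n_vertices times
    since the last strictly smaller value; gamma records the new index
    of each kept time, and -1 for omitted times."""
    F_prime, gamma = [], []
    count = {}  # value -> appearances since the last strictly smaller value
    for v in F_vals:
        for w in count:              # a smaller value resets larger counters
            if w > v:
                count[w] = 0
        count[v] = count.get(v, 0) + 1
        if v <= degree and count[v] <= n_vertices:
            gamma.append(len(F_prime))   # index of this time inside F'
            F_prime.append(v)
        else:
            gamma.append(-1)             # this time step can be omitted
    return F_prime, gamma

print(reduce_percolation_function([2, 2, 2, 2, 5, 1, 2],
                                  n_vertices=3, degree=3))
```

Here the fourth $2$ is dropped (it is the fourth appearance since a smaller value, exceeding $\ell = 3$) and the $5$ is dropped because it exceeds the degree; the final $2$ survives because the intervening $1$ resets its counter.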
Suppose the degree of a graph is $d$. Define a sequence $a$ where $a_1 = d$ and $a_{n+1} = (a_n+1)d$. Then the size of the domain of $F'(t)$ in Proposition \[P1\] is at most $\Sigma^{d}_{i=1}a_i$. \[ll\]
Suppose each value appears exactly $d$ times after the last appearance of a value smaller than it. To count how large the domain can be, we start with the possible times $t$ such that $F'(t)=1$: there are $d$ of them, as the value $1$ can appear maximally $d$ times. Note that this is equal to $a_1$. Now, suppose we have already counted all the possible times $t$ for which $F'(t) < n+1$, for $1 \leq n < d$, which amounted to $a_{n}$. Then, the value $n+1$ can appear maximally $d$ times between consecutive such appearances, as well as before and after all of them, so there are $a_{n}+1$ places where $F'(t)=n+1$ can appear. Thus there are maximally $(a_{n}+1)d$ elements $t$ in the domain such that $F'(t) = n+1$. Summing over all values yields $\Sigma^{d}_{i=1}a_i$, the total number of elements in the domain.
From Proposition \[P1\], for some $F(t)$, $A_0$ and $n$, one has $A^{F}_{\gamma^{-1}(n)} = A^{F'}_{n}$. Then if $A_\infty^{F'}$ is reached by time $\Sigma^{d}_{i=1}a_i$, the set must be infected by time $\gamma^{-1}(\Sigma^{d}_{i=1}a_i)$. Hence, in this setting an upper bound on the percolation time of $F(t)$ on such a graph can be found by taking $\gamma^{-1}(\Sigma^{d}_{i=1}a_i)$, as defined in Lemma \[ll\].
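The bound of Lemma \[ll\] is straightforward to evaluate: with $a_1 = d$ and $a_{n+1} = (a_n + 1)d$, the domain of $F'(t)$ has size at most $\Sigma^{d}_{i=1}a_i$.

```python
def domain_bound(d):
    """Evaluate the sum a_1 + ... + a_d, where a_1 = d and
    a_{n+1} = (a_n + 1) * d: an upper bound on the size of the
    domain of F' (Lemma [ll])."""
    a, total = d, 0
    for _ in range(d):
        total += a
        a = (a + 1) * d
    return total

print(domain_bound(2))  # a_1 = 2, a_2 = 6, so the bound is 8
```

The bound grows roughly like $d^d$, reflecting the nested counting in the proof of the lemma.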
Minimal Percolating Sets {#minimal}
========================
When considering percolations within a graph, it is of much interest to understand which subsets of vertices, when infected, would lead to the infection reaching the whole graph.
A [*percolating set*]{} of a graph $G$ with percolation function $F(t)$ is a set $A_0$ for which $A_\infty^F=G$ at a finite time. A [*minimal percolating set*]{} is a percolating set $A$ such that if any node is removed from $A$, it will no longer be a percolating set.
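The update rule underlying these definitions can be sketched as a short simulation. This is an illustrative sketch under assumptions not fixed by the paper (the graph as an adjacency dictionary, $F$ as a Python function, and a finite horizon `t_max` standing in for "a finite time"):

```python
def percolate(adj, F, A0, t_max):
    """F(t)-bootstrap percolation: a node becomes infected at time t when
    at least F(t) of its neighbours were infected at time t - 1.
    Returns the final infected set (up to the horizon t_max)."""
    infected = set(A0)
    for t in range(1, t_max + 1):
        # newly is computed from the state at time t - 1, then merged in
        newly = {v for v in adj if v not in infected
                 and sum(u in infected for u in adj[v]) >= F(t)}
        infected |= newly
    return infected
```

A set $A_0$ is then percolating precisely when `percolate(adj, F, A0, t_max)` equals the whole vertex set for a sufficiently large horizon.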
A natural motivation for studying minimal percolating sets is that as long as we keep the number of individuals infected to less than the size of the minimal percolating set, we know that the entire population will not be decimated.
Bounds on minimal percolating sets on grids and other less regular graphs have been studied extensively. For instance, it has been shown in [@Morris] that for a grid $[n]^d$, there exists a minimal percolating set of size $4n^2/33 + o(n^2)$, but there does not exist one larger than $(n + 2)^2/6$. In the case of trees, [@percset] gives an algorithm that finds the largest and smallest minimal percolating sets on trees. However, the results in the above papers cannot be easily extended to the dynamical model, because they make several assumptions, such as $F(t) \neq 1$, that do not necessarily hold in the dynamical model.
\[ex2\]An example of a minimal percolating set with $F(t)=t$ can be seen in Figure \[ex1\] (a). In this case, the minimal percolating set has size 3. Indeed, we see that if we take away any of the red nodes, the remaining initially infected red nodes would not percolate to the whole tree, and thus they form a minimal percolating set; further, there exists no minimal percolating sets of size 1 or 2, thus this is the smallest minimal percolating set. It should be noted that minimal percolating sets can have different sizes. For example, another minimal percolating set with $5$ vertices appears in Figure \[ex1\] (b).
![(a) In this tree, having nodes $2,4,5$ infected (shaded in red) initially is sufficient to ensure that the whole tree is infected. (b) This minimal percolating set shaded in red is of size $5$.[]{data-label="ex1"}](Fig8.jpg)
In what follows we shall work with general finite trees $T(V,E)$ with set of vertices $V$ and set of edges $E$. In particular, we shall consider the smallest minimal percolating sets in the following section.
Algorithms for Finding Smallest Minimal Percolating Set {#minimal2}
=======================================================
Consider $F(t)$-bootstrap percolation on a tree $T(V,E)$ with initially infected set $A_0\subset V$. As before, we shall denote by $A_t$ the set of nodes infected at time $t$. For simplicity, we shall use the word “filled” synonymously with “infected”. In order to build an algorithm to find smallest percolating sets, we first need to introduce a few definitions that will simplify the notation at later stages.
We shall denote by $L(a)$ the largest time $t$ such that $a \leq F(t),$ and if there does not exist such a time $t$, then set $L(a)=\infty$. Similarly, define $B(a)$ as the smallest time $t$ such that $a \leq F(t)$, and if such a time $t$ does not exist, set $B(a)=\infty$.
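Assuming $F$ is only ever inspected up to a finite horizon (such as the upper bound on percolation time from the previous section), $B(a)$ and $L(a)$ can be computed by a direct scan. A sketch, with `math.inf` standing in for $\infty$:

```python
import math

def B(a, F, t_max):
    """Smallest t in [1, t_max] with a <= F(t); infinity if none exists."""
    return next((t for t in range(1, t_max + 1) if a <= F(t)), math.inf)

def L(a, F, t_max):
    """Largest t in [1, t_max] with a <= F(t); infinity if none exists."""
    return next((t for t in range(t_max, 0, -1) if a <= F(t)), math.inf)
```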
Given $a,b\in \mathbb{N}$, if $a<b$ then $L(a) \geq L(b)$. Indeed, every time $t$ with $b \leq F(t)$ also satisfies $a \leq F(t)$, so the largest such time for $a$ is at least that for $b$. Note that in general, a smallest percolating set $A_0$ must be a minimal percolating set. To see this, suppose not. Then there exists some $v \in A_0$ such that $A_0 \setminus \{v\}$ percolates the graph. That means that $A_0 \setminus \{v\}$, a smaller set than $A_0$, is a percolating set. However, since $A_0$ is a smallest percolating set, we have a contradiction. Hence, a percolating set $A_0$ being smallest implies that $A_0$ is minimal.
The first algorithm that comes to mind is to try every case. There are $2^n$ possible sets $A_0$, and for each set we must percolate $A_0$ on $T$ to find the smallest percolating set. This amounts to an algorithm of complexity $O(t2^n)$, where $t$ is the upper bound on the percolation time.
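The naive enumeration can be sketched as follows. This is an illustrative sketch (names are ours); subsets are checked in order of size, so the first success is a smallest percolating set:

```python
from itertools import combinations

def percolates(adj, F, A0, t_max):
    """Does A0 infect the whole graph under F(t)-bootstrap percolation?"""
    infected = set(A0)
    for t in range(1, t_max + 1):
        infected |= {v for v in adj if v not in infected
                     and sum(u in infected for u in adj[v]) >= F(t)}
    return infected == set(adj)

def smallest_percolating_set_naive(adj, F, t_max):
    """Try all 2^n subsets, smallest first: O(t 2^n) percolation work."""
    nodes = list(adj)
    for k in range(len(nodes) + 1):
        for A0 in combinations(nodes, k):
            if percolates(adj, F, set(A0), t_max):
                return set(A0)
```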
In what follows we shall describe a polynomial-timed algorithm to find the smallest minimal percolating set on $T(V,E)$, described in Theorem \[teorema\]. For this, we shall introduce two particular times associated to each vertex in the graph, and formally define what isolated vertices are.
For each node $v$ in the graph, we let $t_a(v)$ be the time when it is infected, and $t_*(v)$ the time when it is last allowed to be infected.
Moreover, when building our algorithm, each vertex will be allocated a truth value of whether it needs to be further considered.
A node $v$ is said to be [*isolated*]{} with regards to $A_0$ if there is no vertex $w\in V$ such that $v$ becomes infected when considering $F(t)$-bootstrap percolation with initial set $A_0 \cup \{w\}$.
From the above definition, a node is isolated with regards to a set if it is impossible to infect it by adding any single other node to that set. Building towards the percolating algorithm, we shall first consider a few lemmas.
If a node cannot be infected by including a neighbor in the initial set, it is isolated. \[L1\]
From Remark \[L1\], by including the neighbor in the initial set, we either increase the number of infected neighbors to a sufficient amount, or we extend the time allowed to percolate with fewer neighbors so that percolation is possible. We make this precise in the next lemma, which gives a quick test of whether a vertex is isolated.
\[L3\] Let $v$ be an uninfected node such that not all of its $n$ neighbors are in set $A_0$. Define function $$\begin{aligned}
N:\{0,1,...,n\} \rightarrow \mathbb{Z}\label{NN}\end{aligned}$$ where $N(i)$ is the smallest time at which $i$ of the neighbors of node $v$ are infected, with $N(0)=0$. Then, the vertex $v$ is isolated iff there exists no $i$ such that $$F(t) \leq i+1~ {\rm for~ some~} t \in (N(i),t_*].$$
Suppose $s\in N(v)\cap A_0$. Then, if there exists $i$ such that $F(t) \leq i+1$ for some $t \in (N(i),t_*]$, using $A_0 \cup \{s\}$ as the initially infected set allows percolation to happen at time $t$, since there would be $i+1$ neighbors infected from each time $N(i)$ onwards. By the contrapositive, the forward direction is proven.
Conversely, let $v$ be not isolated, with $v \in P(A_0 \cup \{s\})$ for some neighbor $s$ of $v$. Then there would be $i+1$ neighbors infected at each time $N(i)$. Moreover, for $v$ to be infected, the $i+1$ neighbors must be able to fill $v$ within the allowed time, $(N(i),t_*]$. Thus there exists $N(i)$ such that $F(t) \leq i+1$ for some $t \in (N(i),t_*]$. By the contrapositive, the backward direction is proven.
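The test of Lemma \[L3\] is straightforward to implement once the values $N(i)$ are known. An illustrative sketch (with a finite $t_*$, and $N$ given as a dictionary whose missing keys mean that that many neighbours are never infected):

```python
def is_isolated(N, F, t_star):
    """Lemma [L3]: v is isolated iff no i admits a time t in (N(i), t_*]
    with F(t) <= i + 1 (the '+1' is the one extra neighbour added to A0)."""
    for i, Ni in N.items():
        if any(F(t) <= i + 1 for t in range(Ni + 1, t_star + 1)):
            return False    # adding one neighbour lets v become infected
    return True
```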
Note that if a vertex $v$ is uninfected and $N(v)\subset A_0$, then the vertex must be isolated. In what follows we shall study the effect of having different initially infected sets when studying $F(t)$-bootstrap percolation.
\[L2\] Let $Q$ be an initial set for which a fixed vertex $v$ with $n$ neighbours is isolated. Denoting the neighbors of $v$ be $s_1, s_2,...,s_n$, we let the times at which they are infected be $t_1^Q, t_2^Q,\ldots,t_n^Q$. Here, if for some $1\leq i \leq n$, the vertex $s_i$ is not infected, then set $t_i^Q$ to be some arbitrarily large number. Moreover, consider another initial set $P$ such that the times at which $s_1, s_2,..., s_n$ are infected are $t_1^P, t_2^P,\ldots,t_n^P$ satisfying $$\begin{aligned}
t_i^Q=&t_i^P&~{\rm for }~ i\neq j;\nonumber\\
t_j^Q \leq& t_j^P&~{\rm for }~ i= j,\nonumber
\end{aligned}$$ for some $1 \leq j \leq n$. If $v \notin P$, then the vertex $v$ must be isolated with regards to $P$ as well.
Consider $N_Q(i)$ as defined in for the set $Q$, and $N_P(i)$ the corresponding function for the set $P$. Then it must be true that for all $k \in \{0,1,...,n\}$, one has that $N_Q(k) \leq N_P(k)$. Indeed, this is because with set $P$, each neighbor of $v$ is infected at or after they are with set $Q$. Then, from Lemma \[L3\], $v$ is isolated with regards to $Q$ so there is no $m$ such that $$F(t) \leq m+1{~\rm~ for~ some~ }~t \in (N_Q(m),t_*].$$ However, since $$N_Q(k) \leq N_P(k){~\rm~ for~ all~ }~k \in \{0,1,...,n\},$$ we can say that there is no $m$ such that $$F(t) \leq m+1{~\rm~ for~ some~ }~t \in (N_P(m),t_*]$$ as $(N_P(m),t_*] \subseteq (N_Q(m),t_*].$ Thus we know that $v$ must also be isolated with regards to $P$.
\[D2\] Given a vertex $v$ which is not isolated, we define $t_p(v)\in (0,t_*]$ to be the largest integer such that there exists $N(i)$ where $F(t_p) \leq i+1$.
Note that in order to fill a node $v$ that is not isolated, one can either fill one of its neighbors by time $t_p(v)$, or simply add the vertex itself to the initial set. Hence, one needs to fill a node $v_n$ which is either the parent ${\rm par}(v)$, a child ${\rm chi}(v)$, or $v$ itself.
Let $v\notin A_0$ be an isolated node. To achieve percolation, it is always better (faster) to include $v$ in $A_0$ than to attempt to make $v$ unisolated.
It is possible to make $v$ unisolated by including only descendants of $v$ in $A_0$, since we must include fewer than $\deg(v)$ neighbors. But given the choice to include a descendant or $v$ itself in the initial set, choosing $v$ is absolutely advantageous, because the upwards percolation achieved by $v$ being infected at some positive time is a subset of the upwards percolation achieved by filling it at time $0$. Thus including $v$ in the initial set is superior.
The above setup can be examined further to determine which vertex should be chosen as $v_n$.
Consider a vertex $v\notin A_0$. Then, in finding a node $u$ to add to $A_0$ so that $v \in A_\infty$ for the initial set $A_0 \cup \{u\}$ and such $A_\infty$ is maximized, the vertex $v_n$ must be the parent ${\rm par}(v)$ of $v$.
Filling $v$ by time $t_*(v)$ already ensures that all descendants of $v$ will be infected, and all percolation upwards must go through the parent ${\rm par}(v)$ of $v$. This means that by filling any child of $v$ in order to fill $v$ (that is, by including some descendant of $v$ in $A_0$), we obtain a subset of the percolation obtained by including the parent ${\rm par}(v)$ of $v$ in $A_0$. Therefore, the parent ${\rm par}(v)$ of $v$ or a further ancestor needs to be included in $A_0$, which means $v_n$ needs to be the parent ${\rm par}(v)$ of $v$.
Note that given a node $v\notin A_0$, if we fill its parent ${\rm par}(v)$ before $t_p(v)$, then the vertex will be infected. We are now ready for our main result, which improves the naive $O(t2^n)$ bound for finding minimal percolating sets to $O(tn)$, as discussed further in the last section.
\[teorema\]\[teo1\] To obtain one smallest minimal percolating set of a tree $T(V,E)$ with percolation function $F(t)$, proceed as follows:
- Step 1. Initialize the tree: for each node $v$, set $t_*(v)$ to some arbitrarily large number, and mark $v$ as needing to be considered.
- Step 2. Percolate using the current $A_0$. Save the times $t_a$ at which the nodes were infected. Stop the algorithm if the set of infected nodes equals $V$.
- Step 3. Consider a node $v$ that is furthest away from the root, and if there are multiple such nodes, choose one that is isolated, if it exists.
- if $v$ is isolated or is the root, add $v$ to $A_0$.
- otherwise, set $t_*({\rm par}(v))=t_p(v)-1$ (as in Definition \[D2\]) if this is smaller than the current $t_*({\rm par}(v))$ of the parent.
Set $v$ as considered.
- Step 4. Go to step 2.
After the process has finished, the resulting set $A_0$ is one of the smallest minimal percolating sets.
The proof of the theorem, describing the algorithm through which one can find a smallest percolating set, is organized as follows: we first show that the set $A_0$ constructed through the steps of the theorem is a minimal percolating set, and then show that it is a smallest such set. In order to see that $A_0$ is a minimal percolating set, we first need to show that $A_0$ percolates. In step 3, we have included in $A_0$ all isolated nodes, as well as the root if it was not infected already, and we have guaranteed that all other nodes are filled by ensuring that their parents become infected by their times $t_p$.
Showing that $A_0$ is a minimal percolating set is equivalent to showing that if we remove any node from $A_0$, the remaining set will not percolate to the whole tree. Note that in the process, other than the root, we have only included isolated nodes in $A_0$. This means that if any node $v_0$ is removed from $A_0$, the remaining set will not percolate to $v_0$: we only fill nodes higher than $v_0$ after considering $v_0$, and since turning a node unisolated requires filling at least one node higher than it and one of its descendants, $v_0$ cannot become infected after removing it from $A_0$. Moreover, if the root is in $A_0$, since we considered the root last, the rest of $A_0$ does not percolate to the root. Thus, $A_0$ is a minimal percolating set.
Now we show by contradiction, using Lemma \[L2\], that the set $A_0$ constructed through the algorithm is of the smallest percolating size. For this, suppose there is some other minimal percolating set $B$ with $|B| < |A_0|$. Then, we can build an injection from $A_0$ to $B$ in the following manner: iteratively consider the node $a \in A_0$ that is furthest from the root and has not yet been considered, and map it to a vertex $b \in B$ which is $a$ itself or one of $a$'s descendants. We know that such a $b$ must exist by induction.
We first consider the case where $a$ has no descendant in $A_0$. Then, if there is a vertex $b\in B$ that is a descendant of $a$, we map $a$ to $b$. Now suppose there is no node $b \in B$ that is a descendant of $a$. Then $a \in B$, because otherwise $a$ would be isolated with regards to $B$ as well, by Lemma \[L2\]. This means that we can map $a$ to itself in this case.
Now we consider the case where all the descendants $d$ of $a$ with $d \in A_0$ have been mapped to nodes $b_d\in B$, where each $b_d$ is $d$ or a descendant of $d$. If there is an unmatched $b\in B$ that is a descendant of $a$, then no node in $A_0$ has been matched to $b$ yet, allowing us to map $a$ to $b$. Now suppose there is no such $b\in B$. Then every node of $B$ that is a descendant of $a$ is a descendant of some node of $A_0$ below $a$, or of one of its descendants. This means that when percolating $B$, the children of $a$ will all be infected at later times than when percolating $A_0$, and by Lemma \[L2\], one has that $a \in B$, because otherwise $a$ would be isolated with regards to $B$. So in this case, we can map $a$ to itself.
The map constructed above is injective because each element of $B$ has been mapped to at most once. Since we constructed an injective function from the set $A_0$ generated by the algorithm to a smaller minimal percolating set $B$, we have a contradiction, because the injection forces $|A_0| \leq |B|$. Thus, the set generated by the algorithm must be a smallest minimal percolating set.
From Theorem \[teo1\] one can find the smallest minimal percolating set on any finite tree. Moreover, it gives an intuition for how to think of the vertices of the graph: in particular, the property of “isolated” is not an absolute property, but a property relative to the set of nodes that has been infected before it. This isolatedness is easy to define and work with in trees since each node has at most one parent. Moreover, a similar property may be considered in more general graphs and we hope to explore this in future work. Below we shall demonstrate the algorithm of Theorem \[teo1\] with an example.
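As an illustration, the steps of Theorem \[teo1\] can be sketched in code. This is a simplified sketch under assumptions not fixed by the paper (a finite horizon `t_max` standing in for the arbitrarily large $t_*$, and a rooted tree given as a `children` map); it is not the authors' reference implementation:

```python
def smallest_minimal_percolating_set(children, root, F, t_max):
    """Sketch of Theorem [teo1] for F(t)-bootstrap percolation on a tree."""
    parent = {c: p for p, cs in children.items() for c in cs}
    nodes = set(children) | set(parent) | {root}
    nbrs = {v: list(children.get(v, [])) + ([parent[v]] if v in parent else [])
            for v in nodes}

    depth, order = {root: 0}, [root]          # depths via root-first traversal
    for v in order:
        for c in children.get(v, []):
            depth[c] = depth[v] + 1
            order.append(c)

    def percolate_times(A0):
        """Return {node: infection time}; A0 is infected at time 0."""
        t_a = {v: 0 for v in A0}
        for t in range(1, t_max + 1):
            newly = [v for v in nodes if v not in t_a
                     and sum(u in t_a for u in nbrs[v]) >= F(t)]
            for v in newly:
                t_a[v] = t
        return t_a

    def t_p(v, t_a, t_star):
        """Largest usable t <= t_star once one extra neighbour of v is
        filled (Definition [D2]); None means v is isolated."""
        times = sorted(t_a[u] for u in nbrs[v] if u in t_a)
        N = {i: (0 if i == 0 else times[i - 1]) for i in range(len(times) + 1)}
        usable = [t for i, Ni in N.items()
                  for t in range(Ni + 1, t_star + 1) if F(t) <= i + 1]
        return max(usable) if usable else None

    t_star = {v: t_max for v in nodes}        # Step 1: initialize
    A0, considered = set(), set()
    while True:
        t_a = percolate_times(A0)             # Step 2: percolate current A0
        if len(t_a) == len(nodes):
            return A0
        pending = [v for v in nodes if v not in considered and v not in t_a]
        if not pending:
            return A0
        # Step 3: furthest unconsidered node, preferring isolated ones
        pending.sort(key=lambda v: (-depth[v], t_p(v, t_a, t_star[v]) is not None))
        v = pending[0]
        tp = t_p(v, t_a, t_star[v])
        if tp is None or v == root:
            A0.add(v)                         # isolated (or root): add to A0
        else:
            t_star[parent[v]] = min(t_star[parent[v]], tp - 1)
        considered.add(v)
```

On a path $1$–$2$–$3$ rooted at $1$ with constant $F(t)=2$, the sketch returns $\{1,3\}$, matching the brute-force answer.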
We will perform the algorithm on the tree of Example \[ex2\], with percolating function $F(t)=t$. We first initialize all the nodes, setting their time $t_*$ to some arbitrarily large number, represented as $\infty$ in Figure \[inf1\] below.
![(a)-(c) show the first three updates through the algorithm in Theorem \[teo1\], where the vertices considered at each time are shaded and each vertex is assigned the value of $t_*$. []{data-label="inf1"}](Fig4.png)
Percolating the empty set $A_0$, the resulting infected set is empty, as shown in Figure \[inf1\] (a). We then consider the furthest nodes from the root. None of them are isolated, so we may consider any of them; we begin with node $6$, in the labelling of Figure \[ex1\] of Example \[ex2\]. It is not isolated, so we set the $t_*$ of its parent to $t_p-1=0$, as can be seen in Figure \[inf1\] (b). Then we consider another node furthest from the root, and through the algorithm set the $t_*$ of its parent to $t_p-1=0$, as can be seen in Figure \[inf1\] (c). The following steps of the algorithm are depicted in Figure \[inf2\] below.
![ (a)-(b) show the updates 4-5 through the algorithm. (c) shows the set $A_0$ in red, and the infected vertices in blue. []{data-label="inf2"}](Fig5.png)
As done in the first three steps of Figure \[inf1\], we consider the next furthest node $v$ from the root, and by the same reasoning as for node $6$, set the $t_*$ of its parent to $t_*({\rm par}(v))=1$, as can be seen in Figure \[inf2\] (a). Now we consider node $4$: since it is isolated, we fill it in, as in Figure \[inf2\] (b). The set of infected nodes can be seen in Figure \[inf2\] (c). We then consider node $5$, the furthest node from the root not yet considered. Since it is not isolated, we change the $t_*$ of its parent to $t_p(v)-1=0$, as in Figure \[inf3\] (a).
![(a)-(c) show the updates through the algorithm in Theorem \[teo1\] after setting $A_0$ to be as in Figure \[inf2\].[]{data-label="inf3"}](Fig6.png)
Then we consider node $3$, which is isolated, so we include it in $A_0$. The nodes infected as a result of percolating this $A_0$ are shown as red vertices in Figure \[inf3\] (c). To continue the process, consider the vertex $v=2$, the furthest away non-considered node. It is not isolated, so we change the $t_*$ of its parent to $t_p(v)-1=0$, as shown in Figure \[inf4\] (a). Finally, we consider the root: since it is isolated, we include it in $A_0$, as seen in Figure \[inf4\] (b). Percolating this $A_0$ results in all nodes being infected, as shown in Figure \[inf4\] (c), and thus we stop the algorithm.
![Final steps of the algorithm.[]{data-label="inf4"}](Fig7.png)
Through the above algorithm, we have constructed a smallest minimal percolating set, shown as red vertices in Figure \[inf4\] (c), which is of size $3$. Comparing with Example \[ex2\], we see that the minimal percolating set in that example, also with $3$ elements, is indeed smallest. Finally, it should be noted that in general the times $t_p$ for each node can differ from each other and are not the same object.
From the above example, and its comparison with Example \[ex2\], one can see that a graph can have multiple different smallest minimal percolating sets, and the algorithm finds just one. In the algorithm of Theorem \[teo1\], one minimizes the size of a minimal percolating set, relying on the fact that as long as a node is not isolated, one can engineer its parent to become infected so as to infect the original node. The definition of isolated stems from trying to capture whether a node can still be infected by infecting its parent. Because the algorithm runs on trees, we could define isolation as the inability to be infected when only one other node is added.
Concluding remarks {#final}
==================
In order to show the relevance of our work, we shall conclude this note with a short comparison of our model with those existing in the literature.\
[**Complexity.**]{} Firstly we shall consider the complexity of the algorithm in Theorem \[teo1\] for finding the smallest minimal percolating set on a graph with $n$ vertices. To calculate this, suppose $t$ is the upper bound on percolation time; we have presented a way to find such an upper bound in the previous sections. In the algorithm, we first initialize the tree, which takes linear time. Steps $2$ and $3$ are run at most $n$ times, as there can only be a total of $n$ unconsidered nodes. The upper bound on percolation time is $t$, so step 2 takes time $t$ to run. Determining whether a node is isolated takes linear time, so determining isolatedness of all nodes on the same level takes quadratic time, and the remaining work of step 3 takes constant time. Thus the algorithm is $O(n+n(t+n^2)) = O(tn + n^3)$, which is $O(tn)$ whenever $t$ dominates $n^2$, and in any case much better than the $O(t2^n)$ complexity of the naive algorithm.\
[**Comparison on perfect trees.**]{} We shall now compare our algorithm with classical $r$-bootstrap percolation. For this, in Figure \[comp\] we show a comparison of the sizes of the smallest minimal percolating sets on perfect trees of height $4$, varying the degree of the tree. Two different functions were compared: one constant and the other quadratic. We see that the time-dependent bootstrap percolation model can be superior in modelling diseases with a time-variant speed of spread, in that, if each individual has around $10$ social connections, the smallest number of individuals that need to be infected in order to percolate the whole population differs by around $10^3$ between the two models.
[**Comparison on random trees.**]{} We shall conclude this work by comparing the smallest minimal percolating sets found through our algorithm and those constructed by Riedl in [@percset]. In order to understand the difference of the two models, we shall first consider in Figure \[comp1\] three percolating functions $F(t)$ on random trees of different sizes, where each random tree has been formed by beginning with one node, and then for each new node $i$ we add, use a random number from $1$ to $i-1$ to determine where to attach this node.
In the above picture, the size of the smallest minimal percolating set can be obtained by multiplying the size of the minimal percolating set by the corresponding value of $n$. In particular, one can see how the exponential function requires an increasingly larger minimal percolating set in comparison with polynomial percolating functions.
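The random trees just described can be generated in a few lines. A sketch (the naming is ours), in which each new node $i$ attaches to a uniformly random node among $1,\dots,i-1$:

```python
import random

def random_tree(n, seed=None):
    """Random recursive tree on nodes 1..n, returned as a children map."""
    rng = random.Random(seed)
    children = {i: [] for i in range(1, n + 1)}
    for i in range(2, n + 1):
        # attach node i to a uniformly random earlier node
        children[rng.randint(1, i - 1)].append(i)
    return children
```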
[**Comparison with [@percset].**]{} To compare with the work of [@percset], we shall run the algorithm with $F(t)=2$ (giving the 2-bootstrap percolation considered in [@percset]), as well as with a linear function, on the following graph:
With our algorithm, we see that nodes $2$, $3$ and $5$ become isolated in turn, and when we add them to the initial set, all nodes become infected. Thus the smallest minimal percolating set found by our algorithm has size $3$.
Riedl provided an algorithm in [@percset] for the smallest minimal percolating sets in trees under $r$-bootstrap percolation that runs in linear time. We shall describe his algorithm in general terms to clarify the comparisons we will make. Riedl defined a trailing star or trailing pseudo-star as a subtree with each vertex being of distance at most $1$ or $2$, respectively, away from a certain center vertex that is connected to the rest of the tree by only one edge. The first step of Riedl's algorithm is a reduction procedure that ensures every non-leaf has degree at least $r$: intuitively, one repeatedly finds a vertex with degree less than $r$, includes it in the minimal percolating set, removes it and all the edges attached to it, and for each of the resulting connected components, adds a new node with degree $1$ connected to the node that was a neighbor of the removed node. Then, the algorithm identifies a trailing star or pseudo-star, whose center shall be denoted by $v$ and its set of leaves by $L$. Letting the original tree be $T$, if the number of leaves on $v$ is less than $r$, then set $T'=T \setminus (v \cup L)$; otherwise, set $T'=T\setminus L$. Recursively set $A'$ as the smallest minimal percolating set of $T'$ under $r$-bootstrap percolation. Then, the smallest minimal percolating set for $T$ is $A' \cup L$ if $|L|<r$, and $A' \cup L \setminus v$ otherwise. Using Riedl's algorithm on our example, we first note that there is a trailing star centered at $3$ with $2$ leaves. Removing the leaves, there is a trailing star at $1$ with $1$ leaf. Removing $1$ and $2$, we have one node left, which is in our $A'$. Adding the leaves back and removing $3$, we have an $A_0$ of $2$, $3$ and $5$, a smallest minimal percolating set. Thus the smallest minimal percolating set with Riedl's algorithm also has size $3$, as expected.
We shall now compare our algorithm to that of Riedl. A key step in Riedl's algorithm, namely including the leaves of stars and pseudo-stars in the final minimal percolating set, assumes that these leaves cannot be infected, since it is assumed that $r > 1$. In our algorithm, however, we consider functions that may take the value $1$ somewhere, so we cannot make that assumption. Further, in $r$-bootstrap percolation, the time of infection of each vertex does not need to be taken into account when calculating the conditions for a node to be infected, since $r$ is constant, whereas in the time-dependent case it is necessary: suppose a node has $n$ neighbors and there is only one $t$ such that $F(t) \leq n$; then all $n$ neighbors must be infected before time $t$ in order for the node to become infected.\
[**Concluding remarks.**]{} The problem our algorithm solves is a generalization of Riedl's, in that it finds a smallest minimal percolating set for a wider class of functions, including constant ones. It has higher computational complexity because, without accounting for time limits, an unisolated node is not guaranteed to become infected once one other neighbor of it is infected. Finally, we should mention that the work presented in the previous sections could be generalized in several directions: in particular, we hope to develop a similar algorithm for the largest minimal percolating set, and to study the sizes of largest and smallest minimal percolating sets in lattices.
\
[**Acknowledgements.**]{} The authors are thankful to MIT PRIMES-USA for the opportunity to conduct this research together, and in particular Tanya Khovanova for her continued support, to Eric Riedl and Yongyi Chen for comments on a draft of the paper, and to Rinni Bhansali and Fidel I. Schaposnik for useful advice regarding our code. The work of Laura Schaposnik is partially supported through the NSF grants DMS-1509693 and CAREER DMS 1749013, and she is thankful to the Simons Center for Geometry and Physics for the hospitality during part of the preparation of the manuscript. This material is also based upon work supported by the National Science Foundation under Grant No. DMS- 1440140 while Laura Schaposnik was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2019 semester.
Accounting
Surf Works offer a range of accounting services suitable for all types of business. Below, we have listed packages suitable for sole traders, partnerships and limited companies. The packages can be fully tailored to your requirements by adding extra services to create the exact service that you and your business requires.
All services are carried out on time with the minimum of fuss by our in house, fully qualified accountant
The list of services offered is not exhaustive so please let us know if you require a service not listed. If you have specific needs we can build a bespoke accountancy package tailored to your exact requirements.
Standard Packages
From Sole Trader to Limited Company, we can organise your accounting with a simple, no-nonsense standard package.
Sole Trader from £25pm
Partnership from £45pm
Personal Tax Return for each partner (includes partnership income and bank interest received)
Limited Co. from £65pm
Year End Accounts
Accounts Filed at Companies House
Company Tax Return
Payroll for Directors Salary
Dividend Paperwork
Directors Personal Tax Return
Return Filed at Companies House
Bolt on Services
Year End Accounts
Bookkeeping
VAT Returns
Payroll
CIS Returns
Management Accounts
Company Formations
Company Annual Returns
Personal Tax Returns
Partnership Tax Returns
Company Tax Returns
Rental Property Accounts
Capital Gains Tax
Inheritance Tax
We also offer a fully outsourced finance function that includes:
Raise and issue sales invoices to your customers
Collect, allocate and bank money from your customers
Maintain your purchases ledger
Issue payments to your suppliers when invoices are due
For more information about our accountancy services, give us a call or email :-)
From the mid-1960's until the close of that decade, automobiles became lighter, more compact, and more powerful. Auto manufacturers continued to compete against one another for drag-strip supremacy. As government regulations and safety concerns increased, the muscle car era began to decline rapidly.
Many of these ultimate high-performance muscle cars were built to satisfy homologation requirements. Others were built just to have the fastest machine on the road. The Plymouth Hemi 'Cuda is an example of one of the fiercest and most powerful vehicles ever constructed for the roadway. It was derived from the lesser Barracudas, which began in the mid-1960s. It was built atop the 'E' body platform and was restyled in 1970 by John Herlitz, making it longer, wider, and lower. The 426 cubic-inch Hemi V8 was capable of producing an astonishing 425 horsepower. Mated to a four-speed manual 833 transmission, this was the ultimate muscle car of its day.
This 1971 Plymouth Hemi 'Cuda Convertible with black paint and orange billboards was offered for sale at the 2006 RM Auction in Monterey, CA, where it was expected to sell for between $180,000 and $220,000. It came equipped from the factory with power windows, power brakes, power steering, Rally instrument cluster, rim blow steering wheel, bucket seats, AM/FM cassette radio, and driving lights. It has a Dana '60' rear end and the 426 cu in engine. It is one of just 374 'Cuda Convertibles built in 1971. On auction day bidding reached $165,000, which was not high enough to satisfy reserve. The vehicle was left unsold.By Daniel Vaughan | Dec 2006
This 'Cuda Convertible was given a show-quality restoration to original specifications and is one of just 374 examples originally produced for the 1971 model year. It is believed to be one of just 87 383-powered convertibles produced for the last year of 'Cuda convertible production in 1971. The 383 cubic-inch V8 has a four-barrel carburetor and is capable of producing 300 horsepower. There is a TorqueFlite three-speed automatic gearbox and four-wheel hydraulic brakes.
The car is finished in Tawny Gold, with a white interior and a white power-operated convertible top. Features include dual chrome-tipped exhaust outlets, floor console, hood pins, power brakes, power steering, Rallye wheels, a 'Slap Stik' shifter and a 'Tuff' steering wheel.
In 2010, this 'Cuda Convertible was offered for sale at the Vintage Motor Cars of Meadow Brook presented by RM Auctions. The car was estimated to sell for $60,000 - $70,000. As bidding came to a close, the car had been sold for the sum of $44,000 including buyer's premium.By Daniel Vaughan | Aug 2010
V8 Cuda Convertible
The 3rd generation Barracuda ran from 1970 through 1974; the previous generations, which began in 1964, were A-body Valiant based. It was designed by John E. Herlitz on the 108-inch-wheelbase, unibody E-platform, a shorter and wider version of the existing B-body. This example has the non-Hemi 340 cubic-inch V8 with automatic and is a stock example. 1971 was the only year for four headlamps. Somehow, this model series didn't sell to expectations and production slowed over the years, making the cars quite rare today. An unaltered car is rarer still.
V8 Cuda Hard Top Coupe
The writing was on the wall by 1971 for the muscle car enthusiast. With rising gas prices and skyrocketing insurance rates, the days of the overpowered and often low priced performance automobile were numbered. For the big three, it seems that the decision was made to go out with a bang, and some of the rarest and most desirable muscle cars ever to come out of the Motor City were produced.
Among the hottest is the Hemi 'Cuda, produced for a mere two model years. In 1970, it is believed that Plymouth produced just 696 Hemi 'Cuda hardtops and for 1971, a mere 118 would leave the line.
Wild colors would survive for the 1971 model year and Chrysler would lead the pack with their Hi-Impact color palette. Several eye-popping colors were offered, including Sassy Grass Green as seen on this example, which is one of the rarest offerings.
When it comes to American Muscle, the Plymouth Hemi 'Cuda is always at the top of the list. And when it comes to rarity and desirability, nothing compares to a 1971 Hemi 'Cuda.
No matter what make or model you may prefer, there is no disputing the visual impact of the 426 Street Hemi engine. With the massive valve covers and the huge dual quad carbs, it certainly takes top honors when it comes to intimidation. To add the outrageous FC7 In-Violet (aka Plum Crazy) paint to the mix is to take things a step beyond.
This 1971 Hemi 'Cuda exemplifies what Mopar Performance was all about in the final years of the original Muscle Car era. With a mere 107 leaving the Hamtramck, Michigan assembly plant with the Hemi engine under the shaker hood, these cars were rare even when new. This car is one of just 48 equipped with the Torqueflite automatic transmission and it also features the rare leather interior, elastomeric color keyed bumpers, power steering and power front disc brakes, a center console, the AM radio with the Dictaphone cassette recorder, tinted glass, dual color keyed mirrors and more, making it one of the highest-optioned 1971 Hemi 'Cudas in existence.
Of course, when new these cars were flogged not only on the street, but at the tracks throughout the country, making this example among the most sought after and valuable American muscle cars ever built.
The first series of the Barracuda was produced from 1964 through 1969, distinguished by its A-body construction. From 1970 through 1974 the second series was produced using an E-body construction.
In 1964, Plymouth offered the Barracuda as an option of the Valiant model line, meaning it wore both the Valiant and Barracuda emblems. The base offering was a 225 cubic-inch six-cylinder engine that produced 180 horsepower. An optional Commando 273 cubic-inch eight-cylinder engine was available with a four-barrel carburetor, high-compression heads and revised cams. The vehicle was outfitted with a live rear axle and semi-elliptic springs. Unfortunately, the Barracuda was introduced at nearly the same time, separated by only two weeks, as the Ford Mustang. The Mustang proved to be the more popular car, outselling the Valiant Barracuda by a ratio of 8 to 1.
The interior was given a floor-shifter, vinyl semi-bucket seats, and rear seating. The rear seats folded down allowing ample space for cargo.
By 1967, Plymouth redesigned the Barracuda and added a coupe and convertible to the model line-up. To accommodate larger engines, the engine bay was enlarged. There were multiple engine offerings that ranged in configuration and horsepower ratings. The 225 cubic-inch six-cylinder was the base engine, while the 383 cubic-inch eight-cylinder was the top of the line, producing 280 horsepower. That was impressive, especially considering the horsepower-to-weight ratio. Many chose the 340 cubic-inch eight-cylinder because the 383 and Hemi were reported to make the Barracuda nose-heavy, while the 340 offered optimal handling.
In 1968 Plymouth offered a Super Stock 426 Hemi package. The lightweight body and race-tuned Hemi were perfect for the drag racing circuit. Glass was replaced with Lexan, non-essential items were removed, and lightweight seats with aluminum brackets replaced the factory bench; the cars were given a sticker indicating they were not to be driven on public highways but used only for supervised acceleration trials. The result was a car that could run the quarter mile in the ten-second range.
For 1969 a limited number of 440 Barracudas were produced, giving the vehicle a zero-to-sixty time of around 5.6 seconds.
In 1970 the Barracuda was restyled but shared similarities with the 1967 through 1969 models. The Barracuda was available in convertible and hardtop configurations; the fastback was no longer offered. Sales were strong in 1970 but declined in the years that followed. The muscle car era was coming to a close due to rising government safety and emission regulations and insurance premiums. Manufacturers were forced to detune their engines. The market segment was slowly shifting from muscle cars to luxury automobiles. 1974 was the final year Plymouth offered the Barracuda. By Daniel Vaughan | Aug 2010
◾Dodge Charger and Durango 'most loved' in their respective segments for second consecutive year
◾Jeep® Renegade leads Entry SUV segment in 2015 Most Loved Vehicles in America survey by Strategic Vision
◾FIAT captures most segment wins among small cars with 500 and 500e
◾FCA US ranked highest overall in Strategic Vision's 20th annual Total Quality Index™ this past July
November 24, 2015 , Auburn Hills, Mich. - Strategic Vision has named five FCA US LLC vehicles to its 'Most Loved Ve...[Read more...]
Scottsdale, Arizona (July 18th, 2015) – Thomas Scott is an accountant and entrepreneur from Athens, Georgia who has had a love for all things automotive for as long as he can remember. He possesses a lifetime of passion for buying, selling and working on classic American cars.
'I started out with the muscle cars — the Mopars, the Cobra Jet Mustang, the Chevelle,' Scott says. 'Those are cars that everybody recognizes — they're widely popular and very tradeable.' However, as S...[Read more...]
Scottsdale, Arizona (December 1st, 2014) – For Enthusiasts – By Enthusiasts. ™ This is far more than a tagline at Russo and Steele Collector Automobile Auctions. It's a lifestyle, and we are gearing up to deliver that singular passion to the High Desert of sunny Scottsdale, Arizona for our annual flagship event during the world renowned collector car week. Additionally, Scottsdale marks the kick-off of the year-long celebration of our 15th anniversary. Held over five thrilling a...[Read more...]
Education Week reporter Ben Herold explores how technology is shaping teaching and learning and the management of schools. Join the discussion as he analyzes the latest developments.
Gates Foundation, Chan Zuckerberg Team Up to Seek 'State of the Art' Ideas for Schools
By Benjamin Herold on
May 8, 2018 1:31 PM
The Bill & Melinda Gates Foundation and the Chan Zuckerberg Initiative are teaming up on a new research-and-development initiative aimed at identifying "state-of-the-art" educational strategies and bringing them to the classroom.
The focus is on spurring development of new measures, new ways of teaching, and new technologies for tracking and supporting students' writing ability, math skills, and "executive functions," such as self-control and attention.
In a new Request for Information released today, the groups wrote that researchers from fields as diverse as education, neuroscience, cognitive psychology, and technology are generating exciting new ideas about how people actually learn—but that information "has not yet been translated effectively into methods and tools for teachers and students to use in the classroom every day."
Such "research insights must inform ongoing development of tools and instructional approaches that will enable students to overcome math, literacy, and other learning challenges and at scale, in order to reach millions, if not billions, of students," the document reads.
The focus of the new efforts is on identifying promising new developments and ideas in three main areas:
Improving students' writing (especially non-fiction). "The skills connected to writing—evaluation of arguments and evidence, critical and creative thinking about solutions and sources, identifying support for a key idea or process, clear and evocative argument-making—are frequently cited as 21st century skills in high demand by employers," the Request for Information states. "Yet, the majority of high school graduates are not prepared for the demands of postsecondary and workplace writing."
Among the areas where the groups hope to see improvements: comprehensive writing solutions, new metrics for measuring student progress and proficiency in writing, and new tools to promote more collaboration and better feedback.
Improving students' mathematical understanding, application, and related mindsets. Here, the language of the personalized-learning movement (which both organizations support) is clear: There already exist promising approaches that "help teachers to address individual students' needs by mirroring the same personalized approaches used by the best 1:1 tutors," the document states. "Highly personalized learning experiences and tools have the potential to analyze student responses to understand barriers to student learning, provide immediate feedback, and apply immediate and effective remediation to students when needed."
Among other things, the organizations are specifically looking for tools that can further personalize math instruction via a focus on the "whole student"—including children's mindsets, beliefs, attention, and "affective" or emotional states.
Measuring and improving students' executive function. "Student success in academics and in future careers is associated with their ability to wrestle with multiple ideas at once, think flexibly, and regulate their action and thoughts," the Request for Information states. "There is much to be done to track and improve students' progress on [executive function] development and connect it to real-world benefits, especially for those who are most at-risk."
Areas of focus here include advances in techniques for tracking children's development of these skills and abilities, interventions (including "technology-enhanced programs in or outside of school") designed to improve desired behaviors, and supports for teachers.
The Gates Foundation is a traditional charitable foundation, chaired by Microsoft founder Bill Gates. Over the last decade-plus, the group has dedicated hundreds of millions of dollars a year to such education-related causes as promoting small high schools, changing the way teachers are evaluated, and supporting development of the Common Core State Standards. Last October, the Gates Foundation announced a strategic shift in focus, including a new emphasis on "locally-driven solutions" and "innovative research."
The Chan Zuckerberg Initiative, meanwhile, is a newer entity, founded and led by Facebook CEO Mark Zuckerberg and his wife, pediatrician Priscilla Chan. Structured as a limited-liability corporation, CZI is free to make charitable donations, invest in for-profit companies, and engage in political lobbying and advocacy, with minimal disclosure requirements. The venture-philanthropy group has announced that it will give hundreds of millions of dollars annually to support a vision of "whole-child personalized learning" that aims to customize each child's educational experience based on their academic, social, emotional, and physical strengths, needs, and preferences.
Last June, the two groups announced their first substantive collaboration: a $12 million joint award to an intermediary organization known as New Profit, which in turns supports organizations working to promote personalized learning.
In their new Request for Information, the Gates Foundation and CZI said that technology is not the focus of what they hope to spur, but it is expected to play a role.
The groups also emphasized that their new plan is currently in draft stage. Individuals, nonprofit groups, universities, private companies, and government-sponsored labs are invited to respond, with the expectation that those groups' input will in turn shape the foundations' funding plans moving forward.
No decision has yet been made as to how much money the groups will ultimately invest in the new R&D effort.
Why this new partnership, and why now?
"The reason our two philanthropies have decided to join hands in this effort is simple: We believe the scope and importance of this work exceeds what any single organization can or should undertake alone," wrote CZI president of education Jim Shelton and Gates Foundation director of K-12 education Bob Hughes in an op-ed published today by Fast Company.
"The purpose of the initiative is not to mandate anything. It's to learn from the work that's currently happening in classrooms, universities, entrepreneurial efforts, and research centers throughout the country."
Photos:
Bill Gates, Microsoft co-founder and director at Berkshire Hathaway, is interviewed by Liz Claman of the Fox Business Network in Omaha, Neb., May 8. Photo by Nati Harnik/AP
Facebook CEO and Harvard dropout Mark Zuckerberg delivers the commencement address at Harvard University commencement exercises on May 25, in Cambridge, Mass. Photo by Steven Senne/AP
A VISUALLY STUNNING architectural biography of Minnesota’s most influential architect of the twentieth century. Architect, artist, furniture designer, and educator, Ralph Rapson has played a leading role in the development and practice of modern architecture and design, both nationally and internationally.
“Ralph Rapson is now a legend in the history of modern architecture.”
—Cesar Pelli, FAIA
REVIEW:
Barbara Flanagan/The New York Times
Ralph Rapson is best known as the designer of the Guthrie, Minneapolis's landmark of theater design, but because he worked, taught and competed with most of the world's first modernists–Wright, Mies, Corbusier, Saarinen–his elder son and biographer calls him "the Forrest Gump of architecture."
Ralph Rapson: Sixty Years of Modern Design, by Rip Rapson, Jane King Hession and Bruce N. Wright, documents the architect’s vast career and uncanny associations.
Rapson believed design should reflect the moment–furniture, houses, cities–but his take on modernism was never pompous. He perpetuated endless ideas–still fresh–vibrant drawings and youthful pranks. (He had his students hoist famous visitors upside down, including the stocky Buckminster Fuller, and footprint the ceiling with their bare soles.) The book shows how one can be talented, influential and happy, all the while remaining internationally obscure. It also tells, discreetly, how one man can achieve all this single-handedly: with his right forearm amputated at birth, Ralph Rapson drew with his left hand.
---
author:
- |
Malika Chassan$^{1,2}$ , Jean-Marc Azaïs$^1$,\
Guillaume Buscarlet$^3$, Norbert Suard$^2$\
[ $(^1)$ Institut de Mathématiques de Toulouse, Université Toulouse 3, France]{}\
[ $(^2)$ CNES, Toulouse, France]{}\
[ $(^3)$ Thales Alenia Space, Toulouse, France]{}\
[ malika.chassan@math.univ-toulouse.fr]{}
bibliography:
- 'biblio.bib'
title: A proportional hazard model for the estimation of ionosphere storm occurrence risk
---
Introduction
============
Severe magnetic storms are feared events for the integrity and continuity of the GPS-EGNOS navigation system, and accurate modeling of this phenomenon is necessary. Our aim is to estimate the occurrence intensity of extreme magnetic storms per time unit (year).
Our data set, retrieved from [@noaaKpap], consists of 80 years of registration of the so-called 3-hour ap index (for “planetary amplitude”). The ap index quantifies the intensity of planetary geomagnetic activity, using data from 13 observatories. Although the equatorial region is not covered by these 13 observatories, they are spread all over the earth and the coverage of the ap index is rather global. The ap index is a linear transformation of the quasi log-scale index Kp, with the same sampling step of 3 hours. The Kp index, and hence the ap index, corresponds to a maximal variation of the magnetic field over a 3-hour period. See [@noaaKpap] for more details on geomagnetic indices. The ap index is available from 1932 to the present, but for our analysis we will use only the 7 complete solar cycles of the data set, from the 17th (on the general list), which starts in September 1933, to the 23rd, which ends in December 2008.
There are other data available for the study of ionospheric magnetic activity, each with advantages and disadvantages:
- the aa index (for “antipodal amplitude”). Although this index has been available since 1868, it is calculated from only two nearly antipodal geomagnetic stations, in England and Australia. Thus, this index does not take into account all the magnetic activity of the ionosphere.
- the Dst (Disturbance storm time). This index is restricted to the equatorial magnetic perturbation (see Figure \[fig : carte\_obs\]). Moreover, there are only 57 years of registration available against 80 for the ap index. Nonetheless, this index has the advantage of being an unbounded integer, contrary to the ap index, which lies in a finite set of non-consecutive positive integers (see Section \[section : difficulties\]).
- the raw geomagnetic data, also available for many geomagnetic observatories. The oldest observations date back to 1883 for hourly values and to 1969 for 1-minute values. They consist of the measurement by magnetometers of magnetic field variations. One disadvantage of these data is the presence of gaps in the recording (with gap lengths varying from one month to several years depending on the observatory); their principal disadvantage is the quantity of pre-treatment required.
Since all these indices are strongly correlated [@Rifa], we chose to use only one of them for our analyses. We opted for the ap index. The main advantage of this data set is the large amount of data. Moreover, there are no gaps in the ap index, contrary to raw geomagnetic data. Finally, the ap index is more global than the aa index or the Dst, as one can see in Figure \[fig : carte\_obs\].
![Positions of the observatories for the Dst ($\bigstar$) and the Kp/ap indices ().[]{data-label="fig : carte_obs"}](Fig1.jpg){width="8cm" height="5.5cm"}
Intense storms being scarce, classical statistical methods for probability estimation, such as the empirical frequency, are not precise enough. In many domains, Extreme Value Theory (EVT) makes it possible to estimate the probability of scarce extreme events. But, because of, among other things, the finite discrete form of our data and their obvious non-stationary behavior, EVT cannot be applied here.\
In Section 2, we develop the arguments showing that a direct use of classical EVT is not achievable. In Section 3, we describe our new proportional hazard model; this is the main contribution of this paper. The description of the parameter estimators can be found in Section 4. Section 5 is dedicated to the presentation of applications to our data set.
Difficulties to directly apply EVT {#section : difficulties}
===================================
As said before, the first obstacle to direct application of EVT is the bounded discrete form of our data. The ap index varies in the set $\{$0, 2, 3, 4, 5, 6, 7, 9, 12, 15, 18, 22, 27, 32, 39, 48, 56, 67, 80, 94, 111, 132, 154, 179, 207, 236, 300, 400$\}$. The application of Extreme Value Theory assumes the continuity of the probability distribution and it is well known that EVT does not apply to discrete finite observations, see for example [@Anderson1970].
The fact that finite discrete data do not enter into the scope of the theory is not the only issue. Indeed, in the case of peaks-over-threshold modeling, one has to choose a threshold. The optimal threshold is chosen by analyzing the behavior of the parameters as the threshold varies. This is, generally speaking, not possible with discrete data. For example, see the work of Cooley, Nychka and Naveau [@CooleyNaveau2007], where the low precision of the measurements makes the data almost discrete. There, as the threshold increases, one observes a sawtooth behavior of the parameter estimators, and this makes the threshold selection troublesome.\
A second problem is that the ap index data obviously show a non-stationary pattern. It is well known that solar activity follows cycles with a duration of about 11 years. Corresponding cycles are observable in the ap index behavior and must be taken into account for model assessment. This behavior implies that the probability of a magnetic storm occurrence depends on the position within the cycle. See Figure \[Cycle2\], for example.
![The ap index during the first complete cycle (17$^{th}$ on the general list, from September 1933 to February 1944). The dotted vertical line represents the peak.[]{data-label="Cycle2"}](Fig2.jpg){width="8cm" height="6.5cm"}
One can see the first complete solar cycle of the data set. Its middle is indicated by a vertical dotted line. One immediately notices that strong storms (characterized by a high ap index level) occur principally during the second half of the cycle. Thus, it is not realistic to model this behavior by a standard stationary extreme value model (e.g. with constant parameters). A more efficient approach is to include non-stationarity in the parameter estimation. But once again, for this type of process, there is no general theory allowing such a modeling.
In various research fields, like hydrology, non-stationary extreme value models have been proposed. For example, see the work of Jonathan and Ewans [@Jonathan2011]. In this paper, the authors model the seasonality of extreme waves in the Gulf of Mexico. The occurrence rate and intensity of storm peak events vary with season. To model this seasonal effect, the authors have chosen to express the Generalized Pareto parameters as a function of the seasonal degree using a Fourier form. But this approach supposes that classical EVT can be applied, and this is not the case with the data set used in this paper.
Model description
=================
In this section, we give a precise definition of what we call a storm and describe the data and their pretreatments (mostly declustering and time warping). We also describe the model we built and its advantages.
Storm definition, declustering
------------------------------
Ionospheric perturbations are classified in a standardized way using the ap index, according to Table 1:
**Ionosphere Condition** **Kp-index** **ap index**
-------------------------- -------------- ----------------
Quiet 0-1 <7
Unsettled 2 7 to <15
Active 3 15 to <27
Minor storm 4 27 to <48
Major storm 5 48 to <80
Severe storm 6 80 to <140
Large severe 7 140 to <240
Extreme 8 240 to <400
Extreme 9 $\geq$ 400
: Relation between Kp, ap and ionosphere activity
\
\
We introduce a declustering process of the data in order to consider only one event, with the highest intensity, even when there are different periods of high intensity separated by less active ones (lower indices). See Chapter 5.3 in [@Coles2001] for example. This so-called Runs Declustering process allows us to precisely define what we consider as a storm. We have to set two parameters:
- a *low level*, the threshold above which we consider that a storm begins (typically 111, 132 or 154);
- the run length *r*, the minimal number of observations below the low level between two events for them to be considered independent.
Thus, two exceedances of the low level separated by fewer than *r* measurements will be considered to belong to the same cluster (same storm).
Then, for each cluster, we define the storm level as the maximal level reached in the cluster. The first time this maximum is attained is also saved; it represents the storm date. For a cluster, we define the length of the storm as the number of observations between the first up-crossing and the last down-crossing of the low level.
Durations of magnetic storms are very variable, from 3 or 6 hours for an extreme storm (level 300 or 400) up to 90 hours for a low-level storm. But, due to this declustering, we consider only one event time (since only the first maximum occurrence time is saved). This is not incoherent, since we focus on strong storms, which are brief compared to weaker ones, but it should be taken into account in the definition of the occurrence probability.
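The runs-declustering step described above can be sketched as follows (a minimal illustration of the procedure, not the authors' code; function and variable names are ours):

```python
def run_decluster(ap, low_level=111, r=8):
    """Runs declustering of a 3-hour ap series.

    Exceedances of `low_level` separated by fewer than `r` sub-threshold
    observations are merged into one cluster; each cluster is summarized
    by (storm level, index of first maximum, cluster length).
    """
    storms = []
    start = None      # index of the first up-crossing of the current cluster
    last_exc = None   # index of the last exceedance seen so far
    for i, x in enumerate(ap):
        if x >= low_level:
            # more than r - 1 sub-threshold observations => new cluster
            if start is not None and i - last_exc > r:
                storms.append(_summary(ap, start, last_exc))
                start = i
            elif start is None:
                start = i
            last_exc = i
    if start is not None:
        storms.append(_summary(ap, start, last_exc))
    return storms

def _summary(ap, start, end):
    cluster = ap[start:end + 1]
    level = max(cluster)
    date = start + cluster.index(level)   # first time the maximum is reached
    return level, date, end - start + 1   # level, storm date, cluster length
```

For instance, with `low_level=111` and `r=2`, the series `[0, 120, 132, 5, 5, 179, 0]` yields two storms, since the two exceedance runs are separated by two sub-threshold observations.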
Precisions on probability of occurrence {#section: precise_proba}
---------------------------------------
As said before, a storm is now defined by three values: the maximal level, the first time this maximum is attained and the length of the cluster. This modeling allows us to estimate the probability: $$P_1(t) = \mathds P(\textrm{a storm of level 400 \textbf{begins} at time t})$$ But we want to know the probability: $$P_2(t) = \mathds P(\textrm{a storm of level 400 \textbf{ is ongoing} at time t})$$
In the whole data set, the level 400 is reached 29 times, but only 23 storms of level 400 are counted after the declustering. Among these 23 storms, 17 reach the level 400 only once and 6 remain at this level for two consecutive observations.
Hence, we can say that: $$\begin{array}{ll}
P_2 (t) &= P_1(t) + P_1(t-1)\times \mathds P(\textrm{storm stays at the level 400 two times})\\
& \simeq P_1(t) \times (1 + \mathds P(\textrm{storm stays at the level 400 two times}))\\
& \simeq P_1(t)\times (1 + 6/23)
\end{array}$$
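The counts quoted above can be checked in a few lines (the numbers are those of the text; the variable names are ours):

```python
n_storms = 23   # declustered storms reaching level 400
n_double = 6    # of these, storms staying at level 400 two consecutive times
n_raw = 29      # raw 3-hour observations at level 400 in the whole data set

# 29 raw exceedances = 17 single-slot storms + 2 slots for each of the 6 others
assert n_raw == (n_storms - n_double) + 2 * n_double

# multiplicative correction turning P1 (storm begins) into P2 (storm ongoing)
factor = 1 + n_double / n_storms   # = 29/23, about 1.26
```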
Data description
----------------
After the declustering there are only 23 magnetic storms of level 400. There are not enough individuals to estimate their frequency as a function of the covariates. For the storms of level 300, one counts 44 events and this is still insufficient.
Consequently, we have to use storms of lower levels to estimate the influence of each covariate and extrapolate these results to the extreme level. We will use all the storms of level greater or equal to the *low level* parameter defined in the declustering process to make estimations. For example, if the low level is 111, we call “high level storm” every storm of level 111, 132, 154, 179, 207, 236, 300 or 400. The “extreme level” will be only 400.
The mean probability of occurrence for each high level is given in Table \[tab: freq\].
Level 111 132 154 179 207 236 300 400
--------------------------- ------ ------ ------ ------ ------ ------ ------ ------
Number of storm 182 158 103 84 51 57 44 23
Frequency $\times 10^{4}$ 7.99 6.93 4.52 3.69 2.24 2.50 1.93 1.01
Frequency in year$^{-1}$ 2.33 2.02 1.32 1.08 0.65 0.73 0.56 0.29
: Number of occurrences and frequency of storms by level[]{data-label="tab: freq"}
\
Besides the 3-hour ap index, we have a covariate representing the solar activity of a cycle. This solar cycle activity characteristic is the maximum of the monthly Smoothed Sunspot Number (monthly SSN). For an easier interpretation of the results, this covariate will be centered. See [@nasaSSN] for more details on the sunspot number.
The lengths of the cycles are also available; we call $D_j$ the length of the $j^{th}$ cycle.
Time Warping
------------
The durations of the 7 complete solar cycles range from 9.7 to 12.6 years. Thus, in order to analyze all 7 cycles together, a time warping is applied to each cycle: the position of a storm in a cycle is represented by a number between $-0.5$ and $0.5$, where $-0.5$ is the beginning of the cycle, 0.5 its end and 0 its middle (peak). In Figure \[Cycle2Warping\], the dash-dotted line represents the warped time for the first complete solar cycle.
![The ap index during the first cycle. The dotted vertical line represents the peak and the dash-dotted line the warped time.[]{data-label="Cycle2Warping"}](Fig3.jpg){width="8cm" height="7cm"}
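Assuming the dates are expressed in a common numeric unit (e.g. fractional years), this warping is a simple affine map; a sketch (the function name is ours):

```python
def warp(t, cycle_start, cycle_end):
    """Map a date t in [cycle_start, cycle_end] to warped time in [-0.5, 0.5]:
    -0.5 is the beginning of the cycle, 0.5 its end and 0 its middle (peak)."""
    return (t - cycle_start) / (cycle_end - cycle_start) - 0.5
```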
Proportional hazard model
-------------------------
The model we built is inspired by the Cox model. First introduced in epidemiology, the Cox model is a proportional hazard model which expresses the instantaneous risk with respect to time and some covariates $(X_1,...,X_p)$. In epidemiology, these variables are risk factors as well as treatments. The instantaneous risk $\lambda(t,X_1,...,X_p)$ is defined through the occurrence probability in an infinitesimal interval $$\mathds P \{ \textrm{there exists an event} \in [t,t+dt]\, \}= \lambda(t,X_1,...,X_p)dt$$ In the Cox model, this instantaneous risk is a relative risk with respect to a reference risk $ \lambda_0(t)$, often related to a control treatment. The influence of the covariates is modeled by the exponential of a linear combination of them. That is to say: $$\lambda(t,X_1,...,X_p) = \lambda_0(t)\exp (\sum_{i=1}^p \beta_i X_i)$$ where $\beta_i$ quantifies the influence of the $i^{th}$ covariate. For more details about the Cox model, see [@Aalen2008].
The model constructed here departs from the Cox model in several meaningful ways:
- an event (a storm occurrence) may occur several times within a cycle. Hence we use Poisson distributions instead of Bernoulli ones;
- the variable $D_j$ is included as a factor, so that the measurement unit is the number of events per time unit and not per cycle;
- $\lambda_0(t)$ is not considered as a nuisance parameter but as a parameter to estimate;
- the estimation is made using all the storms of high level, and an extrapolation to the storms of extreme level 400 is applied using the parameter $P_{400}$, the probability that a high level storm grows into a storm of level 400. The use of this parameter assumes that the level reached by a high level storm does not depend on the instant of appearance; a chi-square independence test showed that this assumption is acceptable. For details on this test, see Appendix \[annexe: chi2\].\
Thus, in the model we developed, the number of observed storms (of high level) during the cycle $j$ at time $t$, called $N_j(t)$, is assumed to be a non-homogeneous Poisson process with intensity $\lambda_j(t)$ such that: $$\lambda_j(t) = \lambda_0(t) D_j \exp ( \beta X_j)$$ i.e. $$N_j([a,b]) \sim \mathcal P \left( \int_a^b \lambda_j (t)dt \right)$$ The basic intensity $\lambda_0(t)$, which we want to estimate, takes into account the fact that storms are more likely to occur during the second half of the cycle. Note that only one covariate is used here, the solar activity index $X_j$, and that the parameter $\beta$ models its influence.
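Under this model, storm times on a warped cycle can be simulated by thinning (Lewis–Shedler). The sketch below is ours, with an illustrative shape for $\lambda_0$ (the real basic intensity is estimated in Section 4); `lam_max` must upper-bound $\lambda_j$ on $[-0.5, 0.5]$:

```python
import numpy as np

def simulate_storms(lambda0, D_j, beta, X_j, lam_max, rng):
    """Simulate storm times on the warped cycle [-0.5, 0.5] from a
    non-homogeneous Poisson process with intensity
        lambda_j(t) = lambda0(t) * D_j * exp(beta * X_j),
    by thinning a homogeneous process of rate lam_max."""
    times, t = [], -0.5
    scale = D_j * np.exp(beta * X_j)
    while True:
        t += rng.exponential(1.0 / lam_max)       # candidate event time
        if t > 0.5:
            return np.array(times)
        if rng.uniform() <= lambda0(t) * scale / lam_max:
            times.append(t)                        # candidate accepted
```

For instance, `simulate_storms(lambda t: 2 + 4 * t, 10.0, 0.0, 0.0, 40.0, rng)` uses an increasing $\lambda_0$, reproducing the tendency of storms to occur late in the cycle.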
A model extension
-----------------
We have seen that there is a strong difference between the two halves of a solar cycle. Thus, we tried to implement a modified model, where the estimation was made separately on each half. The variable $D_j$ was replaced by $D_{j,1}$ and $D_{j,2}$, the lengths of the first and second halves of the cycle, and $N_j(.)$ was then a non-homogeneous Poisson process with intensity: $$\begin{array}{c}
\lambda_{j,1}(t) = \lambda_0(t) D_{j,1} \exp ( \beta_1 X_j) \ \textrm{ if } t<0\\
\lambda_{j,2}(t) = \lambda_0(t) D_{j,2} \exp ( \beta_2 X_j) \ \textrm{ if } t\geq0
\end{array}$$ But the estimation in this model led to incoherent results. Indeed, because of the presence of a different normalization constant on each half (see Section \[section : lambda 0 chap\]), the estimated basic intensity during the first half was higher than during the second one. Hence, this approach was abandoned.
Estimation
==========
$P_{400}$ and $\beta$
---------------------
Since $P_{400}$ is independent of the position in the cycle, the empirical frequency is used: $$\widehat{P_{400}} = \frac{\# \{ \textrm{storms of level } 400 \}}{\# \{ \textrm{storms of level $\geq$ \textit{low level}} \}}$$ Noting $m=\# \{ \textrm{storms of level $\geq$ \textit{low level}} \}$, we get the corresponding $95\%$ confidence interval: $$P_{400} \in \left[ \widehat{P_{400}} \pm 1.96 \sqrt{\widehat{P_{400}}(1- \widehat{P_{400}})/m}\right]$$ For $\beta$, we use the fact that $$N_j = N_j([-0.5,0.5]) \sim \mathcal P \left( \left[ \int_{-1/2}^{1/2} \lambda_0 (s)ds \right] \ D_j \exp(\beta X_j) \right)$$ As in the Cox model, we verify the sufficiency of the statistic $N_j$, and $\beta$ is estimated by its maximum likelihood estimator in a Poisson generalized linear model. A confidence interval is also computed. Full details can be found in Appendix \[annexe\].
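This estimator is a two-liner. In the sketch below the counts 22 and 701 are hypothetical, chosen only because they reproduce the low-level-111 column of Table \[tab: p400\]:

```python
import math

def p400_estimate(n_extreme, n_high):
    """Empirical P_400 with its 95% Wald confidence interval."""
    p = n_extreme / n_high
    half = 1.96 * math.sqrt(p * (1.0 - p) / n_high)
    return p, (p - half, p + half)

# hypothetical counts reproducing the low-level-111 results
p, (lo, hi) = p400_estimate(n_extreme=22, n_high=701)
```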
Basic intensity $\lambda_0(t)$ {#section : lambda 0 chap}
------------------------------
Here, we use a kernel estimator. Assuming that $\beta$ is known, we have: $$\widehat{\lambda_0 (t)} = K\displaystyle \sum_{j=1}^J \int_{-1/2}^{1/2} dN_j(t-s)\phi(s) = K \displaystyle \sum_{j=1}^J \int_{-1/2}^{1/2} N_j(t-s)\phi'(s)ds$$ where $J$ is the number of individuals (cycles), $K$ is a normalization constant and $\phi$ is the kernel, verifying $\phi(\pm 1/2) =0$ (for the integration by parts) and $\int_{-1/2}^{1/2} \phi(s)ds =1$.\
The bias and the variance of this estimator are calculated using step functions and by passage to the limit. Let $\phi$ be a step function, $$\phi(s) = \sum_{i=1}^n a_i \mathbb 1 _{A_i}(s)$$ where the $A_i = [ t_i, t_{i+1} ] $ form a partition of $[ -1/2,1/2 ]$ (we can assume $t_i<t_{i+1}$ without loss of generality) and the $a_i$ are such that $\int_{-1/2}^{1/2} \phi(s) ds =1$. Then, for each $t \in [-1/2,1/2]$, $$\begin{array}{ll}
\widehat{\lambda_0 (t)} &= \displaystyle \sum_{j=1}^J K \int_{-1/2}^{1/2} dN_j(t-s)\phi(s)\\
%voir sauv 9/01/13 pour les détails
&= K \displaystyle{ \sum_{j=1}^J} \bigg\{ a_ 1 N_j([t-t_2,t-t_1]) + ...+ a_n N_j([t-t_{n+1},t-t_n]) \bigg\}
\end{array}$$ Thus, since $N_j([a,b]) \sim \mathcal P \left( Q_j \, \int_a^b \lambda_0 (s)ds \right) \ $ with $Q_j = D_j \exp(\beta X_j)$ and since $\mathds E \mathcal P (\xi) = \mathds V \mathcal P (\xi) = \xi$, we get: $$\mathds E \, \widehat{\lambda_0 (t)} = K \displaystyle{ \sum_{j=1}^J} Q_j \int_{-1/2}^{1/2} \lambda_0 (s)\phi(t-s)ds$$\
Similarly, for the variance: $$\begin{array}{ll}
\mathds V \, \widehat{\lambda_0 (t)} &= K^2 \displaystyle{ \sum_{j=1}^J} \bigg\{ a_1^2 \mathds V N_j([t-t_2,t-t_1]) + ...+ a_n^2 \mathds V N_j([t-t_{n+1},t-t_n]) \bigg\}\\
&=K^2 \displaystyle{ \sum_{j=1}^J} Q_j \int_{-1/2}^{1/2} \lambda_0 (s)\phi^2(t-s)ds
\end{array}$$ In the case of a kernel concentrated around zero we obtain $$\mathds E \, \widehat{\lambda_0 (t)} \simeq K \displaystyle{ \sum_{j=1}^J} Q_j \lambda_0 (t)$$ Hence, the choice $K = 1/ \sum Q_j$ is convenient and then we get $$\mathds V \, \widehat{\lambda_0 (t)}\simeq \frac{1}{ \sum Q_j}\lambda_0(t) \int_{-1/2}^{1/2}\phi^2(s)ds$$ In practice we used for $\phi$ a Gaussian kernel, i.e. $$\phi(s) = \frac{1}{\sqrt{2\pi}h}\exp(-\frac{s^2}{2h^2})$$ with $h$ the bandwidth parameter, determined later. Then, using the fact that $$\phi^2(s) = \frac{1}{2\sqrt\pi h} \phi(\sqrt 2 s)$$ where $\phi(\sqrt 2 s)$ is the density function of a normal distribution $\mathcal N \left( 0, (h/\sqrt 2 )^2 \right)$ we can say that for $h$ sufficiently small $$\int_{-1/2}^{1/2}\phi^2(s)ds \simeq \int_{-\infty}^{+\infty}\phi^2(s)ds = \frac{1}{2\sqrt\pi h}$$
In order to avoid edge effects, a periodization is applied before the estimation process. The bandwidth parameter $h$ is chosen by cross-validation, minimizing the Integrated Square Error. See [@Bowman1984] or [@Hall1983] for more details.\
\
Remark: as indicated in Section \[section: precise\_proba\], the intensity estimated by the kernel method does not correspond to the intensity we want to evaluate. Indeed, the intensity we estimate corresponds to the probability $P_1$ that a storm of level 400 begins at time $t$. Hence we apply a correction by multiplying $\widehat {\lambda_0(t)}$ by 29/23.\
Thus, we obtain the approximate confidence interval for $\lambda_0(t)$ $$\lambda_0(t) \in \left[ \widehat{\lambda_0 (t)} \pm 1.96 \sqrt{\frac{1}{\sum Q_j}\frac{\widehat{\lambda_0 (t)}}{2\sqrt \pi h}} \right]$$
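The kernel estimator and this confidence interval can be sketched as follows. This is an illustrative implementation, not the authors' code, and the event times in the example are invented:

```python
import numpy as np

def lambda0_hat(t, event_times, Q, h):
    """Gaussian-kernel estimate of the basic intensity lambda_0 at times t.

    event_times : list of arrays, one per cycle, in normalized time [-0.5, 0.5]
    Q           : weights Q_j = D_j * exp(beta * X_j), one per cycle
    h           : bandwidth
    """
    K = 1.0 / np.sum(Q)                       # normalization constant
    t = np.atleast_1d(np.asarray(t, dtype=float))
    est = np.zeros_like(t)
    for times_j in event_times:
        for s in times_j:
            est += np.exp(-(t - s) ** 2 / (2.0 * h ** 2)) / (np.sqrt(2.0 * np.pi) * h)
    return K * est

def lambda0_ci(lam_hat, Q, h):
    """Approximate 95% confidence band from the variance formula above."""
    half = 1.96 * np.sqrt(lam_hat / (np.sum(Q) * 2.0 * np.sqrt(np.pi) * h))
    return lam_hat - half, lam_hat + half

# invented event times for a single cycle with Q_1 = 1
events = [np.array([-0.2, 0.0, 0.1, 0.2, 0.3])]
Q = np.array([1.0])
grid = np.linspace(-2.0, 2.0, 4001)
vals = lambda0_hat(grid, events, Q, h=0.05)
```

Since here $K = 1$, the estimate integrates (over the real line, before any periodization) to the total number of events, which gives a quick sanity check on the implementation.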
Results
=======
Instantaneous intensity
-----------------------
The graph in Figure \[fig: lambda0\_111\] shows the estimate $\widehat {\lambda_0(t)}$ for a low level of 111, with its confidence area (i.e. the intensity for all the storms of level greater than or equal to 111). The bandwidth parameter, selected by cross-validation, is equal to 0.035. As expected, the basic intensity is higher during the second half of the cycle. One can also see a significant increase near zero, highlighting the difference between the two halves of a solar cycle.
![Estimated instantaneous intensity (years$^{-1}$) of the storms of level greater or equal to 111, for a mean solar activity of 146.7[]{data-label="fig: lambda0_111"}](Fig4.jpg){width="8cm" height="6.5cm"}
$P_{400}$ and $\beta$
---------------------
For $\widehat{P_{400}}$, the results obtained for different low levels are gathered in Table \[tab: p400\].
Low level 111 132 154
--------------------- ------------------------- ------------------------- -------------------------
$\widehat{P_{400}}$ 0.031384 0.041905 0.059299
95 % C.I. \[0.018477 ; 0.044291\] \[0.024765 ; 0.059045\] \[0.035266 ; 0.083333\]
: $\widehat{P_{400}}$ (the probability for a high storm to grow into a storm of level 400) and 95 % confidence intervals for each low level[]{data-label="tab: p400"}
\
With a low level of 111, the estimation of $\beta$ gives: $$\hat\beta = 0.0059651\ \textrm{ with the 95\% confidence interval } \ [ 0.0035873 ; 0.0083429]$$ Although this value seems small, the significance of $ \hat\beta$ has been shown by a likelihood ratio test. The test of $\beta=0$ against $\beta=\hat\beta$ returns a p-value of $7.02 \times 10^{-7}$. Thus, the solar activity index $X$ affects the number of storms occurring during a cycle. Graphically, the influence of the solar activity index on the number of storms per cycle is observable in Figure \[NbO\_AS\_111\].\
![Total number of storms per cycle for a low level of 111 according to the solar activity (centered)[]{data-label="NbO_AS_111"}](Fig5.jpg){width="8cm" height="6.5cm"}
Instantaneous intensity: extrapolation to level 400 and relative risk {#section : extrapol}
---------------------------------------------------------------------
The extrapolation to the storms of extreme level 400 is made by multiplying by $\widehat{P_{400}}$ (with its confidence interval). We obtain the final intensity shown in Figure \[fig: intensite\_111\]. This curve corresponds to the occurrence intensity of extreme storms for a solar cycle with a mean solar activity of 146.7. Recall that in the equation: $$\lambda_j(t) = \lambda_0(t) D_j \exp ( \beta X_j)$$ the risk factor is $\exp ( \beta X_j)$. Then, using $\hat\beta$, we can evaluate the relative risk for a cycle with a given solar activity index. For example, compared to the average level of solar activity (146.7), a cycle with a high solar activity of 180 has a relative risk of $\exp(33.3 \times 0.0059651) = 1.22$.
![Instantaneous intensity (years$^{-1}$), with confidence interval, of the storms of level 400 obtained by extrapolation from the low level 111, for a mean solar activity of 146.7. In dash-dotted line the empirical frequency of storms of level 400[]{data-label="fig: intensite_111"}](Fig6.jpg){width="8cm" height="6.5cm"}
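The relative-risk computation above can be reproduced in a couple of lines, taking the $\hat\beta$ value quoted in the text:

```python
import math

BETA_HAT = 0.0059651   # estimate for the low level 111, quoted above

def relative_risk(X, X_ref=146.7):
    """Risk factor exp(beta * (X - X_ref)) relative to a cycle with
    the mean solar activity index X_ref."""
    return math.exp(BETA_HAT * (X - X_ref))

rr = relative_risk(180.0)   # the high-activity cycle used as example in the text
```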
Method stability
----------------
The results presented in the previous sections are given for a fixed low level (of 111). This raises the question of the model's sensitivity to this parameter. The stability of the method can be evaluated by testing its stability under a change of low level. The results for two other low levels, 132 and 154, are given in Figure \[fig: intensite\_132\_154\]. The last two curves seem to be smoother, but this is partly due to the bandwidth parameter, which is now equal to 0.045 (still selected by cross-validation). For a closer comparison, see Figure \[fig: comp3lambda\], where the three instantaneous intensity curves are plotted together. One can see that there is no significant difference between the three curves and that the method is rather stable.
![Similar to Figure \[fig: intensite\_111\] with a low level of 132 (left) and 154 (right)[]{data-label="fig: intensite_132_154"}](Fig7.jpg "fig:"){width="6.5cm" height="5cm"}![Similar to Figure \[fig: intensite\_111\] with a low level of 132 (left) and 154 (right)[]{data-label="fig: intensite_132_154"}](Fig8.jpg "fig:"){width="6.5cm" height="5cm"}
![Instantaneous intensity (years$^{-1}$) of the storms of level 400 obtained by extrapolation from the low levels 111 (plain line), 132 (dotted line) and 154 (dashed line)[]{data-label="fig: comp3lambda"}](Fig9.jpg){width="8cm" height="6.5cm"}
A model extension
-----------------
In an alternative approach, we consider the gradient of a storm to characterize its strength (instead of the ap index level). Gradients are calculated on one time step (3H) and the storm gradient is defined as the maximal gradient attained during a storm. This approach was set up after observing storms with low levels (ap index below 111) but strong effects, due to fast variations of the ap index. We carried out the same study with this new definition of storm strength. The extreme gradient levels are those greater than 100 and the low level is 35. The estimation of $\beta$ gives $$\hat\beta = 0.0053499\ \textrm{ with the confidence interval } \ [ 0.0038128 ; 0.006887]$$ These values are similar to those obtained with the ap index. The estimated intensity for the storms of extreme gradient is plotted in Figure \[fig: intensite\_grad\]. One can see that the step between the two halves of the cycle is stronger.
![Instantaneous intensity (years$^{-1}$), with confidence interval, of the storms with extreme gradient ($\geq$ 100) obtained by extrapolation from the low gradient level 35, for a mean solar activity of 146.7. In dash-dotted line the empirical frequency of storms with an extreme gradient[]{data-label="fig: intensite_grad"}](Fig10.jpg){width="8cm" height="6.5cm"}
We should point out that the use of the gradient has one disadvantage. Since the ap index represents a maximum over a 3 hour period, the two ap index values used for the gradient calculation can be separated by nearly 6 hours or by only a few minutes. The real dates of these values are not known and the gradient is calculated using a 3 hour time step. Nevertheless, the calculated gradient gives an approximation of the variation speed of the ap index. Moreover, since the gradient is used analogously to the ap index, the original model is still appropriate here.
Conclusion
==========
This study highlights that the intensity of magnetic storm occurrence strongly depends on the position within the solar cycle. The probability is higher during the second half of the cycle. The solar activity also influences this intensity and, given an activity index, a relative risk can be expressed (compared to a cycle with the average level of solar activity, 146.7).
The analysis has been performed for different low levels in order to check its stability. The first results are given for a low level of 111 and a comparison is made using two other low levels: 132 and 154. The similarity in shape of the three curves attests to the stability of the method.
The model we built also allows us to make predictions about the current solar cycle. For the beginning date of this 24th cycle, we have chosen December 2008, a date accepted by a panel of experts (although there is no consensus). For the solar activity index, we have used the NOAA prediction with a maximum of 87.9 attained in November 2013 [@noaaPrev24]. The end of the 24th cycle is estimated around December 2019 or January 2020. The estimation (from the beginning to the present) and the prediction are represented in Figure \[fig: prev24\] (plain line).\
![Estimation and prediction of instantaneous intensity (years$^{-1}$) of the storms of level 400 for the 24th solar cycle, with confidence interval. For comparison, in dash dotted gray, the same intensity for a cycle with a mean solar activity index of 146.7[]{data-label="fig: prev24"}](Fig11.jpg){width="8.5cm" height="7cm"}
Appendix
========
Maximum likelihood estimator of $\beta$ {#annexe}
=======================================
The use of $N_j$ instead of $N_j(t)$ for the estimation of $\beta$ raises the question of the sufficiency of this statistic. Consider only one cycle and the model: $$N(t) \sim \mathcal P \left( \lambda_0 (t)dt \ D \exp(\beta X) \right)\quad \textrm{ for } t \in [-0.5 , 0.5 ]$$ Then, consider $\Delta_1, \Delta_2, ... , \Delta_n$ a partition of \[-0.5, 0.5\] into $n$ sub segments. For $i=1...n$, note $N(\Delta_i)= \int_{\Delta_i} dN(t)$ the number of events in $\Delta_i$. Given that $N(t)$ is a Poisson process we know that the $\{N(\Delta_i), i=1...n\}$ are independent variables and that $N(\Delta_i) \sim \mathcal P \left( \left[ \int_{\Delta_i} \lambda_0 (s)ds \right] \ D \ \exp(\beta X) \right)$. We note $C_i = \int_{\Delta_i} \lambda_0 (s)ds \ D$. Then the Log-likelihood with respect to the counting measure (in which we integrate the weights $1/N(\Delta_i)!\ $) is $$- \exp(\beta X) \sum_{i=1}^n C_i + \sum_{i=1}^n [ N(\Delta_i) \log(C_i)] + \beta X \sum_{i=1}^n N(\Delta_i)$$ We see that $\beta$ is linked to the $N(\Delta_i)$ only through the term $\sum_{i=1}^n N(\Delta_i)$. Hence there is no loss of information in using the total number of events per cycle for the estimation of $\beta$.
We can now compute the maximum likelihood estimator. For the $j^{th}$ cycle, the likelihood with respect to the counting measure with weights $1/N_j! \, $ is, noting $\alpha = \int_{-1/2}^{1/2} \lambda_0 (s)ds$ $$\exp \left( -\alpha \, D_j \exp(\beta X_j) \right) (\alpha \, D_j \exp(\beta X_j))^{N_j}$$ and the Log-likelihood for all the $J$ cycles: $$-\alpha \sum_{j=1}^J D_j \exp(\beta X_j) + \log(\alpha) \sum_{j=1}^J N_j + \sum_{j=1}^J N_j \log(D_j) + \beta \sum_{j=1}^J N_jX_j$$ The derivatives in $\alpha$ and $\beta$ respectively give: $$\sum_{j=1}^J D_j \exp(\beta X_j) = \frac{\sum_{j=1}^J N_j}{\alpha}$$ and $$\alpha \sum_{j=1}^J D_j X_j \exp(\beta X_j) = \sum_{j=1}^J N_j X_j$$ Replacing $\alpha$ by the solution of the first equation, we obtain: $$\sum_{j=1}^J D_j X_j \exp(\beta X_j) \sum_{j=1}^J N_j = \sum_{j=1}^J D_j \exp(\beta X_j) \sum_{j=1}^J N_j X_j$$ This implicit equation can only be solved numerically (by the secant method).
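The secant solve for this score equation can be sketched as follows. The per-cycle data below are synthetic, constructed with $N_j = D_j \exp(\beta X_j)$ (i.e. $\alpha = 1$) so that the exact solution of the implicit equation is the chosen $\beta$ itself:

```python
import numpy as np

def beta_mle(N, D, X, b0=0.0, b1=0.01, tol=1e-12, itmax=100):
    """Solve the implicit likelihood equation for beta by the secant method.

    N : storm counts per cycle, D : cycle lengths, X : centered activity indices.
    """
    def score(b):
        w = D * np.exp(b * X)
        return np.sum(w * X) * np.sum(N) - np.sum(w) * np.sum(N * X)
    f0, f1 = score(b0), score(b1)
    for _ in range(itmax):
        b2 = b1 - f1 * (b1 - b0) / (f1 - f0)   # secant step
        if abs(b2 - b1) < tol:
            return b2
        b0, f0 = b1, f1
        b1, f1 = b2, score(b2)
    return b1

# synthetic cycles with expected counts N_j = D_j * exp(beta * X_j),
# for which the exact root of the score equation is beta itself
X = np.array([-30.0, -10.0, 5.0, 40.0])
D = np.array([10.0, 11.0, 10.5, 9.5])
N = D * np.exp(0.006 * X)
beta_hat = beta_mle(N, D, X)
```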
We can also compute the Fisher information matrix: $$\left( \begin{array}{cc}
\alpha^{-1} \sum_{j=1}^J D_j \exp(\beta X_j) & \sum_{j=1}^J D_j X_j \exp(\beta X_j) \\
\sum_{j=1}^J D_j X_j \exp(\beta X_j) & \alpha \sum_{j=1}^J D_j X^2_j \exp(\beta X_j) \\
\end{array} \right)$$ The (2,2) coefficient of the inverse matrix of the Fisher information matrix provides the variance of $\hat \beta$, used for the construction of a confidence interval.
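A matching sketch for the confidence interval, with invented inputs ($\alpha$ denotes the integrated basic intensity as above):

```python
import numpy as np

def beta_confidence(alpha, beta, D, X):
    """95% CI for beta from the (2,2) entry of the inverse Fisher information
    for the parameter pair (alpha, beta)."""
    w = D * np.exp(beta * X)
    info = np.array([[np.sum(w) / alpha,          np.sum(w * X)],
                     [np.sum(w * X),      alpha * np.sum(w * X ** 2)]])
    var_beta = np.linalg.inv(info)[1, 1]
    se = np.sqrt(var_beta)
    return beta - 1.96 * se, beta + 1.96 * se

# invented inputs, not the paper's data
X = np.array([-30.0, -10.0, 5.0, 40.0])
D = np.array([10.0, 11.0, 10.5, 9.5])
lo, hi = beta_confidence(alpha=2.0, beta=0.006, D=D, X=X)
```

By Cauchy–Schwarz the information matrix is positive definite whenever the $X_j$ are not all equal, so the variance extracted this way is always positive.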
Chi-square test {#annexe: chi2}
=================
The chi-square independence test is performed a posteriori. When the instantaneous intensity is estimated, the time interval $[-0.5,0.5]$ is separated into two parts, of low and high intensity. The intensity threshold for this partition is the empirical frequency of extreme storms, which is about 0.29 storms per year (horizontal dash-dotted line in Figure \[fig: intensite\_111\]). The two parts correspond to the times where the instantaneous intensity is respectively below and above this threshold.
Then, the chi-square test is applied to the proportions of extreme level storms for each area and returns a p-value of 0.26, leading to the acceptance of the independence hypothesis. The same test is applied with different thresholds for the partition into two areas (0.40, 0.50 and 0.60) and always leads to the same conclusion.
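For a $2\times 2$ table with one degree of freedom, this test is easy to reproduce without any statistics library, since the chi-square survival function is then $\operatorname{erfc}(\sqrt{x/2})$. The contingency counts in the example are invented for illustration:

```python
import math

def chi2_independence_2x2(table):
    """Pearson chi-square independence test for a 2x2 contingency table.

    Returns (statistic, p_value). With 1 degree of freedom the survival
    function of the chi-square distribution is erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    stat = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat, math.erfc(math.sqrt(stat / 2.0))

# invented counts: (extreme, non-extreme) storms in the low- and
# high-intensity parts of the cycle
stat, p = chi2_independence_2x2([[8, 260], [14, 419]])
independent = p > 0.05
```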
Go back to /usr/src/vdr-1.2.5 and run runvdr.remote. If
you use Red Hat, set the environment variable
LD_ASSUME_KERNEL=2.4.1, because VDR doesn't yet work with the native
POSIX threading layer that Red Hat introduced in its latest version. The modules for
the DVB card then are loaded, and the VDR is started. Hook up your TV,
and you should see a black screen prompting you to define the keys on your
remote. After finishing the wizard, you're ready to watch TV, record
shows and remove commercials. You can listen to your MP3s and watch
videos. There's a manual in VDR's root that explains
how to record and edit TV events, using the time-shift feature.
Back It Up
In case you're disappointed that the end of the article is within reach,
don't worry; there still are some optional things you can do. The automatic
backup feature has some limitations. Although the (S)VCD backup works
flawlessly, the DivX encoding does not crop the picture to remove
black bands, should they exist. This has quite a negative
impact on bit rate, size and overall picture quality. If you really
want a high-quality, small-size MPEG-4, you should back it up manually.
The improved picture quality is well worth the trouble.
Figure 3. The information bar shows the program name, running TV show and what's
on next.
VDR splits its recordings into 2GB files, which is a bit inconvenient
for transcoding the videos. If you go for manual conversion, which
gives you finer control over the quality/size aspect, mencoder or
transcode are good options. Use the speedy mencoder, which I found to
be perfect for backups to MPEG-4, or transcode, which comes with a lot of
tools. If you favor the I-don't-want-to-care approach, get a hold
of VDRCONVERT.
The README file offers a pretty simple approach to
installing it, and at least you can watch some TV while downloading and
compiling.
With VDRCONVERT you have to
change some scripts and configuration files to adapt the DVD/(S)VCD
resolutions to NTSC, in case PAL is not used where you live.
It's too bad that a Linux PVR doesn't make the TV
programs themselves any better, but I guess you can't have
everything, can you?
Christian A. Herzog is a programmer focused on Web development using
open-source technologies. He's still on his never-ending quest to bring a
Linux-based device to every home and company he comes across.
Write him at noeffred@gmx.net.
---
abstract: 'We study the effects of Supernova (SN) feedback on the formation of galaxies using hydrodynamical simulations in a $\Lambda$CDM cosmology. We use an extended version of the code GADGET-2 which includes chemical enrichment and energy feedback by Type II and Type Ia SN, metal-dependent cooling and a multiphase model for the gas component. We focus on the effects of SN feedback on the star formation process, galaxy morphology, evolution of the specific angular momentum and chemical properties. We find that SN feedback plays a fundamental role in galaxy evolution, producing a self-regulated cycle for star formation, preventing the early consumption of gas and allowing disks to form at late times. The SN feedback model is able to reproduce the expected dependence on virial mass, with less massive systems being more strongly affected.'
---
Introduction
============
Supernova explosions play a fundamental role in galaxy formation and evolution. On the one hand, they are the main source of heavy elements in the Universe, and the presence of such elements substantially enhances the cooling of gas (White & Frenk 1991). On the other hand, SNe eject a significant amount of energy into the interstellar medium. It is believed that SN explosions are responsible for generating a self-regulated cycle for star formation, through the heating and disruption of cold gas clouds, as well as for triggering important galactic winds such as those observed (e.g. Martin 2004). Smaller systems are more strongly affected by SN feedback, because their shallower potential wells are less efficient in retaining baryons (e.g. White & Frenk 1991).
Numerical simulations have become an important tool to study galaxy formation, since they can track the joint evolution of dark matter and baryons in the context of a cosmological model. However, this has shown to be an extremely complex task, because of the need to cover a large dynamical range and describe, at the same time, large-scale processes such as tidal interactions and mergers and small-scale processes related to stellar evolution.
One of the main problems that galaxy formation simulations have repeatedly found is the inability to reproduce the morphologies of disk galaxies observed in the Universe. This is generally referred to as the angular momentum problem, which arises when baryons transfer most of their angular momentum to the dark matter component during interactions and mergers (Navarro & Benz 1991; Navarro & White 1994). As a result, disks are too small and concentrated with respect to real spirals. More recent simulations which include prescriptions for SN feedback have been able to produce more realistic disks (e.g. Abadi et al. 2003; Robertson et al. 2004; Governato et al. 2007). These works have pointed out the importance of SN feedback as a key process to prevent the loss of angular momentum, regulate the star formation activity and produce extended, young disk-like components.
In this work, we investigate the effects of SN feedback on the formation of galaxies, focusing on the formation of disks. For this purpose, we have run simulations of a Milky-Way type galaxy using an extended version of the code [GADGET-2]{} which includes chemical enrichment and energy feedback by SN. A summary of the simulation code and the initial conditions is given in Section \[simus\]. In Section \[results\] we investigate the effects of SN feedback on galaxy morphology, star formation rates, evolution of specific angular momentum and chemical properties. We also investigate the dependence of the results on virial mass. Finally, in Section \[conclusions\] we give our conclusions.
Simulations {#simus}
===========
We use the simulation code described in Scannapieco et al. (2005, 2006). This is an extended version of the Tree-PM SPH code [GADGET-2]{} (Springel & Hernquist 2002; Springel 2005), which includes chemical enrichment and energy feedback by SN, metal-dependent cooling and a multiphase model for the gas component. Note that our star formation and feedback model is substantially different from that of Springel & Hernquist (2003), but we do include their treatment of UV background.
We focus on the study of a disk galaxy similar to the Milky Way in its cosmological context. For this purpose we simulate a system with $z=0$ halo mass of $\sim 10^{12}$ $h^{-1}$ M$_\odot$ and spin parameter of $\lambda\sim 0.03$, extracted from a large cosmological simulation and resimulated with improved resolution. It was selected to have no major mergers since $z=1$ in order to give time for a disk to form. The simulations adopt a $\Lambda$CDM Universe with the following cosmological parameters: $\Omega_\Lambda=0.7$, $\Omega_{\rm m}=0.3$, $\Omega_{\rm b}=0.04$, a normalization of the power spectrum of $\sigma_8=0.9$ and $H_0=100\ h$ km s$^{-1}$ Mpc$^{-1}$ with $h=0.7$. The particle mass is $1.6\times 10^7\ h^{-1}$ M$_\odot$ for dark matter and $2.4\times 10^6$ $h^{-1}$ M$_\odot$ for baryonic particles, and we use a maximum gravitational softening of $0.8\ h^{-1}$ kpc for gas, dark matter and star particles. At $z=0$ the halo of our galaxy contains $\sim 1.2\times 10^5$ dark matter and $\sim 1.5\times 10^5$ baryonic particles within the virial radius.
In order to investigate the effects of SN feedback on the formation of galaxies, we compare two simulations which only differ in the inclusion of the SN energy feedback model. These simulations are part of the series analysed in Scannapieco et al. (2008), where an extensive investigation of the effects of SN feedback on galaxies and a parameter study is performed. In this work, we use the no-feedback run NF (run without including the SN energy feedback model) and the feedback run E-0.7. We refer the interested reader to Scannapieco et al. (2008) for details on the characteristics of these simulations.
Results
=======
In Fig. \[maps\] we show stellar surface density maps at $z=0$ for the NF and E-0.7 runs. Clearly, SN feedback has an important effect on the final morphology of the galaxy. If SN feedback is not included, as we have done in run NF, the stars define a spheroidal component with no disk. On the contrary, the inclusion of SN energy feedback allows the formation of an extended disk component.
![Edge-on stellar surface density maps for the no-feedback (NF, left-hand panel) and feedback (E-0.7, right-hand panel) simulations at $z=0$. The colors span 4 orders of magnitude in projected density, with brighter colors representing higher densities. []{data-label="maps"}](map-NF.eps "fig:"){width="60mm"} ![Edge-on stellar surface density maps for the no-feedback (NF, left-hand panel) and feedback (E-0.7, right-hand panel) simulations at $z=0$. The colors span 4 orders of magnitude in projected density, with brighter colors representing higher densities. []{data-label="maps"}](map-E-0.7.eps "fig:"){width="60mm"}
![Left: Star formation rates for the no-feedback (NF) and feedback (E-0.7) runs. Right: Mass fraction as a function of formation time for stars of the disk and spheroidal components in simulation E-0.7. []{data-label="sfr_stellarage"}](sfr-copen.ps "fig:"){width="70mm"}![Left: Star formation rates for the no-feedback (NF) and feedback (E-0.7) runs. Right: Mass fraction as a function of formation time for stars of the disk and spheroidal components in simulation E-0.7. []{data-label="sfr_stellarage"}](fig6.ps "fig:"){width="64mm"}
The generation of a disk component is closely related to the star formation process. In the left-hand panel of Fig. \[sfr\_stellarage\] we show the star formation rates (SFR) for our simulations. In the no-feedback case (NF), the gas cools down and concentrates at the centre of the potential well very early, producing a strong starburst which feeds the galaxy spheroid. As a result of the early consumption of gas to form stars, the SFR is low at later times. On the contrary, the SFR obtained for the feedback case is lower at early times, indicating that SN feedback has contributed to self-regulate the star formation process. This is the result of the heating of gas and the generation of galactic winds. In this case, the amount of gas available for star formation is larger at recent times and consequently the SFR is higher. In the right-hand panel of Fig. \[sfr\_stellarage\] we show the mass fraction as a function of formation time for stars of the disk and spheroidal components in our feedback simulation (see Scannapieco et al. 2008 for the method used to segregate stars into disk and spheroid). From this plot it is clear that star formation at recent times ($z\lesssim 1$) significantly contributes to the formation of the disk component, while stars formed at early times contribute mainly to the spheroid. In this simulation, $\sim 50$ per cent of the mass of the disk forms since $z=1$. Note that in the no-feedback case, only a few per cent of the final stellar mass of the galaxy is formed since $z=1$.
Our simulation E-0.7 has produced a galaxy with an extended disk component. By using the segregation of stars into disk and spheroid mentioned above, we can calculate the masses of the different components, as well as characteristic scales. The disk of the simulated galaxy has a mass of $3.3\times 10^{10}\ h^{-1}\ M_\odot$, a half-mass radius of $5.7\ h^{-1}$ kpc, a half-mass height of $0.5\ h^{-1}$ kpc, and a half-mass formation time of $6.3$ Gyr. The spheroid mass and half-mass formation time are $4.1\times 10^{10}\ h^{-1}\ M_\odot$ and $2.5$ Gyr, respectively. It is clear that the characteristic half-mass times are very different in the two cases, the disk component being formed by younger stars.
In Fig. \[j\_evolution\] we show the evolution of the specific angular momentum of the dark matter (within the virial radius) and of the cold gas plus stars (within twice the optical radius) for the no-feedback case (left-hand panel) and for the feedback case E-0.7 (right-hand panel). The evolution of the specific angular momentum of the dark matter component is similar in the two cases, growing as a result of tidal torques at early epochs and being conserved from turnaround ($z\approx 1.5$) until $z=0$. On the contrary, the cold baryonic components in the two cases differ significantly, in particular at late times. In the no-feedback case (NF), much angular momentum is lost through dynamical friction, particularly through a satellite which is accreted onto the main halo at $z\sim 1$. In E-0.7, on the other hand, the cold gas and stars lose rather little specific angular momentum between $z=1$ and $z=0$. Two main factors contribute to this difference. Firstly, in E-0.7 a significant number of young stars form between $z=1$ and $z=0$ with high specific angular momentum (these stars form from high specific angular momentum gas which becomes cold at late times); and secondly, dynamical friction affects the system much less than in NF, since satellites are less massive. At $z=0$, disk stars have a specific angular momentum comparable to that of the dark matter, while spheroid stars have a much lower specific angular momentum.
![Dashed lines show the specific angular momentum as a function of time for the dark matter that, at $z=0$, lies within the virial radius of the system for NF (left panel) and E-0.7 (right panel). We also show with dots the specific angular momentum for the baryons which end up as cold gas or stars in the central $20\ h^{-1}$ kpc at $z=0$. The arrows show the specific angular momentum of disk and spheroid stars. []{data-label="j_evolution"}](fig5a.ps "fig:"){width="65mm"} ![Dashed lines show the specific angular momentum as a function of time for the dark matter that, at $z=0$, lies within the virial radius of the system for NF (left panel) and E-0.7 (right panel). We also show with dots the specific angular momentum for the baryons which end up as cold gas or stars in the central $20\ h^{-1}$ kpc at $z=0$. The arrows show the specific angular momentum of disk and spheroid stars. []{data-label="j_evolution"}](fig5b.ps "fig:"){width="65mm"}
In Fig. \[metal\_profiles\] we show the oxygen profiles for the no-feedback (NF) and feedback (E-0.7) runs. From this figure we can see that SN feedback strongly affects the chemical distributions. If no feedback is included, the gas is enriched only in the very central regions. Including SN feedback triggers a redistribution of mass and metals through galactic winds and fountains, giving the gas component a much higher level of enrichment out to large radii. A linear fit to this metallicity profile gives a slope of $-0.048$ dex kpc$^{-1}$ and a zero-point of $8.77$ dex, consistent with the observed values in real disk galaxies (e.g. Zaritsky et al. 1994).
![Oxygen abundance for the gas component as a function of radius projected onto the disk plane for our no-feedback simulation (NF) and for the feedback case E-0.7. The error bars correspond to the standard deviation around the mean. []{data-label="metal_profiles"}](fig9.ps){width="80mm"}
Finally, we investigate the effects of SN feedback on different mass systems. For that purpose we have scaled down our initial conditions to generate galaxies of $10^{10}\ h^{-1}\ M_\odot$ and $10^9\ h^{-1}\ M_\odot$ halo mass, and simulated their evolution including the SN feedback model (with the same parameters as E-0.7). These simulations are TE-0.7 and DE-0.7, respectively. In Fig. \[dwarf\] we show the SFRs for these simulations, as well as for E-0.7, normalized to the scale factor ($\Gamma=1$ for E-0.7, $\Gamma=10^{-2}$ for TE-0.7 and $\Gamma=10^{-3}$ for DE-0.7). From this figure it is clear that SN feedback has a dramatic effect on small galaxies. This is because more violent winds develop and baryons are unable to condense and form stars. In the smallest galaxy, the SFR is very low at all times because most of the gas has been lost after the first starburst episode. This proves that our model is able to reproduce the expected dependence of SN feedback on virial mass, without changing the relevant physical parameters.
![SFRs for simulations DE-0.7 ($10^{9}\ h^{-1}$ M$_\odot$), TE-0.7 ($10^{10}\ h^{-1}$ M$_\odot$) and E-0.7 ($10^{12}\ h^{-1}$ M$_\odot$) run with energy feedback. To facilitate comparison, the SFRs are normalized to the scale factor $\Gamma$. []{data-label="dwarf"}](fig10b.ps){width="80mm"}
Conclusions
===========
We have run simulations of a Milky Way-type galaxy in its cosmological setting in order to investigate the effects of SN feedback on the formation of galaxy disks. We compare two simulations with the only difference being the inclusion of the SN energy feedback model of Scannapieco et al. (2005, 2006). Our main results can be summarized as follows:
- [ SN feedback helps to establish a self-regulated cycle of star formation in galaxies, through the heating and disruption of cold gas and the generation of galactic winds. The regulation of star formation allows gas to be maintained in a hot halo which can condense at late times, becoming a reservoir for recent star formation. This contributes significantly to the formation of disk components. ]{}
- [When SN feedback is included, the specific angular momentum of the baryons is conserved and disks with the correct scale-lengths are obtained. This results from the late collapse of gas with high angular momentum, which becomes available to form stars at later times, when the system does not suffer from strong interactions. ]{}
- [ The injection of SN energy into the interstellar medium generates a redistribution of chemical elements in galaxies. If energy feedback is not considered, only the very central regions where stars are formed are contaminated. On the contrary, the inclusion of feedback triggers a redistribution of metals since gas is heated and expands, contaminating the outer regions of galaxies. In this case, metallicity profiles in agreement with observations are produced. ]{}
- [ Our model is able to reproduce the expected dependence of SN feedback on virial mass: as we go to less massive systems, SN feedback has stronger effects: the star formation rates (normalized to mass) are lower, and more violent winds develop. This proves that our model is well suited for studying the cosmological growth of structure where large systems are assembled through mergers of smaller substructures and systems form simultaneously over a wide range of scales. ]{}
, 2003, *ApJ*, 591, 499
, 2007, *MNRAS*, 374, 1479
, 2004, *A&AS*, 205, 8901
, 1991, *ApJ*, 380, 320
, 1993, *MNRAS*, 265, 271
, 2004, *ApJ*, 606, 32
, 2005, *MNRAS*, 364, 552
, 2006, *MNRAS*, 371, 1125
, 2008, *MNRAS*, in press (astro-ph/0804.3795)
, 2002, *MNRAS*, 333, 649
, 2003, *MNRAS*, 339, 289
2005, *MNRAS*, 364, 1105
, 1991, *ApJ*, 379, 52
, 1994, *ApJ*, 420, 87
|
{
"pile_set_name": "arxiv"
}
|
|
{
"pile_set_name": "pile-cc"
}
|
Got this cute little sewing chair from Sara and Stacy at SugarSCOUT–they have “super sweet finds of all kinds”…just check out their Etsy Shop. (lots of great ideas on their blog @ www.sugarSCOUT.com, too!)
I do love spending time in my studio that has become a haven for creating my upcycled bags.
I’m adding new bags to my Etsy shop as quickly as I get them done. Take a look…it’s called itzaChicThing.
I love to layer color, pattern and texture. I created this bag using a fusing process. After making many bags, all shapes and sizes (you can see some of them at bohochicbag.com), I decided to use the same concept to create pieces for hanging.
|
{
"pile_set_name": "pile-cc"
}
|
Carlo Buscaglia
Carlo Buscaglia (9 February 1909 – 15 August 1981) was an Italian footballer from Bastia di Balocco in the Province of Vercelli who played as a midfielder.
Career
Buscaglia played club football most notably for Napoli. He spent a decade at Napoli, also serving as the team's captain, and wrote himself into the appearance record books at the club; today he is sixth in the club's all-time league appearance list.
After leaving Napoli in 1938, he spent two-year spells at Juventus and Savona.
References
Category:1909 births
Category:1981 deaths
Category:Italian footballers
Category:Serie A players
Category:Casale F.B.C. players
Category:Juventus F.C. players
Category:S.S.C. Napoli players
Category:Savona F.B.C. players
Category:Sportspeople from Turin
Category:Association football midfielders
Category:People from the Province of Vercelli
|
{
"pile_set_name": "wikipedia_en"
}
|
---
abstract: 'In this paper, we recall our renormalized quantum Q-system associated with representations of the Lie algebra $A_r$, and show that it can be viewed as a quotient of the quantum current algebra $U_q(\n[u,u^{-1}])\subset U_q(\widehat{\sl}_2)$ in the Drinfeld presentation. Moreover, we find the interpretation of the conserved quantities in terms of Cartan currents at level 0, and the rest of the current algebra, in a non-standard polarization in terms of generators in the quantum cluster algebra.'
address:
- 'PDF: Department of Mathematics, University of Illinois MC-382, Urbana, IL 61821, U.S.A. e-mail: philippe@illinois.edu'
- 'RK: Department of Mathematics, University of Illinois MC-382, Urbana, IL 61821, U.S.A. e-mail: rinat@illinois.edu'
author:
- Philippe Di Francesco
- Rinat Kedem
bibliography:
- 'refs.bib'
title: 'Quantum Q systems: From cluster algebras to quantum current algebras'
---
Introduction
============
An extended quantum Q system
============================
Proofs {#proofsec}
======
The quantum affine algebra
==========================
Discussion/Conclusion
=====================
|
{
"pile_set_name": "arxiv"
}
|
---
abstract: 'Isotropic Heisenberg exchange naturally appears as the main interaction in magnetism, usually favouring long-range spin-ordered phases. The anisotropic Dzyaloshinskii-Moriya interaction arises from relativistic corrections and is *a priori* much weaker, even though it may sufficiently compete with the isotropic one to yield new spin textures. Here, we challenge this well-established paradigm, and propose to explore a Heisenberg-exchange-free magnetic world. There, the Dzyaloshinskii-Moriya interaction induces magnetic frustration in two dimensions, from which the competition with an external magnetic field results in a new mechanism producing skyrmions of nanoscale size. The isolated nanoskyrmion can already be stabilized in a few-atom cluster, and may then be used as LEGO${\textregistered}$ block to build a large magnetic mosaic. The realization of such topological spin nanotextures in $sp$- and $p$-electron compounds or in ultracold atomic gases would open a new route toward robust and compact magnetic memories.'
author:
- 'E. A. Stepanov$^{1,2}$, S. A. Nikolaev$^{2}$, C. Dutreix$^{3}$, M. I. Katsnelson$^{1,2}$, V. V. Mazurenko$^{2}$'
title: 'Heisenberg-exchange-free nanoskyrmion mosaic'
---
The concept of spin was introduced by G. Uhlenbeck and S. Goudsmit in the 1920s in order to explain the emission spectrum of the hydrogen atom obtained by A. Sommerfeld [@Int2]. W. Heitler and F. London subsequently realized that the covalent bond of the hydrogen molecule involves two electrons of opposite spins, as a result of the fermionic exchange [@Int4]. This finding inspired W. Heisenberg to give an empirical description of ferromagnetism [@Int5; @Int6], before P. Dirac finally proposed a Hamiltonian description in terms of scalar products of spin operators [@Int7]. These pioneering works focused on the ferromagnetic exchange interaction that is realized through the direct overlap of two neighbouring electronic orbitals. Nonetheless, P. Anderson understood that the exchange interaction in transition metal oxides could also rely on an indirect antiferromagnetic coupling via intermediate orbitals [@PhysRev.115.2]. This so-called superexchange interaction, however, could not explain the weak ferromagnetism of some antiferromagnets. The latter has been found to arise from anisotropic interactions of much weaker strength, as addressed by I. Dzyaloshinskii and T. Moriya [@DZYALOSHINSKY1958241; @Moriya]. The competition between the isotropic exchange and anisotropic Dzyaloshinskii-Moriya interactions (DMI) leads to the formation of topologically protected magnetic phases, such as skyrmions [@NagaosaReview]. Nevertheless, the isotropic exchange mainly rules the competition, which only allows the formation of large magnetic structures, more difficult to stabilize and manipulate in experiments [@NagaosaReview; @PhysRevX.4.031045]. Finding a new route toward more compact robust spin textures then appears as a natural challenge.
As a promising direction, we investigate the existence of two-dimensional skyrmions in the absence of isotropic Heisenberg exchange. Indeed, recent theoretical works have revealed that antiferromagnetic superexchange may be compensated by strong ferromagnetic direct exchange interactions at the surfaces of $sp$- and $p$-electron nanostructures [@silicon; @graphene], whose experimental isolation has recently been achieved [@PbSn; @PhysRevLett.98.126401; @SurfMagn; @kashtiban2014atomically]. Moreover, Floquet engineering in such compounds also offers the possibility to dynamically switch off the isotropic Heisenberg exchange interaction under high-frequency-light irradiation, a unique situation that could not be met in transition metal oxides in equilibrium [@PhysRevLett.115.075301; @Control1; @Control2]. In particular, rapidly driving the strongly-correlated electrons may be used to tune the magnetic interactions, which can be described in terms of spin operators $\hat{\bf S}_i$ by the following Hamiltonian $$\begin{aligned}
H_{\rm spin} = -\sum_{{\ensuremath{\left\langle ij \right\rangle}}} J_{ij} (A) \,\hat{\bf S}_{i}\,\hat{\bf S}_{j} +
\sum_{{\ensuremath{\left\langle ij \right\rangle}}}{\bf D}_{ij} (A)\,[\hat{\bf S}_{i}\times\hat{\bf S}_{j}],
\label{Hspin}\end{aligned}$$ where the strengths of isotropic Heisenberg exchange $J_{ij}(A)$ and anisotropic DMI ${\bf D}_{ij}(A)$ now depend on the light amplitude $A$. The summations are assumed to run over all nearest-neighbour sites $i$ and $j$. The isotropic Heisenberg exchange term describes a competition between ferromagnetic direct exchange and antiferromagnetic kinetic exchange [@PhysRev.115.2]. Importantly, it may be switched off dynamically by varying the intensity of the high-frequency light, while the anisotropic DMI remains non-zero [@Control2].
![Stable nanoskyrmion-designed DMI abbreviation resulting from the Monte Carlo simulation of the Heisenberg-exchange-free model on the non-regular square lattice with $B_{z} = 1.2$. Arrows and color depict the in- and out-of-plane spin projection, respectively.[]{data-label="Fig1"}](Fig1.pdf){width="0.67\linewidth"}
The study of Heisenberg-exchange-free magnetism may also be achieved in other classes of systems, such as optical lattices of ultracold atomic gases. Indeed, cold atoms have enabled the observation and control of superexchange interactions, which could be reversed between ferromagnetic and antiferromagnetic [@Trotzky], as well as strong DMI [@Gong], following the realization of the spin-orbit coupling in bosonic and fermionic gases [@SO_Lin; @SO_Wang].
Here, we show that such a control of the microscopic magnetic interactions offers an unprecedented opportunity to observe and manipulate nano-scale skyrmions. Heisenberg-exchange-free nanoskyrmions actually arise from the competition between anisotropic DMI and a constant magnetic field. The latter was essentially known to stabilize the spin textures [@PhysRevX.4.031045], whereas here it is a part of the substantially different and unexplored mechanism responsible for nanoskyrmions. Fig. \[Fig1\] immediately highlights that an arbitrary system of few-atom skyrmions can be stabilized and controlled on a non-regular lattice with open boundary conditions, which was not possible at all in the presence of isotropic Heisenberg exchange.
[*Heisenberg-exchange-free Hamiltonian*]{} — Motivated by the recent predictions and experiments discussed above, we consider the following spin Hamiltonian $$\begin{aligned}
&\hat H_{\rm Hef} =
\sum_{{\ensuremath{\left\langle ij \right\rangle}}}{\bf D}_{ij}\,[\hat{\bf S}_{i}\times\hat{\bf S}_{j}] - \sum_{i}{\bf B}\hat{\bf S}_{i},
\label{DMI}\end{aligned}$$ where the magnetic field is perpendicular to the two-dimensional system and ${\bf B}=~(0,0,B_{z})$. The latter tends to align the spins in the $z$ direction, while DMI favours their orthogonal orientations. At the quantum level this non-trivial competition provides a fundamental resource in quantum information processing [@QuantInf]. Here, we are interested in the semi-classical description of Heisenberg-exchange-free magnetism.
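For classical unit spins, the energy of this Hamiltonian is straightforward to evaluate. A minimal sketch (the bond list, DMI vectors and field below are illustrative placeholders, not a specific lattice from the text):

```python
import numpy as np

def dmi_energy(spins, bonds, d_vec, b_field):
    """Classical energy of H = sum_<ij> D_ij . (S_i x S_j) - B . sum_i S_i.

    spins  : (N, 3) array of unit vectors
    bonds  : list of (i, j) nearest-neighbour pairs, each bond counted once
    d_vec  : dict mapping (i, j) -> DMI vector D_ij, shape (3,)
    b_field: (3,) external magnetic field
    """
    e = 0.0
    for i, j in bonds:
        e += np.dot(d_vec[(i, j)], np.cross(spins[i], spins[j]))
    e -= np.dot(b_field, spins.sum(axis=0))
    return e

# Two spins coupled by D along z, no field: E = D . (S1 x S2) = 1
spins = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(dmi_energy(spins, [(0, 1)], {(0, 1): np.array([0.0, 0.0, 1.0])},
                 np.zeros(3)))   # -> 1.0
```

Flipping the order of the two spins flips the sign of the cross product, which is why each bond must be counted once with a fixed orientation convention.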
![Magnetic frustration in elementary DMI clusters of the triangular ([**a**]{}) and square ([**b**]{}) lattices. Curving arrows denote clockwise direction of the bonds in each cluster. Big gray arrows correspond to the in-plane DMI vectors. Black arrows in circles denote the in-plane directions of the spin moments. Blue dashed and red dotted lines indicate the bonds with minimal and zero DMI energy, respectively. ([**c**]{}) and ([**d**]{}) illustrate the examples of the spin configurations corresponding to classical ground state of the DMI Hamiltonian.[]{data-label="Fig2"}](Fig2.pdf){width="1\linewidth"}
[*DMI-induced frustration*]{} — In the case of two-dimensional materials with pure DMI between nearest neighbours, magnetic frustration is an intrinsic property of the system. To show this, let us start with the elementary plaquettes of the triangular and square lattices without an external magnetic field (see Fig. \[Fig2\]). Keeping in mind real two-dimensional materials with the $C_{nv}$ symmetry [@graphene; @silicon], we consider the in-plane orientation of the DMI vector perpendicular to the corresponding bond. Taking three spins of a single square plaquette as shown in Fig. \[Fig2\] [**b**]{}, one can minimize their energy while discarding their coupling with the fourth spin. The orientation of the remaining spin then cannot be uniquely defined, because the spin configuration has the same energy regardless of whether it points “up” or “down”, which indicates frustration. Thus, Fig. \[Fig2\] [**d**]{} gives an example of the classical ground state of the square plaquette, with energy ${\rm E}_{\square} = - \sqrt{2}\,|\mathbf{D}_{ij}|\,S^2$ and magnetization ${\rm M}^z_{\square} = S/2$ (per spin). In turn, the frustration of the triangular plaquette is illustrated in Fig. \[Fig2\] [**a**]{}, while its magnetic ground state is characterized by the spin configuration $\mathbf{S}_1 = (0, -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}})\,S$, $\mathbf{S}_2 = (\frac{\sqrt{3}}{2\sqrt{2}}, \frac{1}{2\sqrt{2}}, \frac{1}{\sqrt{2}})\,S$ and $\mathbf{S}_3 = (-\frac{\sqrt{3}}{2\sqrt{2}}, \frac{1}{2\sqrt{2}}, \frac{1}{\sqrt{2}})\,S$ shown in Fig. \[Fig2\] [**c**]{}.
One can see that the in-plane spin components form a $120^{\circ}$ Néel state similar to that of the isotropic Heisenberg model on the triangular lattice [@PhysRev.115.2]. The corresponding energy and magnetization are ${\rm E}_{\triangle} = -\sqrt{3}\,|\mathbf{D}_{ij}|\,S^2$ and ${\rm M}^z_{\triangle} = S/\sqrt{2}$, respectively.
Importantly, the ground states of the triangular and square plaquettes are degenerate due to the $C_{nv}$ and in-plane mirror symmetries. For instance, there is another state with the same energy ${\rm E'} = {\rm E}$ but opposite magnetization ${\rm M}'^z = - {\rm M}^z$. Therefore, such elementary magnetic units can be considered as building blocks for realizing [*spin spirals*]{} on lattices with periodic boundary conditions and zero magnetic field. The ensuing results are obtained via Monte Carlo simulations, as detailed in the Supplemental Material [@SM].
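The frustration can be made quantitative numerically. The sketch below evaluates the bond-by-bond DMI energy of a square plaquette in a symmetric tilted ("hedgehog-like") trial state. The orientation convention $\mathbf{D}_{ij} = \hat{z}\times\hat{r}_{ij}$ and the trial configuration are our assumptions (the text fixes the DMI vectors only as in-plane and perpendicular to each bond): even in this symmetric state no bond reaches its individual minimum $-|\mathbf{D}|$, which is the frustration at work.

```python
import numpy as np

def bond_energy(si, sj, d):
    return np.dot(d, np.cross(si, sj))

# Unit square plaquette, sites listed counter-clockwise; assumed Rashba-like
# convention D_ij = z x r_ij (unit DMI strength, unit spins).
sites = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
z = np.array([0.0, 0.0, 1.0])

def d_vector(i, j):
    r = np.append(sites[j] - sites[i], 0.0)
    return np.cross(z, r / np.linalg.norm(r))

# "Inward hedgehog" trial state: every spin tilted by 45 degrees from +z
# toward the plaquette center.
theta = np.pi / 4
center = sites.mean(axis=0)
spins = []
for r in sites:
    inward = np.append(center - r, 0.0)
    inward /= np.linalg.norm(inward)
    spins.append(np.sin(theta) * inward + np.cos(theta) * z)

energies = [bond_energy(spins[i], spins[j], d_vector(i, j)) for i, j in bonds]
print(np.round(energies, 3))    # each bond sits at -sqrt(2)/2, above its own minimum -1
print(round(sum(energies), 3))  # -> -2.828
```

By symmetry all four bonds contribute equally, $-\tfrac{\sqrt{2}}{2}\sin 2\theta$ each, so no spin assignment can satisfy every bond simultaneously.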
![Fragments of the spin textures and spin structure factors obtained with the Heisenberg-exchange-free model on the square $20\times20$ ([**a**]{}) and triangular $21\times21$ ([**b**]{}) lattices. The values of the magnetic fields in these simulations were chosen $B_{z} = 3.0$ and $B_{z} = 3.2$ for the triangular and square lattices, respectively. The calculated skyrmion numbers for the triangular (blue triangles) and square (red squares) lattices ([**c**]{}). The magnetic field is in units of DMI. The temperature is equal to ${\rm T}=0.01\,|{\rm \bf D}|$.[]{data-label="Fig3"}](Fig3.pdf){width="1\linewidth"}
[*Nanoskyrmionic state*]{} — At finite magnetic field the spiral state can be transformed into a skyrmionic spin texture. Fig. \[Fig3\] gives some examples obtained from the Monte Carlo calculations for the triangular and square lattices. Remarkably, the radius of the obtained nanoskyrmions does not exceed a few lattice constants. The calculated spin structure factors $\chi_{\perp}({\mathbf{q}})$ and $\chi_{\parallel}({\mathbf{q}})$ [@SM] reveal a superposition of two (square lattice) or three (triangular lattice) spin spirals with $\pm \mathbf{q}$, which is a first indication of the skyrmionic state (see Fig. \[Fig3\] [**a**]{}, [**b**]{}). As a further confirmation one can calculate the skyrmion number, which is related to the topological charge. In the discrete case, the result is extremely sensitive to the number of sites comprising a single skyrmion and to the way the spherical surface is approximated [@Rosales]. Here we used the approach of Berg and Lüscher [@Berg], which defines the topological charge as the sum of the areas of non-overlapping spherical triangles [@Blugel2016; @SM]. According to our simulations, the topological charge of each object shown in Fig. \[Fig3\] [**a**]{} and Fig. \[Fig3\] [**b**]{} is equal to unity. The square- and triangular-lattice systems exhibit completely different dependences of the average skyrmion number on the magnetic field. As shown in Fig. \[Fig3\] [**c**]{}, the skyrmionic phase for the square lattice, $2.7\leq B_{z} < 4.2$, is much narrower than that of the triangular lattice, $1.2 \leq B_{z} < 6$. Moreover, in the case of the square lattice we observe strong finite-size effects leading to a non-stable value of the average topological charge in the region $0 < B_{z} < 2.7$. Since the value of the magnetic field is given in units of DMI, the topological spin structures in the considered model require weak magnetic fields, which is very appealing for modern experiments.
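The Berg–Lüscher charge can be evaluated with the standard closed form for the signed solid angle of a spherical triangle, $\tan(\Omega/2) = \mathbf{S}_1\cdot(\mathbf{S}_2\times\mathbf{S}_3)\,/\,(1 + \mathbf{S}_1\cdot\mathbf{S}_2 + \mathbf{S}_2\cdot\mathbf{S}_3 + \mathbf{S}_3\cdot\mathbf{S}_1)$. A minimal sketch, checked on a triangulation that covers the sphere exactly once:

```python
import numpy as np

def solid_angle(s1, s2, s3):
    """Signed solid angle of the spherical triangle (s1, s2, s3),
    via the Berg-Luscher formula."""
    num = np.dot(s1, np.cross(s2, s3))
    den = 1.0 + np.dot(s1, s2) + np.dot(s2, s3) + np.dot(s3, s1)
    return 2.0 * np.arctan2(num, den)

def topological_charge(triangles):
    """Q = (1/4pi) * sum of solid angles over a consistently
    oriented triangulation of the spin texture."""
    return sum(solid_angle(*t) for t in triangles) / (4.0 * np.pi)

# Sanity check: the 8 faces of an octahedron tile the sphere once -> Q = 1.
ex, ey, ez = np.eye(3)
faces = [(ex, ey, ez), (ey, -ex, ez), (-ex, -ey, ez), (-ey, ex, ez),
         (ey, ex, -ez), (-ex, ey, -ez), (-ey, -ex, -ez), (ex, -ey, -ez)]
print(round(topological_charge(faces), 6))   # -> 1.0
```

On a lattice one would split each plaquette into triangles of neighbouring spins and feed the full triangulation to `topological_charge`; the `arctan2` form keeps the sign of each triangle's contribution.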
We would like to stress that the mechanism responsible for the skyrmions presented in this work is intrinsically different from those reported in other studies. Generally, skyrmions can be realized by means of different mechanisms [@NagaosaReview]. For instance, in noncentrosymmetric systems these spin textures arise from the competition between isotropic and anisotropic exchange interactions. On the other hand, magnetic frustration induced by competing isotropic exchange interactions can also lead to a skyrmion crystal state, even in the absence of DMI and anisotropy. Moreover, following the results of [@Blugel], nanoskyrmions can also be stabilized by a four-spin interaction. Nevertheless, nanoskyrmions have not previously been predicted or observed as the result of the interplay between DMI and a constant magnetic field.
![Catalogue of the DMI nanoskyrmion species stabilized on the small square (top figures) and triangular (bottom figures) clusters with open boundary conditions. The corresponding magnetic fields are (from top to bottom): on-site $B_{z}=3.0;~3.0$, off-site $B_{z}=1.2;~2.4$, bond-centered $B_{z}=2.5;~3.0$. []{data-label="Fig4"}](Fig4.pdf){width="0.64\linewidth"}
Full catalogue of nanoskyrmions obtained in this study is presented in Fig. \[Fig4\]. As one can see, they can be classified with respect to the position of the skyrmionic center on the discrete lattice. Thus, the on-site, off-site (center of an elementary triangle or square) and bond-centred configurations have been revealed. Importantly, these structures can be stabilized not only on the lattice, but also on the isolated plaquettes of 12-37 sites with open boundary conditions. It is worth mentioning that not all sites of the isolated plaquettes form the skyrmion. In some cases inclusion of the additional spins that describe the environment of the skyrmion is necessary for stabilization of the topological spin structure. As we discuss below, it can be used to construct a magnetic domain for data storage.
[*Nanoskyrmionic mosaic*]{} — By defining the local rules (interactions between nearest neighbours) and external parameters (such as the lattice size and magnetic field) one can obtain magnetic structures with different patterns using the Monte Carlo approach. As follows from Fig. \[Fig5\], pure off-site square skyrmion structures are realized on $6\times14$ lattices with open boundary conditions at $B_{z} = 3.0$. Increasing the lattice size along the $x$ direction injects bond-centered skyrmions into the system. In turn, increasing the magnetic field leads to the compression of the nanoskyrmions and reduces their density. At $B_{z}=3.6$ we observe the on-site square skyrmion of the most compact size. Thus the particular pattern of the resulting nanoskyrmion mosaic reflects the minimal size of the individual nanoskyrmions and the tendency of the system to form close-packed structures that minimize the energy.
![(Top panel) Evolution of the nanoskyrmions on the lattice $6\times14$ with respect to the magnetic field. (Bottom panel) Examples of nanoskyrmion mosaics obtained from the Monte Carlo simulations of the DMI model on the square lattices with open boundary conditions at the magnetic field $B_{z}=3$.[]{data-label="Fig5"}](Fig5.pdf){width="1\linewidth"}
The solution of the Heisenberg-exchange-free Hamiltonian for magnetic fields corresponding to the skyrmionic phase can be related to the famous NP-hard geometrical problem of bin packing [@Bin]. Let us imagine that there is a set of fixed-size and fixed-energy objects (nanoskyrmions) that should be packed as compactly as possible on a lattice of size $n \times m$. As one can see, such objects are weakly coupled to each other. Indeed, the contact area between different skyrmions is nearly ferromagnetic, so the binding energy between two skyrmions is very small, since it is related to DMI. In addition, the energy difference between nanoskyrmions of different types is very small, as can be seen from Fig. \[Fig5\]. Indeed, the energy difference between the three off-site and bond-centered skyrmions realized on the $6\times14$ plaquette is about $0.2\,B_{z}$. Here, we sample spin orientations within the Monte Carlo simulations and do not manipulate the nanoskyrmions directly. Thus, the stabilization of periodic, close-packed nanoskyrmionic structures on square or triangular lattices more than $30$ sites on a side becomes a challenging task. Therefore, the problem can be approached as a LEGO$\textregistered$-type constructor, where one builds the mosaic pattern using unit nanoskyrmionic bricks.
[*Size limit*]{} — For practical applications, it is of crucial importance to have skyrmions with sizes in the nanometer range, for instance to achieve a high-density memory [@Lin]. Previously, the record density of skyrmions was reported for the Fe/Ir(111) system [@Blugel], for which the unit cell of the skyrmion lattice, stabilized by a four-spin interaction, was found to be 1 nm $\times$ 1 nm. In our case the diameter of the triangular on-site skyrmion is found to be $4$ lattice constants (Fig. \[Fig4\]). Thus for prototype $sp$-electron materials the diameter is equal to 1.02 nm (semifluorinated graphene [@graphene]) and 2.64 nm (Si(111):{Sn,Pb} [@silicon]).
On the basis of the obtained results we predict the smallest diameter of $2$ lattice constants for the on-site square skyrmion. Our simulations for finite-sized systems with open boundary conditions show that such a nanoskyrmion can exist on the $5\times5$ cluster (Fig. \[Fig4\] top right plaquette), which is smaller than that previously reported in [@Keesman]. We believe that this is the ultimate limit of a skyrmion size on the square lattice.
[*Micromagnetic model*]{} — The analysis of the isolated skyrmion can also be carried out at the level of the micromagnetic model, treating the magnetization as a continuous vector field [@SM]. In contrast to the case of nonzero exchange interaction, the Heisenberg-exchange-free Hamiltonian allows one to obtain an analytical solution for the skyrmion profile. In the particular case of the square lattice, the radius of the isolated skyrmion is equal to $R=4Da/B$, where $a$ is the lattice constant. Moreover, the skyrmionic solution is stable even in the presence of a small exchange interaction $J\ll{}D$ [@SM]. It is worth mentioning that this result for the radius of the Heisenberg-exchange-free skyrmion is essentially different from the case of competing exchange interaction and DMI, where the radius is proportional to the ratio $J/D$. Although in the absence of DMI both the exchange interaction and the magnetic field favour the collinear orientation of spins in the direction perpendicular to the surface, the presence of DMI changes the picture drastically. When the spins are tilted by the anisotropic interaction, the magnetic field still tends to align them along the $z$ direction, while the exchange interaction tries to keep neighbouring spins parallel without any relation to the axes. Therefore, a stronger magnetic field decreases the radius of the skyrmion, while a larger exchange interaction broadens the structure [@ref].
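The quoted micromagnetic result is easy to tabulate; a minimal sketch (helper name and parameter values are illustrative, with $D$ in units of energy per lattice constant and $B$ in the same energy units):

```python
def skyrmion_radius(d, b, a=1.0):
    """Micromagnetic radius of an isolated Heisenberg-exchange-free skyrmion
    on the square lattice, R = 4*D*a/B (result quoted in the text)."""
    return 4.0 * d * a / b

# Stronger fields compress the skyrmion: doubling B halves the radius.
print(skyrmion_radius(d=1.0, b=1.2))   # radius in lattice constants, ~3.33
print(skyrmion_radius(d=1.0, b=2.4))   # half of the above
```

This inverse-in-$B$ behaviour is the signature distinguishing the Heisenberg-exchange-free case from the usual $R \propto J/D$ scaling.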
![([**a**]{}) two possible states of the nanoskyrmionic bit. ([**b**]{}) the 24-bit nanoskyrmion memory block encoding DMI abbreviation as obtained from the Monte Carlo simulations with $B_{z} =1.2$.[]{data-label="Fig6"}](Fig6.pdf){width="0.9\linewidth"}
[*Memory prototype*]{} — Having analyzed individual nanoskyrmions, we are now in a position to discuss technological applications of the nanoskyrmion mosaic. Fig. \[Fig6\] [**a**]{} visualizes a spin structure consisting of elementary blocks of two types that we associate with the two possible states of a single bit, “1” and “0”. According to our Monte Carlo simulations, a side stacking of the off-site square plaquettes visualized in Fig. \[Fig4\] protects the skyrmionic state in each plaquette. Thus we have a stable building block for the design of nano-scale memories or nanostructures such as the one presented in Fig. \[Fig1\]. Similar to the experimentally realized vacancy-based memory [@Memory], a specific filling of the lattice can be reached by means of the scanning tunnelling microscopy (STM) technique. In turn, the spin-polarized regime of STM [@STM] can be used to read the nanoskyrmionic state. The density of the memory prototype we discuss can be estimated as 1/9 bits $a^{-2}$ ($a$ is the lattice constant), which is of the same order of magnitude as that obtained for the vacancy-based memory.
[*Conclusion*]{} — We have introduced a new class of two-dimensional systems that are described by the Heisenberg-exchange-free Hamiltonian. The frustration of DMI on the triangular and square lattices leads to a non-trivial nanoskyrmionic mosaic state that can be manipulated by varying the strength of the constant magnetic field and the size of the sample. Importantly, such a state appears as a result of the competition between DMI and the constant magnetic field. This mechanism is unique and is reported here for the first time. Being stable on non-regular lattices with open boundary conditions, the nanoskyrmionic phase is shown to be promising for technological applications as a memory component. We also present a catalogue of nanoskyrmion species that can be stabilized on tiny plaquettes of only a few lattice sites. Characteristics of the isolated skyrmion were studied both numerically, within Monte Carlo simulations, and analytically, in the framework of the micromagnetic model.
We thank Frederic Mila, Alexander Tsirlin and Alexey Kimel for fruitful discussions. The work of E.A.S. and V.V.M. was supported by the Russian Science Foundation, Grant 17-72-20041. The work of M.I.K. was supported by NWO via Spinoza Prize and by ERC Advanced Grant 338957 FEMTO/NANO. Also, the work was partially supported by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
[42]{}
Uhlenbeck, G. E., Goudsmit, S. [*Die Naturwissenschaften*]{} [**13**]{}, 953 (1925). Uhlenbeck, G. E., Goudsmit, S. [*Nature*]{} [**117**]{}, 264 (1926).
Goudsmit, S., Uhlenbeck, G. E. [*Physica*]{} [**6**]{}, 273 (1926).
Heitler, W., London, F. [*Zeitschrift für Physik*]{} [**44**]{}, 455 (1927). Heisenberg, W. [*Zeitschrift für Physik*]{} [**43**]{}, 172 (1927). Heisenberg, W. [*Zeitschrift für Physik*]{} [**49**]{}, 619 (1928). Dirac, P. A. M. [*Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences*]{} (The Royal Society, 1929).
Anderson, P. W. [*Phys. Rev.*]{} [**115**]{}, 2 (1959). Dzyaloshinsky, I. [*Journal of Physics and Chemistry of Solids*]{} [**4**]{}, 241 (1958). Moriya, T. [*Phys. Rev.*]{} [**120**]{}, 91 (1960). Skyrme, T. [*Nuclear Physics*]{} [**31**]{}, 556 (1962). Bogdanov, A., Yablonskii, D. [*Sov. Phys. JETP*]{} [**68**]{}, 101 (1989).
Mühlbauer, S., et al. [*Science*]{} [**323**]{}, 915 (2009). Münzer, W., et al. [*Phys. Rev.*]{} [**B 81**]{}, 041203(R) (2010). Yu, X., et al. [*Nature*]{} [**465**]{}, 901 (2010).
Nagaosa, N., Tokura, Y. [*Nature Nanotechnology*]{} [**8**]{}, 899 (2013).
Banerjee, S., Rowland, J., Erten, O., Randeria M. [*Phys. Rev.*]{} [**X 4**]{}, 031045 (2014). Badrtdinov, D. I., Nikolaev, S. A., Katsnelson, M. I., Mazurenko, V. V. [*Phys. Rev.*]{} [**B 94**]{}, 224418 (2016). Mazurenko, V. V., et al. [*Phys. Rev.*]{} [**B 94**]{}, 214411 (2016). Slezák, J., Mutombo, P., Cháb, V. [*Phys. Rev.*]{} [**B 60**]{}, 13328 (1999). Modesti, S., et al. [*Phys. Rev. Lett.*]{} [**98**]{}, 126401 (2007). Li, G., et al. [*Nature Communications*]{} [**4**]{}, 1620 (2013).
Kashtiban, R. J., et al. [*Nature Communications*]{} [**5**]{}, 4902 (2014).
Itin, A. P., Katsnelson, M. I. [*Phys. Rev. Lett.*]{} [**115**]{}, 075301 (2015). Dutreix, C., Stepanov, E. A., Katsnelson, M. I. [*Phys. Rev.*]{} [*B 93*]{}, 241404(R) (2016). Stepanov, E. A., Dutreix, C., Katsnelson, M. I. [*Phys. Rev. Lett.*]{} [**118**]{}, 157201 (2017). Trotzky, S., et al. [*Science*]{} [**319**]{}, 295 (2008). Gong, M., Qian, Y., Yan, M., Scarola, V. W., Zhang, C. [*Scientific Reports*]{} [**5**]{}, 10050 (2015).
Lin, Y. J., Jiménez-Garcia, K., Spielman, I. B. [*Nature*]{} [**471**]{}, 83 (2011).
Wang, P., et al. [*Phys. Rev. Lett.*]{} [**109**]{}, 095301 (2012). Da-Chuang, L., et al. [*Chinese Physics Letters*]{} [**32**]{}, 050302 (2015).
Supplemental Material for “Heisenberg-exchange-free nanoskyrmion mosaic”.
Rosales, H. D., Cabra, D. C., Pujol, P. [*Phys. Rev.*]{} [**B 92**]{}, 214439 (2015). Berg, B., Lüscher, M. [*Nuclear Physics*]{} [**B 190**]{}, 412 (1981). Heo, C., Kiselev, N. S., Nandy, A. K., Blügel, S., Rasing, T. [*Scientific reports*]{} [**6**]{}, 27146 (2016).
Heinze, S., et al. [*Nature Physics*]{} [**7**]{}, 713 (2011).
Johnson, D. S. [*Near-optimal bin packing algorithms*]{}, (Ph.D. thesis, Massachusetts Institute of Technology, 1973).
Lin, S. Z., Saxena, A. [*Phys. Rev.*]{} [**B 92**]{}, 180401(R) (2015). Keesman, R., Raaijmakers, M., Baerends, A. E., Barkema, G. T., Duine, R. A. [*Phys. Rev.*]{} [**B 94**]{}, 054402 (2016). Kalff, F. E., et al. [*Nature Nanotechnology*]{} [**11**]{}, 926 (2016).
Wiesendanger, R. [*Rev. Mod. Phys.*]{} [**81**]{}, 1495 (2009). Although the rich variety of nanoskyrmions on the discrete lattices obtained in the current study cannot be described by the micromagnetic model, because the length scale on which the magnetic structure varies is of the order of the interatomic distance, the result for the radius matches our numerical simulations very well and can be considered as a limiting case of the micromagnetic solution when the applied magnetic field is of the order of the DMI. Nevertheless, the analytical solution for the isolated skyrmion might be helpful for other sets of system parameters, where the considered micromagnetic model is applicable.
Methods
=======
The DMI Hamiltonian with classical spins was solved by means of the Monte Carlo approach. The spin update scheme is based on the Metropolis algorithm. The systems in question are gradually (200 temperature steps) cooled down from high temperatures (${\rm T}\sim
|\mathbf{D}_{ij}|$) to ${\rm T}=~0.01|\mathbf{D}_{ij}|$. Each temperature step run consists of $1.5\times10^{6}$ Monte Carlo steps. The corresponding micromagnetic model was solved analytically.
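The annealing protocol just described (single-spin Metropolis updates while the temperature is ramped down) can be sketched as follows. This is an illustrative reimplementation rather than the authors' code; the square-lattice geometry, lattice size, cooling schedule and parameter values are placeholder assumptions.

```python
import numpy as np

def dmi_vectors_square(D=1.0):
    """In-plane DMI vectors D_ij = D [e_z x e_ij] for the four
    nearest neighbours (+x, -x, +y, -y) of a square lattice."""
    ez = np.array([0.0, 0.0, 1.0])
    bonds = {}
    for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        eij = np.array([dx, dy, 0.0])
        bonds[(dx, dy)] = D * np.cross(ez, eij)
    return bonds

def local_energy(S, i, j, bonds, B):
    """Energy of spin (i, j): sum_j D_ij . [S_i x S_j] - B . S_i,
    with periodic boundary conditions."""
    L = S.shape[0]
    e = -np.dot(B, S[i, j])
    for (dx, dy), Dij in bonds.items():
        Sj = S[(i + dx) % L, (j + dy) % L]
        e += np.dot(Dij, np.cross(S[i, j], Sj))
    return e

def anneal(L=10, D=1.0, Bz=1.0, steps_per_T=2000, n_T=50, seed=0):
    """Cool classical unit spins from T ~ |D| down to T = 0.01 |D|
    using single-spin Metropolis updates."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(L, L, 3))
    S /= np.linalg.norm(S, axis=-1, keepdims=True)  # random unit spins
    bonds = dmi_vectors_square(D)
    B = np.array([0.0, 0.0, Bz])
    for T in np.linspace(1.0 * D, 0.01 * D, n_T):
        for _ in range(steps_per_T):
            i, j = rng.integers(0, L, size=2)
            new = rng.normal(size=3)
            new /= np.linalg.norm(new)
            old_spin = S[i, j].copy()
            e_old = local_energy(S, i, j, bonds, B)
            S[i, j] = new
            dE = local_energy(S, i, j, bonds, B) - e_old
            if dE > 0 and rng.random() >= np.exp(-dE / T):
                S[i, j] = old_spin  # reject the move
    return S
```

In production one would use far more steps per temperature (the paper quotes $1.5\times10^{6}$ per step over 200 temperature steps); the defaults here are kept small for illustration.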
Definition of the skyrmion number
=================================
The skyrmion number is related to the topological charge. In the discrete case, the result is extremely sensitive to the number of sites comprising a single skyrmion and to the way the spherical surface is approximated. Here we used the approach of Berg and Lüscher, which defines the topological charge as the sum of the areas of nonoverlapping spherical triangles. The solid angle subtended by the spins ${\bf S}_{1}$, ${\bf S}_{2}$ and ${\bf S}_{3}$ is defined as $$\begin{aligned}
A = 2 \arccos\left[\frac{1+ {\bf S}_{1} {\bf S}_{2} + {\bf S}_{2} {\bf S}_{3} + {\bf S}_{3} {\bf S}_{1}}{\sqrt{2(1+ {\bf S}_{1} {\bf S}_{2})(1+ {\bf S}_{2} {\bf S}_{3})(1+ {\bf S}_{3} {\bf S}_{1})}}\right].\end{aligned}$$ We do not consider the exceptional configurations for which $$\begin{aligned}
&{\bf S}_{1} [{\bf S}_{2} \times {\bf S}_{3}] = 0 \\
&1+ {\bf S}_{1} {\bf S}_{2} + {\bf S}_{2} {\bf S}_{3} + {\bf S}_{3} {\bf S}_{1} \le 0. \notag\end{aligned}$$ Then the topological charge $Q$ is equal to $
Q = \frac{1}{4\pi} \sum_{l} A_{l}.
$
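The Berg–Lüscher construction above can be implemented directly. The sketch below (not the authors' code) uses the sign-aware $2\arctan$ form of the solid angle, which agrees with the $\arccos$ expression in magnitude while keeping the orientation sign, and triangulates each square plaquette into two triangles:

```python
import numpy as np

def solid_angle(s1, s2, s3):
    """Signed area of the spherical triangle spanned by unit spins
    s1, s2, s3; the sign comes from the triple product s1 . (s2 x s3)."""
    num = s1 @ np.cross(s2, s3)
    den = 1.0 + s1 @ s2 + s2 @ s3 + s3 @ s1
    return 2.0 * np.arctan2(num, den)

def topological_charge(S):
    """Q = (1/4pi) * sum of solid angles over a triangulation of a
    periodic square lattice; S has shape (L, L, 3) with unit spins."""
    L = S.shape[0]
    Q = 0.0
    for i in range(L):
        for j in range(L):
            s1 = S[i, j]
            s2 = S[(i + 1) % L, j]
            s3 = S[(i + 1) % L, (j + 1) % L]
            s4 = S[i, (j + 1) % L]
            # two spherical triangles per plaquette, consistent orientation
            Q += solid_angle(s1, s2, s3) + solid_angle(s1, s3, s4)
    return Q / (4.0 * np.pi)
```

For any configuration avoiding the exceptional cases, the sum is an integer up to floating-point error, which is the main virtue of this discretization.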
Spin spiral state
=================
Our Monte Carlo simulations for the DMI Hamiltonian with classical spin $|\mathbf{S}| = 1$ have shown that the triangular and square lattice systems form [*spin spiral*]{} structures (see Fig. \[spinfactors\]). The transverse and longitudinal spin structure factors used to characterize the obtained spin textures are defined as $$\begin{aligned}
\chi_{\perp}({\mathbf{q}})&=\frac{1}{N}{\ensuremath{\left\langle \left|\sum_{i} S_{i}^{x} \, e^{-i{\mathbf{q}}\cdot{\bf r}_{i}} \right|^{2}+\left|\sum_{i} S_{i}^{y} \, e^{-i{\mathbf{q}}\cdot{\bf r}_{i}} \right|^{2} \right\rangle}}\\
\chi_{\parallel}({\mathbf{q}})&=\frac{1}{N}{\ensuremath{\left\langle \left|\sum_{i} S_{i}^{z} e^{-i{\mathbf{q}}\cdot{\bf r}_{i}} \right|^{2} \right\rangle}}.\end{aligned}$$
![Fragments of the spin textures and spin structure factors obtained with the Heisenberg-exchange-free model on the square $20\times20$ [**A**]{} and triangular $21\times21$ [**B**]{} lattices in the absence of the magnetic field. The temperature is equal to ${\rm T}=0.01\,|{\rm \bf D}|$.[]{data-label="spinfactors"}](FigS1.pdf){width="0.55\linewidth"}
The pictures of intensities at zero magnetic field correspond to the spin spiral state with $|\mathbf{q}_{\square}| = \frac{1}{2\sqrt{2}} \times \frac{2\pi}{a}$ and $|\mathbf{q}_{\triangle}| \simeq 0.29 \times \frac{2\pi}{a}$ for the square and triangular lattices, respectively. Here $a$ is the lattice constant. The corresponding periods of the spin spirals are $\lambda_{\triangle} = 3.5\,a$ and $\lambda_{\square} = 2\sqrt{2}\,a$. The energies of the triangular and square systems at finite B$_{z}$ and low temperatures scale as the energies of the elementary clusters, namely E$_{\triangle}$ and E$_{\square}$ (Fig. \[Fig2\] c and Fig. \[Fig2\] d), multiplied by the number of sites. In contrast to previous studies that take the Heisenberg exchange interaction into account, we do not observe any long-wavelength spin excitations (${\mathbf{q}}=0$) in $\chi_{\parallel}(\boldsymbol{q})$.
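For a single spin configuration, the structure factors defined above reduce to squared moduli of lattice Fourier transforms and can be evaluated with an FFT. The sketch below is illustrative and evaluates one configuration rather than the thermal average $\langle\cdot\rangle$:

```python
import numpy as np

def structure_factors(S):
    """chi_perp(q) and chi_par(q) on the discrete q-grid of an L x L
    lattice, for a single configuration S of shape (L, L, 3)."""
    L = S.shape[0]
    # fft2 computes sum_r S(r) exp(-i q.r) on the lattice q-grid
    fx = np.fft.fft2(S[..., 0])
    fy = np.fft.fft2(S[..., 1])
    fz = np.fft.fft2(S[..., 2])
    N = L * L
    chi_perp = (np.abs(fx) ** 2 + np.abs(fy) ** 2) / N
    chi_par = np.abs(fz) ** 2 / N
    return chi_perp, chi_par
```

A single-$\mathbf{q}$ spiral then shows up as a pair of Bragg peaks at $\pm\mathbf{q}$ in both channels, as in Fig. \[spinfactors\].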
Micromagnetics of isolated skyrmion
===================================
A qualitative description of a single skyrmion can be obtained within a micromagnetic model, when the quantum spin operators $\hat{\bf S}$ are replaced by the classical local magnetization ${\bf m}_{i}$ at every lattice site and later by a continuous and differentiable vector field ${\bf m}({\bf r})$ as $\hat{\bf S}_{i} \to S {\bf m}_{i} \to S{\bf m}({\bf r})$, where $S$ is the spin amplitude and $|{\bf m}|=1$. This approach is valid for large quantum spins when the length scale on which the magnetic structure varies is larger than the interatomic distance. Let us make the specified transformation explicitly and calculate the energy of the single spin localized at the lattice site $i$ that can be found as a sum of the initial DMI Hamiltonian over the nearest neighbor lattice sites $$\begin{aligned}
\label{mme1}
E_{i}&=-\sum_{j}J_{ij}\,\hat{\bf S}_{i}\,\hat{\bf S}_{j} +
\sum_{j}{\bf D}_{ij}\,[\hat{\bf S}_{i}\times\hat{\bf S}_{j}] -
{\bf B}\,\hat{\bf S}_{i} \\
&= -S^{2}\sum_{j}J_{ij}\,{\bf m}_{i}\,{\bf m}_{j} +
S^2\sum_{j}{\bf D}_{ij}\,[{\bf m}_{i}\times{\bf m}_{j}] -
S{\bf B}\,{\bf m}_{i} \notag\\
&=-S^{2}\sum_{j}J_{ij}\left(1-\frac{1}{2}({\bf m}_{j}-{\bf m}_{i})^{2}\right) +
S^2\sum_{j}{\bf D}_{ij}\left[{\bf m}_{i}\times\left({\bf m}_{j}-{\bf m}_{i}\right)\right] - S{\bf B}\,{\bf m}_{i} \notag\\
&\simeq-S^{2}\sum_{j}J_{ij}\left(1-\frac{1}{2}({\bf m}({\bf r}_{i}+\delta{\bf r}_{ij}) - {\bf m}({\bf r}_{i}))^{2}\right) + S^2\sum_{j}{\bf D}_{ij}\left[{\bf m}_{i}({\bf r}_{i})\times\left({\bf m}({\bf r}_{i}+\delta{\bf r}_{ij}) - {\bf m}({\bf r}_{i})\right)\right] - S{\bf B}\,{\bf m}({\bf r}_{i}) \notag\\
&\simeq\frac{S^{2}a^2}{2}\sum_{j}J_{ij}\left(({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i})\right)^{2} + S^2a\sum_{j}{\bf D}_{ij}\left[{\bf m}_{i}({\bf r}_{i})\times
(({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i}))\right] - S{\bf B}\,{\bf m}({\bf r}_{i}), \notag\end{aligned}$$ The DMI vector ${\bf D}_{ij} = D\left[{\bf e}_{z}\times{\bf e}_{ij}\right]$ is perpendicular to the vector ${\bf e}_{ij}$ that connects spins at the nearest-neighbor sites ${\ensuremath{\left\langle ij \right\rangle}}$ and favours their orthogonal alignment, while the exchange term tends to make the magnetization uniform. The above derivation was obtained for a particular case of a square lattice, but can be straightforwardly generalized to an arbitrary configuration of spins. Then, we obtain $$\begin{aligned}
\label{mme2}
E_{i}
&= \frac{S^{2}a^2}{2}\sum_{j}J_{ij}\left(({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i})\right)^{2} + S^2aD\sum_{j}\left[{\bf e}_{z}\times{\bf e}_{ij}\right]\left[{\bf m}_{i}\times
\Big(({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i})\Big)\right] - S{\bf B}_{z}\,{\bf m}_{z}({\bf r}_{i}) \\
&= \frac{J'}{2}\left[\Big(\partial_{x}\,{\bf m}({\bf r}_{i})\Big)^{2} + \Big(\partial_{y}\,{\bf m}({\bf r}_{i})\Big)^{2} \right] - S{\bf B}_{z}\,{\bf m}_{z}({\bf r}_{i}) \notag \\
&\,+ D'\,\left[{\bf m}_{z}({\bf r}_{i})\,\Big(\partial_{x}{\bf m}_{x}({\bf r}_{i})\Big) - \Big(\partial_{x}{\bf m}_{z}({\bf r}_{i})\Big)\,{\bf m}_{x}({\bf r}_{i})) + {\bf m}_{z}({\bf r}_{i})\,\Big(\partial_{y}{\bf m}_{y}({\bf r}_{i})\Big) - \Big(\partial_{y}{\bf m}_{z}({\bf r}_{i})\Big)\,{\bf m}_{y}({\bf r}_{i}))\right] \notag\end{aligned}$$ where $J'=2JS^{2}a^{2}$, $D'=2DS^{2}a$ and $B'=BS$. The unit vector of the magnetization at every point of the vector field can be parametrized by ${\bf m} = \sin\theta\cos\psi\,{\bf e}_{x} + \sin\theta\sin\psi\,{\bf e}_{y} + \cos\theta\,{\bf e}_{z}$ in the spherical coordinate basis. In order to describe axisymmetric skyrmions, we additionally introduce the cylindrical coordinates $\rho$ and $\varphi$, so that $\rho=0$ is associated to the center of a skyrmion $$\begin{aligned}
\partial_{x}{\bf m}({\bf r}_{i}) &=
\cos\varphi\,\partial_{\rho}{\bf m}({\bf r}_{i}) -
\frac{1}{\rho}\sin\varphi\,\partial_{\varphi}{\bf m}({\bf r}_{i})\,, \\
\partial_{y}{\bf m}({\bf r}_{i}) &=
\sin\varphi\,\partial_{\rho}{\bf m}({\bf r}_{i}) +
\frac{1}{\rho}\cos\varphi\,\partial_{\varphi}{\bf m}({\bf r}_{i})\,,\end{aligned}$$ from which follows $$\begin{aligned}
\Big(\partial_{x}{\bf m}({\bf r}_{i})\Big)^2 +
\Big(\partial_{y}{\bf m}({\bf r}_{i})\Big)^2 =
\Big(\partial_{\rho}{\bf m}({\bf r}_{i})\Big)^2 +
\frac{1}{\rho^2}\Big(\partial_{\varphi}{\bf m}({\bf r}_{i})\Big)^2.\end{aligned}$$ Assuming that $\theta=\theta(\rho,\varphi)$ and $\psi=\psi(\rho,\varphi)$, the derivatives of the magnetization can be expressed as $$\begin{aligned}
\partial_{\rho}{\bf m}({\bf r}_{i}) &=
\Big(\cos\theta\cos\psi\,\dot\theta_{\rho}-\sin\theta\sin\psi\,\dot\psi_{\rho}\Big)\,{\bf e}_{x} +
\Big(\cos\theta\sin\psi\,\dot\theta_{\rho}+\sin\theta\cos\psi\,\dot\psi_{\rho}\Big)\,{\bf e}_{y} -
\sin\theta\,\dot\theta_{\rho}\,{\bf e}_{z} \,, \\
\partial_{\varphi}{\bf m}({\bf r}_{i}) &=
\Big(\cos\theta\cos\psi\,\dot\theta_{\varphi}-\sin\theta\sin\psi\,\dot\psi_{\varphi}\Big)\,{\bf e}_{x} +
\Big(\cos\theta\sin\psi\,\dot\theta_{\varphi}+\sin\theta\cos\psi\,\dot\psi_{\varphi}\Big)\,{\bf e}_{y} -
\sin\theta\,\dot\theta_{\varphi}\,{\bf e}_{z} \,.\end{aligned}$$ The exchange and DMI energies then equal to $$\begin{aligned}
E^{J}_{i} &=
\frac{J'}{2}\left[\dot\theta^2_{\rho} + \sin^2\theta\,\dot\psi_{\rho}^2 + \frac{1}{\rho^2}\dot{\theta}^2_{\varphi} +
\frac{1}{\rho^2}\sin^2\theta\,\dot\psi^2_{\varphi} \right],\\
E^{D}_{i} &= D'\left(\cos(\psi-\varphi)\left[\dot\theta_{\rho}+\frac{1}{\rho}\sin\theta\cos\theta\,\dot\psi_{\varphi}\right] + \sin(\psi-\varphi)\left[\frac{1}{\rho}\dot\theta_{\varphi}-\sin\theta\cos\theta\,\dot\psi_{\rho}\right]\right).\end{aligned}$$ Finally, the micromagnetic energy can be written as follows $$\begin{aligned}
E(\theta,\psi) =\int_{0}^{\infty}{\cal E}(\theta,\psi,\rho,\varphi)\,d\rho\,d\varphi \,,\end{aligned}$$ where the skyrmionic energy density is $$\begin{aligned}
{\cal E}(\theta,\psi,\rho,\varphi)
&= \frac{J'}{2}\left[\rho\dot\theta^2_{\rho} + \rho\sin^2\theta\,\dot\psi_{\rho}^2 + \frac{1}{\rho}\dot{\theta}^2_{\varphi} +
\frac{1}{\rho}\sin^2\theta\,\dot\psi^2_{\varphi} \right] -
B'\rho\cos\theta \\
& + D'\left(\cos(\psi-\varphi)\left[\rho\dot\theta_{\rho}+\sin\theta\cos\theta\,\dot\psi_{\varphi}\right] + \sin(\psi-\varphi)\left[\dot\theta_{\varphi}-\rho\sin\theta\cos\theta\,\dot\psi_{\rho}\right]\right). \notag\end{aligned}$$ The set of the Euler-Lagrange equations for this energy density $$\begin{aligned}
\begin{cases}
\frac{\partial{\cal E}}{\partial\theta} - \frac{d}{d\rho}\frac{\partial{\cal E}}{\partial\dot\theta_{\rho}} - \frac{d}{d\varphi}\frac{\partial{\cal E}}{\partial\dot\theta_{\varphi}}=0,
\\
\frac{\partial{\cal E}}{\partial\psi} - \frac{d}{d\rho}\frac{\partial{\cal E}}{\partial\dot\psi_{\rho}} - \frac{d}{d\varphi}\frac{\partial{\cal E}}{\partial\dot\psi_{\varphi}}=0,
\end{cases}\end{aligned}$$ then reads $$\begin{aligned}
\left\{\hspace{-0.15cm}
\begin{matrix}
&J'\left[\rho\,\ddot\theta_{\rho} + \dot\theta_{\rho} + \frac{1}{\rho}\ddot\theta_{\varphi} - \frac{1}{\rho} \sin\theta\cos\theta\,\dot\psi^2_{\varphi} - \rho\sin\theta\cos\theta\,\dot\psi^2_{\rho} \right] + 2D'\left[\cos(\psi-\varphi)\sin^2\theta\,\dot\psi_{\varphi} - \rho\sin(\psi-\varphi)\sin^2\theta\,\dot\psi_{\rho}\right] - B'\rho\sin\theta=0 \,,\\
&J'\left[\rho\sin^2\theta\,\ddot\psi_{\rho} + \sin^2\theta\,\dot\psi_{\rho} + \rho\sin2\theta\,\dot\psi_{\rho}\dot\theta_{\rho} + \frac{1}{\rho}\sin^2\theta\,\ddot\psi_{\varphi} + \frac{1}{\rho}\sin2\theta\,\dot\theta_{\varphi}\,\dot\psi_{\varphi}\right] + 2D'\left[\rho\sin(\psi-\varphi)\sin^2\theta\,\dot\theta_{\rho} - \cos(\psi-\varphi)\sin^2\theta\,\dot\theta_{\varphi}\right] = 0 \,.
\end{matrix}
\right. \notag\end{aligned}$$ Here we restrict ourselves to the particular case of the $C_{nv}$ symmetry. Then, one can assume that $\dot\theta_{\varphi}=0$ and $\psi-\varphi=\pi{}n$ ($n\in{\mathbb Z}$), which leads to $$\begin{aligned}
\alpha \left[\rho^{2}\,\ddot\theta_{\rho} + \rho \dot\theta_{\rho} - \sin\theta\cos\theta \right] \pm 2\rho\sin^2\theta - \beta\rho^{2}\sin\theta=0 \,,\end{aligned}$$ where $\alpha=J'/D'$ and $\beta=B'/D'$. Although we are interested in the problem where the exchange interaction is absent, it is still necessary to keep $\alpha\ll1$ as a small parameter in order to investigate stability of the skyrmionic solution under small perturbations. Therefore, one can look for a solution of the following form $$\begin{aligned}
\theta = \theta_{0} + \alpha \theta_{1} + O(\alpha^2),\end{aligned}$$ which results in $$\begin{aligned}
\alpha \left[ \rho^{2}\,\ddot\theta_{0} + \rho \dot\theta_{0} - \sin\theta_{0}\cos\theta_{0} \right]
\pm 2\rho\sin^2\theta_{0} \pm 4 \alpha \rho\sin\theta_{0}\cos\theta_{0}\, \theta_{1}
- \beta\rho^{2}\sin\theta_{0} - \alpha \beta\rho^{2}\cos\theta_{0}\,\theta_{1}
=0 \,.\end{aligned}$$
Solution for $J'=0$
-------------------
When the exchange interaction is dynamically switched off ($J' = 0$), the zeroth order in the limit $\alpha\ll1$ leads to $$\begin{aligned}
\rho\,\sin\theta_{0} \left( \beta \rho \mp 2\sin\theta_{0} \right) =0 \,.\end{aligned}$$ This yields two solutions: $$\begin{aligned}
1)~\sin\theta_{0} &= 0,~\text{which corresponds to a FM ordered state}\\
2)~\sin\theta_{0} &= \pm \frac{\beta\rho}{2} = \pm \frac{B'\rho}{2D'},~\text{which describes a Skyrmion}.\label{eq:Skprofile}\end{aligned}$$ Then, the unit vector of the magnetization that describes a single skyrmion is equal to $$\begin{aligned}
{\bf m} = \sin\theta \cos(\psi-\varphi)\,{\bf e}_{\rho} + \sin\theta\sin(\psi-\varphi)\,{\bf e}_{\varphi} + \cos\theta\,{\bf e}_{z} = \pm\sin\theta\,{\bf e}_{\rho} + \cos\theta\,{\bf e}_{z} =
\frac{\beta\rho}{2}\,{\bf e}_{\rho} + \cos\theta_0\,{\bf e}_{z}.\end{aligned}$$ Importantly, the radial coordinate of the skyrmionic solution is limited by the condition $\rho\leq\frac{2D'}{B'}$. Moreover, the $z$ component of the magnetization, namely ${\bf m}_{z}=\cos\theta_0$, is not uniquely determined by the Euler-Lagrange equations. Indeed, the solution with the initial condition $\theta_0(\rho=0)=\pi$ for the center of the skyrmion describes only half of the skyrmion, because the magnetization at the boundary ${\bf m}(\rho=2D'/B')$ lies in-plane along ${\bf e}_{\rho}$, which cannot be continuously matched with the FM environment of a single skyrmion. Moreover, the magnetization at larger values of $\rho$ is undefined within this solution. Therefore, one has to make some effort to obtain the solution for the whole skyrmionic structure.
Let us stick to the case when the magnetization at the center of the skyrmion points down, i.e. $\theta_0(\rho=0)=\pi$, ${\bf m}_{z}(\rho=0)=-1$, and the magnetic field points up, along ${\bf e}_{z}$. Then, Eq. \[eq:Skprofile\] provides the solution on the segment $\theta_0\in[\pi,\frac{\pi}{2}]$ and $\rho\in[0,\frac{2D'}{B'}]$, which for every given direction with the fixed angle $\varphi$ describes a quarter period of the spin spiral, as shown in the left panel of Fig. \[fig:SkProf\]. As mentioned in the main text, in the case of the $C_{nv}$ symmetry the single skyrmion is nothing more than a superposition of three and two spin spirals for the triangular and square lattices, respectively. Therefore, one has to restore the second quarter of the period of the spin spiral, and the rest can be obtained via the symmetry operation $\rho\to-\rho$.
The second part of the spin spiral can be found by shifting the variable $\rho$ by $\rho_0$ in the skyrmionic solution as $\sin\theta_0 = B'(\rho-\rho_0)/2D'$. In order to match this solution with the initial one, the constant has to be equal to $\rho_0=\frac{4D'}{B'}$. Since the magnetization is defined as a continuous and differentiable function, the angle $\theta_0$ can only vary on the segment $\theta\in[\frac{\pi}{2},0]$; otherwise either the ${\bf e}_{\rho}$ or the ${\bf e}_{z}$ projection of the magnetization will not fulfil this requirement. The correct matching of the two spin spirals is shown in Fig. \[fig:SkProf\] a), while Fig. \[fig:SkProf\] c) shows a violation of differentiability of ${\bf m}_{z}$, and Figs. \[fig:SkProf\] b), d) give a wrong matching of ${\bf m}_{\rho}$. Thus, the magnetization at the boundary of the skyrmion at $\rho=R=\frac{4D'}{B'}$, which defines the radius $R$, points up, i.e. ${\bf m}_{z}(\rho_0)=1$, which perfectly matches the FM environment that is collinear to the constant magnetic field ${\bf B}$.
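The matching procedure described above can be checked numerically: assemble the two quarter-periods into a single profile $\theta(\rho)$ and evaluate the winding of the resulting texture by discretizing $N=\frac{1}{4\pi}\int{\bf m}\cdot[\partial_x{\bf m}\times\partial_y{\bf m}]\,dx\,dy$ with central differences. The sketch below is illustrative; the parameter values $D'=1$, $B'=0.5$ are assumptions chosen so that the radius is $\rho_0=8$.

```python
import numpy as np

def theta_profile(rho, Dp=1.0, Bp=0.5):
    """Piecewise skyrmion profile: sin(theta) = Bp*rho/(2*Dp) with
    theta in [pi, pi/2] up to the matching radius 2*Dp/Bp, then the
    shifted branch sin(theta) = Bp*(rho0 - rho)/(2*Dp) with theta in
    [pi/2, 0] up to rho0 = 4*Dp/Bp; FM (theta = 0) beyond rho0."""
    rho = np.asarray(rho, dtype=float)
    r_half, rho0 = 2.0 * Dp / Bp, 4.0 * Dp / Bp
    theta = np.zeros_like(rho)
    inner = rho <= r_half
    outer = (rho > r_half) & (rho <= rho0)
    theta[inner] = np.pi - np.arcsin(Bp * rho[inner] / (2.0 * Dp))
    theta[outer] = np.arcsin(Bp * (rho0 - rho[outer]) / (2.0 * Dp))
    return theta

def skyrmion_number(h=0.05, ext=9.0, Dp=1.0, Bp=0.5):
    """N = (1/4pi) * integral of m . [dm/dx x dm/dy] on a grid."""
    x = np.arange(-ext, ext + h, h)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rho, phi = np.hypot(X, Y), np.arctan2(Y, X)
    th = theta_profile(rho, Dp, Bp)
    # C_nv texture with psi = phi (Neel-like): m = sin(th) e_rho + cos(th) e_z
    m = np.stack([np.sin(th) * np.cos(phi),
                  np.sin(th) * np.sin(phi),
                  np.cos(th)], axis=-1)
    dmx = np.gradient(m, h, axis=0)
    dmy = np.gradient(m, h, axis=1)
    dens = np.einsum("ijk,ijk->ij", m, np.cross(dmx, dmy))
    return dens.sum() * h * h / (4.0 * np.pi)
```

Since ${\bf m}_z$ runs monotonically from $-1$ at the center to $+1$ at $\rho_0$ while the in-plane angle winds once, the discretized integral comes out close to $\pm1$ (the sign depends on the orientation convention).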
![Possible matching of the two parts of the spin spiral.[]{data-label="fig:SkProf"}](FigS2.pdf){width="0.5\linewidth"}
![Skyrmionic radius for two different values of the magnetic field. The larger field favours more compact structures ($R_2<R_1$) as shown in the right panel. Red arrows depict the skyrmion, while the two black arrows are related to the ferromagnetic environment.[]{data-label="fig:SkR"}](FigS3.pdf){width="0.5\linewidth"}
It is worth mentioning that the obtained result for the radius of the skyrmion, $R=\frac{4D'}{B'} = \frac{8DSa}{B}$, is fundamentally different from the case when the skyrmion appears due to a competition between the exchange interaction and DMI. The radius in the latter case is proportional to the ratio $J/D$ and does not depend on the value of the spin $S$, while in the Heisenberg-exchange-free case it does. Although in the absence of DMI both the exchange interaction and the magnetic field favour a collinear orientation of spins along the $z$ axis, the presence of DMI changes the picture drastically. The spins are now tilted from site to site, but the magnetic field still tends to align them along the $z$ direction, whereas the exchange interaction aligns neighboring spins in parallel without any relation to the axes. As a result, a stronger magnetic field decreases the radius of the skyrmion, while a larger value of the exchange interaction broadens the structure. This is also clear from Fig. \[fig:SkR\], where in the case of zero exchange interaction the larger magnetic field favours the alignment with the smaller skyrmion radius $R_2<R_1$ shown in the right panel.
Finally, the obtained skyrmionic structure is shown in Fig. \[fig:Sk\]. It is worth mentioning that our numerical study corresponds to $B\sim{}D$, so the radius of the skyrmion is equal to $\rho_{0}\sim4Sa$, which is of the order of a few lattice sites. Although for these values of the magnetic field the micromagnetic model is not applicable, because the magnetization varies strongly from site to site, it provides a good qualitative understanding of the skyrmionic behavior and still agrees with our numerical simulations.
The corresponding skyrmion number in a two-dimensional system is defined as $$\begin{aligned}
N=\frac{1}{4\pi} \int dx\,dy\,{\bf m} \left[ \partial_{x}{\bf m}\times\partial_{y}{\bf m}\right]\end{aligned}$$ and is then equal to $$\begin{aligned}
N=\frac{1}{4\pi} \int dx\,dy\,\frac{1}{\rho}\sin\theta\,\dot\theta_{\rho} = \frac{1}{4\pi} \int d\rho\,d\varphi\,\sin\theta\,\dot\theta_{\rho} = \frac12\left(\cos\theta(0) - \cos\theta(\rho_{0})\right) = -1,\end{aligned}$$ so the structure carries a unit topological charge (the sign reflects the chosen orientation convention). One can also consider the case of zero magnetic field. Then, the solution of the Euler-Lagrange equations $$\begin{aligned}
\left\{
\begin{matrix}
\dot\psi_{\varphi} = \rho\tan(\psi-\varphi)\,\dot\psi_{\rho},\\
\dot\theta_{\varphi} = \rho\tan(\psi-\varphi)\,\dot\theta_{\rho}
\end{matrix}
\right.\end{aligned}$$ describes a spiral state, as shown in Fig. \[spinfactors\].
![Spatial profile of the skyrmionic solution.[]{data-label="fig:Sk"}](FigS4.pdf){width="0.32\linewidth"}
Solution for small $J'$
-----------------------
Now, let us study the stability of the skyrmionic solution and consider the case of a small exchange interaction with respect to DMI. The first order in the limit $\alpha\ll1$ implies $$\begin{aligned}
\left[ \rho^{2}\,\ddot\theta_{0} + \rho \dot\theta_{0} - \sin\theta_{0}\cos\theta_{0} \right]
\pm 4 \rho\sin\theta_{0}\cos\theta_{0}\, \theta_{1}
- \beta\rho^{2}\cos\theta_{0}\,\theta_{1}
=0 \,.\end{aligned}$$ The zeroth order solution leads to $$\begin{aligned}
\cos \theta_{0} \, \dot\theta_{0} = \pm \frac{\beta}{2} ~~~\text{and}~~~ \cos\theta_{0} \, \ddot\theta_{0} -\sin\theta_{0} \, \dot\theta_{0}^{2} = 0 \,.\end{aligned}$$ This results in $$\begin{aligned}
&\rho^{2}\frac{\sin\theta_{0}}{\cos\theta_{0}} \left(\frac{\beta}{2\cos\theta_{0}}\right)^{2} \pm \rho \frac{\beta}{2\cos\theta_{0}} - \sin\theta_{0}\cos\theta_{0} = - \beta \rho^{2} \cos\theta_{0} \, \theta_{1}
\notag\\
&\pm \left(\frac{\beta\rho}{2}\right)^{3}\frac{1}{\cos^{4}\theta_{0}} \pm \frac{\beta\rho}{2} \frac{1}{\cos^{2}\theta_{0}} \mp \frac{\beta\rho}{2} = - \beta \rho^{2} \, \theta_{1}
\notag\\
&\beta \rho^{2} \, \theta_{1} = \mp \left[ \left(\frac{\beta}{2} \rho \right)^{3} \frac{1}{\cos^{4}\theta_{0}} + \frac{\beta}{2} \rho \left( \frac{1}{\cos^{2}\theta_{0}} - 1 \right) \right] \notag\\
&\theta_{1} = - \frac{\beta}{4}\sin\theta_{0} \left[ \frac{1}{\cos^{4}\theta_{0}} + \frac{1}{\cos^{2}\theta_{0}} \right]
\,,\end{aligned}$$ provided $\cos\theta_{0}\neq0$. Therefore the total solution for the skyrmion $$\begin{aligned}
\theta = \theta_{0} - \frac{J'B'}{4D'^2}\sin\theta_{0} \left[ \frac{1}{\cos^{4}\theta_{0}} + \frac{1}{\cos^{2}\theta_{0}} \right]\end{aligned}$$ is stable in the two important regions where $\sin\theta_0=0$: around the center of the skyrmion and at the border. The divergence of the correction $\theta_1$ in the middle of the skyrmion, where $\cos\theta_0=0$, comes from the fact that the magnetization is poorly defined there, as discussed above.
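As a consistency check, one can verify numerically that this correction indeed cancels the first-order terms: substitute $\theta_0=\pi-\arcsin(\beta\rho/2)$ (the '+' sign branch, for which $\cos\theta_0\,\dot\theta_0=+\beta/2$) and the $\theta_1$ above into the first-order equation and confirm that the residual vanishes. The sketch below uses finite differences for the derivatives of $\theta_0$; the value $\beta=0.5$ is an illustrative assumption.

```python
import numpy as np

def first_order_residual(rho, beta=0.5, h=1e-4):
    """Residual of
        rho^2 th0'' + rho th0' - sin(th0) cos(th0)
          + 4 rho sin(th0) cos(th0) th1 - beta rho^2 cos(th0) th1
    on the inner branch th0 = pi - arcsin(beta*rho/2), with
    th1 = -(beta/4) sin(th0) (1/cos^4(th0) + 1/cos^2(th0))."""
    th0 = lambda r: np.pi - np.arcsin(beta * r / 2.0)
    t = th0(rho)
    # central finite differences for th0' and th0''
    d1 = (th0(rho + h) - th0(rho - h)) / (2.0 * h)
    d2 = (th0(rho + h) - 2.0 * t + th0(rho - h)) / h**2
    s, c = np.sin(t), np.cos(t)
    th1 = -(beta / 4.0) * s * (1.0 / c**4 + 1.0 / c**2)
    return (rho**2 * d2 + rho * d1 - s * c
            + 4.0 * rho * s * c * th1 - beta * rho**2 * c * th1)
```

Away from the ring $\cos\theta_0=0$, where $\theta_1$ diverges, the residual is zero up to finite-difference error.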
Jeanette Sawyer Cohen, PhD, clinical assistant professor of psychology in pediatrics at Weill Cornell Medical College in New York City
Pediatric Psychologist
How to Teach Independence?
How can I teach my toddler to do things independently?
You’ve probably become more patient since you started this whole parenthood thing. And you’re going to have to practice patience even more as your toddler learns to become more independent.
For example, she tells you she can’t finish the puzzle she’s doing. Instead of jumping right in and telling her which piece goes where, you’re going to have to tell her you’ll help a little. Go ahead and help, but let her do a lot of it herself, and make sure she’s the one to finish the job. That will give her a sense of accomplishment and the confidence to try again next time.
Remember that children each progress at their own rate. It’s not always fast — and there will be setbacks along the way. But the more you can allow them to do on their own without stepping in, the more they’ll be likely to try for themselves again and again.
You can make an appointment to meet with your Financial Aid counselor using Orange SUccess through MySlice. Once logged in, select 'Orange SUccess' under 'Advising' in the Student Services panel. Within your Orange SUccess portal, navigate to 'My Success Network,' select your financial aid advisor, and schedule an appointment at a day and time convenient for you.
If your counselor is not available at a time that suits your schedule, please call or visit our office to schedule an appointment with the next available counselor.
Friends of the Crow Collection: Adults/ Children ($10/ $3) || General Public: Adults/ Children ($18/ $5)
Otsukimi Celebration, 2012
The Japan America Society celebrates the full autumn moon each year with an outdoor picnic, Japanese music, and haiku poetry. Although not commonly observed in modern-day Japan, the moon viewing tradition dates back to the Heian Period (A.D. 794–1185), when the evening was marked with poetry and music by court aristocrats. The celebration later spread to warriors, townspeople, and farmers, and became a harvest festival.
Bring a picnic supper, beverage, and something to sit on as no food or drink will be sold at the event or pre-order an Obento from Mr. Sushi for $18 when purchasing your celebration tickets. Alcohol is not allowed at Winfrey Point, a City of Dallas park facility. For more information, visit jasdfw.org.
A small city in Iowa has taken action to save the bees from extinction. Acres of land were donated to increase the local habitats of the bees.
Over the past decade, bees have been steadily disappearing. Worker bees disappear, leaving behind the queen. With a few nursing bees to take care of the immature bees, a colo… Read More
To stay updated with the latest in the apiculture industry, you can visit our latest beekeeping news. On the other hand, if you are starting out and would like to begin professional beekeeping today, download a copy of our beekeeping for beginners ebook.
Beekeeping can be a full-time profession or a simple hobby. More often than not, though, what starts as a hobby turns into a profession. But you cannot simply decide one day to start beekeeping. Before starting any hobby or profession, you need adequate knowledge of the subject you are about to enter. If you've been putting off your interest in beekeeping for quite a while, then it's about time to indulge it. Bee farming may seem simple, but learning the basic beekeeping lessons can help you get off to a good start.
What does a beekeeper need to understand?
To start beekeeping on the right foot, you need a genuine interest in it. You will have to spend time taking care of your colonies of bees, and you must also be willing to share your home space with them. There are potential risks in beekeeping that can harm not only you but your family as well. Your focus should not be solely on making money by selling honey; a good beekeeper has passion and a keen interest in rearing bees.
An apiarist should know the right place for the beehives. If you decide to place your beehives in your backyard, you have to make sure beekeeping is allowed in your area. Beekeeping is restricted in several places, so you may need to obtain permission first.
Beekeepers must also know whether beekeeping supplies are available near where the beehives are situated. You never know when you will need to visit a local beekeeping shop, so it is best to have one within reach.
Knowing the right equipment and protective gear is equally important. Choose the right kind of suit to protect yourself from the hazards of beekeeping.
If you are unable to harvest honey from your bees, all your beekeeping efforts will be futile. A beekeeper should know the methods of gathering honey from the comb; beeswax is also part of the returns in beekeeping.
[Justin Timberlake & Chris Stapleton:]
Sometimes the greatest way to say something is to say nothing at all
Sometimes the greatest way to say something is to say nothing at all
Sometimes the greatest way to say something is to say nothing
But I can't help myself, no I can't help myself, no, no
Caught up in the middle of it
No I can't help myself, no I can't help myself, no, no, no
Caught up in the rhythm of it
[Justin Timberlake & Chris Stapleton:]
Sometimes the greatest way to say something is to say nothing at all
Sometimes the greatest way to say something is to say nothing at all
Sometimes the greatest way to say something is to say nothing
MacEwan International
MacEwan International promotes an internationally informed and cross-culturally sensitive learning environment. Our vision is to be a leader in internationalization, preparing all students, as well as faculty and staff, to succeed in and contribute to a global society and economy as members of an interconnected world community.
Araucaria clonal forestry: types of cuttings and mother tree sex in field survival and growth
Resumo:
Araucaria angustifolia (Bert.) O Kuntze (Paraná pine or Araucaria) is a promising native forestry species for Brazilian silviculture. However, a number of challenges and technical constraints persist, hindering its silvicultural expansion, among which are the lack of cloning technologies for superior genetic materials and of their assessment under field conditions. Thus, we evaluated the potential of araucaria plants derived from cuttings and seeds for timber production by assessing field survival, growth and strobilus production, using cuttings from male and female plants collected from different positions, compared with plants produced by sexual reproduction. Clones of male and female trees from different types of cuttings, together with seedlings, were planted at 3 x 3 m spacing. The experiment was conducted in a completely randomized design with single-tree plots and three treatments. Female clones and apical cuttings showed higher growth in diameter at breast height (6.4 cm) and total height (3.6 m) 74 months after planting, followed by seedlings and the other clones, which gave similar results. We conclude that the cuttings technique is a promising method for propagating araucaria for wood production, and it is favored by the use of apical cuttings from female mother trees.
Hey Everyone,
This is my first post on the board, and I'm glad to see there is a section specifically on Spanish wines as they've always been a favorite of mine!
I recently drank the wine mentioned in the title and absolutely loved it. The only problem is that I bought it in Sevilla, and haven't been able to find the exact wine at the local wine superstores. Any advice on how to find out if this wine is imported and how to get it?
Welcome to the board, MrB. A Website we commonly use when looking for an elusive bottle is wine-searcher.com, but unfortunately it turned up a no-find. Also checked my regional benchmark, Spec's in Houston, with the same result. You may need to settle for a reasonable sub.
Despite all you see in the stores, both Spain and Italy (the largest wine producers in the world) only export a relatively small number of wines to the U.S. or anyone else. Find something you like in Chi-town and enjoy.
On January 1, 2018, the most significant overhaul to the Internal Revenue Code in decades took effect. High-income taxpayers stand to benefit from lower tax brackets, higher estate tax exemptions and a less stringent alternative minimum tax. However, high-income earners face new limitations on some favored deductions and notable revisions in charitable write-offs. Some of the most noteworthy changes are…
Abstract
The entorhinal cortex receives a large projection from the piriform cortex, and synaptic plasticity in this pathway may affect olfactory processing. In vitro whole cell recordings have been used here to investigate postsynaptic signalling mechanisms that mediate the induction of long-term synaptic depression (LTD) in layer II entorhinal cortex cells. To induce LTD, pairs of pulses, using a 30-millisecond interval, were delivered at 1 Hz for 15 minutes. Induction of LTD was blocked by the NMDA receptor antagonist APV and by the calcium chelator BAPTA, consistent with a requirement for calcium influx via NMDA receptors. Induction of LTD was blocked when FK506 was included in the intracellular solution to block the phosphatase calcineurin. Okadaic acid, which blocks activation of protein phosphatases 1 and 2a, also prevented LTD. Activation of protein phosphatases following calcium influx therefore contributes to induction of LTD in layer II of the entorhinal cortex.
1. Introduction
The mechanisms that mediate the
induction of long-term synaptic potentiation (LTP) [1, 2] and depression (LTD) [3–5]
have been studied intensively within the hippocampus, but less is known about the
signalling mechanisms for LTP and LTD in the entorhinal cortex. Because the
entorhinal cortex receives highly processed inputs from sensory and association
cortices and also provides the hippocampal region with much of its sensory
input [6, 7], lasting changes in the strength of synaptic inputs to the entorhinal
cortex could alter the manner in which multimodal cortical inputs are
integrated, modulate the strength of transmission of specific patterns of sensory
input within the hippocampal formation, and contribute to mnemonic function [8–11]. Determining the effective stimulation
parameters and the intracellular signals that mediate synaptic plasticity in
the entorhinal cortex should allow insight into basic mechanisms that contribute
to the cognitive functions of the parahippocampal region.
Long-term potentiation of cortical inputs to
the superficial layers of the entorhinal cortex has been described in vivo [11–14] and in vitro [15, 16]. Stimulation
patterns required to induce LTP tend to be more intense in the entorhinal
cortex than in the hippocampus [12, 14], and we have also found that induction
of LTD in the entorhinal cortex requires intense low-frequency stimulation [17, 18]. In the hippocampus, conventional
1 Hz stimulation trains have been most
effective in slices taken from juvenile animals [19, 20] but are generally
ineffective in adult slices [21–23] and in intact animals ([31, 32], see also [33]).
Similarly, 1 Hz stimulation induces entorhinal LTD in slices from young animals
[28, 29] but is not effective in vivo [17] or in slices from older animals [18].
Repeated stimulation using pairs of pulses separated by a short 25- to 50-millisecond
interval can induce LTD more effectively in both the CA1 ([24–26], but see [27]) and
entorhinal cortex [17, 18, 33, 34]. In the CA1, the LTD induced by this stimulation
pattern is NMDA receptor-dependent, but it also depends upon activation of
local inhibitory mechanisms by the pulse-pairs [30, 31]. In the entorhinal cortex, however, repeated
paired-pulse stimulation using a 10-millisecond interval that evokes maximal paired-pulse
inhibition does not induce LTD, and LTD is induced when a 30-millisecond
interval is used that evokes maximal paired-pulse facilitation [17]. The LTD can also be enhanced when GABAA transmission is reduced with bicuculline [18]. This further suggests that LTD
in the entorhinal cortex does not require activation of local inhibitory
mechanisms but rather requires prolonged stimulation patterns that are strong
enough to overcome local inhibition and lead to NMDA receptor activation. Strong
local inhibition in the entorhinal cortex [8, 35] may thus place a restraint on
activity-dependent synaptic modification. Consistent with this idea is the
finding that the same pairing stimulation protocol that induces LTP in
hippocampus leads to LTD in entorhinal cortex [28].
Signalling mechanisms that mediate LTD in the
superficial layers of the entorhinal cortex share some similarities with NMDA
receptor-dependent LTD in the hippocampus. Long-term depression of superficial
layer inputs to layer II is dependent on NMDA receptor activation both in vivo
and in vitro [17, 18, 28, 33] but does not require activation of group I/II
metabotropic glutamate receptors ([18, 28], see [36, 37]). In the hippocampus, moderate
and prolonged influx of calcium via NMDA receptors activates calmodulin which leads
to LTD via activation of the protein phosphatase calcineurin (PP2b). Calcineurin
increases the activity of protein phosphatase 1 by reducing the activity of
inhibitor 1, and this can cause rapid reductions in AMPA-mediated responses [2, 38, 39]. Hippocampal LTD is expressed partly through the reduced conductance of
AMPA receptors caused by dephosphorylation of the GluR1 subunit by PP1 [2, 4], but
careful study has shown that calcineurin-dependent LTD in deep layer inputs to
layer II neurons in the young entorhinal cortex is not associated with a
reduced AMPA conductance, but rather involves internalization of AMPA receptors
and their proteasome-mediated degradation [28].
In the present study, the early postsynaptic
signalling mechanisms that mediate LTD in layer I inputs to layer II neurons of
the medial entorhinal cortex have been investigated using recordings of whole
cell excitatory postsynaptic potentials. Long-term depression was induced using
a prolonged paired-pulse stimulation pattern that was previously found to be
effective for induction of NMDA-receptor-dependent LTD [18]. Pharmacological agents applied to the
bathing medium or intracellular solution were used to assess the dependence of
LTD on calcium-dependent signalling mechanisms including the phosphatases calcineurin
and PP1/PP2a.
2. Experimental Procedures
2.1. Slices and Whole Cell Recordings
Experiments were performed on slices from male
Long-Evans rats (4 to 8 weeks old). Animals were anesthetized with halothane and brains were rapidly removed
and cooled (4°C) in oxygenated artificial cerebrospinal fluid (ACSF). ACSF
consisted of (in mM) 124 NaCl, 5 KCl, 1.25 NaH2PO4, 2
MgSO4, 2 CaCl2, 26 NaHCO3, and 10 dextrose and
was saturated with 95% O2–5% CO2. All chemicals were obtained
from Sigma (St. Louis, Mo, USA) unless otherwise indicated. Horizontal slices (300 μm) were cut with a
vibratome (WPI, Vibroslice NVSL, Sarasota, Fla, USA) and were allowed to recover for at least one hour before
recordings. Slices were maintained in a recording chamber with oxygenated ACSF flowing at 2.0 mL/min, and a temperature of 22 to 24°C was used to
minimize metabolic demands on slices [18, 28]. Neurons were viewed with an
upright microscope (Leica
DML-FS, Wetzlar, Germany) equipped with a 40x objective, differential interference
contrast optics, and an infrared video camera (Cohu, 4990 series, San Diego, Calif, USA).
2.2. LTD Induction and Pharmacology
Whole-cell current clamp recordings of EPSPs
were monitored 10 minutes before and 30 minutes after LTD induction by
delivering test-pulses every 20 seconds. Intensity was adjusted to evoke EPSPs
that were approximately 3 to 4 mV in amplitude, and cells were held 5 mV below
threshold when necessary to prevent the occurrence of spikes in response to
EPSPs. Stimulus parameters for LTD
induction were based on those used previously in vivo and in vitro
[17, 18]. The induction of LTD was tested using pairs of stimulation pulses (30-millisecond
interpulse interval) delivered at a frequency of 1 Hz for either 7.5 or 15 minutes
[18]. Control cells received test-pulses throughout the recording period and
did not receive conditioning stimulation.
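The induction protocol above is fully specified by three numbers (pairs delivered at 1 Hz, a 30-millisecond interpulse interval, and a 7.5- or 15-minute train), so the pulse timing can be sketched in a few lines. This is an illustrative script, not part of the original methods; the function name and defaults are ours.

```python
# Illustrative sketch (not from the original methods): timestamps, in seconds,
# for the paired-pulse LTD induction protocol described above -- pairs of
# pulses with a 30-millisecond interpulse interval, delivered at 1 Hz.

def paired_pulse_train(rate_hz=1.0, interpulse_s=0.030, duration_min=15.0):
    """Return pulse times for `duration_min` minutes of paired-pulse stimulation."""
    n_pairs = int(duration_min * 60 * rate_hz)  # one pair per cycle: 900 pairs at 15 min
    times = []
    for i in range(n_pairs):
        t0 = i / rate_hz                 # first pulse of the pair
        times.append(t0)
        times.append(t0 + interpulse_s)  # second pulse, 30 ms later
    return times

pulses = paired_pulse_train()
print(len(pulses))  # 1800 pulses (900 pairs) over the 15-minute train
```

With `duration_min=7.5`, the same function yields the shorter 900-pulse train that, as reported below, failed to induce LTD.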
Signalling mechanisms mediating the induction
of LTD were tested using stock solutions of pharmacological agents that were
stored frozen and diluted on the day of use. NMDA glutamate receptors were
blocked by constant bath application of 50 μM DL-2-amino-5-phosphonovalerate
(APV). The calcium chelator 1,2-bis(2-aminophenoxy)-ethane-N,N,N′N′-tetraacetic
acid (BAPTA, 10 mM) was included in the recording electrode solution to block
increases in intracellular calcium. To block activation of the
calmodulin-dependent protein phosphatase calcineurin (PP2b), slices were pre-exposed
to 250 μM cyclosporin A (Toronto Research Chemicals Inc., North York, Ontario, Canada) for 1.5 to 3 hours [39]. In other
experiments, FK506 (50 μM) was included in the recording electrode solution to
block calcineurin [39, 40]. In further experiments, okadaic acid (0.1 or 1.0 μM)
was included in the recording solution to block activation of protein phosphatases
1 and 2a [40, 41]. Control recordings without paired-pulse stimulation were
used to verify the stability of recordings in cells filled with FK506 and 1.0 μM
okadaic acid.
2.3. Data Analysis
Synaptic
responses and electrophysiological properties of layer II neurons were analyzed
using the program Clampfit 8.2 (Axon Instr.). Data were standardized to the mean of baseline responses for plotting
and were expressed as the mean ± SEM. Changes in EPSP amplitude were assessed
using mixed-design ANOVAs and Newman–Keuls tests that compared the average responses
during the baseline period, 5 minutes after conditioning stimulation, and during
the last 5 minutes of the recording period.
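As a rough illustration of the normalization just described (this code is ours, not from the study, and the example values are invented), responses can be expressed as a percentage of the baseline mean and summarized as mean ± SEM:

```python
# Sketch of the analysis described above: standardize EPSP amplitudes to the
# mean of the baseline responses and summarize as mean +/- SEM.
# Example values are invented for illustration.
import statistics

def normalize_to_baseline(amplitudes_mv, n_baseline):
    """Express each response as a percentage of the baseline mean."""
    baseline_mean = statistics.mean(amplitudes_mv[:n_baseline])
    return [100.0 * a / baseline_mean for a in amplitudes_mv]

def mean_sem(values):
    """Mean and standard error of the mean."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / len(values) ** 0.5
    return m, sem

# Toy recording: three baseline sweeps around 4 mV, then depressed responses.
epsp_mv = [4.0, 4.1, 3.9, 2.4, 2.3, 2.5]
norm = normalize_to_baseline(epsp_mv, n_baseline=3)
m, sem = mean_sem(norm[3:])
print(round(m, 1), round(sem, 2))  # post-conditioning responses near 60% of baseline
```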
Layer II neurons
were classified as putative stellate
or nonstellate neurons based on electrophysiological characteristics described
by Alonso and Klink [42]. Stellate neurons were characterized by the presence
of low-frequency subthreshold membrane potential oscillations, a depolarizing
afterpotential following spikes, and prominent inward rectification in response
to hyperpolarizing current pulses. Both
pyramidal and stellate neurons in layer II can show inward rectifying sag
responses [43]. Here, neurons recorded were clearly in layer II, usually near
the border with layer I, and a proportion of these neurons did not show clear
sag and were classified as pyramidal neurons. Input resistance was
determined from the peak voltage response to −100 pA current pulses (500-millisecond
duration), and rectification ratio was quantified by expressing peak input
resistance as a proportion of the steady-state resistance at the end of the current
pulse.
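The two measures defined in this paragraph reduce to Ohm's law (R = V / I). A minimal sketch (ours, not from the paper, with illustrative values):

```python
# Sketch of the measures defined above (illustrative, not from the paper):
# input resistance from the voltage response to a -100 pA current pulse, and
# rectification ratio = peak input resistance / steady-state input resistance.

def input_resistance_mohm(deflection_mv, current_pa=-100.0):
    """Input resistance in megaohms from a voltage deflection and current step."""
    return (deflection_mv * 1e-3) / (current_pa * 1e-12) / 1e6  # R = V / I

def rectification_ratio(peak_mv, steady_mv, current_pa=-100.0):
    """Peak resistance expressed as a proportion of steady-state resistance."""
    return (input_resistance_mohm(peak_mv, current_pa)
            / input_resistance_mohm(steady_mv, current_pa))

# Toy stellate-like cell: -13 mV peak deflection sagging back to -9.5 mV.
print(round(input_resistance_mohm(-13.0), 1))      # 130.0 megaohms
print(round(rectification_ratio(-13.0, -9.5), 2))  # about 1.37, a stellate-like sag
```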
3. Results
Stable recordings were obtained from 57 putative
stellate neurons and 21 putative nonstellate cells. Peak input resistance was
similar in stellate and pyramidal neurons (stellate, 95 ± 6 MΩ; pyramidal, 96
± 10 MΩ) but there was a much larger sag in voltage responses to hyperpolarizing
current injection in stellate cells (rectification ratio 1.37±0.04 in stellate
cells versus 1.06±0.01 in pyramidal cells). The amplitude of baseline synaptic
responses evoked by layer I stimulation was similar in stellate (3.9±0.2 mV)
and pyramidal cells (3.7±0.4 mV), and the amount of depression induced was
also similar for recording conditions in which significant LTD was obtained (71.2±5.6% in 14 stellate and 76.8±7.6% in 6 pyramidal cells).
3.1. LTD Induction
To determine if a relatively brief LTD induction protocol could be used to induce LTD in whole-cell recordings, the first tests attempted to induce LTD using paired-pulse delivery at 1 Hz for 7.5 minutes (n = 10), which can induce moderate LTD of field potentials in a gas-fluid interface recording chamber [18]. Paired-pulse stimulation for 7.5 minutes did not induce depression of EPSPs relative to control cells (93.0 ± 10.0% of baseline after 30 minutes; F(2,28) = 0.09, P = .92). We previously observed stronger LTD of field potentials in the interface recording chamber after 15 minutes versus 7.5 minutes of paired-pulse stimulation [18], and prolonged paired-pulse stimulation for 15 minutes also reliably induced LTD of whole-cell EPSPs (n = 7, Figure 1). EPSP amplitude was reduced to 56.3 ± 9.5% of baseline levels 5 minutes after the conditioning stimulation, and remained at 58.6 ± 6.1% of baseline levels at the end of the 30-minute follow-up period (F(2,22) = 14.2, P < .001). Responses in control cells were stable (n = 6), and remained at 99.6 ± 2.6% of baseline levels at the end of the recording period (Figures 1(b2), 1(c)).
Figure 1: Prolonged, low-frequency stimulation induces long-term depression of
EPSPs in neurons in layer II of the entorhinal cortex. (a) The location of stimulating and recording
electrodes in acute slices containing the entorhinal cortex. (b) and (c) Long-term
depression was induced by repetitive delivery of pairs of stimulation pulses at a rate of 1 Hz for 15 minutes (PP-LFS). The amplitude
of synaptic responses remained stable in control cells that did not receive conditioning
stimulation. Traces in (b) compare responses
recorded during the baseline period (1) and during the follow-up period (2) in a
neuron that received low-frequency stimulation (b1) and in a control cell (b2). Responses were obtained at the times indicated in (c).
Averaged points in (b) indicate the mean ±1 SEM in this
and subsequent figures. (d) Long-term depression was not
reliably induced when low-frequency stimulation was delivered for only 7.5 minutes
rather than 15 minutes, indicating that induction of LTD requires prolonged
stimulation.
3.2. NMDA Receptors and Postsynaptic Calcium
The NMDA receptor antagonist MK-801 blocks
induction of LTD in the entorhinal cortex in vivo [17] and the NMDA receptor blocker APV has been shown to prevent LTD
of field potentials and EPSPs in entorhinal cortex slices [18, 28, 33]. We therefore tested for
the NMDA receptor-dependence of LTD of EPSPs in the current preparation using
constant bath application of APV (50 μM). Induction of LTD by 15 minutes of paired-pulse stimulation was blocked by APV (n = 6, Figure 2(a)). There was a tendency for responses to be potentiated immediately following conditioning stimulation, but this variable effect was not statistically significant, and responses were close to baseline levels at the end of the recording period (96.7 ± 13.2% of baseline; F(2,10) = 2.99, P = .09).
Figure 2: The induction of long-term
depression is dependent on activation of NMDA glutamate receptors and on
increases in postsynaptic calcium. (a) Constant bath application of the
NMDA receptor antagonist APV (50 μM) blocked the induction of long-term
depression by 15 minutes of paired-pulse low-frequency stimulation (PP LFS). (b) Blocking increases in
postsynaptic calcium by including the calcium chelator BAPTA (10 mM) in the recording
electrode solution also blocked the induction of LTD. The transient facilitation of EPSPs immediately
following stimulation was significant for the BAPTA condition but not the APV
condition, and responses were at baseline levels at the end of the recording
periods. The block of lasting depression suggests that calcium influx via NMDA
receptors is required for induction of LTD.
The role of postsynaptic calcium in LTD induction was tested by recording from cells in which the calcium chelator BAPTA (10 mM) was included in the recording electrode solution (n = 6, Figure 2(b)). Cells filled with BAPTA had longer-duration action potentials than control cells (6.1 ± 0.7 versus 3.3 ± 0.1 milliseconds measured at the base; t(1,9) = 3.57, P < .01), consistent with a reduction in calcium-dependent potassium conductances. The induction of LTD was blocked in cells loaded with BAPTA. There was a significant increase in the amplitude of EPSPs immediately following paired-pulse stimulation (to 122.3 ± 6.0% of baseline; F(2,10) = 5.46, P < .05; N–K, P < .05), but responses returned to baseline levels within 10 minutes and were at 94.8 ± 7.1% of baseline levels after 30 minutes (N–K, P = .50, Figure 2(b)). An increase in postsynaptic calcium is therefore required for induction of LTD in layer II neurons of the entorhinal cortex.
3.3. Protein Phosphatases
The role of the calmodulin-dependent protein phosphatase
calcineurin (PP2b) in LTD in layer II neurons was tested using either
pre-exposure to 250 μM cyclosporin A in the bathing medium [39], or by including 50 μM FK506 postsynaptically in the recording electrode solution. In cells pre-exposed to cyclosporin A, paired-pulse stimulation was followed by a depression in EPSP amplitude that reached 82.4 ± 7.5% of baseline levels after 30 minutes (Figure 3(a)). Although the depression in the cyclosporin group was not statistically significant (F(2,10) = 3.51, P = .07, n = 6), the depression obtained was also not significantly less than that observed in control ACSF (F(1,11) = 3.79, P = .08). The result was therefore ambiguous with respect to the role of calcineurin in LTD. To test the involvement of calcineurin more definitively and to avoid potential presynaptic effects, the calcineurin blocker FK506 was included in the recording electrode solution for additional groups of cells [40]. Responses in cells filled with FK506 showed a significant potentiation immediately following paired-pulse stimulation (n = 8), but there was no lasting change in response amplitudes in comparison to control cells filled with FK506 that did not receive conditioning stimulation (n = 7). Responses were increased to 134.9 ± 10.5% of baseline levels immediately following paired-pulse stimulation (F(2,26) = 7.71, P < .01; N–K, P < .001; n = 8) but returned to 102.2 ± 6.1% of baseline levels after 30 minutes (Figure 3(b)).
Figure 3: Long-term depression is dependent
on activation of the calmodulin-dependent protein phosphatase calcineurin.
Although LTD was only partially inhibited by pre-exposure to cyclosporin A, it
was completely blocked when FK506 was included in the recording electrode
solution. (a) Pre-exposure of slices to the calcineurin inhibitor cyclosporin
A (250 μM) for 1.5 to 3 hours resulted in a partial block of LTD by repeated
paired-pulse stimulation. The amount of LTD induced was smaller than in control
ACSF and was close to statistical significance (n = 6, P = .07). (b) Including FK506 in the recording electrode solution to directly block
postsynaptic calcineurin prevented the induction of LTD. Analysis of group responses showed a
significant increase in responses during the baseline period, but responses in
control cells indicate that this increase is transient and unlikely to have
affected measurement of LTD. Inhibition of postsynaptic calcineurin therefore
prevents induction of LTD in layer II cells of the entorhinal cortex.
Inspection of averaged responses suggested that
there was an initial increase in responses during the baseline period among
cells filled with FK506, and comparison of responses recorded during the first
and last minutes of the baseline period showed that the increase was
significant (t(14) = 3.09, P < .01).
Interestingly, then, interfering with calcineurin function can lead to enhanced
basal synaptic transmission in entorhinal neurons. This increase is not likely
to have affected measures of LTD in conditioned cells, however, because control
responses showed only a transient increase after which responses remained stable.
Protein phosphatase 1 is thought to contribute directly
to suppression of hippocampal EPSPs during LTD by dephosphorylation of the GluR1
AMPA receptor subunit. The contribution of PP1 to LTD in the entorhinal cortex was therefore tested by including okadaic acid in the recording electrode solution. In early experiments, a low concentration of 0.1 μM okadaic acid [41] did not block LTD induction, and responses were depressed to 72.7 ± 8.7% of baseline levels at the end of the recording period (F(2,24) = 4.65, P < .05; N–K, P < .001; n = 8). However, increasing the concentration of okadaic acid to 1.0 μM [40] blocked the induction of LTD. There was a variable and nonsignificant reduction in responses immediately following conditioning stimulation (to 89.0 ± 14.9% of baseline), and responses were also near baseline levels after 30 minutes (96.0 ± 6.6% of baseline; F(2,22) = 0.18, P = .84; n = 7; Figure 4). Activation of PP1 is therefore likely to contribute
to mechanisms of LTD in the entorhinal cortex.
Figure 4: The induction of LTD was blocked in
a dose-dependent manner by including okadaic acid in the recording electrode
solution to block activation of protein phosphatase 1 (PP1). (a)
and (b) A low concentration of 0.1 μM okadaic acid failed to block LTD
induction, but raising the concentration to 1.0 μM resulted in a block of LTD
induction (compare traces in A1 versus A2). Responses in
control cells filled with 1.0 μM okadaic acid that did not receive conditioning
stimulation remained stable. The block of LTD by okadaic acid suggests that activation
of PP1 mediates LTD in the entorhinal cortex.
4. Discussion
The current paper has used prolonged repetitive
paired-pulse stimulation to induce LTD in layer I inputs to layer II neurons of
the medial entorhinal cortex and has determined the early postsynaptic signals
that mediate LTD in these cells. Consistent with previous observations, the LTD
observed here was obtained in both putatively identified stellate [28] and
pyramidal [44] cells. The induction of LTD was blocked by the NMDA glutamate
receptor antagonist APV, and by the calcium chelator BAPTA, indicating that
calcium influx via NMDA receptors is required for LTD. The induction of LTD was
also blocked by the calcineurin inhibitor FK506, and by okadaic acid which
blocks activation of protein phosphatases 1 and 2a. Calcineurin is required for
LTD of deep layer inputs to layer II stellate cells [28], and
calcineurin-dependent activation of PP1 contributes to NMDA receptor-dependent
LTD of AMPA responses in the hippocampus [2, 4].
The dependence of LTD in the entorhinal cortex
on activation of NMDA receptors has been a consistent finding in vivo and in
slices. It has been observed following stimulation protocols including 1 Hz trains,
pairing of presynaptic stimulation at 0.33 Hz with postsynaptic depolarization
[28], repeated paired-pulse stimulation [18, 33], and spike-timing-dependent
induction of LTD [44]. Long-term depression was blocked by including the calcium
chelator BAPTA in the recording electrode solution (Figure 2) [28], and this is
consistent with calcium influx via NMDA receptors as a critical trigger for
entorhinal LTD. Metabotropic glutamate receptor activation and release of
calcium from intracellular stores can contribute to LTD in the hippocampus [2, 36, 37, 45], but activation of metabotropic glutamate receptors is not required
for entorhinal LTD [18, 28]. Calcium influx through voltage-gated calcium
channels can contribute to spike-timing-dependent LTD in the entorhinal cortex,
however. Cells with broadened action potentials that result in larger calcium transients show greater NMDA
receptor-dependent spike-timing-dependent LTD in layer II-III cells [44]. Calcium influx through voltage-gated
channels also mediates bidirectional spike-timing-dependent plasticity of
inhibitory synapses in entorhinal cortex [46]. A form of long-term depression
on layer V-VI neurons, expressed presynaptically through reduced transmitter
release, is also dependent on activation of voltage-dependent calcium channels
[33]. Calcium signalling mediated by voltage-gated channels therefore plays a
number of roles in modulating synaptic plasticity in the entorhinal cortex.
The contribution of the
calmodulin-dependent protein phosphatase calcineurin to LTD was tested by
incubating slices in cyclosporin A or by including FK506 in the recording
electrode solution. Cyclosporin A appeared to cause a partial block of LTD, and
responses were reduced to 82.4% of baseline as compared to 58.6%
in untreated cells (compare Figures 1(c)
and 3(a)), but the sizes of these LTD effects were not statistically different.
We obtained a more conclusive result with FK506, however, and LTD was
completely blocked by including FK506 in the recording electrode solution. Including
FK506 in the bathing medium has been used to block calcineurin-dependent
depression effects in entorhinal cortex [28], and in excitatory [47] and
inhibitory [48] synapses of the CA1 region. Here, we have loaded FK506 into the
recording electrode solution to avoid possible presynaptic effects of the drug
and to ensure that FK506 could act on calcineurin [39, 40, 49, 50]. The
block of LTD by FK506 indicates that LTD is dependent on calcineurin, and this
suggests that cyclosporin A resulted in only a partial block of calcineurin
activity.
Calcineurin is thought
to mediate expression of LTD in part by dephosphorylating inhibitor 1 and
thereby increasing the activity of PP1 [2, 4, 39]. The PP1/PP2a inhibitor okadaic
acid blocks LTD in the CA1 region [38, 40], and we have shown here that the induction
of LTD in the entorhinal cortex was blocked by including okadaic acid in the
recording electrode solution. This is the first report of LTD in the entorhinal
cortex dependent on PP1/PP2a. Protein phosphatases can regulate synaptic
function through a variety of mechanisms [51] that include dephosphorylation of
the ser-845 residue on the AMPA GluR1 subunit, and LTD in the entorhinal cortex
may be expressed partly through this mechanism. In addition, the work of Deng
and Lei [28] has found entorhinal LTD to be associated with a reduction in the
number of postsynaptic AMPA receptors, with no change in AMPA receptor
conductance, and has shown that this effect is dependent on proteasomes that degrade
AMPA receptors internalized through ubiquitination. As in the hippocampus,
therefore, entorhinal LTD can be expressed through mechanisms involving
trafficking of AMPA receptors [52].
Long-term depression was induced here using
strong repetitive paired-pulse stimulation which we have used previously to
induce LTD in the entorhinal cortex in vivo and in slices ([17, 18], see also [33, 34]). LTD was induced following 15 minutes, but not 7.5 minutes of paired-pulse
stimulation; this is consistent with a requirement for prolonged activation of
calcium-dependent signalling mechanisms, and is also consistent with the
possibility that NMDA receptor-dependent metaplastic changes early in the train
may promote LTD induced by stimuli that occurred later in the 15-minute
duration trains [53]. We previously found 1 Hz stimulation to be ineffective in
vivo and in slices from Long-Evans rats [17, 18], but deep layer inputs to
stellate neurons in slices from 2 to 3 week-old Sprague-Dawley rats express
NMDA receptor-dependent LTD following 15 minutes of 1 Hz stimulation, or
following low-frequency stimulation paired with postsynaptic depolarization [28].
Thus, there may be developmental, strain-related, or pathway-specific factors
that affect the ability of 1 Hz stimulation to activate these signalling mechanisms.
The entorhinal cortex
is embedded within the temporal lobe through an extensive array of anatomical
connections [7] and has been linked behaviorally to a variety of sensory and
cognitive functions (e.g., [9, 10]). Lasting synaptic plasticity in the
entorhinal cortex is therefore likely to serve a variety of functions depending
on the synaptic pathways involved. Synaptic depression effects are generally thought
to complement synaptic potentiation during the formation of memory [45, 54–56], and it is possible that depression effects contribute to short and/or
long-term memory processing. However, the laminar architecture of the
entorhinal cortex, with superficial layers mediating much of the cortical input
to the hippocampal formation, suggests that long-term depression of synaptic
transmission in layer II may lead to long-term reductions in the salience of
particular elements or patterns of cortical input and may thus lead to lasting
changes in the multimodal inputs processed by the hippocampal formation. Similarly, the general resistance of the
entorhinal cortex to induction of LTD could serve to maintain relatively stable
information processing and integration of multimodal sensory inputs within the
medial entorhinal cortex.
Acknowledgments
This research was funded by grants to C. A.
Chapman from the Natural Sciences and Engineering Research Council of Canada
and the Canada Foundation for Innovation, and by a postdoctoral fellowship to
S.K. from Fondation Fyssen (France). C.A. Chapman is a
member of the Center for Studies in Behavioral Neurobiology funded by the Fonds
pour la Recherche en Santé du Québec.
A. Alonso, M. de Curtis, and R. Llinás, “Postsynaptic Hebbian and non-Hebbian long-term potentiation of synaptic efficacy in the entorhinal cortex in slices and in the isolated adult guinea pig brain,” Proceedings of the National Academy of Sciences of the United States of America, vol. 87, no. 23, pp. 9280–9284, 1990.
S. M. Dudek and M. F. Bear, “Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade,” Proceedings of the National Academy of Sciences of the United States of America, vol. 89, no. 10, pp. 4363–4367, 1992.
M. F. Bear, “A synaptic basis for memory storage in the cerebral cortex,” Proceedings of the National Academy of Sciences of the United States of America, vol. 93, no. 24, pp. 13453–13459, 1996.
|
{
"pile_set_name": "pile-cc"
}
|
Medical Designers Save Time, Parts With Software
HIGH STRESS: The sensor in Tensys’ new blood-pressure
monitor floats within a rigid frame attached to a serpentine arm designed
to flex. Engineers used SolidWorks and COSMOSWORKS to design the arm and
see how it would perform under real-world flexing, saving prototypes.
“Faster, cheaper, better” is a phrase that may have its genesis in the
aerospace industry, but it has launched breakthrough designs in other industries
as well. Case in point: the medical industry, where device manufacturers have
used CAD and FEA to give flight to new design ideas while slashing
product-development time.
Two recent examples, one in the U.S. and one in the U.K., show the improvements and time/cost savings engineers have realized from their use of engineering software.
In Camberley, U.K., the design team at reseller Williams Medical Supplies set up a new technical development department to design medical products and settled on Solid Edge (UGS) as its core design tool. “Our aim is to take a routine surgical instrument and bring something new to the design that will provide significant benefit,” says Robert Steele, technical development director. Among their projects: the Opmaster Series 4 surgery operating table. The enhanced design allows a patient to be examined, operated on, and recover on the same table.
Using Solid Edge, the team modeled parts and ran collision and interference analyses and motion simulation. “We were able to sit with sales and marketing during the initial design stage and get feedback,” says Steele. “Engineers don’t always get it right, but with the software we could rehearse the design and minimize the risk of getting it wrong.”
The result was a major time savings, largely the result of not doing many prototypes. “In fact, we were almost able to dispense with the prototyping phase and go straight to manufacturing,” Steele asserts.
Working with different software and on a vastly different product line, engineers at San Diego-based Tensys Medical, Inc. had similar results.
Using SolidWorks for CAD and COSMOSWorks for FEA (both from Dassault Systemes, Inc.) the engineering team developed the T-Line[R] Tensymeter, a non-invasive arterial blood-pressure management system for use in surgery. Their goal was to replace the traditional cuff-based monitors that provide only intermittent measurements every few minutes. That kind of irregular monitoring can delay recognition of rapid changes in blood pressure.
The design concept uses an actuator to move a sensor over the patient’s wrist to find the best position for producing a continuous waveform. The sensor has to float within a rigid frame attached to a serpentine arm that’s designed to flex. The team used COSMOSWorks to identify areas of high stress for the olefin-based serpentine arm and used the analysis results to make design modifications.
Among those modifications, says Senior Engineer Russ Hempstead: “We removed the stress risers, added radii, and added thickening sections.” He and the engineering team didn’t expect to see the stress risers, but when they did, they applied static flexing to part of the serpentine arm so they could move it the way it would move in the real world. The software enabled the team to shorten the design cycle by 60 percent. They cut manufacturing time by four percent, material costs by three percent, and labor costs by three to four percent. In all, engineers ran seven or eight analysis studies in just a couple of days.
Hempstead says the team cut the number of parts by putting as much functionality in a part as possible. It’s a strategy he recommends to others. “It may complicate tooling, but it still eases manufacturing,” he says. “We got rid of four components.”
Industrial workplaces are governed by OSHA rules, but this isn’t to say that rules are always followed. While injuries happen on production floors for a variety of reasons, of the top 10 OSHA rules that are most often ignored in industrial settings, two directly involve machine design: lockout/tagout procedures (LO/TO) and machine guarding.
Focus on Fundamentals consists of 45-minute on-line classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts and all are archived. So if you can't attend live, attend at your convenience.
|
{
"pile_set_name": "pile-cc"
}
|
short party dresses 2017
(119)
Its luxurious style and high quality will definitely meet your needs. With a massive discount, you may be the lucky one to receive a top-selling short party dress of 2017 at a low price. This inexpensive and impressive item would also make an ideal gift for a friend. Ready to get the newest short party dresses of 2017 now? You can also browse our site and buy other great items for yourself. Our sophisticated short party dresses of 2017, offered in different colors and sizes, will suit most people’s tastes. All of these goods are listed in our online store; pick any one you like and buy it today.
|
{
"pile_set_name": "pile-cc"
}
|
How Idris Elba's 'Luther' Puts Us in the Mindset of a Renegade Detective
"Luther" is a series about righteous indignation. Yes, it's a police drama, a dark (sometimes ludicrously so) crime saga set in a moody London with a greater and grimmer murder rate to equal that of other bleak procedurals.
Yes, it's a police drama, a dark (sometimes ludicrously so) crime saga set in a moody version of London with a greater and grimmer murder rate to equal that of other bleak procedurals. But the satisfaction of seeing those cases solved, those murderers and kidnappers caught, is muted, secondary to the suffering and sacrifice and validation of protagonist John Luther, the detective played by Idris Elba with a staggering display of movie star charisma that seems like it ought to produce static shocks with everything with which he comes into contact. Luther's devoted to his job with an obsessiveness that's destroying him, that, as the series began in 2010, had ended his marriage and eaten him up inside, changing him. He's good at what he does, if prone to extremes, and yet he seems to be perpetually doubted, maligned and hurt because of it.
In season one, Luther was framed for the murder of his beloved wife and forced to run from his fellow officers, and it's not the only time in the series he's a suspect. In season two he's treated like a career contaminant by a new, ambitious, by-the-books officer assigned to report to him. And in the four-episode third season airing on BBC America from September 3 through 6, that former colleague, DS Erin Gray (Nikki Amuka-Bird), is targeting him as part of an investigation of police corruption with DSU George Stark (David O'Hara), who may be a little obsessive himself. Aside from his sidekick DS Justin Ripley (Warren Brown), few seem to appreciate Luther and his incredible abilities -- instead, he's infamous, the rest of the police force apparently all too ready to believe he's capable of dark things.
We, as viewers, don't, because of Idris Elba. John Luther is Elba's best role since that of the fascinatingly savvy Stringer Bell in "The Wire," because it showcases the actor's utterly assured presence, his air of rakishly rumpled confidence in his tweed coat. Luther does not have swagger, he has conviction, conviction that informs his every -- frequently correct -- move. It's why it's so easy to trust him in a way that the characters working with him don't, and not without reason. When the series began in 2010, it was with Luther letting a pedophile fall to what could have been his death after extracting from him information about the location of the girl he'd kidnapped. It didn't doom his career -- he got lucky -- but he hasn't really changed. He even threatens a suspect with a similar fate toward the start of the new season -- but the move doesn't come across as harsh. We're more worried, when it happens, that it'll get him in trouble again.
"Luther" is mesmerizing because of Elba, and because the show is so consumed by his performance that it becomes not one about a maverick cop but instead one of a man outpacing the justice system he's allegedly a part of, one that hampers him with its pesky rules, its politics and its skeptics. It encourages us to buy into his worldview, in which he should just be allowed to do his job and get justice done, though that may mean covering up crimes or allowing culprits he's judged deserving to go free -- like Alice Morgan (Ruth Wilson), his psychopathic superhero of a friend, and a wonderful, preposterous character who's essentially too enjoyable to be locked up. Luther's tactics make him so dangerous to the people around him that the case Stark tries to build against him is based on the peripheral body count rather than evidence, and when, in the new season, he starts a tentative romance with Mary Day (Sienna Guillory), a woman another character dismissively sums up as a "pixie," it's accompanied by a sense of dread.
The series comes close to confronting the nature of its protagonist in the new season, introducing a grieving man who turns to vigilantism and gathers public support for his actions as he starts targeting rapists and killers who've gotten off lightly. Confronting Luther on opposite sides of a canal, the man says "One out of five murders are committed by men on bail," and demands to know why nothing is being done about it. "It's complicated," Luther replies. "No, it's not," says the man. "No... it's not. You've got me there," Luther admits. The difference is that, while Luther may bend the rules to fit his ideas about crime and punishment, he doesn't do so looking for outside approval the way the antagonist he's facing down does -- the opposite, really. Instead, it's the viewers who seethe on his behalf and yearn for his efforts to continue, and it's that conflicting emotion far more than the procedural aspects that lifts "Luther" above the plethora of similarly lurid recent dark crime dramas it resembles.
|
{
"pile_set_name": "pile-cc"
}
|
The verbals: sports quotes of 1994
There are no small accidents on this circuit. Ayrton Senna, before the San Marino Grand Prix, during which he suffered a fatal crash.
One of my best friends has been killed on the curve where I escaped death. I was lucky; he wasn't. It's like having a cheque book. You start pulling out the pages until one day no pages are left. He was the one driver so perfect nobody thought anything could happen to him. Gerhard Berger, Formula 1 driver, on Ayrton Senna.
It was at the bottom of our hearts to dedicate this victory to our great friend, Ayrton Senna. He was also heading for his fourth title. Claudio Taffarel, Brazil's goalkeeper, following victory in the World Cup final.
There will never be another Senna. The poet of speed is dead. El Diario, Bolivian sports newspaper.
Senna was the greatest driver ever and when someone like him is killed you have to ask yourself what is the point of it all. Niki Lauda.
When I saw him crash and realised there was no way he was going to be able to continue the race, I cheered with joy. I thought: `He'll be home earlier tonight'. Adriane Galisteu, Senna's girlfriend.
|
{
"pile_set_name": "pile-cc"
}
|
From 1 July 2018, the Tax Office is advising Australians that if it finds an error in their tax return or activity statement, it will not impose a penalty; instead, it will advise them of the error and how to get it right next time.
Penalty relief will only apply to eligible taxpayers or entities (i.e., turnover of less than $10 million) every three years.
Eligible individuals will only be given penalty relief on their tax return or activity statement if they make an inadvertent error because they either:
– took a position on income tax that is not reasonably arguable, or
– failed to take reasonable care
The ATO will not provide penalty relief when individuals have (in the past three years):
– Received penalty relief
– Avoided tax payment or committed fraud
– Accrued taxation debts with no intention of being able to pay (i.e., phoenix activity)
– Been penalised previously for reckless or intentional disregard of the law
– Participated in the management or control of another entity which has evaded tax.
Individuals cannot apply for penalty relief. The ATO is reminding individuals that it will provide relief during an audit should it apply.
Penalty relief will not be applied to:
– Wealthy individuals and their businesses
– Associates of wealthy individuals (that may be deemed a small business entity in their own right)
– Public groups, significant global entities and associates
Penalty relief will also not be applied to certain taxes, such as fringe benefits tax (FBT) or super guarantee (SG).
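The eligibility rules above reduce to a short series of checks. As a rough illustration only (the function, group names, and conduct categories below are assumptions made for this sketch, not ATO terminology or any real system), they could be expressed like this:

```python
# Illustrative sketch of the penalty-relief eligibility rules described above.
# All names here are assumptions for the example, not ATO terminology.

SMALL_ENTITY_TURNOVER_CAP = 10_000_000  # eligible entities: turnover under $10 million

# Conduct in the past three years that rules out relief
DISQUALIFYING_CONDUCT = {
    "received_penalty_relief",
    "tax_avoidance_or_fraud",
    "phoenix_activity",
    "reckless_or_intentional_disregard",
    "controlled_entity_that_evaded_tax",
}

# Groups excluded from relief regardless of conduct
EXCLUDED_GROUPS = {
    "wealthy_individual",
    "associate_of_wealthy_individual",
    "public_group_or_significant_global_entity",
}


def eligible_for_penalty_relief(turnover, group, recent_conduct):
    """Return True if the entity could receive relief for an inadvertent error."""
    if group in EXCLUDED_GROUPS:
        return False
    if turnover >= SMALL_ENTITY_TURNOVER_CAP:
        return False
    # Any disqualifying conduct in the past three years rules relief out
    return not (DISQUALIFYING_CONDUCT & set(recent_conduct))
```

Even an entity that passes these checks receives relief only for an inadvertent error found during an audit; it is applied by the ATO, not claimed.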
|
{
"pile_set_name": "pile-cc"
}
|
The Huffington Post: Top Design Destinations for 2017
2017-02-23
By Janette Ewen
Ever since Frank Gehry’s spectacular Guggenheim Bilbao put its sleepy namesake city on the radar of architecture buffs two decades ago, design has become an integral aspect of travel and tourism, joining food, culture and climate when it comes to visitor draws. This year, the list of destinations sure to entice design fans includes spots from the West Indies to North Africa. They offer a wide range of aesthetic attractions, from cutting-edge urban design to exquisite historical gems.
OLD HAVANA, NEW URGENCY
Whether the recent detente between the United States and Cuba will result in an onslaught of American visitors to the island or not, Canadians aren’t waiting to find out: According to KAYAK, a world-leading travel search engine, Havana is one of the year’s top 10 trending destinations among travellers from the Great White North, whose online inquiries about the city skyrocketed by 230 percent compared to last year. In anticipation of more visitors, hotels in Havana are being modernized and restaurants given new polish, but it’s the bustling metropolis’ status as a living design museum that no doubt appeals to most foreigners. For architecture fans, hotels like the Nacional offer glimpses into long-gone eras, while automobile buffs would be hard-pressed to find a greater parade of vintage cars. Speaking of moveable feasts, bars like La Floridita, where Ernest Hemingway indulged his fondness for daiquiris, are modern-day links to literary and artistic legends. Clearly, the time to visit Havana is now, whatever your aesthetic bent.
CARIBBEAN COOL
Over the past several years, restaurant-rich Grand Cayman, the largest of the Cayman Islands, has been nurturing a reputation as the culinary capital of the Caribbean. Now, its growing foodie cred is being matched by its design cachet. In November, the ultra-sleek Kimpton Seafire Resort + Spa, designed by U.S. firm SB Architects, opened on Seven Mile Beach, bringing a welcome shot of global chic (plus four more dining options) to that pristine stretch of coastline. Not far away, Camana Bay, an ambitious mixed-use development, has been heralded as a rare example of new urbanism in the region, its 500 acres encompassing high-end shops, office and residential space, interactive fountains and a pedestrianized main street called the Paseo. Situated between the Kimpton and Camana Bay is the Caribbean Club, a luxury apartment hotel and ideal base for exploring the area; it also houses one of Grand Cayman’s foremost eateries, the trattoria Luca.
ROAD TO MOROCCO
Another top trender among Canadian travellers according to KAYAK is Casablanca, the romantic Moroccan city that has long offered a beguiling mix of French and Arabic cultures. Nowhere is this hybrid allure more visible than in its architecture, which ranges from the art deco elegance of Place Mohammed V to contemporary showstoppers like the Four Seasons Casablanca on the oceanfront Corniche. At bustling Marche Centrale, the Moorish-style setting is as enticing as the fried fish and grilled vegetables, while L’Atelier 21, the city’s leading modern art gallery, showcases emerging and established artists in an au courant space. New air links to Casablanca from Canada this year make visiting even easier.
LONDON CALLING
The British capital has always been a magnet for design aficionados, but 2017 offers an extra-special reason to visit: the recently transplanted Design Museum, which has been moved from its previous home on the south bank of the Thames to much larger digs in Kensington. Ten years in the making, the $140-million wood-and-concrete marvel, reimagined by minimalist architect John Pawson on the site of the former Commonwealth Institute, is the Brit superstar’s first public building in London. Visitors must pay to see special exhibitions, but the museum’s extensive permanent collection, which includes everything from a 2012 Olympic torch to a full-size Tube car, is free to view. Another area museum completing a major update this year is the venerable Victoria and Albert, which will unveil a new underground gallery and a new entrance on Exhibition Road in July. Even the city’s best watering holes are offering new eye candy: Check out the restored blue walls in The Berkeley’s expanded Blue Bar.
|
{
"pile_set_name": "pile-cc"
}
|
The case of the vanished sword
By
washingtonpost.com editors
By John Lockwood
Washington
One of our memorials is missing a sword.
The General Sherman memorial just south of the Treasury Building shows General William Tecumseh Sherman on horseback atop a 32-foot pedestal guarded at each ground-level corner by a soldier. The memorial was designed by a Danish sculptor named Carl Rohl-Smith. It was dedicated on Oct. 15, 1903.
The northwest soldier is an infantryman, holding his rifle by the barrel, with the butt resting on the ground. The southwest soldier is an engineer, holding his rifle in the same position. He also carries a cylinder or tube, about 3 feet long. Perhaps it contains surveying instruments. The southeast soldier is a cavalryman, with sword pointed upward across his left shoulder. At the northeast is an artilleryman — whose hands close upon empty air.
When I saw the northeast soldier, the question arose: Is he missing a rifle or a sword? Well, what would one of my heroes, Sherlock Holmes, do? Look for a cartridge box, or a scabbard.
There was a scabbard there, an empty one. So it was indeed a missing sword — a fact later verified in The Post’s Oct. 16, 1903, edition, which included a drawing showing the soldier with a sword, its point touching the ground.
The lost sword is probably rusting in somebody’s attic or slowly corroding in a landfill. I doubt even Holmes could find it now.
|
{
"pile_set_name": "pile-cc"
}
|
Every industry has its own characteristics and requirements. For detailed benefits of our systems related to your industry please make a selection in the left menu. General benefits of using Hitec Power Protection rotary UPS systems are:
Most reliable system: The simple design has fewer components than, for example, static UPS systems. This greatly improves reliability (MTBF). Our systems have a life expectancy of more than 25 years.
Most cost and energy efficient system: Operating efficiency of our systems can exceed 97%, because they do not require power conversion in the power path or a conditioned, energy-consuming battery room during operation. You also do not need battery replacement every 3 to 5 years, resulting in a lower total cost of ownership (TCO) compared to, for example, static UPS technologies.
Most environmentally friendly system: Our rotary systems have high energy efficiency and do not use batteries. Static UPS systems, for example, produce a considerable amount of chemical waste during their lifetime, because batteries need to be replaced every 3 to 5 years. Click here to find out more about the environmental benefits of our systems.
Most space efficient system: A static UPS system requires a diesel generator set, power electronics, batteries and numerous auxiliary equipment. Our compact and simple diesel rotary UPS design combines all these components in one, reducing the footprint by 40 to 60%.
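To put the efficiency claim in perspective, here is a back-of-the-envelope comparison of annual conversion losses. The 97% figure comes from the text above; the 94% static-UPS efficiency and the 500 kW load are assumptions chosen purely for illustration:

```python
# Back-of-the-envelope annual energy-loss comparison for a constant IT load.
# 0.97 efficiency is from the text; 0.94 and the 500 kW load are assumed.

HOURS_PER_YEAR = 24 * 365  # 8760


def annual_loss_kwh(load_kw, efficiency):
    """kWh drawn from the grid but lost in conversion, for a constant load."""
    output_kwh = load_kw * HOURS_PER_YEAR
    input_kwh = output_kwh / efficiency
    return input_kwh - output_kwh


load_kw = 500  # assumed constant load
rotary_loss = annual_loss_kwh(load_kw, 0.97)
static_loss = annual_loss_kwh(load_kw, 0.94)  # assumed static UPS efficiency
print(f"Rotary loss: {rotary_loss:,.0f} kWh/yr; static loss: {static_loss:,.0f} kWh/yr")
print(f"Savings: {static_loss - rotary_loss:,.0f} kWh/yr")
```

Under these assumptions the rotary system wastes roughly 135,000 kWh per year against roughly 280,000 kWh for the static system, before counting battery-room conditioning.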
|
{
"pile_set_name": "pile-cc"
}
|
Holiday Punch — Plus a Cozy Fire
Charles Dickens gave us so much. Including this.
In A Christmas Carol, when Ebenezer Scrooge is presented with the Ghost of Christmas Present, he finds the "jolly Giant" sitting in state on an enormous heap of roast meats and other traditional English Christmas delicacies and flanked by "seething bowls of punch that made the chamber dim with their delicious steam."
Charles Dickens knew all about delicious steam.
He was a committed English traditionalist in his drinking. He didn't drink the international celebrity's customary champagne, champagne, and more champagne or the trendy drinks of his day — gin cocktails, claret cups, brandy smashes, or the like. Rather, his greatest affinity was for a drink that was fading faster and faster into the past by the time he came into fame. From 1700 to 1830, give or take a couple years on each end, the preeminent English social drink was the bowl of punch, a large-bore mixture of spirits (usually rum and cognac), citrus juice, sugar, water, and spice that was guaranteed to unite any gathering in jollity and boozy good cheer. But with the industrialization, commercialization, and urbanization of day-to-day life that the Victorian years brought, the convivial ritual of clustering around the flowing bowl became as quaint and outmoded as the tricorn hat.
Dickens, however, not only bucked the trend but made a whole performance out of bucking it. When he was among friends, it was his custom to brew up a bowl of punch, complete with a running disquisition on the techniques he was using and the ingredients he was deploying, thus adding instruction to delight (as one of his characters might say). Fortunately, in 1847, he wrote the whole procedure out for a friend's wife. It's not hard to follow, and there's no better way to get a holiday party started than by getting everybody involved in draining a bowl of punch. All it takes is a little preparation in advance, a willingness to hold forth a bit in front of your guests, and a high enough ceiling that you won't burn your house down. Dickens was never afraid to employ cheap sensationalism if it would help him get over, and there's nothing more sensational for selling a drink than setting it on fire. Here's our interpretation.
Charles Dickens's Punch Ritual
For 12 to 16 people
Step 1: Three hours before your party, peel 3 lemons with a swivel-bladed vegetable peeler, trying to end up with three long spirals of peel. Put them in a 3- or 4-quart fireproof bowl with 3/4 cup demerara sugar or other raw sugar. Muddle the peels and the sugar together and let sit.
Talking Points: One of the secrets of punch making is to use the fragrant, sweet oil that resides in lemon peels as the sugar extract. The resulting sugar-oil mix ("oleo-saccharum") adds body to the punch.
Talking Points: The cognac is for body and smoothness, the strong rum for bouquet and (frankly) flammability, and the other rum for taming the strong one.
Step 3: Set 1 quart water to boil and put the bowl containing the lemon peels and sugar on a wooden cutting board or other heat-resistant surface in a spot where everyone can gather around. When the water boils, turn it off, gather your guests around the bowl, and pour in the cognac and rum, noting what you're adding and why.
Step 4: With a long-handled barspoon, remove a spoonful of the rum-cognac mixture and set it on fire. Return it to the bowl, using it to ignite the rest.
Stir with a ladle or long-handled spoon, attempting to dissolve the sugar. Let burn for 2 or 3 minutes, occasionally lifting one of the lemon peels up so people can admire the flames running down it.
Talking Points: You're setting the punch alight not because it looks cool but to burn off some of the more volatile elements of the alcohol. That's the story, anyway.
Step 5: Extinguish the flames by covering the bowl with a tray, and add the reserved lemon juice and the boiling water. (For cold punch, add 3 cups cold water, stir, and slide in a 1-quart block of ice, easily made by freezing a quart bowl of water for 24 hours.)
Step 6: Grate fresh nutmeg over the top and ladle out in 3-oz servings.
|
{
"pile_set_name": "pile-cc"
}
|
Divesting Of Kruger’s Cash (Updated)
Freshman Sen. David Carlucci, one of 17 Senate Democrats who received campaign contributions from Sen. Carl Kruger during the 2010 election cycle, is getting rid of that money after learning of the federal corruption charges lodged against his colleague earlier today.
“It is unfortunate that these types of allegations have clouded the legislature, tainting the hard working men and women who work diligently and honorably to serve their constituents,” Carlucci said in a statement.
“I ran on a platform of ethics reform and these unsavory allegations are just another example of why ethics reform in Albany needs to be addressed immediately. The people of New York deserve better. In light of these allegations, I will be donating the $5,000 given to me during my campaign from Senator Kruger to a charitable organization in my district.”
All told, Kruger, a prodigious fundraiser who had close to $1.9 million in his campaign committee, “Friends of Carl,” as of Jan. 15, doled out $49,000 to fellow senators this cycle, according to campaign finance records on file at the state Board of Elections. He also gave $450,000 to the DSCC.
Sen. Gustavo Rivera, who received $2,500 from Kruger, was the first lawmaker to announce he would divest himself of the scandal-scarred Brooklyn lawmaker’s contributions. Now, apparently, all four of the Independent Democratic Conference members – Carlucci, Diane Savino, Jeff Klein, and Dave Valesky – are following suit.
UPDATE: DSCC spokesman Josh Cherwin says:
“We will not be returning these funds, which were contributed during a previous election cycle and already spent.”
“This is yet another sad day for New York residents who rightfully expect integrity and accountability from their elected officials.”
“During the last election cycle, Senator Kruger’s campaign committee contributed a combined total of $8,000 to Independent Democratic Conference members. We decided to donate that amount to charitable organizations in our communities. We believe this to be the best use for this money.”
|
{
"pile_set_name": "pile-cc"
}
|
Commonwealth Bank and the Australian Chamber Orchestra kick off the 2009 Great Romantics national tour
Sydney, 11 June 2009: The Commonwealth Bank today congratulated the Australian Chamber Orchestra (ACO) on the commencement of its Great Romantics Tour.
Commonwealth Bank Group Executive Human Resources and Group Services, Ms Barbara Chapman, said the Group was committed to supporting the Arts in Australia and helping its customers, staff and the Australian community engage with music at the highest level.
“As a partner of the ACO since 1988, we have been privileged to watch it grow into the world class orchestra that it is today,” she said.
“We are proud of our ongoing support and commitment to the ACO and excited to be the 2009 National Tour Partner for the Great Romantics.”
Ms Chapman said the Commonwealth Bank was especially proud to loan its rare Guadagnini violin – crafted in 1759 in Parma, Italy, and purchased by the Bank in 1996 – to ACO’s Principal Second Violin and leader of the ACO’s Emerging Artists Program, Helena Rathbone.
“We are delighted that on the violin’s 250th birthday, it is played by such an exquisite violinist for the enjoyment and appreciation of thousands of Australians,” she said.
Ms Chapman said the Bank’s partnership with the ACO was one of three national Arts partnerships for the Group, which included Opera Australia and Bangarra Dance Theatre.
The Australian Chamber Orchestra’s Artistic Director, Mr Richard Tognetti, said he was proud of the Orchestra’s long association with the Bank.
“When I started at the ACO in 1989, the Orchestra only had a handful of corporate supporters and we were in desperate need of committed companies who would be prepared to inject cash and help fuel some new ideas,” he said.
“My dream was to create a first-rate Australian Orchestra that could hold its own anywhere in the world. The Commonwealth Bank took a risk on my dreams and, 21 years on, we have one of the most fruitful corporate relationships I’ve ever seen.”
To find out more about the Bank’s support for the Arts, visit commbank.com.au
|
{
"pile_set_name": "pile-cc"
}
|
ABC News’ Good Morning America outstripped NBC News’ Today by 761,000 viewers and 279,000 news demo viewers the week of April 7. It’s GMA‘s seventh consecutive week on top of the morning infotainment show race in both metrics, and its largest demo margin in three months. GMA has ranked No. 1 in overall audience for 89 of the past 93 weeks, and No. 1 in the news demo for 25 of this season’s 29 weeks to date.
Today, meanwhile, boasted it finished first with the younger 18-49 age bracket for the 42nd consecutive week. Today is on top of the ratings in the daypart with men 25-54 this season, NBC noted — as well as adults, men and women 18-49. Today has posted seven consecutive months of ratings growth in total viewers and in both the 25-54 and 18-49 demos, which NBC says is the show’s biggest ratings uptick since ’97.
For the week, GMA clocked 5.617 million viewers — 2.212 million in the demo. Today logged 4.856 million viewers — 1.933 million in the demo. GMA bested CBS This Morning‘s 3.041 million viewers — 956,000 in the news demo.
8 Comments
now if they would only get rid of Roker and Daly, maybe I would watch again. Also replace Hall in the 9 o’clock hour. She is awful. GIVE ME MY GEIST BACK
B stock • on Apr 17, 2014 8:54 am
I love GMA but they really need to get rid of the music that you have to listen to even when the anchors are talking…. So annoying….off today and excited to watch but had to turn channel because the music is too loud and so annoying… George even asked for the music to be turned down!
Bill • on Apr 17, 2014 8:54 am
who cares
Carol Dehart • on Apr 17, 2014 8:54 am
I miss Sam and josh very much. Congrats over you numbers. Please have Sarah on more
edna • on Apr 17, 2014 8:54 am
I love GMA, but I miss Sam and Josh.
Carla • on Apr 17, 2014 8:54 am
Format is fantastic – notice Today ditched there ugly sofa for the “round table.” Nothing like GMA comradery! Little late Today producers! Greatly miss Gosh and Sam. Not so keen w/Ginger – maybe trying too hard, not found her “nitch.” Only complaint? Too much Estrogen on the show! Enjoy success GMA!
Barrack • on Apr 17, 2014 8:54 am
I love, The new Weather Person…….Sam was great, but it was good that he moved on. Ginger is fresh and of course the storm chaser!
Lara, has done well in her position. I did not think anyone could
take Dianne’s place she has done very well. Now as for Josh, well he did not stay long enough to matter. Easy to replace. Robin is a fixture, so is George. The rest just compliment them. Ohhhhhh and
Stahan wow that will be awesome!! Go GMA!!
Sixto • on Apr 17, 2014 8:54 am
Thanks God that people are discarding Lauer and in the future Al Roker as hosts of Today. People are being conscientious that Lauer is pucking and that Al is passe with the same phrase over and over and over “now lets see whats happening in your neck of the woods” .
|
{
"pile_set_name": "pile-cc"
}
|