---
author:
- Tarek Sayed Ahmed
title: 'On complete representability of Pinter's algebras and related structures'
---

We answer an implicit question of Ian Hodkinson's. We show that atomic Pinter's algebras may not be completely representable; however, the class of completely representable Pinter's algebras is elementary and finitely axiomatizable. We obtain analogous results for infinite dimensions (replacing finite axiomatizability by finite schema axiomatizability). We show that the class of subdirect products of set algebras is a canonical variety that is locally finite only for finite dimensions, and has the superamalgamation property; the latter for all dimensions. The algebras we deal with are, however, expansions of Pinter's algebras with substitutions corresponding to transpositions. It is true that this makes a lot of the problems addressed harder, but this is an asset, not a liability. Furthermore, the results for Pinter's algebras readily follow by just discarding the substitution operations corresponding to transpositions. Finally, we show that the multi-dimensional modal logic corresponding to the finite dimensional algebras has an $NP$-complete satisfiability problem. [^1]

Introduction
============

Suppose we have a class of algebras in front of us. The most pressing need is to try and classify it. Classifying is a kind of defining. Most mathematical classification is by axioms, either first order or, even better, equations. In algebraic logic the typical question is this. Given a class of concrete set algebras that we know in advance is elementary or is a variety, whose members consist of sets of sequences (usually with the same length, called the dimension), and whose operations are set-theoretic, utilizing the form of elements as sets of sequences: is there a [*simple*]{} elementary (equational) axiomatization of this class? A harder problem is: is there a [*finite*]{} elementary (equational) axiomatization of this class?
The prime examples of such operations defined on the unit of the algebra, which is of the form $^nU$ ($n\geq 2$), are cylindrifiers and substitutions. For $i<n$ and $t, s\in {}^nU$, define the equivalence relation $s\equiv_i t$ iff $s(j)=t(j)$ for all $j\neq i$. Now fix $i<n$ and $\tau\in {}^nn$; then these operations are defined as follows $$c_iX=\{s\in {}^nU: \exists t\in X, t \equiv_i s\},$$ $$s_{\tau}X=\{s\in {}^nU: s\circ \tau\in X\}.$$ Both are unary operations on $\wp(^nU)$; $c_i$ is called the $i$th cylindrifier, while $s_{\tau}$ is called the substitution operation corresponding to the transformation $\tau$, or simply a substitution. For Boolean algebras this question is completely settled by Stone's representation theorem. Every Boolean algebra is representable; equivalently, the class of Boolean algebras is finitely axiomatizable by a set of equations. This is equivalent to the completeness of propositional logic. When we approach the realm of first order logic things tend to become much more complicated. The standard algebraisations of first order logic are cylindric algebras (where cylindrifiers are the prominent citizens) and polyadic algebras (where cylindrifiers and substitutions are the prominent citizens). Such algebras, or rather the abstract versions thereof, are defined by a finite set of equations that aim to capture algebraically the properties of cylindrifiers and substitutions (and diagonal elements if present in the signature). Let us concentrate on polyadic algebras of dimension $n$, where $n$ is a finite ordinal. A full set algebra is one whose unit is of the form $^nU$ and whose non-Boolean operations are cylindrifiers and substitutions. The class of representable algebras, defined as the class of subdirect products of full set algebras, is a discriminator variety that is not finitely axiomatizable for $n\geq 3$; thus the set of equations postulated by Halmos is not complete.
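The two operations above are concrete enough to compute. The following is a minimal Python sketch (ours, for illustration only; the names `cyl` and `subst` are not part of the paper's formalism) of $c_i$ and $s_\tau$ acting on $\wp({}^nU)$ over a small finite base:

```python
from itertools import product

def cyl(i, X, U, n):
    """c_i X: all n-sequences over U agreeing with some t in X on every j != i."""
    return {s for s in product(U, repeat=n)
            if any(all(s[j] == t[j] for j in range(n) if j != i) for t in X)}

def subst(tau, X, U, n):
    """s_tau X = {s : s o tau in X}; tau is given as a tuple with tau[k] = tau(k)."""
    return {s for s in product(U, repeat=n)
            if tuple(s[tau[k]] for k in range(n)) in X}

U, n = (0, 1, 2), 2
X = {(0, 1)}
# c_0 frees the 0th coordinate of the single sequence (0,1):
assert cyl(0, X, U, n) == {(0, 1), (1, 1), (2, 1)}
# substituting along the transposition [0,1] swaps the two coordinates:
assert subst((1, 0), X, U, n) == {(1, 0)}
```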
Furthermore, when we also have diagonal elements, there is an inevitable degree of complexity in any potential universal axiomatization. There is another type of representation for polyadic algebras, namely [*complete*]{} representations. An algebra is completely representable if it has a representation that preserves arbitrary meets whenever they exist. For Boolean algebras the completely representable algebras are easily characterized; they are simply the atomic ones; in particular, this class is elementary and finitely axiomatizable, one just adds the first order sentence expressing atomicity. For cylindric and polyadic algebras, again, this problem becomes much more involved; this class, for $n\geq 3$, is not even elementary. Strongly related to complete representations [@Tarek] is the notion of omitting types for the corresponding multi-dimensional modal logic. Let $W$ be a class of algebras (usually a variety or at worst a quasi-variety) with a Boolean reduct, having the class $RW$ as the class of representable algebras, so that $RW\subseteq W$, and for $\B\in RW$, $\B$ has as top element a set of sequences having the same length, say $n$ (in our case the dimension of the algebra), and the Boolean operations are interpreted as concrete intersections and complementation of $n$-ary relations. We say that $\L_W$, the corresponding multi-dimensional modal logic, has the omitting types theorem if whenever $\A\in W$ is countable, and $(X_i: i\in \omega)$ is a family of non-principal types, meaning that $\prod X_i=0$ for each $i\in \omega$, then there is a $\B\in RW$ with unit $V$, and an injective homomorphism $f:\A\to \wp(V)$ such that $\bigcap_{x\in X_i}f(x)=\emptyset$ for each $i\in \omega$. In this paper we study, among other things, complete representability for cylindrifier free reducts of polyadic algebras, as well as omitting types for the corresponding multi-dimensional modal logic. We answer a question of Hodkinson [@atomic]
by showing that for various such reducts of polyadic algebras, atomic algebras might not be completely representable; however, they can be easily characterized by a finite simple set of first order formulas. Let us describe our algebras in a somewhat more general setting. Let $T$ be a submonoid of $^nn$ and $U$ be a non-empty set. A set $V\subseteq {}^nU$ is called $T$ closed, if whenever $s\in V$ and $\tau\in T$, then $s\circ \tau\in V$. (For example $T$ is $T$ closed). If $V$ is $T$ closed then $\wp(V)$ denotes the set algebra $({\cal P}(V),\cap,\sim, s_{\tau})_{\tau\in T}$. $\wp(^nU)$ is called a full set algebra. Let $GT$ be a set of generators of $T$. One can obtain a variety $V_T$ of Boolean algebras with extra non-Boolean operators $s_{\tau}$, $\tau\in GT$, by translating a presentation of $T$, via the finite set of generators $GT$, to equations, and stipulating that the $s_{\tau}$'s are Boolean endomorphisms. It is known that every monoid, not necessarily finite, has a presentation. For finite monoids, the multiplication table provides one. Encoding a finite presentation in terms of a set of generators of $T$ into a finite set of equations $\Sigma$ enables one to define, for each $\tau\in T$, a substitution unary operation $s_{\tau}$, such that for any algebra $\A$ with $\A\models \Sigma$, $s_{\tau}$ is a Boolean endomorphism of $\A$, and for $\sigma, \tau\in T$, one has $s_{\sigma}^{\A}\circ s_{\tau}^{\A}(a)=s_{\sigma\circ \tau}^{\A}(a)$ for each $a\in A$. The translation of presentations to equations guarantees that if $\A\models \Sigma$ and $a\in \A$ is non-zero, then for any Boolean ultrafilter $F$ of $\A$ containing $a$, the map $f:\A\to \wp(T)$ defined via $$x\mapsto \{\tau \in T: s_{\tau}x\in F\}$$ is a homomorphism such that $f(a)\neq 0$. Such a homomorphism determines a (finite) relativized representation, meaning that the unit of the set algebra is possibly only a proper subset of the square $^nn$.
Let $RT_n$ be the class of subdirect products of [*full*]{} set algebras; those set algebras whose units are squares (possibly with infinite base). One can show that $\Sigma$ above axiomatizes the variety generated by $RT_n$, but it is not obvious that $RT_n$ is closed under homomorphic images. Indeed, if $T$ is the monoid of all non-bijective maps, then $RT_n$ is only a quasi-variety. Such algebras are called Pinter's algebras. Sagi [@sagiphd] studied the representation problem for such algebras. In his recent paper [@atomic], Hodkinson asks whether such atomic algebras are completely representable. In this paper we answer Hodkinson's question completely; but we deal with the monoid $T={}^nn$, with transpositions and replacements as a set of generators; all our results apply to Pinter's algebras. In particular, we show that atomic algebras are not necessarily completely representable, but that the class of completely representable algebras is far less complex than in the case when we have cylindrifiers, as in cylindric algebras. It turns out that this class is finitely axiomatizable in first order logic by a very simple set of first order sentences, expressing additivity of the extra non-Boolean operations, namely, the substitutions. Taking the appropriate reduct, we answer Hodkinson's question formulated for Pinter's algebras. We also show that this variety is locally finite and has the superamalgamation property. All results except for local finiteness are proved to hold for infinite dimensions. We shall always deal with a class $K$ of Boolean algebras with operators. We shall denote its corresponding multi-dimensional modal logic by $\L_K$.

Representability
================

Here we deal with algebras where substitutions are indexed by transpositions and replacements, so that we are dealing with the full monoid $^nn$.
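That transpositions and replacements together generate all of ${}^nn$ can be checked mechanically for small $n$. The following Python sketch (ours, for illustration only) computes the closure of the generators under composition for $n=3$:

```python
n = 3
# Generators of the monoid n^n, each encoded as a tuple (f(0), ..., f(n-1)).
gens = []
for i in range(n):
    for j in range(n):
        if i != j:
            t = list(range(n)); t[i], t[j] = j, i
            gens.append(tuple(t))              # transposition [i,j]
            r = list(range(n)); r[i] = j
            gens.append(tuple(r))              # replacement [i|j]

def compose(f, g):                             # (f o g)(x) = f(g(x))
    return tuple(f[g[x]] for x in range(n))

closure = {tuple(range(n))}                    # start with the identity
frontier = set(gens)
while frontier:                                # saturate under composition
    closure |= frontier
    frontier = {compose(f, g) for f in closure for g in gens} - closure

assert len(closure) == n ** n                  # all maps in 3^3 are generated
```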
A transposition that swaps $i$ and $j$ will be denoted by $[i,j]$, and the replacement that takes $i$ to $j$ and leaves everything else fixed will be denoted by $[i|j]$. The treatment closely resembles Sagi's [@sagiphd], with one major difference: we prove that the class of subdirect products of full set algebras is a variety (this is not the case with Pinter's algebras). Let $U$ be a set. *The full substitution set algebra with transpositions of dimension* $\alpha$ *with base* $U$ is the algebra $$\langle\mathcal{P}({}^\alpha U); \cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in\alpha},$$ where the $S^i_j$'s and $S_{ij}$'s are unary operations defined by $$S^i_j(X)=\{q\in {}^\alpha U:q\circ [i|j]\in X\},$$ and $$S_{ij}(X)=\{q\in {}^{\alpha}U: q\circ [i,j]\in X\}.$$ The class of *Substitution Set Algebras with Transpositions of dimension* $\alpha$ is defined as follows: $$SetSA_\alpha=\mathbf{S}\{\A:\A\text{ is a full substitution set algebra with transpositions}$$ $$\text{of dimension }\alpha \text{ with base }U,\text{ for some set }U\}.$$ The full set algebra $\wp(^{\alpha}U)$ can be viewed as the complex algebra of the atom structure, or the modal frame, $(^{\alpha}U, S_{ij})_{i,j\in \alpha}$, where each $S_{ij}$ is a binary accessibility relation such that for $s,t\in {}^{\alpha}U$, $(s,t)\in S_{ij}$ iff $s\circ [i,j]=t.$ When we consider an arbitrary subset $D$ of the square $^{\alpha}U$, then from the modal point of view we are restricting, or relativizing, the states or assignments to $D$. On the other hand, subalgebras of full set algebras can be viewed as [*general*]{} modal frames, which are $BAO$'s and ordinary frames rolled into one. In this context, if one wants to use traditional terminology from modal logic, this means that the assignments are [*not*]{} links between the possible (states) worlds of the model; they [*themselves*]{} are the possible (states) worlds.
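The complex-algebra reading of $S_{ij}$ can be checked directly: the modal possibility operator of the accessibility relation just described coincides with the set-theoretic substitution. A small Python illustration (ours, under the definitions above, for one choice of $i,j$):

```python
from itertools import product

U, n, i, j = (0, 1), 3, 0, 2
SEQS = list(product(U, repeat=n))

def swap(s):                                  # s o [i,j]
    t = list(s); t[i], t[j] = t[j], t[i]; return tuple(t)

R = {(s, swap(s)) for s in SEQS}              # the accessibility relation S_ij

def diamond(X):                               # modal possibility over R
    return {s for s in SEQS if any((s, t) in R and t in X for t in SEQS)}

def S_ij(X):                                  # the set-algebra substitution
    return {s for s in SEQS if swap(s) in X}

for X in [{(0, 0, 1)}, {(1, 0, 1), (0, 1, 1)}, set(SEQS)]:
    assert diamond(X) == S_ij(X)              # the two operators coincide
```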
The class of [*representable substitution set algebras with transpositions of dimension $\alpha$*]{} is defined to be $$RSA_\alpha=\mathbf{SP}SetSA_\alpha.$$ Let $U$ be a given set, and let $D\subseteq{}^\alpha U.$ We say that $D$ is *locally square* iff it satisfies the following condition: $$(\forall i\neq j\in\alpha)(\forall s\in{}^\alpha U)(s\in D\Longrightarrow s\circ [i/j]\mbox{ and }s\circ [i,j]\in D).$$ The class of *locally square Set Algebras* of dimension $\alpha$ is defined to be $$WSA_{\alpha}=\mathbf{SP}\{\langle\mathcal{P}(D); \cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in \alpha}: U\text{ \emph{is a set}}, D\subseteq{}^\alpha U\text{ \emph{locally square}}\}.$$ Here the operations are relativized to $D$, namely $S^i_j(X)=\{q\in D:q\circ [i/j]\in X\}$ and $S_{ij}(X)=\{q\in D:q\circ [i,j]\in X\}$, and $\sim$ is complement w.r.t. $D$.\ If $D$ is a locally square set then the algebra $\wp(D)$ is defined to be $$\wp(D)=\langle\mathcal{P}(D);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}.$$ It is easy to show: \[relativization\] Let $U$ be a set and suppose $G\subseteq{}^n U$ is locally square. Let $\A=\langle\mathcal{P}({}^n U);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}$ and let $\mathcal{B}=\langle\mathcal{P}(G);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}$. Then the following function $h$ is a homomorphism. $$h:\A\longrightarrow\mathcal{B},\quad h(x)=x\cap G.$$ Straightforward from the definitions. For any natural number $k\leq n$ the algebra $\A_{nk}$ is defined to be $$\A_{nk}=\langle\mathcal{P}({}^nk);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}.$$ So $\A_{nk}\in SetSA_n$. $RSA_n=\mathbf{SP}\{\A_{nk}:k\leq n\}.$ Exactly like the proof in [@sagiphd] for Pinter's algebras; however, we include the proof for the sake of completeness.
Of course, $\{\A_{nk}:k\leq n\}\subseteq RSA_n,$ and since, by definition, $RSA_n$ is closed under the formation of subalgebras and direct products, $RSA_n\supseteq\mathbf{SP}\{\A_{nk}:k\leq n\}.$ To prove the other, slightly more difficult, inclusion, it is enough to show $SetSA_n\subseteq \mathbf{SP}\{\A_{nk}:k\leq n\}.$ Let $\A\in SetSA_n$ and suppose that $U$ is the base of $\A.$ If $U$ is empty, then $\A$ has one element, and one can easily show $\A\cong\A_{n0}.$ Otherwise for every $0^\A\neq a\in A$ we can construct a homomorphism $h_a$ such that $h_a(a)\neq 0$ as follows. If $a\neq 0^\A$ then there is a sequence $q\in a.$ Let $U_0^a=range(q)$. Clearly, $^nU_0^a$ is locally square and therefore by theorem \[relativization\] relativizing by $^nU_0^a$ is a homomorphism to $\A_{nk_a}$ (where $k_a:=|range(q)|\leq n$). Let $h_a$ be this homomorphism. Since $q\in {}^nU_0^a$ we have $h_a(a)\neq0^{\A_{nk_a}}.$ One readily concludes that $\A\in\mathbf{SP}\{\A_{nk}:k\leq n\}$ as desired.

Axiomatizing $RSA_n$.
---------------------

We know that the variety generated by $RSA_n$ is finitely axiomatizable, since it is generated by finitely many finite algebras, and because, having a Boolean reduct, it is congruence distributive. This follows from a famous theorem of Baker. In this section we show that $RSA_n$ is itself a variety, by providing a particular finite set $\Sigma'_n$ of equations such that $\mathbf{Mod}(\Sigma'_n)=RSA_n$. We consider the similarity type $\{., -, s_i^j, s_{ij}\}$, where $.$ is the Boolean meet, $-$ is complementation, and for $i,j\in n$, $s_i^j$ and $s_{ij}$ are unary operations, designating substitutions. We take meet and complementation as the basic operations, and $a+b$ abbreviates $-(-a.-b).$ Our choice of equations is not haphazard; we encode a presentation of the semigroup $^nn$ into the equations, and further stipulate that the substitution operations are Boolean endomorphisms.
We choose the presentation given in [@semigroup]. \[ax2\] For all natural $n>1$, let $\Sigma'_n$ be the following set of equations. For distinct $i,j,k,l$:

1. The Boolean axioms

2. $s_{ij}$ preserves joins and meets

3. $s_{kl}s^j_is_{kl}x=s^j_ix$

4. $s_{jk}s^j_is_{jk}x=s^k_ix$

5. $s_{ki}s^j_is_{ki}x=s^j_kx$

6. $s_{ij}s^j_is_{ij}x=s^i_jx$

7. $s^j_is^k_lx=s^k_ls^j_ix$

8. $s^j_is^k_ix=s^k_is^j_ix=s^j_is^k_jx$

9. $s^j_is^i_kx=s^j_ks_{ij}x$

10. $s^j_is^j_kx=s^j_kx$

11. $s^j_is^j_ix=s^j_ix$

12. $s^j_is^i_jx=s^j_ix$

13. $s^j_is_{ij}x=s_{ij}x$

Let $SA_n$ be the abstractly defined class $\mathbf{Mod}(\Sigma'_n)$. In the above axiomatization it is stipulated that $s_{ij}$ respects meet and join. From this it can easily be inferred that $s_{ij}$ respects $-$, so that it is in fact a Boolean endomorphism. Indeed, if $x=-y$, then $x+y=1$ and $x.y=0$, hence $s_{ij}(x+y)=s_{ij}x+s_{ij}y=1$ and $s_{ij}(x.y)=s_{ij}x.s_{ij}y=0,$ hence $s_{ij}x=-s_{ij}y$. We chose not to involve negation in our axiomatization, to make it strictly positive. Note that different presentations of $^nn$ give rise to different axiomatizations, but of course they are all definitionally equivalent. Here we are following the conventions of [@HMT2] by distinguishing in notation between operations defined in abstract algebras and those defined in concrete set algebras. For example, for $\A\in {\bf Mod}(\Sigma'_n)$, $s_{ij}$ denotes the $i, j$ substitution operator, while in set algebras we denote the (interpretation of this) operation by capital $S_{ij}$; similarly for $s_i^j$. This convention will be followed for all algebras considered in this paper without any further notice. (Notice that the Boolean operations are also distinguished notationally.)
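That the full set algebras satisfy these schemas is a routine computation, and for small parameters it can be machine-checked exhaustively. The Python sketch below (ours, for illustration only) verifies two sample schemas, 9 and 10, for $n=3$ over a two-element base, running through all $2^8=256$ elements of the full set algebra; here $s^j_i$ is read as the substitution along the replacement $[j/i]$ and $s_{ij}$ as the one along $[i,j]$:

```python
from itertools import product, combinations

n, U = 3, (0, 1)
SEQS = list(product(U, repeat=n))              # the unit 2^3: 8 sequences

def repl(i, j):                                # [i/j]: i -> j, identity elsewhere
    t = list(range(n)); t[i] = j; return tuple(t)

def transp(i, j):                              # [i,j]: swap i and j
    t = list(range(n)); t[i], t[j] = j, i; return tuple(t)

def S(tau, X):                                 # S_tau X = {q : q o tau in X}
    return frozenset(q for q in SEQS if tuple(q[tau[m]] for m in range(n)) in X)

subsets = [frozenset(c) for r in range(len(SEQS) + 1)
           for c in combinations(SEQS, r)]     # all 256 elements of the algebra

i, j, k = 0, 1, 2
for X in subsets:
    # schema 9: s^j_i s^i_k x = s^j_k s_{ij} x
    assert S(repl(j, i), S(repl(i, k), X)) == S(repl(j, k), S(transp(i, j), X))
    # schema 10: s^j_i s^j_k x = s^j_k x
    assert S(repl(j, i), S(repl(j, k), X)) == S(repl(j, k), X)
```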
To prove our main representation theorem, we need a few preparations. Let $R(U)=\{s_{ij}:i\neq j\in U\}\cup\{s^i_j:i\neq j\in U\}$ and let $\hat{}:R(U)^*\longrightarrow {}^UU$ be defined inductively as follows: it maps the empty string to $Id_U$, and for any string $t$, $$(s_{ij}t)^{\hat{}}=[i,j]\circ t^{\hat{}}\quad\text{and}\quad (s^i_jt)^{\hat{}}=[i/j]\circ t^{\hat{}}.$$ For all $n\in\omega$ the set of (all instances of the) axiom-schemas 3 to 13 of Def. \[ax2\] is a presentation of the semigroup ${}^nn$ via the generators $R(n)$ (see [@semigroup]). That is, for all $t_1,t_2\in R(n)^*$ we have $$\mbox{3 to 13 of Def.\ref{ax2} }\vdash t_1=t_2\text{ iff }t_1^{\hat{}}=t_2^{\hat{}}.$$ Here $\vdash$ denotes derivability using Birkhoff's calculus for equational logic. This is clear because the mentioned schemas correspond exactly to the set of relations governing the generators of ${}^nn$ (see [@semigroup]). With every $\xi\in {}^nn$ we associate a string $s_\xi\in R(n)^*$ (as we did before for $S_n$, using $^nn$ instead) such that $s_\xi^{\hat{}}=\xi.$ Such an $s_\xi$ exists, since $R(n)$ generates ${}^nn.$ Like before, we have \[lemma\] Let $\A$ be a $BAO$ of the similarity type of $RSA_n$. Suppose $G\subseteq {}^nn$ is a locally square set, and $\langle\mathcal{F}_\xi:\xi\in G\rangle$ is a system of ultrafilters of $\A$ such that for all $\xi\in G,\;i\neq j\in n$ and $a\in\A,$ the following conditions hold: $${S_{ij}}^\A(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i,j]}\quad\quad (*),\text{ and}$$ $${S^i_j}^\A(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i/j]}\quad\quad (**)$$ Then the following function $h:\A\longrightarrow\wp(G)$ is a homomorphism: $$h(a)=\{\xi\in G:a\in \mathcal{F}_\xi\}.$$ Now we show that, unlike for replacement algebras, $RSA_n$ is a variety. \[variety\] For any finite $n\geq 2$, $RSA_n=SA_n$. Clearly, $RSA_n\subseteq SA_n$, because $SetSA_n\models\Sigma'_n$ (checking this is a routine computation). Conversely, $RSA_n\supseteq SA_n$.
To see this, let $\A\in SA_n$ be arbitrary. We may suppose that $\A$ has at least two elements; otherwise it is easy to represent $\A$. For every $0^\A\neq a\in A$ we will construct a homomorphism $h_a$ into $\A_{nn}$ such that $h_a(a)\neq 0^{\A_{nn}}$. Let $0^\A\neq a\in A$ be an arbitrary element. Let $\mathcal{F}$ be an ultrafilter over $\A$ containing $a$, and for every $\xi\in {}^nn$, let $\mathcal{F}_\xi=\{z\in A: S^\A_\xi(z)\in\mathcal{F}\}$ (which is an ultrafilter). (Here we use that all maps in $^nn$ are available, which we could not do before.) Then $h:\A\longrightarrow\A_{nn}$ defined by $h(z)=\{\xi\in{}^nn:z\in\mathcal{F}_{\xi}\}$ is a homomorphism by \[lemma\], as $(*)$ and $(**)$ hold. Simple algebras are finite. Finiteness follows from the previous theorem, since if $\A$ is simple, then the map $h$ defined above is injective. However, not every finite algebra is simple. Indeed, if $V\subseteq {}^nn$ and $s\in V$ is constant, then $\wp(\{s\})$ is a homomorphic image of $\wp(V)$. So if $|V|>1$, then this homomorphism will have a non-trivial kernel. Let $Sir(SA_n)$ denote the class of subdirectly irreducible algebras, and $Sim(SA_n)$ the class of simple algebras. An open problem: characterize the simple and the subdirectly irreducible elements; is $SA_n$ a discriminator variety? \[super\] $SA_n$ is locally finite, and has the superamalgamation property. For the first part let $\A\in SA_n$ be generated by $X$ with $|X|=m$. We claim that $|\A|\leq 2^{2^{m\cdot n^n}}.$ Let $Y=\{s_{\tau}x: x\in X, \tau\in {}^nn\}$. Then $\A=\Sg^{\Bl\A}Y$. This follows from the fact that the substitutions are Boolean endomorphisms. Since $|Y|\leq m\cdot n^n,$ the conclusion follows. For the second part, first a piece of notation. For an algebra $\A$ and $X\subseteq A$, $fl^{\Bl\A}X$ denotes the Boolean filter generated by $X$. We show that the following strong form of interpolation holds for the free algebras. Let $X$ be a non-empty set.
Let $\A=\Fr_XSA_n$, and let $X_1, X_2\subseteq \A$ be such that $X_1\cup X_2=X$. Assume that $a\in \Sg^{\A}X_1$ and $c\in \Sg^{\A}X_2$ are such that $a\leq c$. Then there exists an interpolant $b\in \Sg^{\A}(X_1\cap X_2)$ such that $a\leq b\leq c$. Assume that $a\leq c$, but there is no such $b$. We will reach a contradiction. Let $$H_1=fl^{\Bl\Sg^{\A}X_1}\{a\}=\{x: x\geq a\},$$ $$H_2=fl^{\Bl\Sg^{\A}X_2}\{-c\}=\{x: x\geq -c\},$$ and $$H=fl^{\Bl\Sg^{\A}(X_1\cap X_2)}[(H_1\cap \Sg^{\A}(X_1\cap X_2))\cup (H_2\cap \Sg^{\A}(X_1\cap X_2))].$$ We show that $H$ is a proper filter of $\Sg^{\A}(X_1\cap X_2)$. For this, it suffices to show that for any $b_0,b_1\in \Sg^{\A}(X_1\cap X_2)$, for any $x_1\in H_1$ and $x_2\in H_2$, if $a.x_1\leq b_0$ and $-c.x_2\leq b_1$, then $b_0.b_1\neq 0$. Now $a.x_1=a$ and $-c.x_2=-c$. So assume, to the contrary, that $b_0.b_1=0$. Then $a\leq b_0$ and $-c\leq b_1$, and so $a\leq b_0\leq-b_1\leq c$, which is impossible because we assumed that there is no interpolant. Hence $H$ is a proper filter. Let $H^*$ be an ultrafilter of $\Sg^{\A}(X_1\cap X_2)$ containing $H$, and let $F$ be an ultrafilter of $\Sg^{\A}X_1$ containing $H_1$ and $G$ an ultrafilter of $\Sg^{\A}X_2$ containing $H_2$ such that $$F\cap \Sg^{\A}(X_1\cap X_2)=H^*=G\cap \Sg^{\A}(X_1\cap X_2).$$ Such ultrafilters exist. For simplicity of notation let $\A_1=\Sg^{\A}(X_1)$ and $\A_2=\Sg^{\A}(X_2).$ Define $h_1:\A_1\to \wp({}^nn)$ by $$h_1(x)=\{\eta\in {}^nn: s_{\eta}x\in F\},$$ and $h_2:\A_2\to \wp({}^nn)$ by $$h_2(x)=\{\eta\in {}^nn: s_{\eta}x\in G\}.$$ Then $h_1, h_2$ are homomorphisms, and they agree on $\Sg^{\A}(X_1\cap X_2).$ Indeed, let $x\in \Sg^{\A}(X_1\cap X_2)$. Then $\eta\in h_1(x)$ iff $s_{\eta}x\in F$ iff $s_{\eta}x\in F\cap \Sg^{\A}(X_1\cap X_2)=H^*=G\cap \Sg^{\A}(X_1\cap X_2)$ iff $s_{\eta}x\in G$ iff $\eta\in h_2(x)$. Thus $h_1\cup h_2$ is a function. By freeness there is an $h:\A\to \wp({}^nn)$ extending $h_1$ and $h_2$. Now $Id\in h(a)\cap h(-c)$, so $h(a)\cap h(-c)\neq \emptyset$, which contradicts $a\leq c$.
The result now follows from [@Mak], which states that the superamalgamation property for a variety of $BAO$s follows from the interpolation property in the free algebras.

Complete representability for $SA_n$
====================================

For $SA_n$, the problem of complete representations is delicate, since the substitutions corresponding to replacements may not be completely additive, and a complete representation, as we shall see, forces the complete additivity of the so represented algebra. In fact, as we shall discover, they are not. We first show that representations may not preserve arbitrary joins, from which we infer that the omitting types theorem fails for the corresponding multi-dimensional modal logic. Throughout this section $n$ is a natural number $\geq 2$. All theorems in this subsection, with the exception of theorem \[additive\], apply to Pinter's algebras, by simply discarding the substitution operations corresponding to transpositions and modifying the proofs accordingly. \[counter\] There exists a countable $\A\in SA_n$ and $X\subseteq \A$, such that $\prod X=0$, but there is no representation $f:\A\to \wp(V)$ such that $\bigcap_{x\in X}f(x)=\emptyset$. We give the example for $n=2$, and then we show how it extends to higher dimensions. It suffices to show that there is an algebra $\A$ and a set $S\subseteq A$ such that $s_0^1$ does not preserve $\sum S$. For if $\A$ had a representation as stated in the theorem, this would mean that $s_0^1$ is completely additive in $\A$. For the latter statement, it clearly suffices to show that if $X\subseteq A$, $\sum X=1$, and there exists an injection $f:\A\to \wp(V)$ such that $\bigcup_{x\in X}f(x)=V$, then for any $\tau\in {}^nn$ we have $\sum s_{\tau}X=1$. So fix $\tau\in {}^nn$ and assume that this does not happen. Then there is a $y\in \A$, $y<1$, with $s_{\tau}x\leq y$ for all $x\in X$. (Notice that we are not imposing any conditions on the cardinality of $\A$ in this part of the proof.)
Now $$1=s_{\tau}(\bigcup_{x\in X} f(x))=\bigcup_{x\in X} s_{\tau}f(x)=\bigcup_{x\in X} f(s_{\tau}x).$$ (Here we are using that $s_{\tau}$ distributes over union.) Let $z\in X$; then $s_{\tau}z\leq y<1$, and so $f(s_{\tau}z)\leq f(y)<1$; since $f$ is injective, it cannot be the case that $f(y)=1$. Hence we have $$1=\bigcup_{x\in X} f(s_{\tau}x)\leq f(y) <1,$$ which is a contradiction, and we are done. Now we turn to constructing the required counterexample, which is an easy adaptation of a construction due to Andréka et al. in [@AGMNS] to our present situation. We give the detailed construction for the reader's convenience. Let $\B$ be an atomless Boolean set algebra with unit $U$ that has the following property: For any distinct $u,v\in U$, there is $X\in B$ such that $u\in X$ and $v\in {}\sim X$. For example, $\B$ can be taken to be the Stone representation of some atomless Boolean algebra. The cardinality of our constructed algebra will be the same as $|B|$. Let $$R=\{X\times Y: X,Y\in \B\}$$ and $$A=\{\bigcup S: S\subseteq R,\ |S|<\omega\}.$$ Then indeed we have $|R|=|A|=|B|$. We claim that $\A$ is a subalgebra of $\wp(^2U)$. Closure under union is obvious. To check intersections, we have: $$(X_1\times Y_1)\cap (X_2\times Y_2)=(X_1\cap X_2) \times (Y_1\cap Y_2).$$ Hence, if $S_1$ and $S_2$ are finite subsets of $R$, then $$S_3=\{W\cap Z: W\in S_1, Z\in S_2\}$$ is also a finite subset of $R$, and we have $$(\bigcup S_1)\cap (\bigcup S_2)=\bigcup S_3.$$ For complementation: $$\sim (X\times Y)=[\sim X\times U]\cup [U\times \sim Y].$$ If $S\subseteq R$ is finite, then $$\sim \bigcup S=\bigcap \{\sim Z: Z\in S\}.$$ Since each $\sim Z$ is in $A$, and $A$ is closed under intersections, we conclude that $\sim \bigcup S$ is in $A$.
We now show that it is closed under substitutions: $$S_0^1(X\times Y)=(X\cap Y)\times U, \qquad S_1^0(X\times Y)=U\times (X\cap Y),$$ $$S_{01}(X\times Y)=Y\times X.$$ Let $$D_{01}=\{s\in U\times U: s_0=s_1\}.$$ We claim that the only subset of $D_{01}$ in $\A$ is the empty set. Towards proving this claim, assume that $X\times Y$ is a non-empty subset of $D_{01}$. Then for any $u\in X$ and $v\in Y$ we have $u=v$. Thus $X=Y=\{u\}$ for some $u\in U$. But then $X$ and $Y$ cannot be in $\B$, since the latter is atomless and $X$ and $Y$ are atoms. Let $$S=\{X\times \sim X: X\in B\}.$$ Then $$\bigcup S={}\sim D_{01}.$$ Now we show that $$\sum{}^{\A}S=U\times U.$$ Suppose that $Z\in\A$ is an upper bound different from $U\times U$. Then $\bigcup S\subseteq Z$, that is, $\sim D_{01}\subseteq Z$, hence $\sim Z\subseteq D_{01}$, so by the claim $\sim Z=\emptyset$ and $Z=U\times U$. Now $$S_{0}^1(U\times U) =(U\cap U)\times U=U\times U.$$ But $$S_0^1(X\times \sim X)=(X\cap \sim X)\times U=\emptyset$$ for every $X\in B$. Thus $$S_0^1(\sum S)=U\times U$$ and $$\sum \{S_{0}^1(Z): Z\in S\}=\emptyset.$$ For $n>2$, one takes $R=\{X_1\times\ldots\times X_n: X_i\in \B\}$, and the definition of $\A$ is the same. Then, in this case, one takes $S$ to be the set of all $X\times \sim X\times U\times\ldots\times U$ with $X\in B$. The proof survives verbatim. By taking $\B$ countable, $\A$ can be countable, and so it violates the omitting types theorem. \[additive\] Let $\A$ be in $SA_n$. Then $\A$ is completely additive iff $s_0^1$ is completely additive; in particular, if $\A$ is atomic and $s_0^1$ is completely additive, then $\A$ is completely representable. It suffices to show that for $i, j\in n$, $i\neq j$, the operation $s_i^j$ is completely additive. But $[i|j]= [0|1]\circ [i,j]$, and $s_{[i,j]}$ is completely additive. For replacement algebras we do not have transpositions, so the above proof does not work. So in principle we could have an algebra such that $s_0^1$ is completely additive in $\A$, while $s_1^0$ is not.
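The three substitution identities on rectangles used in the construction above can be confirmed computationally. A Python sketch (ours, for illustration only) over a three-element base, reading $s_0^1$ as the substitution along the replacement $[1|0]$ (index map sending $1$ to $0$), $s_1^0$ along $[0|1]$, and $s_{01}$ along the transposition $[0,1]$:

```python
from itertools import product

U = (0, 1, 2)
SEQS = list(product(U, repeat=2))

def S(tau, Z):                       # S_tau Z = {q : q o tau in Z}
    return {q for q in SEQS if (q[tau[0]], q[tau[1]]) in Z}

def rect(X, Y):                      # the rectangle X x Y
    return {(a, b) for a in X for b in Y}

X, Y = {0, 1}, {1, 2}
# s_0^1: index map [1|0] is the tuple (0, 0)
assert S((0, 0), rect(X, Y)) == rect(X & Y, set(U))     # (X ∩ Y) x U
# s_1^0: index map [0|1] is the tuple (1, 1)
assert S((1, 1), rect(X, Y)) == rect(set(U), X & Y)     # U x (X ∩ Y)
# s_{01}: index map [0,1] is the tuple (1, 0)
assert S((1, 0), rect(X, Y)) == rect(Y, X)              # Y x X
```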
Find a Pinter's algebra for which $s_0^1$ is completely additive but $s_1^0$ is not. However, as for $SA_n$, we also have: For every $n\geq 2$ and every distinct $i,j\in n$, there is an algebra $\B\in SA_n$ such that $s_i^j$ is not completely additive. One modifies the above example by letting $X$ occur in the $i$th place of the product and $\sim X$ in the $j$th place. Now we turn to the notion of complete representability, which is not remote from that of minimal completions [@complete]. [^2] Let $\A\in SA_n$ and $b\in A$; then $\Rl_{b}\A=\{x\in \A: x\leq b\}$, with operations relativized to $b$. \[atomic\] Let $\A\in SA_n$ be atomic, let $X=\At\A$, and assume that $\sum_{x\in X} s_{\tau}x=b$ for all $\tau\in {}^nn$. Then $\Rl_{b}\A$ is completely representable. In particular, if $\sum_{x\in X}s_{\tau}x=1$, then $\A$ is completely representable. Clearly $\B=\Rl_{b}\A$ is atomic. For all $a\neq 0$, $a\leq b$, find an ultrafilter $F$ that contains $a$ and lies outside the nowhere dense set $S\sim \bigcup_{x\in X} N_{s_{\tau}x}$ (here $S$ is the Stone space and $N_y$ denotes the clopen set of ultrafilters containing $y$). This is possible since $\B$ is atomic, so one just takes the ultrafilter generated by an atom below $a$. Define for each such $F$ and such $a$, $rep_{F,a}(x)=\{\tau \in {}^nn: s_{\tau}x\in F\}$; for each such $a$ let $V_a={}^nn$, and then set $g:\B\to \prod_{a\ne 0} \wp(V_a)$ by $g(x)=(rep_{F,a}(x): a\leq b, a\neq 0)$. Since $SA_n$ can be axiomatized by Sahlqvist equations, it is closed under taking canonical extensions. The next theorem says that canonical extensions have complete (square) representations. The argument used is borrowed from Hirsch and Hodkinson [@step]; it is a first order model-theoretic view of representability, using $\omega$-saturated models. A model is $\omega$-saturated if every type that is finitely realized is realized. Every countable consistent theory has an $\omega$-saturated model. \[canonical\] Let $\A\in SA_n$. Then $\A^+$ is completely representable on a square unit.
For a given $\A\in SA_n$, we define a first order language $L(\A)$, which is purely relational; it has one $n$-ary relation symbol for each element of $\A$. Define an $L(\A)$ theory $T(\A)$ as follows; for all $R,S,T\in \A$ and $\tau\in S_n$: $$\sigma_{\lor}(R,S,T)=\forall\bar{x}[R(\bar{x})\longleftrightarrow S(\bar{x})\lor T(\bar{x})],\quad \text{whenever } R=S\lor T,$$ $$\sigma_{\neg}(R,S)=\forall\bar{x}[1(\bar{x})\to (R(\bar{x})\longleftrightarrow \neg S(\bar{x}))],\quad \text{whenever } R=\neg S,$$ $$\sigma_{\tau}(R,S)=\forall\bar{x}[R(\bar{x})\longleftrightarrow S(\bar{x}\circ\tau)],\quad \text{whenever } R=s_{\tau}S,$$ $$\sigma_{\neq 0}(R)=\exists \bar{x}R(\bar{x}),\quad \text{whenever } R\neq 0.$$ Since $\A$ has a representation, $T(\A)$ is a consistent first order theory. Let $M$ be an $\omega$-saturated model of $T({\A})$. Let $1^{M}={}^nM$. Then for each $\bar{x}\in 1^{M}$, the set $f_{\bar{x}}=\{a\in \A: M\models a(\bar{x})\}$ is an ultrafilter of $\A$. Define $h:\A^+\to \wp ({}^nM)$ via $$S\mapsto \{\bar{x}\in 1^M: f_{\bar{x}}\in S\}.$$ Here $S$, an element of $\A^+$, is a set of ultrafilters of $\A$. Clearly, $h(0)=h(\emptyset)=\emptyset$. $h$ respects complementation: for $\bar{x}\in 1^M$ and $S\in \A^+$, $\bar{x}\notin h(S)$ iff $f_{\bar{x}}\notin S$ iff $f_{\bar{x}}\in -S$ iff $\bar{x}\in h(-S).$ It is also straightforward to check that $h$ preserves arbitrary unions. Indeed, we have $\bar{x}\in h(\bigcup S_i)$ iff $f_{\bar{x}}\in S_i$ for some $i$, iff $\bar{x}\in h(S_i)$ for some $i$. We now check that $h$ is injective. Here is where we use saturation. Let $\mu$ be an ultrafilter in $\A$; we show that $h(\{\mu\})\neq \emptyset$. Take $p(\bar{x})=\{a(\bar{x}): a\in \mu\}$. Then this type is finitely satisfiable in $M$. For if $\{a_0(\bar{x}),\ldots, a_{n-1}(\bar{x})\}$ is an arbitrary finite subset of $p(\bar{x})$, then $a=a_0. a_1\cdots a_{n-1}\in \mu$, so $a>0$. By the axiom $\sigma_{\neq 0}(a)$ in $T(\A)$, we have $M\models \exists\bar{x}\,a(\bar{x})$.
Since $a\leq a_i$ for each $i<n$, we obtain, using the axiom $\sigma_{\lor}(a_i, a, a_i)$ of $T(\A)$, that $M\models \exists \bar{x}\bigwedge_{i<n}a_i(\bar{x})$, showing that $p(\bar{x})$ is finitely satisfiable as required. Hence, by $\omega$-saturation, $p$ is realized in $M$ by some $n$-tuple $\bar{x}$. Now $\mu=f_{\bar{x}}$. So $\bar{x}\in h(\{\mu\})$ and we have proved that $h$ is an injection. Preservation of the substitution operations is straightforward. For $\A\in SA_n$, the following two conditions are equivalent: There exists a locally square set $V,$ and a complete representation $f:\A\to \wp(V)$. For all non-zero $a\in A$, there exists a homomorphism $f:\A\to \wp(^nn)$ such that $f(a)\neq 0$, and $\bigcup_{x\in \At\A} f(x)={}^nn$. [Proof]{} Having dealt with the other implication before, we prove that (1) implies (2). Let there be given a complete representation $g:\A\to \wp(V)$. Then $\wp(V)\subseteq \prod_{i\in I} \A_i$ for some set $I$, where $\A_i=\wp{}(^nn)$. Assume that $a$ is non-zero; then $g(a)$ is non-zero, hence $g(a)_i$ is non-zero for some $i$. Let $\pi_j$ be the $j$th projection $\pi_j:\prod \A_i\to \A_j$, $\pi_j[(a_i: i\in I)]=a_j$. Define $f:\A\to \A_i$ by $f=\pi_i\circ g.$ Then clearly $f$ is as required. The following theorem is a converse to \[atomic\]. \[converse\] Assume that $\A\in SA_n$ is completely representable. Let $f:\A\to \wp(^nn)$ be a non-zero homomorphism that is a complete representation. Then $\sum_{x\in \At\A} s_{\tau}x=1$ for every $\tau\in {}^nn$. Like the first part of the proof of theorem \[counter\]. Adapting another example in [@AGMNS] constructed for $2$ dimensional quasi-polyadic algebras, we show that atomicity and complete representability do not coincide for $SA_n$. Because we are lucky enough not to have cylindrifiers, the construction works for all $n\geq 2$, and even for infinite dimensions, as we shall see.
Here it is not a luxury to include the details; we have to do so because we will generalize the example to all higher dimensions. \[counter2\] There exists an atomic complete algebra in $SA_n$ ($2\leq n<\omega$), that is not completely representable. Furthermore, dropping the condition of completeness, the algebra can be atomic and countable. It is enough, in view of the previous theorem, to construct an atomic algebra such that $\sum_{x\in \At\A} s_0^1x\neq 1$. In what follows we produce such an algebra. (This algebra will be uncountable: it is infinite and complete, so it cannot be countable. In particular, it cannot be used to violate the omitting types theorem; the most it can say is that the omitting types theorem fails for uncountable languages, which is not too much of a surprise.) Let $\mathbb{Z}^+$ denote the set of positive integers. Let $U$ be an infinite set. Let $Q_k$, $k\in \omega$, be a family of $n$-ary relations that form a partition of $^nU$ such that $Q_0=D_{01}=\{s\in {}^nU: s_0=s_1\}$. Assume also that each $Q_k$ is symmetric; for any $i,j\in n$, $S_{ij}Q_k=Q_k$. For example, one can take $U=\omega$, and for $k\geq 1$, one sets $$Q_k=\{s\in {}^n\omega: s_0\neq s_1\text { and }\sum_{i<n} s_i=k\}.$$ Now fix a non-principal ultrafilter $F$ on $\mathcal{P}(\mathbb{Z}^+)$. For each $X\subseteq \mathbb{Z}^+$, define $$R_X = \begin{cases} \bigcup \{Q_k: k\in X\} & \text { if }X\notin F, \\ \bigcup \{Q_k: k\in X\cup \{0\}\} & \text { if } X\in F \end{cases}$$ Let $$\A=\{R_X: X\subseteq \mathbb{Z}^+\}.$$ Notice that $\A$ is uncountable. Then $\A$ is an atomic set algebra with unit $R_{\mathbb{Z}^+}$, and its atoms are $R_{\{k\}}=Q_k$ for $k\in \mathbb{Z}^+$. (Since $F$ is non-principal, $\{k\}\notin F$ for every $k$.) We check that $\A$ is indeed closed under the operations. Let $X, Y$ be subsets of $\mathbb{Z}^+$. If either $X$ or $Y$ is in $F$, then so is $X\cup Y$, because $F$ is a filter.
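For the concrete choice $U=\omega$ just displayed, the partition and symmetry requirements are immediate; the following checks (not in the original text, but direct from the definitions) spell this out:

```latex
s_0=s_1 \ \Longrightarrow\ s\in Q_0=D_{01};\qquad
s_0\neq s_1 \ \Longrightarrow\ \textstyle\sum_{i<n}s_i=k
\text{ for a unique } k\geq 1,\ \text{so } s\in Q_k.
```

Here the sum is positive in the second case because not all coordinates of $s$ can vanish when $s_0\neq s_1$. Symmetry holds because $\sum_{i<n}(s\circ[i,j])_i=\sum_{i<n}s_i$ for any transposition $[i,j]$, so indeed $S_{ij}Q_k=Q_k$.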
Hence $$R_X\cup R_Y=\bigcup\{Q_k: k\in X\}\cup\bigcup \{Q_k: k\in Y\}\cup Q_0=R_{X\cup Y}.$$ If neither $X$ nor $Y$ is in $F$, then $X\cup Y$ is not in $F$, because $F$ is an ultrafilter. In this case $$R_X\cup R_Y=\bigcup\{Q_k: k\in X\}\cup\bigcup \{Q_k: k\in Y\}=R_{X\cup Y}.$$ Thus $\A$ is closed under finite unions. Now suppose that $X$ is the complement of $Y$ in $\mathbb{Z}^+$. Since $F$ is an ultrafilter, exactly one of them, say $X$, is in $F$. Hence, $$\sim R_X=\sim{}\bigcup \{Q_k: k\in X\cup \{0\}\}=\bigcup\{Q_k: k\in Y\}=R_Y,$$ so that $\A$ is closed under complementation (w.r.t $R_{\mathbb{Z}^+}$). We check substitutions. Transpositions are clear, so we check only replacements. It is not too hard to show that $$S_0^1(R_X)= \begin{cases} \emptyset & \text { if }X\notin F, \\ R_{\mathbb{Z}^+} & \text { if } X\in F \end{cases}$$ Now $$\sum \{S_0^1(R_{\{k\}}): k\in \mathbb{Z}^+\}=\emptyset,$$ while $$S_0^1(R_{\mathbb{Z}^+})=R_{\mathbb{Z}^+}$$ and $$\sum \{R_{\{k\}}: k\in \mathbb{Z}^+\}=R_{\mathbb{Z}^+}.$$ Thus $$S_0^1(\sum\{R_{\{k\}}: k\in \mathbb{Z}^+\})\neq \sum \{S_0^1(R_{\{k\}}): k\in \mathbb{Z}^+\}.$$ For the completeness part, we refer to [@AGMNS]. The countable algebra required is that generated by the countably many atoms. Our next theorem gives a plethora of algebras that are not completely representable. Any algebra which shares the atom structure of $\A$ constructed above cannot have a complete representation. Formally: Let $\A$ be as in the previous example. Let $\B$ be an atomic algebra in $SA_n$ such that $\At\A\cong \At\B$. Then $\B$ is not completely representable. Let such a $\B$ be given. Let $\psi:\At\A\to \At\B$ be an isomorphism of the atom structures (viewed as first order structures). Assume for contradiction that $\B$ is completely representable, via $f$ say; $f:\B\to \wp(V)$ is an injective homomorphism such that $\bigcup_{x\in \At\B}f(x)=V$. Define $g:\A\to \wp(V)$ by $g(a)=\bigcup_{x\in \At\A, x\leq a} f(\psi(x))$.
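The displayed value of $S_0^1(R_X)$ can be checked directly. Assuming the standard reading $S_0^1(R)=\{s\in{}^nU: s\circ[0|1]\in R\}$, where $[0|1]$ sends $0$ to $1$ and fixes all other indices (the usual convention for these set algebras, not restated in the text), we have $(s\circ[0|1])_0=(s\circ[0|1])_1=s_1$, so $s\circ[0|1]$ always lies in $D_{01}=Q_0$; hence

```latex
s\in S_0^1(R_X)
\iff s\circ[0|1]\in R_X
\iff Q_0\subseteq R_X
\iff X\in F,
```

since $R_X$ is a union of blocks of the partition, and it includes the block $Q_0$ exactly when $X\in F$.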
Then, it can be easily checked that $g$ establishes a complete representation of $\A$. There is a widespread belief, almost permanently established, that like cylindric algebras, any atomic [*polyadic algebra of dimension $2$*]{} is completely representable. This is wrong. The above example indeed shows that it is not the case: for the set algebras constructed above, if we impose the additional condition that each $Q_k$ has $U$ as its domain and range, then the algebra in question becomes closed under the first two cylindrifiers, and by the same reasoning as above, it [*cannot*]{} be completely representable. However, this condition cannot be imposed for higher dimensions, and indeed for $n\geq 3$, the class of completely representable quasipolyadic algebras is not elementary. When we have diagonal elements, the latter result holds for quasipolyadic equality algebras, but the former does not. On the other hand, the variety of polyadic algebras of dimension $2$ is conjugated (which is not the case when we drop diagonal elements), hence atomic representable algebras are completely representable. We introduce a certain cardinal that plays an essential role in omitting types theorems [@Tarek]. A Polish space is a complete separable metric space. For a Polish space $X$, $K(X)$ denotes the ideal of meager subsets of $X$. Set $$cov K(X)=min \{|C|: C\subseteq K(X),\ \cup C=X\}.$$ If $X$ is the real line, or the Baire space $^{\omega}\omega$, or the Cantor set $^{\omega}2$, which are the prime examples of Polish spaces, we write $K$ instead of $K(X)$. The above three spaces are sometimes referred to as [*real*]{} spaces since they are all Baire isomorphic to the real line. Clearly $\omega <covK \leq {}2^{\aleph_0}.$ The cardinal $covK$ is an important cardinal studied extensively in descriptive set theory, and it turns out to be strongly related to the number of types that can be omitted in a consistent countable first order theory, a result of Newelski.
It is known, but not trivial to show, that $covK$ is the least cardinal $\kappa$ such that the real space can be covered by $\kappa$ many closed nowhere dense sets, that is, the least cardinal such that the Baire Category Theorem fails. Also it is the largest cardinal for which Martin’s axiom restricted to countable Boolean algebras holds. Indeed, the full Martin’s axiom implies that $covK=2^{\aleph_0}$, but it is also consistent that $covK=\omega_1<2^{\aleph_0}.$ Varying the value of $covK$ by (iterated) forcing in set theory is well known. For example $covK<2^{\aleph_0}$ is true in the random real model. It also holds in models constructed by forcings which do not add Cohen reals. Let $\A\in SA_n$ be countable and completely additive, and let $\kappa$ be a cardinal $<covK$. Assume that $(X_i: i<\kappa)$ is a family of non-principal types. Then there exists a countable locally square set $V$, and an injective homomorphism $f:\A\to \wp(V)$ such that $\bigcap_{x\in X_i} f(x)=\emptyset$ for each $i\in \kappa$. Let $a\in A$ be non-zero. For each $\tau\in {}^nn$ and each $i\in \kappa$, let $X_{i,\tau}=\{{\sf s}_{\tau}x: x\in X_i\}.$ Since the algebra is completely additive, $(\forall\tau\in {}^nn)(\forall i\in \kappa)\prod{}^{\A}X_{i,\tau}=0.$ Let $S$ be the Stone space of $\A$, and for $a\in A$, let $N_a$ be the clopen set consisting of all Boolean ultrafilters containing $a$. Let $\bold H_{i,\tau}=\bigcap_{x\in X_i} N_{{\sf s}_{\tau}x}.$ Each $\bold H_{i,\tau}$ is closed and nowhere dense in $S$. Let $\bold H=\bigcup_{i\in \kappa}\bigcup_{\tau\in {}^nn}\bold H_{i,\tau}.$ Since $^nn$ is finite and $\kappa<covK$, $\bold H$ is a union of fewer than $covK$ nowhere dense sets. The Stone space of the countable algebra $\A$ is Polish, so by the defining property of $covK$ (applied inside each basic clopen set), $S\sim \bold H$ is dense in $S$. Let $F$ be an ultrafilter that contains $a$ and is outside $\bold H$; such an $F$ exists by density, since the basic set $N_a$ must meet $S\sim \bold H$.
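The claim that each $\bold H_{i,\tau}$ is closed and nowhere dense is the crux; here is a one-line justification in the notation above (a sketch, using only what has already been established). Each $\bold H_{i,\tau}$ is closed as an intersection of clopen sets, and it has empty interior: if $N_b\subseteq \bold H_{i,\tau}$ for some $b\neq 0$, then

```latex
N_b\subseteq\bigcap_{x\in X_i}N_{{\sf s}_{\tau}x}
\ \Longrightarrow\ b\leq {\sf s}_{\tau}x \ \text{for all } x\in X_i
\ \Longrightarrow\ 0\neq b\leq \prod{}^{\A}X_{i,\tau}=0,
```

which is a contradiction.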
Let $h_a(z)=\{\tau \in V: s_{\tau}z\in F\}$; then $h_a$ is a homomorphism into $\wp(V)$ such that $h_a(a)\neq 0$. Define $g:\A\to \prod_{a\in A}\wp(V)$ via $x\mapsto (h_a(x): a\in A)$. Let $V_{a}=V\times \{a\}$. Since $\prod_{a\in A}\wp(V)\cong \wp(\bigcup_{a\in A} V_a)$, we are done, for $g$ is clearly an injection. The statement of omitting $< {}^{\omega}2$ non-principal types is independent of $ZFC$. Martin’s axiom offers solace here, in two respects. Algebras addressed could be uncountable (though still satisfying a smallness condition that is a natural generalization of countability, and in fact is necessary for Martin’s axiom to hold), and the types omitted can be $< {}^{\omega}2$. More precisely: In $ZFC+MA$ the following can be proved: Let $\A\in SA_n$ be completely additive, and further assume that $\A$ satisfies the countable chain condition (it contains no uncountable anti-chains). Let $\lambda<{}^{\omega}2$, and let $(X_i: i<\lambda)$ be a family of non-principal types. Then there exists a countable locally square set $V$, and an injective homomorphism $f:\A\to \wp(V)$ such that $\bigcap_{x\in X_i} f(x)=\emptyset$ for each $i\in \lambda$. The idea is exactly like the previous proof, except that now we have a union of $<{}^{\omega}2$ nowhere dense sets; the required ultrafilter to build the representation we need lies outside this union. $MA$ offers solace here because it implies that such a union can be written as a countable union, and again the Baire category theorem readily applies. But without $MA$, if the given algebra is countable and completely additive, and if the types are maximal, then we can also omit $<{} ^{\omega}2$ types. This [*is indeed* ]{} provable in $ZFC$, without any additional assumptions at all. For brevity, when we have an omitting types theorem, like the previous one, we say that the hypothesis implies the existence of a representation omitting the given set of non-principal types.
Let $\A\in SA_n$ be countable, let $\lambda<{}^{\omega}2$ and $\bold F=(F_i: i<\lambda)$ be a family of non-principal ultrafilters. Then there is a single representation that omits these non-principal ultrafilters. One can easily construct two representations that overlap only on principal ultrafilters [@h]. With a pinch of diagonalization this can be pushed to countably many. But even more, using ideas of Shelah [@Shelah] theorem 5.16, this can be further pushed to $^{\omega}2$ many representations, $(\B_i: i<{}^{\omega}2)$ say. Such representations are not necessarily pair-wise distinct, but they can be indexed by a set $I$ such that $|I|={}^{\omega}2$, and if $i, j\in I$ are distinct and there is an ultrafilter that is realized in the representations indexed by $i$ and $j$, then it is principal. Note that they can be the same representation. Now assume for contradiction that there is no representation omitting the given non-principal ultrafilters. Then for all $i<{}^{\omega}2$, there exists $F\in \bold F$ such that $F$ is realized in $\B_i$. Let $\psi:{}^{\omega}2\to \wp(\bold F)$ be defined by $\psi(i)=\{F\in \bold F: F \text { is realized in }\B_i\}$. Then for all $i<{}^{\omega}2$, $\psi(i)\neq \emptyset$. Furthermore, for $i\neq j$, $\psi(i)\cap \psi(j)=\emptyset,$ for if $F\in \psi(i)\cap \psi(j)$ then it will be realized in $\B_i$ and $\B_j$, and so it will be principal. This implies that $|\bold F|={}^{\omega}2$, which is impossible. In the case of omitting the single special type $\{-x: x\in \At\A\}$ for an atomic algebra, no conditions whatsoever on cardinalities are presupposed. If $\A\in SA_n$ is completely additive and atomic, then $\A$ is completely representable. Let $\B$ be an atomic transposition algebra, let $X$ be the set of atoms, and let $c\in \B$ be non-zero. Let $S$ be the Stone space of $\B$, whose underlying set consists of all Boolean ultrafilters of $\B$. Let $X^*$ be the set of principal ultrafilters of $\B$ (those generated by the atoms).
These are isolated points in the Stone topology, and they form a dense set in the Stone topology since $\B$ is atomic. So we have $X^*\cap T=\emptyset$ for every nowhere dense set $T$ (since principal ultrafilters, which are isolated points in the Stone topology, lie outside nowhere dense sets). Recall that for $a\in \B$, $N_a$ denotes the set of all Boolean ultrafilters containing $a$. Now for all $\tau\in S_n$, the set $G_{X, \tau}=S\sim \bigcup_{x\in X}N_{s_{\tau}x}$ is nowhere dense. Let $F$ be a principal ultrafilter of $S$ containing $c$. This is possible since $\B$ is atomic, so there is an atom $x$ below $c$; just take the ultrafilter generated by $x$. Also $F$ lies outside the $G_{X,\tau}$’s, for all $\tau\in S_n.$ Define, as we did before, $f_c$ by $f_c(b)=\{\tau\in S_n: s_{\tau}b\in F\}$. Then clearly for every $\tau\in S_n$ there exists an atom $x$ such that $\tau\in f_c(x)$, so that $S_n=\bigcup_{x\in \At\B} f_c(x)$. For each $a\in B$, let $V_a=S_n$ and let $V$ be the disjoint union of the $V_a$’s. Then $\prod_{a\in B} \wp(V_a)\cong \wp(V)$; let $g$ be this isomorphism. Define $f:\B\to \wp(V)$ by $f(x)=g[(f_a(x): a\in B)]$. Then $f: \B\to \wp(V)$ is an embedding such that $\bigcup_{x\in \At\B}f(x)=V$. Hence $f$ is a complete representation. Here is another proof, inspired by modal logic and taken from Hirsch and Hodkinson [@atomic], with the slight difference that we assume complete additivity, not conjugation. Let $\A\in SA_n$ be completely additive and atomic, so the first order correspondents of the equations are valid in its atom structure $\At\A$. But $\At\A$ is a bounded image of a disjoint union of square frames $\F_i$; that is, there exists a bounded morphism from $\At\A$ to $\bigcup_{i\in I}\F_i$, so that $\Cm\F_i$ is a full set algebra. Dually, the inverse of this bounded morphism is an embedding from $\A$ into $\prod_{i\in I}\Cm\F_i$ that preserves all meets. The previous theorem tells us how to capture (in first order logic) complete representability.
We just spell out first order axioms forcing complete additivity of the substitutions corresponding to replacements. Other substitutions, corresponding to the transpositions, are easily seen to be completely additive anyway; in fact, they are self-conjugate. \[elementary\] The class $CSA_n$ is elementary and is finitely axiomatizable in first order logic. We assume $n>1$; the other cases degenerate to the Boolean case. Let $\At(x)$ be the first order formula expressing that $x$ is an atom. That is, $\At(x)$ is the formula $x\neq 0\land (\forall y)(y\leq x\to y=0\lor y=x)$. For distinct $i,j<n$ let $\psi_{i,j}$ be the formula: $y\neq 0\to \exists x(\At(x)\land s_i^jx\neq 0\land s_i^jx\leq y).$ Let $\Sigma$ be obtained from $\Sigma_n$ by adding $\psi_{i,j}$ for every distinct $i,j\in n$. We show that $CSA_n={\bf Mod}(\Sigma)$. Let $\A\in CSA_n$. Then, by theorem \[converse\], we have $\sum_{x\in X} s_i^jx=1$ for all $i,j\in n$, where $X=\At\A$. Let $i,j\in n$ be distinct. Let $a$ be non-zero; then $a.\sum_{x \in X}s_i^jx=a\neq 0$, hence there exists $x\in X$ such that $a.s_i^jx\neq 0$, and so $\A\models \psi_{i,j}$. Conversely, let $\A\models \Sigma$. Then for all $i,j\in n$, $\sum_{x\in X} s_i^jx=1$. Indeed, assume that for some distinct $i,j\in n$, $\sum_{x\in X}s_i^jx\neq 1$. Let $a=1-\sum_{x\in X} s_i^jx$. Then $a\neq 0$. But then, applying $\psi_{i,j}$ to $a$, there exists $x\in X$ such that $0\neq s_i^jx\leq a$; since also $s_i^jx\leq \sum_{x\in X}s_i^jx=-a$, this is impossible. But for distinct $i, j\in n$, we have $\sum_{x\in X}s_{[i,j]}x=1$ anyway, and so $\sum_{x\in X} s_{\tau}x=1$ for all $\tau\in {}^nn$, and so it readily follows that $\A\in CSA_n.$ Call a completely representable algebra [*square completely representable*]{} if it has a complete representation on a square. If $\A\in SA_n$ is completely representable, then $\A$ is square completely representable. Assume that $\A$ is completely representable. Then each $s_i^j$ is completely additive, for all $i,j\in n$.
For each $a\neq 0$, choose $F_a$ outside the nowhere dense sets $S\sim \bigcup_{x\in \At \A}N_{s_{\tau}x}$, $\tau\in {}^nn$, and define $h_a:\A\to \wp(^nn)$ the usual way, that is, $h_a(x)=\{\tau\in {}^nn: s_{\tau}x\in F_a\}.$ Let $V_a={}^nn\times \{a\}.$ Then $h:\A\to \prod_{a\in \A}\wp(V_a)$ defined via $x\mapsto (h_a(x): a\in \A)$ is a complete representation. But $\prod_{a\in \A, a\neq 0}\wp(V_a)\cong \wp(\bigcup_{a\in \A, a\neq 0} V_a)$ and the latter is square. A variety $V$ is called completion closed if whenever $\A\in V$ is completely additive then its minimal completion $\A^*$ (which exists) is in $V$. On completions, we have: If $\A\in SA_n$ is completely additive, then $\A$ has a strong completion $\A^*$. Furthermore, $\A^*\in SA_n.$ In other words, $SA_n$ is completion closed. The completion can be constructed because the algebra is completely additive. The second part follows from the fact that the stipulated positive equations axiomatizing $SA_n$ are preserved in completions [@v3]. We could add diagonal elements $d_{ij}$ to our signature and consider $SA_n$ enriched by diagonal elements; call this class $DSA_n$. In set algebras whose unit $V$ is locally square, the diagonal $d_{ij}$, $i,j \in n$, will be interpreted as $D_{ij}=\{s\in V: s_i=s_j\}$. All positive results, with no single exception, established for the diagonal free case, i.e. for $SA_n$, will extend to $DSA_n$, as the reader is invited to show. However, the counterexamples constructed above to violate complete representability of atomic algebras, and an omitting types theorem for the corresponding multi-dimensional modal logic, [*do not work*]{} in this new context, because such algebras do not contain the diagonal $D_{01}$, and this part is essential in the proof. We can show though that again the class of completely representable algebras is elementary.
We will return to such an issue in the infinite dimensional case, where even more interesting results hold; for example, square representations and weak ones form an interesting dichotomy.

The Infinite Dimensional Case
-----------------------------

For $SA$’s, we can lift our results to infinite dimensions. We give what we believe is a reasonable generalization of the above theorems to the infinite dimensional case, by allowing weak sets as units, a weak set being a set of sequences that agree cofinitely with some fixed sequence. That is, a weak set is one of the form $\{s\in {}^{\alpha}U: |\{i\in \alpha: s_i\neq p_i\}|<\omega\}$, where $U$ is a set, $\alpha$ an ordinal and $p\in {}^{\alpha}U$. This set will be denoted by $^{\alpha}U^{(p)}$. The set $U$ is called the base of the weak set. A set $V\subseteq {}^{\alpha}\alpha^{(Id)}$ is defined to be di-permutable just as in the finite dimensional case. Altering top elements to be weak sets, rather than squares, turns out to be fruitful and rewarding. We let $WSA_{\alpha}$ be the variety generated by $$\wp(V)=\langle\mathcal{P}(V),\cap,-,S_{ij}, S_i^j\rangle_{i,j\in\alpha},$$ where $V\subseteq {}^\alpha\alpha^{(Id)}$ is locally square. (Recall that $V$ is locally square if whenever $s\in V$, then $s\circ [i|j]\in V$ and $s\circ [i,j]\in V$, for all $i,j\in \alpha$.) Let $\Sigma_{\alpha}$ be the set of finite schemas obtained from $\Sigma_n$ but now allowing indices from $\alpha$. Obviously $\Sigma_{\alpha}$ is infinite, but it has a finitary feature in a two sorted sense: one sort for the ordinals $<\alpha$, the other for the first order situation. Indeed, the system $({\bf Mod}(\Sigma_{\alpha}): \alpha\geq \omega)$ is an instance of what is known in the literature as a system of varieties definable by a finite schema, which means that it is enough to specify a strictly finite subset of $\Sigma_{\omega}$ to define $\Sigma_{\alpha}$ for all $\alpha\geq \omega$.
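To fix ideas, here is an instantiation of the definition just given, with the choices $U=\alpha$ and $p=Id$ that are used for the units of $WSA_{\alpha}$:

```latex
{}^{\alpha}\alpha^{(Id)}=\{s\in{}^{\alpha}\alpha:\ s_i=i \text{ for all but finitely many } i\in\alpha\},
```

so this weak set consists precisely of the finite transformations of $\alpha$; it is itself locally square, since composing such an $s$ with a replacement $[i|j]$ or a transposition $[i,j]$ changes only finitely many values.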
(Strictly speaking, systems of varieties definable by schemes require that we have a unary operation behaving like a cylindrifier, but such a distinction is immaterial in the present context.) More precisely, let $L_{\kappa}$ be the language of $WSA_{\kappa}$. Let $\rho:\alpha\to \beta$ be an injection. One defines for a term $t$ in $L_{\alpha}$ a term $\rho(t)$ in $L_{\beta}$ by recursion as follows: for any variable $v_i$, $\rho(v_i)=v_i$, and for any unary operation $f$, $\rho(f(\tau))=f(\rho(\tau))$. Now let $e$ be a given equation in the language $L_{\alpha}$, say $e$ is the equation $\sigma=\tau$. Then one defines $\rho(e)$ to be the equation $\rho(\sigma)=\rho(\tau)$. Then there exists a finite set $\Sigma\subseteq \Sigma_{\omega}$ such that $\Sigma_{\alpha}=\{\rho(e): \rho:\omega\to \alpha \text { is an injection },e\in \Sigma\}.$ Notice that $\Sigma_{\omega}=\bigcup_{n\geq 2} \Sigma_n.$ Let $SA_{\alpha}={\bf Mod}(\Sigma_{\alpha})$. We give two proofs of the next main representation theorem, but first a definition. Let $\alpha\leq\beta$ be ordinals and let $\rho:\alpha\rightarrow\beta$ be an injection. For any $\beta$-dimensional algebra $\B$ we define an $\alpha$-dimensional algebra $\Rd^\rho\B$, with the same base and Boolean structure as $\B$, where the $(i,j)$th transposition substitution of $\Rd^\rho\B$ is $s_{\rho(i)\rho(j)}\in\B$, and similarly for replacements. For a class $K$, $\Rd^{\rho}K=\{\Rd^{\rho}\A: \A\in K\}$. When $\alpha\subseteq \beta$ and $\rho$ is the identity map on $\alpha$, we write $\Rd_{\alpha}\B$ for $\Rd^{\rho}\B$. Our first proof is more general than the present context; it is basically a lifting argument that can be used to transfer results in the finite dimensional case to infinite dimensions. \[r\] For any infinite ordinal $\alpha$, $SA_{\alpha}=WSA_{\alpha}.$ [First proof]{} First, for $\A\models \Sigma_{\alpha}$, $\rho:n\to \alpha$ an injection, and $n\in \omega,$ we have $\Rd^{\rho}\A\in SA_n$.
For any $n\geq 2$ and $\rho:n\to \alpha$ as above, $SA_n\subseteq\mathbf{S}\Rd^{\rho}WSA_{\alpha}$, as in [@HMT2] theorem 3.1.121. $WSA_{\alpha}$ is, by definition, closed under ultraproducts. Now we show that if $\A\models \Sigma_{\alpha}$, then $\A$ is representable. First, for any $\rho:n\to \alpha$, $\Rd^{\rho}\A\in SA_n$. Hence it is in $\mathbf{S}\Rd^{\rho}WSA_{\alpha}$. Let $I$ be the set of all finite one to one sequences with range in $\alpha$. For $\rho\in I$, let $M_{\rho}=\{\sigma\in I:\rho\subseteq \sigma\}$. Let $U$ be an ultrafilter of $I$ such that $M_{\rho}\in U$ for every $\rho\in I$. Such a $U$ exists, since $M_{\rho}\cap M_{\sigma}=M_{\rho\cup \sigma}.$ Then for $\rho\in I$, there is $\B_{\rho}\in WSA_{\alpha}$ such that $\Rd^{\rho}\A\subseteq \Rd^{\rho}\B_{\rho}$. Let $\C=\prod\B_{\rho}/U$; it is in $\mathbf{Up}WSA_{\alpha}=WSA_{\alpha}$. Define $f:\A\to \prod\B_{\rho}$ by $f(a)_{\rho}=a$, and finally define $g:\A\to \C$ by $g(a)=f(a)/U$. Then $g$ is an embedding. The **second proof** follows from the next lemma, whose proof is identical to the finite dimensional case with obvious modifications. Here, for $\xi\in {}^\alpha\alpha^{(Id)},$ the operator $S_\xi$ works as $S_{\xi\upharpoonright J}$ (which can be defined as in the finite dimensional case, because we have finite support), where $J=\{i\in\alpha:\xi(i)\neq i\}$ (in case $J$ is empty, i.e., $\xi=Id_\alpha,$ $S_\xi$ is the identity operator). \[f\] Let $\A$ be an $SA_\alpha$ type $BAO$ and $G\subseteq{}^\alpha\alpha^{(Id)}$ permutable.
Let $\langle\mathcal{F}_\xi:\xi\in G\rangle$ be a system of ultrafilters of $\A$ such that for all $\xi\in G,\;i\neq j\in \alpha$ and $a\in\A$ the following condition holds:$$S_{ij}^\A(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i,j]}\quad\quad (*).$$ Then the following function $h:\A\longrightarrow\wp(G)$ is a homomorphism $$h(a)=\{\xi\in G: a\in \mathcal{F}_\xi\}.$$ Then as before, we can prove completeness and interpolation (for the corresponding multi dimensional modal logic): \[1\] Let $\alpha$ be an infinite ordinal. Then, we have: $WSA_{\alpha}=SA_{\alpha}$ $SA_{\alpha}$ has the superamalgamation property Like before, undergoing the obvious modifications. In particular, from the first item it readily follows that if $\A\subseteq \wp(^{\alpha}U)$ and $a\in A$ is non-zero, then there exists a homomorphism $f:\A\to \wp(V)$ for some permutable $V$ such that $f(a)\neq 0$. We shall prove a somewhat deep converse of this result, which will later enable us to verify that the quasi-variety of subdirect products of full set algebras is a variety. But first a definition and a result on the number of non-isomorphic models. Let $\A$ and $\B$ be set algebras with bases $U$ and $W$ respectively. Then $\A$ and $\B$ are *base isomorphic* if there exists a bijection $f:U\to W$ such that $\bar{f}:\A\to \B$ defined by ${\bar f}(X)=\{y\in {}^{\alpha}W: f^{-1}\circ y\in X\}$ is an isomorphism from $\A$ to $\B$. An algebra $\A$ is *hereditary atomic* if each of its subalgebras is atomic.
Finite Boolean algebras are hereditary atomic of course, but there are infinite hereditary atomic Boolean algebras; for example, any Boolean algebra generated by its atoms is necessarily hereditary atomic, like the finite-cofinite Boolean algebra. An algebra that is infinite and complete, like that in our example violating complete representability, is not hereditary atomic, whether atomic or not. Hereditary atomic algebras arise naturally as the Tarski-Lindenbaum algebras of certain countable first order theories, and such theories abound. If $T$ is a countable complete first order theory which has an $\omega$-saturated model, then for each $n\in \omega$, the Tarski-Lindenbaum Boolean algebra $\Fm_n/T$ is hereditary atomic. Here $\Fm_n$ is the set of formulas using only $n$ variables. For example $Th(\Q,<)$ is such, with $\Q$ the $\omega$-saturated model. A well known model-theoretic result is that $T$ has an $\omega$-saturated model iff $T$ has countably many $n$-types for all $n$. Algebraically, $n$-types are just ultrafilters in $\Fm_n/T$. And indeed, what characterizes hereditary atomic algebras is that the underlying set of their Stone space, that is, the set of all ultrafilters, is at most countable. \[b\] Let $\B$ be a countable Boolean algebra. If $\B$ is hereditary atomic then the number of ultrafilters is at most countable; of course they are finite if $\B$ is finite. If $\B$ is not hereditary atomic then it has exactly $2^{\omega}$ ultrafilters. See [@HMT1] p. 364-365 for a detailed discussion. A famous conjecture of Vaught says that the number of non-isomorphic countable models of a complete theory is either $\leq \omega$ or exactly $^{\omega}2$. We show that this is the case for the multi (infinite) dimensional modal logic corresponding to $SA_{\alpha}$. Morley’s famous theorem excluded all possible cardinals in between except for $\omega_1$. \[2\] Let $\A\in SA_{\omega}$ be countable and simple.
Then the number of non base isomorphic representations of $\A$ is either $\leq \omega$ or $^{\omega}2$. Furthermore, if $\A$ is assumed completely additive, and $(X_i: i< covK)$ is a family of non-principal types, then the number of models omitting these types is the same. For the first part: if $\A$ is hereditary atomic, then the number of models is at most the number of ultrafilters, hence at most countable. Else, $\A$ is not hereditary atomic, and then it has $^{\omega}2$ ultrafilters. For an ultrafilter $F$, let $h_F(a)=\{\tau \in V: s_{\tau}a\in F\}$, $a\in \A$. Then $h_F\neq 0$; indeed $Id\in h_F(a)$ for any $a\in F$, hence $h_F$ is an injection, by simplicity of $\A$. Now $h_F:\A\to \wp(V)$; all the $h_F$’s have the same target algebra. We claim that $h_F(\A)$ is base isomorphic to $h_G(\A)$ iff there exists a finite bijection $\sigma\in V$ such that $s_{\sigma}F=G$. We set out to confirm our claim. Let $\sigma:\alpha\to \alpha$ be a finite bijection such that $s_{\sigma}F=G$. Define $\Psi:h_F(\A)\to \wp(V)$ by $\Psi(X)=\{\tau\in V:\sigma^{-1}\circ \tau\in X\}$. Then, by definition, $\Psi$ is a base isomorphism. We show that $\Psi(h_F(a))=h_G(a)$ for all $a\in \A$. Let $a\in A$. Let $X=\{\tau\in V: s_{\tau}a\in F\}$. Let $Z=\Psi(X).$ Then $$\begin{split} Z&=\{\tau\in V: \sigma^{-1}\circ \tau\in X\}\\ &=\{\tau\in V: s_{\sigma^{-1}\circ \tau}(a)\in F\}\\ &=\{\tau\in V: s_{\tau}a\in s_{\sigma}F\}\\ &=\{\tau\in V: s_{\tau}a\in G\}\\ &=h_G(a). \end{split}$$ Conversely, assume that $\bar{\sigma}$ establishes a base isomorphism between $h_F(\A)$ and $h_G(\A)$. Then $\bar{\sigma}\circ h_F=h_G$. We show that if $a\in F$, then $s_{\sigma}a\in G$. Let $a\in F$, and let $X=h_{F}(a)$. Then, we have $$\begin{split} \bar{\sigma}(h_{F}(a))&=\bar{\sigma}(X)\\ &=\{y\in V: \sigma^{-1}\circ y\in X\}\\ &=\{y\in V: s_{\sigma^{-1}\circ y}a\in F\}\\ &=h_G(a). \end{split}$$ Now we have $h_G(a)=\{y\in V: s_{y}a\in G\}.$ But $a\in F$.
Hence $\sigma\in h_G(a)$ (since $s_{\sigma^{-1}\circ \sigma}a=a\in F$), so $s_{\sigma}a\in G$; thus $s_{\sigma}F\subseteq G$, and by maximality $s_{\sigma}F=G$. Define the equivalence relation $\sim $ on the set of ultrafilters by $F\sim G$ if there exists a finite permutation $\sigma$ such that $F=s_{\sigma}G$. Then any equivalence class is countable, and so we have $^{\omega}2$ many orbits, which correspond to the non base isomorphic representations of $\A$. For the second part, suppose we want to count the number of representations omitting a family $\bold X=\{X_i:i<\lambda\}$ ($\lambda<covK)$ of non-isolated types of $T$. We assume, without any loss of generality, that the dimension is $\omega$. Let $S$ be the Stone space of $\A$. Then $$\mathbb{H}=S\sim \bigcup_{i\in\lambda,\tau\in W}\bigcap_{a\in X_i}N_{s_\tau a}$$ (where $W=\{\tau\in{}^\omega\omega:|\{i\in \omega:\tau(i)\neq i\}|<\omega\}$) is clearly (by the above discussion) the space of ultrafilters corresponding to representations omitting $\bold X.$ Note that $\mathbb{H}$ is the intersection of two dense sets. But then by properties of $covK$ the union $\bigcup_{i\in\lambda}$ can be reduced to a countable union. We then have that $\mathbb{H}$ is a $G_\delta$ subset of a Polish space, namely the Stone space $S$. So $\mathbb{H}$ is Polish, and moreover $\mathcal{E}'=\sim \cap (\mathbb{H}\times \mathbb{H})$ is a Borel equivalence relation on $\mathbb{H}.$ It follows then that the number of representations omitting $\bold X$ is either countable or else $^{\omega}2.$ The above theorem is not as deep as it might appear on first reading. The relatively simple proof is an instance of the fact that if a countable Polish group acts on an uncountable Polish space, then the number of induced orbits has the cardinality of the continuum, because it factors out an uncountable set by a countable one.
When the Polish group is uncountable, finding the number of orbits is still an open question, of which Vaught's conjecture is an instance (when the group is the symmetric group on $\omega$ acting on the Polish space of pairwise non-isomorphic models). We shall prove that weak set algebras are strongly isomorphic to set algebras in the sense of the following definition. This will enable us to show that $RSA_{\alpha}$, like the finite dimensional case, is also a variety. Let $\A$ and $\B$ be set algebras with units $V_0$ and $V_1$ and bases $U_0$ and $U_1,$ respectively, and let $F$ be an isomorphism from $\B$ to $\A$. Then $F$ is a *strong ext-isomorphism* if $F=(X\cap V_0: X\in B)$. In this case $F^{-1}$ is called a *strong subisomorphism*. An isomorphism $F$ from $\A$ to $\B$ is a *strong ext base isomorphism* if $F=g\circ h$ for some base isomorphism $h$ and some strong ext-isomorphism $g$. In this case $F^{-1}$ is called a [*strong sub base isomorphism.*]{} The following, this time deep, theorem uses ideas of Andréka and Németi, reported in [@HMT2], theorem 3.1.103, on how to square units of so-called weak cylindric set algebras (cylindric algebras whose units are weak spaces): \[weak\] If $\B$ is a subalgebra of $ \wp(^{\alpha}\alpha^{(Id)})$ then there exists a set algebra $\C$ with unit $^{\alpha}U$ such that $\B\cong \C$. Furthermore, the isomorphism is a strong sub-base isomorphism. We square the unit using ultraproducts. We prove the theorem for $\alpha=\omega$. We warn the reader that the proof uses heavy machinery concerning properties of ultraproducts of algebras consisting of infinitary relations. Let $F$ be a non-principal ultrafilter over $\omega$. (For $\alpha>\omega$, one takes an $|\alpha|^+$-regular ultrafilter on $\alpha$.) Then there exists a function $h: \omega\to \{\Gamma: \Gamma\subseteq_{\omega} \omega\}$ such that $\{i\in \omega: \kappa\in h(i)\}\in F$ for all $\kappa<\omega$. Let $M={}^{\omega}U/F$.
$M$ will be the base of our desired algebra, that is, $\C$ will have unit $^{\omega}M.$ Define $\epsilon: U\to {}^{\omega}U/F$ by $$\epsilon(u)=\langle u: i\in \omega\rangle/F.$$ Then it is clear that $\epsilon$ is one to one. For $Y\subseteq {}^{\omega}U$, let $$\bar{\epsilon}(Y)=\{y\in {}^{\omega}(^{\omega}U/F): \epsilon^{-1}\circ y\in Y\}.$$ By an $(F, (U:i\in \omega), \omega)$ choice function we mean a function $c$ mapping $\omega\times {}^{\omega}U/F$ into $^{\omega}U$ such that for all $\kappa<\omega$ and all $y\in {}^{\omega}U/F$, we have $c(\kappa,y)\in y.$ Let $c$ be an $(F, (U:i\in \omega), \omega)$ choice function satisfying the following condition: for all $\kappa, i<\omega$ and all $y\in {}^{\omega}U/F$, if $\kappa\notin h(i)$ then $c(\kappa,y)_i=\kappa$, and if $\kappa\in h(i)$ and $y=\epsilon u$ with $u\in U$ then $c(\kappa,y)_i=u$. Let $\delta: \B\to {}^{\omega}\B/F$ be the following monomorphism $$\delta(b)=\langle b: i\in \omega\rangle/F.$$ Let $t$ be the unique homomorphism mapping $^{\omega}\B/F$ into $\wp{}^{\omega}(^{\omega}U/F)$ such that for any $a\in {}^{\omega}B$ $$t(a/F)=\{q\in {}^{\omega}(^{\omega}U/F): \{i\in \omega: (c^+q)_i\in a_i\}\in F\}.$$ Here $(c^+q)_i=\langle c(\kappa,q_\kappa)_i: \kappa<\omega\rangle.$ It is easy to show that $t$ is well-defined. Assume that $J=\{i\in \omega: a_i=b_i\}\in F$. If $\{i\in \omega: (c^+q)_i\in a_i\}\in F$, then $\{i\in \omega: (c^+q)_i\in b_i\}\in F$. The converse implication is symmetric, and we are done. Now we check that the map preserves the operations. That the Boolean operations are preserved is obvious. So let us check substitutions. It is enough to consider transpositions and replacements. Let $i,j\in \omega.$ Then $s_{[i,j]}g(a)=g(s_{[i,j]}a)$ follows from the simple observation that $(c^+q\circ [i,j])_k\in a$ iff $(c^+q)_k\in s_{[i,j]}a$. The case of replacements is the same; $(c^+q\circ [i|j])_k\in a$ iff $(c^+q)_k\in s_{[i|j]}a.$ Let $g=t\circ \delta$.
Then for $a\in B$, we have $$g(a)=\{q\in {}^{\omega}(^{\omega}U/F): \{i\in \omega: (c^+q)_i\in a\}\in F\}.$$ Let $\C=g(\B)$. Then $g:\B\to \C$. We show that $g$ is an isomorphism onto a set algebra. First, it is clear that $g$ is a monomorphism; indeed if $a\neq 0$, then $g(a)\neq \emptyset$. Now $g$ maps $\B$ into an algebra with unit $g(V)$. Recall that $M={}^{\omega}U/F$. Evidently $g(V)\subseteq {}^{\omega}M$. We show the other inclusion. Let $q\in {}^{\omega}M$. It suffices to show that $(c^+q)_i\in V$ for all $i\in\omega$. So, let $i\in \omega$. Note that $(c^+q)_i\in {}^{\omega}U$. If $\kappa\notin h(i)$ then we have $$(c^+q)_i\kappa=c(\kappa, q\kappa)_i=\kappa.$$ Since $h(i)$ is finite the conclusion follows. We now prove that for $a\in B$ $$(*) \ \ \ g(a)\cap \bar{\epsilon}V=\{\epsilon\circ s: s\in a\}.$$ Let $\tau\in V$. Then there is a finite $\Gamma\subseteq \omega$ such that $$\tau\upharpoonright (\omega\sim \Gamma)= Id\upharpoonright (\omega\sim \Gamma).$$ Let $Z=\{i\in \omega: \Gamma\subseteq h(i)\}$. By the choice of $h$ we have $Z\in F$. Let $\kappa<\omega$ and $i\in Z$. We show that $c(\kappa,\epsilon\tau \kappa)_i=\tau \kappa$. If $\kappa\in \Gamma,$ then $\kappa\in h(i)$ and so $c(\kappa,\epsilon \tau \kappa)_i=\tau \kappa$. If $\kappa\notin \Gamma,$ then $\tau \kappa=\kappa$ and $c(\kappa,\epsilon \tau \kappa)_i=\tau\kappa.$ We now prove $(*)$. Let us suppose that $q\in g(a)\cap {\bar{\epsilon}}V$. Since $q\in \bar{\epsilon}V$ there is an $s\in V$ such that $q=\epsilon\circ s$. Choose $Z\in F$ such that $$c(\kappa, \epsilon(s\kappa))\supseteq\langle s\kappa: i\in Z\rangle$$ for all $\kappa<\omega$. This is possible by the above. Let $H=\{i\in \omega: (c^+q)_i\in a\}$. Then $H\in F$. Since $H\cap Z$ is in $F$ we can choose $i\in H\cap Z$. Then we have $$s=\langle s\kappa: \kappa<\omega\rangle= \langle c(\kappa, \epsilon(s\kappa))_i:\kappa<\omega\rangle= \langle c(\kappa,q\kappa)_i:\kappa<\omega\rangle=(c^+q)_i\in a.$$ Thus $q=\epsilon \circ s$ with $s\in a$.
Now suppose that $q=\epsilon\circ s$ with $s\in a$. Since $a\subseteq V$ we have $q\in \bar{\epsilon}V$. Again let $Z\in F$ be such that for all $\kappa<\omega$ $$c(\kappa, \epsilon s \kappa)\supseteq \langle s\kappa: i\in Z\rangle.$$ Then $(c^+q)_i=s\in a$ for all $i\in Z.$ So $q\in g(a).$ Note that $\bar{\epsilon}V\subseteq {}^{\omega}(^{\omega}U/F)$. Let $rl_{\bar{\epsilon}V}^{\C}$ be the function with domain $\C$ (onto $\bar{\epsilon}(\B)$) such that $$rl_{\bar{\epsilon}V}^{\C}Y=Y\cap \bar{\epsilon}V.$$ Then we have proved that $$\bar{\epsilon}=rl_{\bar{\epsilon}V}^{\C}\circ g.$$ It follows that $g$ is a strong sub-base-isomorphism of $\B$ onto $\C$. Like the finite dimensional case, we get: \[v2\] $\mathbf{SP}\{ \wp(^{\alpha}U): \text {$U$ a set }\}$ is a variety. Let $\A\in SA_\alpha$. Then for $a\neq 0$ there exist a weak set algebra $\B$ and $f:\A\to \B$ such that $f(a)\neq 0$. By the previous theorem there is a set algebra $\C$ such that $\B\cong \C$, via $g$ say. Then $g\circ f(a)\neq 0$, and we are done. We then readily obtain: \[3\] Let $\alpha$ be infinite. Then $$WSA_{\alpha}= {\bf Mod}(\Sigma_{\alpha})={\bf HSP}\{{}\wp(^{\alpha}\alpha^{(Id)})\}={\bf SP}\{\wp(^{\alpha}U): U \text { a set }\}.$$ Here we show that the class of subdirect products of Pinter's algebras is not a variety; this was not proved by Sági. \[notvariety\] For infinite ordinals $\alpha$, $RPA_{\alpha}$ is not a variety. Assume to the contrary that $RPA_{\alpha}$ is a variety and that $RPA_{\alpha}={\bf Mod}(\Sigma_{\alpha})$ for some (countable) schema $\Sigma_{\alpha}.$ Fix $n\geq 2.$ We show that for any set $U$ and any ideal $I$ of $\A=\wp(^nU)$, we have $\A/I\in RPA_n$, which is not possible, since we know that there are set algebras relativized to permutable sets that are not in $RPA_n$. Define $f:\A\to \wp(^{\alpha}U)$ by $f(X)=\{s\in {}^{\alpha}U: s\upharpoonright n\in X\}$.
Then $f$ is an embedding of $\A$ into $\Rd_n(\wp({}^{\alpha}U))$, so that we can assume that $\A\subseteq \Rd_n\B$, for some $\B\in RPA_{\alpha}.$ Let $I$ be an ideal of $\A$, and let $J=\Ig^{\B}I$. Then we claim that $J\cap \A=I$. One inclusion is trivial; we need to show $J\cap \A\subseteq I$. Let $y\in A\cap J$. Then $y\in \Ig^{\B}I$, and so there are a term $\tau$ and $x_1,\ldots, x_n\in I$ such that $y\leq \tau(x_1,\ldots, x_n)$. But $\tau(x_1,\ldots, x_{n})\in I$ and $y\in A$, hence $y\in I$, since ideals are closed downwards. It follows that $\A/I$ embeds into $\Rd_n(\B/J)$ via $x/I\mapsto x/J$. The map is well defined since $I\subseteq J$, and it is one to one because, if $x,y\in A$ are such that $x\delta y\in J$, then $x\delta y\in I$, where $\delta$ denotes symmetric difference. We have $\B/J\models \Sigma_{\alpha}$. For $\beta$ an ordinal, let $K_{\beta}$ denote the class of all full set algebras of dimension $\beta$. Then ${\bf SP}\Rd_nK_{\alpha}\subseteq {\bf SP}K_n$. It is enough to show that $\Rd_nK_{\alpha}\subseteq {\bf SP}K_n$, and for that it suffices to show that if $\A\subseteq \Rd_n(\wp({}^{\alpha}U))$, then $\A$ is embeddable in $\wp(^nW)$, for some set $W$. Let $\B=\wp({}^{\alpha}U)$. Just take $W=U$ and define $g:\B\to \wp(^nU)$ by $g(X)=\{f\upharpoonright n: f\in X\}$. Then $g\upharpoonright \A$ is the desired embedding. Now let $\B'=\B/I$; then $\B'\in {\bf SP}K_{\alpha}$, so $\Rd_n\B'\in \Rd_n{\bf SP}K_{\alpha}={\bf SP}\Rd_nK_{\alpha}\subseteq {\bf SP}K_n$. Hence $\A/I\in RPA_n$. But this cannot happen for all $\A\in K_n$, and we are done. Next we approach the issue of representations preserving infinitary joins and meets. But first a lemma. Let $\A\in SA_{\alpha}$. If $X\subseteq \A$ is such that $\sum X=0$, and there exists a representation $f:\A\to \wp(V)$ such that $\bigcap_{x\in X}f(x)=\emptyset$, then for all $\tau\in {}^{\alpha}{\alpha}^{(Id)}$, $\sum_{x\in X} s_{\tau}x=0$.
In particular, if $\A$ is completely representable, then for every $\tau\in {}^{\alpha}\alpha^{(Id)}$, $s_{\tau}$ is completely additive. The proof is like that of the finite dimensional case. \[counterinfinite\] For any $\alpha\geq \omega$, there are an $\A\in SA_{\alpha}$ and $S\subseteq \A$ such that $\sum S$ is not preserved by $s_0^1$. In particular, the omitting types theorem fails for our multi-modal logic. The second part follows from the previous lemma. Now we prove the first part. Let $\B$ be the Stone representation of some atomless Boolean algebra, with unit $U$ in the Stone representation. Let $$R=\{\times_{i\in \alpha} X_i: X_i\in \B \text { and $X_i=U$ for all but finitely many $i$} \}$$ and $$A=\{\bigcup S: S\subseteq R, |S|<\omega\}$$ $$S=\{X\times \sim X\times \times_{i\geq 2} U: X\in B\}.$$ Then one proceeds exactly like the finite dimensional case, theorem \[counter\], showing that the sum $\sum S$ is not preserved under $s_0^1$. Like the finite dimensional case, adapting the counterexample to infinite dimensions, we have: \[counterinfinite2\] There is an atomic $\A\in SA_{\alpha}$ such that $\A$ is not completely representable. First it is clear that if $V$ is any weak space, then $\wp(V)\models \Sigma$. Let $(Q_n: n\in \omega)$ be a sequence of $\alpha$-ary relations such that $(Q_n: n\in \omega)$ is a partition of $$V={}^{\alpha}\alpha^{(\bold 0)}=\{s\in {}^{\alpha}\alpha: |\{i: s_i\neq 0\}|<\omega\}.$$ Take $Q_0=\{s\in V: s_0=s_1\}$, and for each $n\in \omega\sim 0$, take $Q_n=\{s\in V: s_0\neq s_1, \sum s_i=n\}.$ (Note that this is a finite sum.) Clearly for $n\neq m$, we have $Q_n\cap Q_m=\emptyset$, and $\bigcup Q_n=V.$ Furthermore, each $Q_n$ is symmetric, that is, $S_{[i,j]}Q_n=Q_n$ for all $i,j\in \alpha$. Now fix $F$, a non-principal ultrafilter on $\mathcal{P}(\mathbb{Z}^+)$.
For each $X\subseteq \mathbb{Z}^+$, define $$R_X = \begin{cases} \bigcup \{Q_n: n\in X\} & \text { if }X\notin F, \\ \bigcup \{Q_n: n\in X\cup \{0\}\} & \text { if } X\in F \end{cases}$$ Let $$\A=\{R_X: X\subseteq \mathbb{Z}^+\}.$$ Then $\A$ is an atomic set algebra, and its atoms are $R_{\{n\}}=Q_n$ for $n\in \mathbb{Z}^+$. (Since $F$ is non-principal, $\{n\}\notin F$ for every $n$, so indeed $R_{\{n\}}=Q_n$.) Then one proceeds exactly as in the finite dimensional case, theorem \[counter2\]. Let $CRSA_{\alpha}$ be the class of completely representable algebras of dimension $\alpha$; then we have: For $\alpha\geq \omega$, $CRSA_{\alpha}$ is elementary; indeed, it is axiomatizable by a finite schema. Let $\At(x)$ be the formula $x\neq 0\land (\forall y)(y\leq x\to y=0\lor y=x)$. For distinct $i,j<\alpha$ let $\psi_{i,j}$ be the formula: $y\neq 0\to \exists x(\At(x)\land s_i^jx\neq 0\land s_i^jx\leq y).$ Let $\Sigma$ be obtained from $\Sigma_{\alpha}$ by adding $\psi_{i,j}$ for every distinct $i,j\in \alpha$. These axioms force additivity of the operations $s_i^j$ for every $i,j\in \alpha$. The rest is like the finite dimensional case. The following theorem can be easily distilled from the literature. $SA_{\alpha}$ is a Sahlqvist variety, hence it is canonical, $\Str SA_{\alpha}$ is elementary, and ${\bf S}\Cm(\Str SA_{\alpha})=SA_{\alpha}.$ We know that if $\A$ is representable on a weak unit, then it is representable on a square one. But for complete representability this is not at all clear, because the isomorphism defined in \[weak\] might not preserve arbitrary joins. For canonical extensions, we can guarantee complete representations. Let $\A\in SA_{\alpha}$. Then $\A^+$ is completely representable on a weak unit. Let $S$ be the Stone space of $\A$, and for $a\in \A$, let $N_a$ denote the clopen set consisting of all ultrafilters containing $a$. The idea is that the operations are completely additive in the canonical extension.
Indeed, for $\tau\in {}^{\alpha}\alpha^{(Id)}$, we have $$s_{\tau}\sum X=s_{\tau}\bigcup X=\bigcup s_{\tau}X=\sum s_{\tau}X.$$ (Indeed this is true for any full complex algebra of an atom structure, and $\A^+=\Cm\Uf \A$.) In particular, since $\sum \At\A=1$, because $\A$ is atomic, we have $\sum s_{\tau}\At\A=1$, for each $\tau$. Then we proceed as in the finite dimensional case for transposition algebras. Given any such $\tau$, let $G(\At \A, \tau)$ be the following nowhere dense subset of the Stone space of $\A$: $$G(\At \A, \tau)=S\sim \bigcup_{x\in \At\A} N_{s_{\tau}x}.$$ Now given non-zero $a$, let $F$ be a principal ultrafilter generated by an atom below $a$. Then $F\notin \bigcup_{\tau\in {}^{\alpha}\alpha^{(Id)}} G(\At\A, \tau)$, and the map $h$ defined via $x\mapsto \{\tau\in {}^{\alpha}\alpha^{(Id)}: s_{\tau}x\in F\}$, as can easily be checked, establishes the complete representation. We do not know whether canonical extensions are completely representable on square units. For any finite $\beta$, $\Fr_{\beta}SA_{\alpha}$ is infinite. Furthermore, if $\beta$ is infinite, then $\Fr_{\beta}SA_{\alpha}$ is atomless. In particular, $SA_{\alpha}$ is not locally finite. For the first part, we consider the case when $\beta=1$. Assume that $b$ is the free generator. First we show that for any finite permutation $\tau$ that is not the identity, $s_{\tau}b\neq b$. Let such a $\tau$ be given. Let $\A=\wp(^{\alpha}U)$, and let $X\in \A$ be such that $s_{\tau}X\neq X.$ Such an $X$ obviously exists. Assume for contradiction that $s_{\tau}b=b$. Let $\B=\Sg^{\A}\{X\}$. Then, by freeness, there exists a surjective homomorphism $f:\Fr_{\beta}SA_{\alpha}\to \B$ such that $f(b)=X$. Hence $$s_{\tau}X=s_{\tau}f(b)=f(s_{\tau}b)=f(b)=X,$$ which is impossible. We have proved our claim. Now consider the following subset of $\Fr_{\beta}SA_{\alpha}$: $S=\{s_{[i,j]}b: i,j\in \alpha\}$.
Then for $i,j,k,l\in \alpha$ with $\{i,j\}\neq \{k,l\}$, we have $s_{[i,j]}b\neq s_{[k,l]}b$, for otherwise we would get $s_{[i,j]}s_{[k,l]}b=s_{\sigma}b=b $ with $\sigma\neq Id$. It follows that $S$ is infinite, and so is $\Fr_{\beta}SA_{\alpha}.$ The proof for $\beta>1$ is the same. For the second part, let $X$ be the infinite generating set. Let $a\in A$ be non-zero. Then there is a finite set $Y\subseteq X$ such that $a\in \Sg^{\A} Y$. Let $y\in X\sim Y$. Then by freeness, there exist homomorphisms $f:\A\to \A$ and $h:\A\to \A$ such that $f(\mu)=h(\mu)=\mu$ for all $\mu\in Y$, while $f(y)=1$ and $h(y)=0$. Then $f(a)=h(a)=a$. Hence $f(a.y)=h(a.-y)=a\neq 0$, and so $a.y\neq 0$ and $a.-y\neq 0$. Thus $a$ cannot be an atom. For Pinter's algebras the second part applies equally well. For the first part one takes, for distinct $i,j,k,l$ such that $\{i,j\}\cap \{k,l\}=\emptyset$, a relation $X$ in the full set algebra such that $s_i^jX\neq s_k^lX$, and so the set $\{s_i^jb: i,j\in \alpha\}$ will be infinite as well.

Adding Diagonals
================

We now show that adding equality to our infinite dimensional modal logic, algebraically reflected by adding diagonals, does not affect the positive representability results obtained for $SA_{\alpha}$ so far. Also, in this context, atomicity does not imply complete representability. However, we lose elementarity of the class of square completely representable algebras, which is an interesting twist. We start by defining the concrete algebras, then we provide the finite schema axiomatization. The class of *Representable Diagonal Set Algebras* is defined to be $$RDSA_{\alpha}=\mathbf{SP}\{\langle\mathcal{P}(^{\alpha}U); \cap,\sim,S^i_j,S_{ij}, D_{ij}\rangle_{i\neq j\in \alpha}: U\text{ \emph{is a set}}\}$$ where $S_j^i$ and $S_{ij}$ are as before and $D_{ij}=\{q\in {}^{\alpha}U: q_i=q_j\}$. We show that $RDSA_{\alpha}$ is a variety that can be axiomatized by a finite schema.
Let $L_{\alpha}$ be the language of $SA_{\alpha}$ enriched by constants $\{d_{ij}: i,j\in \alpha\}.$ Let $\Sigma'^d_{\alpha}$ be the axiomatization in $L_{\alpha}$ obtained by adding to $\Sigma'_{\alpha}$ the following equations, for all $i,j,k<\alpha$:

1. $d_{ii}=1$

2. $d_{i,j}=d_{j,i}$

3. $d_{i,k}.d_{k,j}\leq d_{i,j}$

4. $s_{\tau}d_{i,j}=d_{\tau(i), \tau(j)}$, $\tau\in \{[i,j], [i|j]\}$.

\[infinite\] For any infinite ordinal $\alpha$, we have ${\bf Mod}(\Sigma'^d_{\alpha})=RDSA_{\alpha}$. Let $\A\in\mathbf{Mod}(\Sigma'^d_{\alpha})$ and let $0^\A\neq a\in A$. We construct a homomorphism $h:\A\longrightarrow\wp (^{\alpha}\alpha^{(Id)})$ such that $h(a)\neq 0$. Like before, choose an ultrafilter $\mathcal{F}\subseteq A$ containing $a$. Let $h:\A\longrightarrow \wp(^{\alpha}\alpha^{(Id)})$ be the following function: $h(z)=\{\xi\in {}^{\alpha}\alpha^{(Id)}:S_{\xi}^\A(z)\in\mathcal{F}\}.$ The function $h$ respects substitutions, but it may not respect the newly added diagonal elements. To ensure that it does, we factor out $\alpha$, the base of the set algebra, by a congruence relation. Define the following equivalence relation $\sim$ on $\alpha$: $i\sim j$ iff $d_{ij}\in F$. Using the axioms for diagonals, $\sim$ is indeed an equivalence relation. Let $V={}^{\alpha}\alpha^{(Id)},$ and $M=V/\sim$. For $h\in V$ we write $h=\bar{\tau}$, if $h(i)=\tau(i)/\sim$ for all $i\in \alpha$. Of course $\tau$ may not be unique. Now define $f(z)=\{\bar{\xi}\in M: S_{\xi}^{\A}(z)\in \mathcal{F}\}$. We first check that $f$ is well defined. We use extensively the property $(s_{\tau}\circ s_{\sigma})x=s_{\tau\circ \sigma}x$ for all $\tau,\sigma\in {}^{\alpha}\alpha^{(Id)}$, a property that can be inferred from our axiomatization. We proceed by induction on the cardinality of $$J=\{i\in \alpha: \sigma (i)\neq \tau (i)\}.$$ Of course $J$ is finite. If $J$ is empty, the result is obvious. Otherwise assume that $k\in J$. We introduce a piece of notation.
For $\eta\in V$ and $k,l<\alpha$, write $\eta(k\mapsto l)$ for the $\eta'\in V$ that is the same as $\eta$ except that $\eta'(k)=l.$ Now take any $$\lambda\in \{\eta\in \alpha: \sigma^{-1}\{\eta\}= \tau^{-1}\{\eta\}=\{\eta\}\}$$ We have (a) $$s_{\sigma}x=s_{\sigma k}^{\lambda}s_{\sigma (k\mapsto \lambda)}x.$$ Also we have (b) $$s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}. s_{\sigma} x) =d_{\tau k, \sigma k}. s_{\sigma} x,$$ and (c) $$s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{\sigma(k\mapsto \lambda)}x) = d_{\tau k, \sigma k}.s_{\sigma(k\mapsto \tau k)}x,$$ and (d) $$d_{\lambda, \sigma k}.s_{\sigma k}^{\lambda}s_{{\sigma}(k\mapsto \lambda)}x= d_{\lambda, \sigma k}.s_{{\sigma}(k\mapsto \lambda)}x.$$ Then by (b), (a), (d) and (c), we get $$d_{\tau k, \sigma k}.s_{\sigma} x= s_{\tau k}^{\lambda}(d_{\lambda,\sigma k}.s_{\sigma}x) =s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{\sigma k}^{\lambda} s_{{\sigma}(k\mapsto \lambda)}x) =s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{{\sigma}(k\mapsto \lambda)}x) = d_{\tau k, \sigma k}.s_{\sigma(k\mapsto \tau k)}x.$$ The conclusion follows from the induction hypothesis. Clearly $f$ respects diagonal elements. Now using exactly the technique in theorem \[weak\] (one can easily check that the isomorphism defined there respects diagonal elements), we can square the weak unit, obtaining the desired result. All the positive representation theorems \[1\], \[2\], \[weak\], \[v2\], proved for the diagonal free case, hold here. But the negative ones do not, because our counterexamples [*do not*]{} contain diagonal elements. Is it the case that, for $\alpha\geq \omega$, $RDSA_{\alpha}$ is conjugated, hence completely additive? If the answer is affirmative, then we would get all the positive results formulated for transposition algebras, given in the second subsection of the next section (for infinite dimensions).
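The diagonal axioms (1)-(4) above can be checked mechanically in a small full diagonal set algebra. The following sketch does this for dimension $3$ over a two-element base (Python; the encoding and all helper names are ours):

```python
from itertools import product

# Machine check (a sketch; all names ours) that the full diagonal set algebra
# P(^3 U) with U = {0,1} satisfies the diagonal schema (1)-(4).
n = 3
U = (0, 1)
V = list(product(U, repeat=n))
unit = frozenset(V)

def d(i, j):                               # D_ij = {q : q_i = q_j}
    return frozenset(q for q in V if q[i] == q[j])

def s(tau, X):                             # s_tau(X) = {q : q o tau in X}
    return frozenset(q for q in V if tuple(q[tau[k]] for k in range(n)) in X)

def transposition(i, j):                   # [i,j]
    t = list(range(n)); t[i], t[j] = t[j], t[i]
    return tuple(t)

def replacement(i, j):                     # [i|j]: send i to j, fix the rest
    t = list(range(n)); t[i] = j
    return tuple(t)

for i in range(n):
    assert d(i, i) == unit                                  # (1) d_ii = 1
    for j in range(n):
        assert d(i, j) == d(j, i)                           # (2) d_ij = d_ji
        for k in range(n):
            assert d(i, k) & d(k, j) <= d(i, j)             # (3) d_ik.d_kj <= d_ij
        for tau in (transposition(i, j), replacement(i, j)):
            assert s(tau, d(i, j)) == d(tau[i], tau[j])     # (4)
```

Note that for the replacement $[i|j]$, axiom (4) yields $s_{[i|j]}d_{ij}=d_{jj}=1$, which the check above confirms as a special case.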
In the finite dimensional case, we could capture [*square*]{} complete representability by stipulating that all the operations are completely additive. However, when we have one single diagonal element, this does not suffice. Indeed, using a simple cardinality argument of Hirsch and Hodkinson, which fits perfectly here, we get the following slightly surprising result: \[hh\] For $\alpha\geq \omega$, the class of square completely representable algebras is not elementary. In particular, there is an algebra that is completely representable, but not square completely representable. [@Hirsh]. Let $\C\in RDSA_{\alpha}$ be such that $\C\models d_{01}<1$. Such algebras exist; for example one can take $\C$ to be $\wp(^{\alpha}2).$ Assume that $f: \C\to \wp(^{\alpha}X)$ is a square complete representation. Since $\C\models d_{01}<1$, there is $s\in f(-d_{01})$, so that if $x=s_0$ and $y=s_1$, we have $x\neq y$. For any $S\subseteq \alpha$ such that $0\in S$, set $a_S$ to be the sequence whose $i$th coordinate is $x$ if $i\in S$, and $y$ if $i\in \alpha\sim S$. By complete representability every $a_S$ is in $f(1)$ and so in $f(\mu)$ for some unique atom $\mu$. Let $S, S'\subseteq \alpha$ be distinct, each containing $0$. Then there exists $i<\alpha$ such that $i\in S$ and $i\notin S'$ (say). So $a_S\in f(d_{0i})$ and $a_{S'}\in f(-d_{0i}).$ Therefore atoms corresponding to different $a_S$'s are distinct. Hence the number of atoms is at least the number of subsets of $\alpha$ that contain $0$, so it is at least $^{|\alpha|}2$. Now, using the downward Löwenheim-Skolem-Tarski theorem, take an elementary substructure $\B$ of $\C$ with $|\B|\leq |\alpha|.$ Then $\B\models d_{01}<1$. But $\B$ has at most $|\alpha|$ atoms, and so $\B$ cannot be [*square*]{} completely representable (though it is completely representable on a weak unit).

Axiomatizing the quasi-varieties
================================

We start this section by proving a somewhat general result.
It is a non-trivial generalisation of Sági's result providing an axiomatization for the quasivariety of full replacement algebras [@sagiphd]; the latter result is obtained by taking $T$ to be the submonoid of $^nn$ consisting of the identity together with all non-bijective maps on $n$. Let $T$ be a submonoid of $^nn$ and let $$G=\{\xi\in S_n: \xi\circ \sigma\in T,\text{ for all }\sigma\in T\}.$$ Let $RT_n$ be the class of subdirect products of full set algebras, in the similarity type of $T$, and let $\Sigma_n$ be the axiomatization of the variety generated by $RT_n$, obtained from any presentation of $T$. (We know that every finite monoid has a presentation.) We now give an axiomatization of the quasivariety $RT_n$, which may not be a variety. For all $n\in\omega,$ $n\geq 2$, the set of quasiequations $\Sigma^q_n$ is defined to be $$\Sigma^q_n=\Sigma_n\cup\{\bigwedge_{\sigma\in T}s_{\sigma}(x_{\sigma})=0\Rightarrow \bigwedge_{\sigma\in T} s_{i\circ \sigma}(x_{\sigma})=0: i \in G\}.$$ Let $\A$ be an algebra of the same similarity type as $RT_n$. Let $\xi\in {}^nn$ and let $F$ be an ultrafilter over $\A.$ Then $F_\xi$ denotes the following subset of $A$: $$F_\xi = \begin{cases} \{t\in A:(\forall \sigma\in T)(\exists a_{\sigma}\in A)(S_{\xi\circ\sigma}(a_{\sigma})\in F \mbox{ and } t\geq\bigwedge_{\sigma\in T}S_{\sigma} (a_{\sigma}))\} & \text{ if }\xi\in G\\ \{a\in A: S^\A_\xi(a)\in F\} & \text{otherwise } \end{cases}$$ The proof of the following theorem is the same as Sági's corresponding proof for Pinter's algebras [@sagiphd], modulo the obvious replacements, and is therefore omitted. Let $\A\in {\bf Mod}(\Sigma^q_n),$ let $\xi\in {}^nn$, and let $F$ be an ultrafilter over $\A.$ Then $F_\xi$ is a proper filter over $\A$. Let $\A\in {\bf Mod}(\Sigma^q_n)$ and let $F$ be an ultrafilter over $\A.$ Then $F_{Id}\subseteq F$. Let $\A\in {\bf Mod}(\Sigma^q_n)$ and let $F$ be an ultrafilter over $\A.$ For every $\xi\in {}^nn$ let us choose an ultrafilter $F^*_\xi$ containing $F_\xi$, such that $F^*_{Id}=F$.
Then the following condition holds for this system of ultrafilters: $$(\forall \xi\in {}^nn)(\forall \sigma\in T)(\forall a\in A)(S_{\sigma}^\A(a)\in F^*_\xi\Leftrightarrow a\in F^*_{\xi\circ\sigma})$$ For finite $n$, we have ${\bf Mod}(\Sigma^q_n)=RT_n.$ Soundness is immediate [@sagiphd]. Now we prove completeness. Let $\eta\in {}^nn$, and let $a\in A$ be arbitrary. If $\eta=Id$ or $\eta\notin G$, then $F_{\eta}$ is the inverse image of $F$ with respect to $s_{\eta}$, and so we are done. Else $\eta\in G$, and so for all $\sigma\in T$, we have that $F_{\eta\circ \sigma}$ is an ultrafilter. Let $a_{\sigma}=a$ and $a_f=1$ for $f\in T$ with $f\neq \sigma$. Now $a_{\tau}\in F$ for all $\tau\in T$. Hence $s_{\eta\circ \tau}(a_{\tau})\in F$, but $s_{\eta}a\geq \prod s_{\tau}(a_{\tau})$, and we are done. Now, with the availability of $F_{\eta}$ for every $\eta\in {}^nn,$ we can represent our algebra on square units by $f(x)=\{\tau\in {}^nn: x\in F_{\tau}\}.$

Transpositions only
--------------------

Consider the case when $T=S_n$, so that we have substitutions corresponding to transpositions. This turns out to be an interesting case with a plethora of positive results, with the sole exception that the class of subdirect products of set algebras is [*not*]{} a variety; it is only a quasi-variety. We can proceed exactly like before, obtaining a finite equational axiomatization for the variety generated by full set algebras by translating a presentation of $S_n$. In this case all the operations (corresponding to transpositions) are self-conjugate (because a transposition is its own inverse), so that our variety, call it $V$, is conjugated, hence completely additive. Now, the following can be proved exactly like before, undergoing the obvious modifications: $V$ is finitely axiomatizable; $V$ is locally finite; $V$ has the superamalgamation property; $V$ is canonical and atom canonical, hence $\At V=\Str V$ is elementary, and finitely axiomatizable;
$V$ is closed under canonical extensions and completions; $\L_V$ enjoys an omitting types theorem; atomic algebras are completely representable. Here we give a different proof (inspired by the duality theory of modal logic) that $V$ has the superamalgamation property. The proof, as well as the other one implemented earlier for $SA_{\alpha}$, works verbatim for any submonoid of $^nn$. Recall that a frame of type $TA_n$ is a first order structure $\F=(V, S_{ij})_{i,j\in n}$ where $V$ is an arbitrary set and $S_{ij}$ is a binary relation on $V$ for all $i, j\in n$. Given a frame $\F$, its complex algebra will be denoted by $\F^+$; $\F^+$ is the algebra $(\wp(\F), s_{ij})_{i,j}$ where for $X\subseteq V$, $s_{ij}(X)=\{s\in V: \exists t\in X, (t, s)\in S_{i,j} \}$. For $K\subseteq TA_n,$ we let $\Str K=\{\F: \F^+\in K\}.$ For a variety $V$, it is always the case that $\Str V\subseteq \At V$, and equality holds if the variety is atom-canonical. If $V$ is canonical, then $\Str V$ generates $V$ in the strong sense, that is, $V= {\bf S}\Cm \Str V$. For Sahlqvist varieties, as is our case, $\Str V$ is elementary. Given a family $(\F_i)_{i\in I}$ of frames, a [*zigzag product*]{} of these frames is a substructure $S$ of $\prod_{i\in I}\F_i$ such that the projection maps restricted to $S$ are onto. Let $\F, \G, \H$ be frames, and let $f:\G\to \F$ and $h:\H\to \F$. Then $INSEP=\{(x,y)\in \G\times \H: f(x)=h(y)\}$. The frame $INSEP \upharpoonright G\times H$ is a zigzag product of $\G$ and $\H$, such that $f\circ \pi_0=h\circ \pi_1$, where $\pi_0$ and $\pi_1$ are the projection maps ([@Marx] 5.2.4). For $h:\A\to \B$, $h_+$ denotes the function from $\Uf\B$ to $\Uf\A$ defined by $h_+(u)=h^{-1}[u]$, where the latter is $\{x\in A: h(x)\in u\}.$ For an algebra $\C$, Marx denotes the ultrafilter frame of $\C$ by $\C_+,$ and proves ([@Marx] lemma 5.2.6): Assume that $K$ is a canonical variety and $\Str K$ is closed under finite zigzag products.
Then $K$ has the superamalgamation property. [Sketch of proof]{} Let $\A, \B, \C\in K$ and let $f:\A\to \B$ and $h:\A\to \C$ be given monomorphisms. Then $f_+:\B_+\to \A_+$ and $h_+:\C_+\to \A_+$. We have that $INSEP=\{(x,y): f_+(x)=h_+(y)\}$ is a zigzag connection. Let $\F$ be the zigzag product $INSEP\upharpoonright \B_+\times \C_+$. Then $\F^+$ is a superamalgam. The variety $TA_n$ has $SUPAP$. Since $TA_n$ can easily be defined by positive equations, it is canonical. The first order correspondents of the positive equations, translated to the class of frames, will be Horn formulas, hence clausifiable ([@Marx] theorem 5.3.5), and so $\Str K$ is closed under finite zigzag products. Marx's theorem finishes the proof. The following example is joint with Mohammed Assem (personal communication). \[not\] For $n\geq 2$, $RTA_n$ is not a variety. Let us denote by $\sigma$ the quasi-equation $$s_f(x)=-x\longrightarrow 0=1,$$ where $f$ is any permutation. We claim that for all $k\leq n,$ $\sigma$ holds in the small algebra $\A_{nk}$ (or, more generally, in any set algebra with square unit). This can be seen using a constant map in $^nk.$ More precisely, let $q\in {}^nk$ be an arbitrary constant map and let $X$ be any subset of $^nk.$ We have two cases for $q$: either $q\in X$ or $q\in -X$. In either case, noticing that $q\in X\Leftrightarrow q\in S_f(X),$ it cannot be the case that $S_f(X)=-X.$ Thus, the implication $\sigma$ holds in $\A_{nk}.$ It follows, then, that $RTA_n\models\sigma$ (because the operators $\mathbf{S}$ and $\mathbf{P}$ preserve quasi-equations). Now we are going to show that there exist some $\B\in PTA_n$ and a specific permutation $f$ such that $\B\nvDash\sigma.$ Let $G\subseteq {}^nn$ be the following permutable set $$G=\{s\in {}^n2:|\{i:s(i)=0\}|=1\}.$$ Let $\B=\wp(G)$; then $\wp(G)\in PTA_n.$ Let $f$ be the permutation defined as follows. For $n=2,3,$ $f$ is simply the transposition $[0,1]$.
For larger $n$: $$f = \begin{cases} [0,1]\circ[2,3]\circ\ldots\circ[n-2,n-1] & \text{if $n$ is even}, \\ [0,1]\circ[2,3]\circ\ldots\circ[n-3,n-2] & \text{if $n$ is odd} \end{cases}$$ Notice that $f$ is a composition of disjoint transpositions. Let $X$ be the following subset of $G$: $$X=\{e_i:i\mbox{ is odd, }i<n\},$$ where $e_i$ denotes the map that sends every element to $1$ except that the $i$th element is mapped to $0$. It is easy to see that, for all odd $i<n,$ $e_i\circ f=e_{i-1}.$ This clearly implies that $$S_f^\B(X)=-X=\{e_i:i\mbox{ is even, }i<n\}.$$ Since $0^\B\neq 1^\B,$ $X$ falsifies $\sigma$ in $\B.$ Since $\B\in {\bf H}\{{\wp(^nn)}\}$ we are done. Let $Sir(K)$ denote the class of subdirectly indecomposable algebras in $K$. The variety $PTA_n$ is not a discriminator variety. If it were, then there would be a discriminator term on $Sir(RTA_n)$, forcing $RTA_n$ to be a variety, which is not the case. All of the above positive results extend to the infinite dimensional case, by using units of the form $V=\{t\in {}^{\alpha}\alpha^{(Id)}: t\upharpoonright {\sf sup}\, t\text { is a bijection }\},$ where ${\sf sup}\, t=\{i\in \alpha: t_i\neq i\}$, and defining $s_{\tau}$ for $\tau\in V$ in the obvious way. This is well defined, because the schema axiomatizing the variety generated by the square set algebras is given by lifting the finite axiomatization for $n\geq 5$ (resulting from a presentation of $S_n$) to allow indices ranging over $\alpha$. Also, $RTA_{\alpha}$ will [*not*]{} be a variety, using the previous example together with the same lifting argument implemented for Pinter's algebras, and finally $TA_{\alpha}$ is not locally finite.

Decidability
============

The decidability of the studied $n$-dimensional multi modal logic can be proved easily by filtration (since the corresponding varieties are locally finite, such logics are finitely based), or can be inferred from the decidability of the word problem for finite semigroups.
But this is much more than needed. In fact we shall prove a much stronger result, concerning $NP$-completeness. The $NP$-completeness of our multi-dimensional modal logics (in all three cases: Pinter's algebras, transposition algebras, and substitution algebras) is proved by the so-called [*selection method*]{}, which gives a (polynomial) bound, in terms of its length, on a model satisfying a given formula. This follows from the simple observation that the accessibility relations are not only partial functions, but actually total functions, so the method of selection works. This, for example, is [*not*]{} the case for the accessibility relations corresponding to cylindrifiers, and indeed cylindric modal logic of dimension $>2$ is highly [*undecidable*]{}, a result of Maddux. We should also mention that the equational theories of the varieties and quasi-varieties (in the case of non-closure under homomorphic images) are also decidable. This is proved exactly as in [@sagiphd], so we omit the proof. (Basically, the idea is to reduce the problem to decidability in the finite dimensional case using finite reducts.) Our proof of $NP$-completeness is fairly standard. We prepare with some well-known definitions [@modal]. Let $\L$ be a normal modal logic, $\M$ a family of finitely based models (based on a $\tau$-frame of finite character). $\L$ *has the polysize model property* with respect to $\M$ if there is a polynomial function $f$ such that any consistent formula $\phi$ is satisfiable in a model in $\M$ containing at most $f(|\phi|)$ states. Let $\tau$ be a finite similarity type. Let $\L$ be a consistent normal modal logic over $\tau$ with the polysize model property with respect to some class of models $\M$. If the problem of deciding whether $M\in\M$ is computable in time polynomial in $|M|$, then $\L$ has an NP-complete satisfiability problem. See Lemma 6.35 in [@modal].
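To make the selection method concrete, here is a small Python sketch (the formula encoding and all names are illustrative assumptions, not taken from [@modal]). It exploits the fact that each modality is interpreted by a total function on states, so evaluating a formula visits at most one new state per modality:

```python
# Formulas encoded as tuples: ('p',) for an atom, ('not', φ), ('and', φ, ψ),
# and ('mod', m, φ), where each modality m is interpreted by a total function.

def selected(phi, w, funcs):
    """The set of states needed to evaluate phi at state w."""
    op = phi[0]
    if op == 'p':
        return {w}
    if op == 'not':
        return selected(phi[1], w, funcs)
    if op == 'and':
        return selected(phi[1], w, funcs) | selected(phi[2], w, funcs)
    if op == 'mod':
        # a total accessibility function contributes exactly one new state
        return {w} | selected(phi[2], funcs[phi[1]](w), funcs)

def modalities(phi):
    """Number of modal operators occurring in phi."""
    op = phi[0]
    if op == 'p':
        return 0
    if op == 'not':
        return modalities(phi[1])
    if op == 'and':
        return modalities(phi[1]) + modalities(phi[2])
    return 1 + modalities(phi[2])

# Toy model: states 0..5, one modality acting as successor mod 6.
funcs = {'s': lambda w: (w + 1) % 6}
phi = ('mod', 's', ('mod', 's', ('p',)))       # s_τ s_τ p

sel = selected(phi, 0, funcs)
assert sel == {0, 1, 2}
assert len(sel) <= modalities(phi) + 1          # the polysize bound |s(φ,w)| ≤ |φ|+1
```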
If $\F$ is a class of frames definable by a first order sentence, then the problem of deciding whether $F$ belongs to $\F$ is decidable in time polynomial in the size of $F$. See Lemma 6.36 in [@modal]. The same theorem can be stated for models based on elements of $\F$. More precisely, replace $\F$ by $\M$ (the class of models based on members of $\F$), and $F$ by $M.$ This is because models are roughly frames with valuations. We prove our theorem for any submonoid $T\subseteq {}^nn$. $V_T$ has an NP-complete satisfiability problem. By the two theorems above, it remains to show that $V_T$ has the polysize model property. We use the [*selection method.*]{} Suppose $M$ is a model. We define a selection function as follows (intuitively, it selects the states needed when evaluating a formula in $M$ at $w$): $$s(p,w)=\{w\}$$ $$s(\neg\psi,w)=s(\psi,w)$$ $$s(\theta\wedge\psi,w)=s(\theta,w)\cup s(\psi,w)$$ $$s(s_\tau\psi,w)=\{w\}\cup s(\psi,\tau(w)).$$ It follows by induction on the complexity of $\phi$ that for all nodes $w$: $$M,w\Vdash\phi\mbox{ iff }M\upharpoonright s(\phi,w),w\Vdash\phi.$$ The new model $M\upharpoonright s(\phi,w)$ has size $|s(\phi,w)|\le 1+ \mbox{ the number of modalities in }\phi$. This is less than or equal to $|\phi|+1,$ and we are done. Andréka, H., [*Complexity of equations valid in algebras of relations*]{}. Annals of Pure and Applied Logic, [**89**]{} (1997), pp. 149–209. Andréka, H., Givant, S., Mikulás, S., Németi, I., Simon, A., [*Notions of density that imply representability in algebraic logic*]{}. Annals of Pure and Applied Logic, [**91**]{} (1998), pp. 93–190. Andréka, H., Németi, I., [*Reducing first order logic to $Df_3$ free algebras*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, H., Ferenczi, M., Németi, I. (Eds.), 2013. Blackburn, P., de Rijke, M., Venema, Y., [*Modal Logic*]{}. Cambridge Tracts in Theoretical Computer Science, third printing, 2008. S.
Burris, H. P. Sankappanavar, [*A Course in Universal Algebra*]{}. Graduate Texts in Mathematics, Springer Verlag, New York, 1981. M. Ferenczi, [*The polyadic generalization of the Boolean axiomatization of fields of sets*]{}. Trans. Amer. Math. Soc. [**364**]{} (2012), pp. 867–886. S. Givant and Y. Venema, [*The preservation of Sahlqvist equations in completions of Boolean algebras with operators*]{}. Algebra Universalis, [**41**]{} (1999), pp. 47–48. R. Hirsch and I. Hodkinson, [*Step-by-step building representations in algebraic logic*]{}. Journal of Symbolic Logic, [**62**]{}(1) (1997), pp. 225–279. R. Hirsch and I. Hodkinson, [*Complete representations in algebraic logic*]{}. Journal of Symbolic Logic, [**62**]{}(3) (1997), pp. 816–847. R. Hirsch and I. Hodkinson, [*Completions and complete representations*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, H., Ferenczi, M., Németi, I. (Eds.), 2013. L. Henkin, J. D. Monk and A. Tarski, [*Cylindric Algebras Part I*]{}. North Holland, 1971. L. Henkin, J. D. Monk and A. Tarski, [*Cylindric Algebras Part II*]{}. North Holland, 1985. W. Hodges, [*Model Theory*]{}. Cambridge University Press, Encyclopedia of Mathematics and its Applications. Hodkinson, I., [*A construction of cylindric algebras and polyadic algebras from atomic relation algebras*]{}. Algebra Universalis, [**68**]{} (2012), pp. 257–285. Maksimova, L., [*Amalgamation and interpolation in normal modal logics*]{}. Studia Logica, [**50**]{} (1991), pp. 457–471. M. Marx, [*Algebraic relativization and arrow logic*]{}. ILLC Dissertation Series 1995-3, University of Amsterdam, 1995. A. Kurucz, [*Representable cylindric algebras and many-dimensional modal logic*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, H., Ferenczi, M., Németi, I. (Eds.), 2013. O. Ganyushkin and V. Mazorchuk, [*Classical Finite Transformation Semigroups: An Introduction*]{}. Springer, 2009. G. Sági, [*A note on algebras of substitutions*]{}. Studia Logica, [**72**]{}(2) (2002), pp. 265–284. G.
Sági, [*Polyadic algebras*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, H., Ferenczi, M., Németi, I. (Eds.), 2013. S. Shelah, [*Classification Theory and the Number of Non-Isomorphic Models*]{}. North Holland, Studies in Logic and the Foundations of Mathematics, 1978. T. Sayed Ahmed, [*Complete representations, completions and omitting types*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, H., Ferenczi, M., Németi, I. (Eds.), 2013. T. Sayed Ahmed, [*Classes of algebras without the amalgamation property*]{}. Logic Journal of the IGPL, [**192**]{} (2011), p. 87-2011. Sayed Ahmed, T. and Mohamed Khaled, [*On complete representations in algebras of logic*]{}. Logic Journal of the IGPL, [**17**]{}(3) (2009), pp. 267–272. Yde Venema, [*Cylindric modal logic*]{}. Journal of Symbolic Logic, [**60**]{}(2) (1995), pp. 112–198. Yde Venema, [*Cylindric modal logic*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, H., Ferenczi, M., Németi, I. (Eds.), 2013. Yde Venema, [*Atom structures and Sahlqvist equations*]{}. Algebra Universalis, [**38**]{} (1997), pp. 185–199. [^1]: Mathematics Subject Classification. 03G15; 06E25. Key words: multimodal logic, substitution algebras, interpolation. [^2]: One way to show that varieties of representable algebras, like cylindric algebras, are not closed under completions is to construct an atom structure $\F$ such that $\Cm\F$ is not representable, while $\Tm\F$, the subalgebra of $\Cm\F$ generated by the atoms, is representable. Such an algebra cannot be completely representable, because a complete representation induces a representation of the full complex algebra.
# frozen_string_literal: true

require File.expand_path('lib/jekyll-last-modified-at/version.rb', __dir__)

Gem::Specification.new do |s|
  s.name     = 'jekyll-last-modified-at'
  s.version  = Jekyll::LastModifiedAt::VERSION
  s.summary  = 'A liquid tag for Jekyll to indicate the last time a file was modified.'
  s.authors  = 'Garen J. Torikian'
  s.homepage = 'https://github.com/gjtorikian/jekyll-last-modified-at'
  s.license  = 'MIT'

  s.files = Dir['lib/**/*.rb']

  s.add_dependency 'jekyll', '>= 3.7', '< 5.0'
  s.add_dependency 'posix-spawn', '~> 0.3.9'

  s.add_development_dependency 'rake'
  s.add_development_dependency 'rspec', '~> 3.4'
  s.add_development_dependency 'rubocop'
  s.add_development_dependency 'rubocop-performance'
  s.add_development_dependency 'rubocop-standard'
  s.add_development_dependency 'spork'
end
fileFormatVersion: 2
guid: c6be551879cd14d739b0188844ef2c60
timeCreated: 1447582131
licenseType: Pro
MonoImporter:
  serializedVersion: 2
  defaultReferences: []
  executionOrder: 0
  icon: {fileID: 2800000, guid: e1e5ef31262d242ce8efe2020a27425e, type: 3}
  userData:
  assetBundleName:
  assetBundleVariant:
Šeduva Šeduva is a city in the Radviliškis district municipality, Lithuania. It is located east of Radviliškis. Šeduva was an agricultural town dealing in cereals, flax and linseed, pigs, geese and horses, at the site of a royal estate and beside a road from Kaunas to Riga. The population from the fifteenth century was Catholic and Jewish. Until then, Lithuania had been the last pagan kingdom in Europe and allowed freedom of worship and toleration of Jews and other religions. The first Catholic shrine of Šeduva, the Church of the Invention of the Holy Cross, was built and the parish founded between 1512 and 1529. The present brick church was built in Šeduva in 1643 with a donation from bishop Jurgis Tiškevičius of Vilnius. During the 18th century the bell tower was added to the structure, with further renovations and extensions in 1905. Baroque and renaissance architectural styles characterise both the exterior and interior of the church. It has a cruciform plan with an apse, low sacristy and five altars. During the 15th century the region was redefined as the Voivodeship of Trakai and Vilnius. Later it became part of the Grand Duchy of Lithuania until the Union of Lublin in 1569 created the Polish-Lithuanian Commonwealth. The Šeduva coat of arms was granted on June 25, 1654 by John II Casimir Vasa, King of Poland and Grand Duke of Lithuania, and at the same time the city was granted burgher rights at the request of Maria Ludvika, Queen of Poland. She descended from the Princes of Gonzaga, from Mantua in Italy. The arms of the family showed a black eagle. The small breast shield shows the French fleur-de-lis, because the Gonzaga family was related to the French royal family. The eagle was made white in reference to the white eagle of Poland. In 1792 Stanislaw II August Poniatowski, the last royal proprietor of Šeduva, concluded an agreement with the town's citizens, giving them the right to be excused from labour on the estate for a fee.
In 1795, the year of a terrible fire in Šeduva, Lithuania became part of Russia when Poland was partitioned. From 1798, Baron Theodore von Ropp did not acknowledge the rights of Šeduva citizens and required them to perform labour in the town's manor. The citizens petitioned the Russian Senate for their rights. In 1812, the Senate passed a decision to recognise the former charters of Šeduva. Between 1696 and 1762, a Jesuit mission, connected with their college at Pašiaušė, was active in the town, operating a lower school with 96 pupils up until 1828. After an insurrection in 1863 (the January Uprising), all parish schools in Šeduva were closed and replaced by public Russian-language schools. In the same year a Russian Orthodox church, designed by the architect Ustinas Golinevicius, was built, and in 1866 a wooden synagogue was added near the central market square. The Molotov-Ribbentrop Pact between Nazi Germany and Communist Russia in August 1939 and the German-Soviet Boundary and Friendship Treaty a month later placed Lithuania under Soviet control. By June 1940 the Soviets had set up a pro-Soviet government and stationed many Red Army troops in Lithuania as part of the Mutual Assistance Pact between the countries. President Antanas Smetona was forced to leave as 15 Red Army divisions came in. The pro-Soviet puppet government was controlled by Vladimir Dekanozov and Justas Paleckis, and Lithuania was made part of the Soviet Union. A Sovietisation programme began immediately. Land, banks and large businesses were nationalised. All religious, cultural, and political organizations were abolished except the Communist party. 17,000 people were deported to Siberia, where many would perish. During the years of Lithuanian anti-Soviet partisan resistance (1944–1953), the Lithuanian Žalioji rinktinė (The Green Squad), belonging to the partisans' Algimantas military district, was active in Šeduva and the neighbouring districts.
Industry Šeduva is famous for sheep farming; Lithuanian Black-headed sheep are raised here. The state enterprise Šeduvos avininkystė is responsible for the preservation of the genetic stock of Lithuanian Black-headed sheep. The Holocaust in Shadeve The German army invaded Lithuania on 22 June 1941, taking Shadova - Šeduva a few days later as part of Operation Barbarossa. At first the Lithuanian population considered the Nazis to be liberators saving them from the Red Army. Five hundred years of Jewish life in Shadova - Šeduva ended in just two days of slaughter. Shadova's Jews attempted to flee east to Russia but were badly treated by Lithuanian nationalists and most returned to their homes. The German forces entered Shadova - Šeduva on 25 June 1941 and were received with flowers by many locals. By the beginning of July, Jews had to wear the yellow Star of David. Jews who had participated in the Soviet rule were immediately arrested and executed. Jews were taken to dismantle the remnants of the munitions factory in Linkaičiai, and were then accused of stealing and executed. Others were forced into labour gangs. They were set to work cleaning the streets and at the warehouses of the rail station. All the work was guarded by armed Lithuanian militia. Next all the Jews of Shadova - Šeduva had to gather in the market place with no more than a small package each, and to hand over the keys to their houses to the police. Under guard, they were escorted at night to the village of Pavartyčiai, five kilometres north-west of Shadova - Šeduva, where they were crowded into two unfinished Soviet barracks surrounded with barbed wire. The Jews were ordered to hand over all their valuables and cash. Some were shot in the next few days.
On 25 August 1941 the remaining Jews of Shadova - Šeduva were loaded on trucks and taken to Liaudiškiai, ten kilometres south-west of the town where the Rollcommando Hamann of Einsatzcommando 3 and Lithuanian collaborators of the 3rd company of the Tautinio Darbo Apsaugos Batalionas were waiting for them. Over the coming two days the entire Jewish community of Shadova was shot and buried in two pre-prepared mass graves. One site was located 400 meters north of the Shadova - Šeduva road and a second 900 meters north west of the same road, close to a path in the forest. The lists of mass graves in the book The Popular Massacres of Lithuania, Part II, include the following: Liaudiskiai forest about 10 km southwest of Šeduva, one site 400 meters north of the Šeduva road and a second site 900 meters northwest of the same road, close to a path in the forest. The Jäger report concludes that Einsatzcommando 3 registered the murder in Šeduva on the 25 and 26 August 1941 of 230 Jewish men, 275 Jewish women and 159 Jewish children, a total of 664 people. References External links The murder of the Jews of Šeduva during World War II, at Yad Vashem website. Category:Cities in Lithuania Category:Cities in Šiauliai County Category:Trakai Voivodeship Category:Shavelsky Uyezd Category:Holocaust locations in Lithuania Category:Radviliškis District Municipality
// Copyright 2000-2020 JetBrains s.r.o. Use of this source code is governed by the Apache 2.0 license that can be found in the LICENSE file.
package com.intellij.openapi.vcs.impl

import com.intellij.ProjectTopics
import com.intellij.openapi.application.ApplicationManager
import com.intellij.openapi.components.service
import com.intellij.openapi.extensions.ExtensionNotApplicableException
import com.intellij.openapi.module.Module
import com.intellij.openapi.module.ModuleManager
import com.intellij.openapi.project.ModuleListener
import com.intellij.openapi.project.Project
import com.intellij.openapi.project.rootManager
import com.intellij.openapi.roots.ModuleRootEvent
import com.intellij.openapi.roots.ModuleRootListener
import com.intellij.openapi.startup.StartupActivity
import com.intellij.openapi.vcs.AbstractVcs
import com.intellij.openapi.vcs.ProjectLevelVcsManager
import com.intellij.openapi.vcs.VcsDirectoryMapping
import com.intellij.openapi.vfs.VirtualFile

internal class ModuleVcsDetector(private val project: Project) {
  private val vcsManager by lazy(LazyThreadSafetyMode.NONE) {
    (ProjectLevelVcsManager.getInstance(project) as ProjectLevelVcsManagerImpl)
  }

  internal class MyPostStartUpActivity : StartupActivity.DumbAware {
    init {
      if (ApplicationManager.getApplication().isUnitTestMode) {
        throw ExtensionNotApplicableException.INSTANCE
      }
    }

    override fun runActivity(project: Project) {
      val vcsDetector = project.service<ModuleVcsDetector>()
      val listener = vcsDetector.MyModulesListener()
      val busConnection = project.messageBus.connect()
      busConnection.subscribe(ProjectTopics.MODULES, listener)
      busConnection.subscribe(ProjectTopics.PROJECT_ROOTS, listener)

      if (vcsDetector.vcsManager.needAutodetectMappings()) {
        vcsDetector.autoDetectVcsMappings(true)
      }
    }
  }

  private inner class MyModulesListener : ModuleRootListener, ModuleListener {
    private val myMappingsForRemovedModules: MutableList<VcsDirectoryMapping> = mutableListOf()

    override fun beforeRootsChange(event: ModuleRootEvent) {
      myMappingsForRemovedModules.clear()
    }

    override fun rootsChanged(event: ModuleRootEvent) {
      myMappingsForRemovedModules.forEach { mapping -> vcsManager.removeDirectoryMapping(mapping) }
      // the check calculates to true only before user has done any change to mappings, i.e. in case modules are detected/added automatically
      // on start etc (look inside)
      if (vcsManager.needAutodetectMappings()) {
        autoDetectVcsMappings(false)
      }
    }

    override fun moduleAdded(project: Project, module: Module) {
      myMappingsForRemovedModules.removeAll(getMappings(module))
      autoDetectModuleVcsMapping(module)
    }

    override fun beforeModuleRemoved(project: Project, module: Module) {
      myMappingsForRemovedModules.addAll(getMappings(module))
    }
  }

  private fun autoDetectVcsMappings(tryMapPieces: Boolean) {
    if (vcsManager.haveDefaultMapping() != null) return

    val usedVcses = mutableSetOf<AbstractVcs?>()
    val detectedRoots = mutableSetOf<Pair<VirtualFile, AbstractVcs>>()

    val roots = ModuleManager.getInstance(project).modules
      .flatMap { it.rootManager.contentRoots.asIterable() }
      .distinct()
    for (root in roots) {
      val moduleVcs = vcsManager.findVersioningVcs(root)
      if (moduleVcs != null) {
        detectedRoots.add(Pair(root, moduleVcs))
      }
      usedVcses.add(moduleVcs) // put 'null' for unmapped module
    }

    val commonVcs = usedVcses.singleOrNull()
    if (commonVcs != null) {
      // Remove existing mappings that will duplicate added <Project> mapping.
      val rootPaths = roots.map { it.path }.toSet()
      val additionalMappings = vcsManager.directoryMappings.filter { it.directory !in rootPaths }
      vcsManager.setAutoDirectoryMappings(additionalMappings + VcsDirectoryMapping.createDefault(commonVcs.name))
    }
    else if (tryMapPieces) {
      val newMappings = detectedRoots.map { (root, vcs) -> VcsDirectoryMapping(root.path, vcs.name) }
      vcsManager.setAutoDirectoryMappings(vcsManager.directoryMappings + newMappings)
    }
  }

  private fun autoDetectModuleVcsMapping(module: Module) {
    if (vcsManager.haveDefaultMapping() != null) return

    val newMappings = mutableListOf<VcsDirectoryMapping>()
    for (file in module.rootManager.contentRoots) {
      val vcs = vcsManager.findVersioningVcs(file)
      if (vcs != null && vcs !== vcsManager.getVcsFor(file)) {
        newMappings.add(VcsDirectoryMapping(file.path, vcs.name))
      }
    }
    if (newMappings.isNotEmpty()) {
      vcsManager.setAutoDirectoryMappings(vcsManager.directoryMappings + newMappings)
    }
  }

  private fun getMappings(module: Module): List<VcsDirectoryMapping> {
    return module.rootManager.contentRoots
      .mapNotNull { root -> vcsManager.directoryMappings.firstOrNull { it.directory == root.path } }
  }
}
--- abstract: | The aim of this paper is to numerically solve a diffusion differential problem having a time derivative of fractional order. To this end we propose a collocation-Galerkin method that uses the fractional splines as approximating functions. The main advantage is that the derivatives of integer and fractional order of the fractional splines can be expressed in a closed form that involves just the generalized finite difference operator. This allows us to construct an accurate and efficient numerical method. Several numerical tests showing the effectiveness of the proposed method are presented.\ [**Keywords**]{}: Fractional diffusion problem, Collocation method, Galerkin method, Fractional spline author: - 'Laura Pezza[^1], Francesca Pitolli[^2]' title: 'A fractional spline collocation-Galerkin method for the time-fractional diffusion equation' --- Introduction. {#sec:intro} ============= The use of fractional calculus to describe real-world phenomena is becoming increasingly widespread. Integro-differential equations of [*fractional*]{}, [*i.e.*]{} positive real, order are used, for instance, to model wave propagation in porous materials, diffusive phenomena in biological tissue, viscoelastic properties of continuous media [@Hi00; @Ma10; @KST06; @Ta10]. Among the various fields in which fractional models are successfully used, viscoelasticity is one of the most interesting, since the memory effect introduced by the time-fractional derivative makes it possible to model anomalous diffusion phenomena in materials that have mechanical properties in between pure elasticity and pure viscosity [@Ma10]. Even though these models are empirical, they are nevertheless shown to be consistent with experimental data.\ The increased interest in fractional models has led to the development of several numerical methods to solve fractional integro-differential equations.
Many of the proposed methods generalize to the fractional case numerical methods commonly used for the classical integer case (see, for instance, [@Ba12; @PD14; @ZK14] and references therein). But the nonlocality of the fractional derivative raises the challenge of obtaining numerical solutions with high accuracy at a low computational cost. In [@PP16] we proposed a collocation method especially designed for solving differential equations of fractional order in time. The key ingredient of the method is the use of the fractional splines introduced in [@UB00] as approximating functions. Thus, the method takes advantage of the explicit differentiation rule for fractional B-splines, which allows us to evaluate accurately the derivatives of both integer and fractional order.\ In the present paper we use the method to solve a diffusion problem having a time derivative of fractional order and show that the method is efficient and accurate. More precisely, the [*fractional spline collocation-Galerkin method*]{} here proposed combines the fractional spline collocation method introduced in [@PP16] for the time discretization and a classical spline Galerkin method in space.\ The paper is organized as follows. In Section \[sec:diffeq\], a time-fractional diffusion problem is presented and the definition of fractional derivative is given. Section \[sec:fractBspline\] is devoted to the fractional B-splines, and the explicit expression of their fractional derivatives is given. The fractional spline approximating space is described in Section \[sec:app\_spaces\], while the fractional spline collocation-Galerkin method is introduced in Section \[sec:Galerkin\]. Finally, in Section \[sec:numtest\] some numerical tests showing the performance of the method are displayed. Some conclusions are drawn in Section \[sec:concl\]. A time-fractional diffusion problem.
{#sec:diffeq} ==================================== We consider the [*time-fractional differential diffusion problem*]{} [@Ma10] $$\label{eq:fracdiffeq} \left \{ \begin{array}{lcc} \displaystyle D_t^\gamma \, u(t, x) - \frac{\partial^2}{\partial x^2} \, u(t, x) = f(t, x)\,, & \quad t \in [0, T]\,, & \quad x \in [0,1] \,,\\ \\ u(0, x) = 0\,, & & \quad x \in [0,1]\,, \\ \\ u(t, 0) = u(t, 1) = 0\,, & \quad t \in [0, T]\,, \end{array} \right.$$ where $ D_t^\gamma u$, $0 < \gamma < 1$, denotes the [*partial fractional derivative*]{} with respect to the time $t$. Usually, in viscoelasticity the fractional derivative is to be understood in the Caputo sense, [*i.e.*]{} $$\label{eq:Capfrac} D_t^\gamma \, u(t, x) = \frac1{\Gamma(1-\gamma)} \, \int_0^t \, \frac{u_t(\tau,x)}{(t - \tau)^\gamma} \, d\tau\,, \qquad t\ge 0\,,$$ where $\Gamma$ is Euler’s gamma function $$\Gamma(\gamma+1)= \int_0^\infty \, s^\gamma \, {\rm e}^{-s} \, ds\,.$$ We notice that, due to the homogeneous initial condition for the function $u(t,x)$, the solution of the differential problem (\[eq:fracdiffeq\]), the Caputo definition (\[eq:Capfrac\]) coincides with the Riemann-Liouville definition (see [@Po99] for details). One of the advantages of the Riemann-Liouville definition is that the usual differentiation operator in the Fourier domain can be easily extended to the fractional case, [*i.e.*]{} $${\cal F} \bigl(D_t^\gamma \, f(t) \bigr) = (i\omega)^\gamma {\cal F} (f(t))\,,$$ where ${\cal F}(f)$ denotes the Fourier transform of the function $f(t)$. Thus, analytical Fourier methods usually used in the classical integer case can be extended to the fractional case [@Ma10]. The fractional B-splines and their fractional derivatives.
{#sec:fractBspline} ========================================================== The [*fractional B-splines*]{}, [*i.e.*]{} the B-splines of fractional degree, were introduced in [@UB00] by generalizing to fractional powers the classical definition of the polynomial B-splines of integer degree. Thus, the fractional B-spline $B_{\alpha}$ of degree $\alpha$ is defined as $$\label{eq:Balpha} B_{\alpha}(t) := \frac{{ \Delta}^{\alpha+1} \, t_+^\alpha} {\Gamma(\alpha+1)}\,, \qquad \alpha > -\frac 12\,,$$ where $$\label{eq:fracttruncpow} t_+^\alpha: = \left \{ \begin{array}{ll} t^\alpha\,, & \qquad t \ge 0\,, \\ \\ 0\,, & \qquad \hbox{otherwise}\,, \end{array} \right. \qquad \alpha > -1/2\,,$$ is the [*fractional truncated power function*]{}. $\Delta^{\alpha}$ is the [*generalized finite difference operator*]{} $$\label{eq:fracfinitediff} \Delta^{\alpha} \, f(t) := \sum_{k\in \NN} \, (-1)^k \, {\alpha \choose k} \, f(t-\,k)\,, \qquad \alpha \in \RR^+\,,$$ where $$\label{eq:binomfrac} {\alpha \choose k} := \frac{\Gamma(\alpha+1)}{k!\, \Gamma(\alpha-k+1)}\,, \qquad k\in \NN\,, \quad \alpha \in \RR^+\,,$$ are the [*generalized binomial coefficients*]{}. We notice that 'fractional' actually means 'noninteger', [*i.e.*]{} $\alpha$ can assume any real value greater than $-1/2$. For noninteger values of $\alpha$, $B_\alpha$ does not have compact support, although it belongs to $L_2(\RR)$. When $\alpha=n$ is a nonnegative integer, Equations (\[eq:Balpha\])-(\[eq:binomfrac\]) are still valid; $\Delta^{n}$ is the usual finite difference operator, so that $B_n$ is the classical polynomial B-spline of degree $n$ with compact support $[0,n+1]$ (for details on polynomial B-splines see, for instance, the monograph [@Sc07]). The fractional B-splines for different values of the parameter $\alpha$ are displayed in Figure \[fig:fractBsplines\] (top left panel). The classical polynomial B-splines are also displayed (dashed lines).
The picture shows that the fractional B-splines decay very fast toward infinity, so that they can be assumed compactly supported for computational purposes. Moreover, in contrast to the polynomial B-splines, fractional splines are not always positive, even if the nonnegative part becomes smaller and smaller as $\alpha$ increases.

![Top left panel: The fractional B-splines (solid lines) and the polynomial B-splines (dashed lines) for $\alpha$ ranging from 0 to 4. Top right panel: The fractional derivatives of the linear B-spline $B_1$ for $\gamma = 0.25, 0.5, 0.75$. Bottom left panel: The fractional derivatives of the cubic B-spline $B_3$ for $\gamma$ ranging from 0.25 to 2. Bottom right panel: The fractional derivatives of the fractional B-spline $B_{3.5}$ for $\gamma$ ranging from 0.25 to 2. Ordinary derivatives are displayed as dashed lines.[]{data-label="fig:fractBsplines"}](Fig_fract_Bspline.png "fig:"){width="6cm"} ![](Fig_fractder_Bspline_linear.png "fig:"){width="6cm"} ![](Fig_fractder_Bspline_cubica.png "fig:"){width="6cm"} ![](Fig_fractder_Bspline_alpha3p5.png "fig:"){width="6cm"}

The fractional derivatives of the fractional B-splines can be evaluated explicitly by differentiating (\[eq:Balpha\]) and (\[eq:fracttruncpow\]) in the Caputo sense. This gives the following differentiation rule $$\label{eq:diffrule_tronc} D^{\gamma}_t \, B_{\alpha} (t)= \frac{\Delta^{\alpha+1} \, t_+^{\alpha-\gamma}} {\Gamma(\alpha-\gamma+1)}\,, \qquad 0 < \gamma < \alpha + \frac12\,,$$ which holds both for fractional and integer order $\gamma$.
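Both (\[eq:Balpha\]) and the rule above can be evaluated directly: for fixed $t$ the generalized finite difference reduces to a finite sum, since the truncated power vanishes for negative arguments. The following minimal Python sketch (an illustration; the function names are ours) evaluates $D^\gamma_t B_\alpha(t)$, with $\gamma=0$ giving $B_\alpha(t)$ itself:

```python
from math import gamma as Gamma, floor

def gbinom(a, k):
    # generalized binomial coefficient Γ(a+1)/(k! Γ(a-k+1)), computed
    # multiplicatively so that it is also correct (namely 0) when a is
    # an integer and k > a
    c = 1.0
    for j in range(1, k + 1):
        c *= (a - j + 1) / j
    return c

def D_bspline(alpha, t, g=0.0):
    # D^g B_alpha(t) = Delta^{alpha+1} t_+^{alpha-g} / Gamma(alpha-g+1);
    # only the terms with k <= t contribute, because t_+^{alpha-g} = 0
    # for negative arguments
    s = sum((-1) ** k * gbinom(alpha + 1, k) * (t - k) ** (alpha - g)
            for k in range(floor(t) + 1))
    return s / Gamma(alpha - g + 1)

# Sanity checks against the classical polynomial B-splines:
assert abs(D_bspline(1, 1.0) - 1.0) < 1e-12         # hat function peak, B_1(1) = 1
assert abs(D_bspline(3, 2.0) - 2.0 / 3.0) < 1e-12   # cubic B-spline midpoint, B_3(2) = 2/3
assert abs(D_bspline(3, 2.0, g=1.0)) < 1e-12        # B_3'(2) = 0 by symmetry
```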
In particular, when $\gamma, \alpha$ are nonnegative integers, (\[eq:diffrule\_tronc\]) is the usual differentiation rule for the classical polynomial B-splines [@Sc07]. We observe that since $B_\alpha$ is a causal function with $B_\alpha^{(n)}(0)=0$ for $n\in \NN\backslash\{0\}$, the Caputo fractional derivative coincides with the Riemann-Liouville fractional derivative.\ From (\[eq:diffrule\_tronc\]) and the composition property $\Delta^{\alpha_1} \, \Delta^{\alpha_2} = \Delta^{\alpha_1+\alpha_2}$ it follows [@UB00] $$\label{eq:diffrule_2} D^{\gamma}_t \, B_{\alpha} = \Delta ^{\gamma} \, B_{\alpha-\gamma}\,,$$ [*i.e.*]{} the fractional derivative of a fractional B-spline of degree $\alpha$ is a fractional spline of degree $\alpha-\gamma$. The fractional derivatives of the classical polynomial B-splines $B_n$ are fractional splines, too. This means that $D^{\gamma}_t \, B_{n}$ is not compactly supported when $\gamma$ is noninteger reflecting the nonlocal behavior of the derivative operator of fractional order.\ In Figure \[fig:fractBsplines\] the fractional derivatives of $B_1$ (top right panel), $B_3$ (bottom left panel) and $B_{3.5}$ (bottom right panel) are displayed for different values of $\gamma$. The fractional spline approximating spaces. {#sec:app_spaces} =========================================== A property of the fractional B-splines that is useful for the construction of numerical methods for the solution of differential problems is the [*refinability*]{}. In fact, the fractional B-splines are [*refinable functions*]{}, [*i.e.*]{} they satisfy the [*refinement equation*]{} $$B_\alpha(t) = \sum_{k\in \NN} \, a^{(\alpha)}_{k} \, B_\alpha(2\,t-k)\,, \qquad t \ge 0\,,$$ where the coefficients $$a^{(\alpha)}_{k} := \frac{1}{2^{\alpha}} {\alpha+1 \choose k}\,,\qquad k\in \NN\,,$$ are the [*mask coefficients*]{}. 
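For readers who wish to experiment, the truncated-power representation underlying (\[eq:Balpha\]) can be evaluated directly. The sketch below assumes the Unser–Blu form $B_\alpha(t)=\Delta^{\alpha+1}\,t_+^\alpha/\Gamma(\alpha+1)$ with the generalized finite difference expanded as a binomial series; the function names and the truncation length `terms` are illustrative choices, not part of the paper.

```python
from math import gamma

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for real a and integer k >= 0."""
    c = 1.0
    for i in range(k):
        c *= (a - i) / (i + 1)
    return c

def frac_bspline(alpha, t, terms=60):
    """Fractional B-spline B_alpha(t) = Delta^{alpha+1} t_+^alpha / Gamma(alpha+1).
    For integer alpha the binomial series terminates; otherwise it is truncated
    after `terms` terms (an assumption: the coefficients decay quickly)."""
    s = 0.0
    for k in range(terms):
        if t - k > 0:
            s += (-1) ** k * gbinom(alpha + 1, k) * (t - k) ** alpha
    return s / gamma(alpha + 1)

# Integer degrees reproduce the classical cardinal B-splines, e.g. the hat
# function B_1 and the cubic B-spline B_3 with central value 2/3; the same
# routine also evaluates a fractional-degree spline such as B_{3.5}.
b1 = frac_bspline(1, 1.0)      # peak of the hat function
b3 = frac_bspline(3, 2.0)      # central value of the cubic B-spline
b35 = frac_bspline(3.5, 1.75)
```

The same coefficients $a^{(\alpha)}_k = 2^{-\alpha}\binom{\alpha+1}{k}$ appearing in the mask can be generated with `gbinom`, which makes it easy to check the refinement equation numerically at a few sample points.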
Refinability implies that the sequence of nested approximating spaces $$V^{(\alpha)}_j(\RR) = {\rm span} \,\{B_\alpha(2^j\, t -k), k \in \ZZ\}\,, \qquad j \in \ZZ\,,$$ forms a [*multiresolution analysis*]{} of $L_2(\RR)$. As a consequence, any function $f_j(t)$ belonging to $V^{(\alpha)}_j(\RR)$ can be expressed as $$f_j(t) = \sum_{k\in \ZZ}\, \lambda_{j,k} \, B_\alpha(2^j\, t -k)\,,$$ where the coefficient sequence $\{\lambda_{j,k}\} \in \ell_2(\ZZ)$. Moreover, any space $V^{(\alpha)}_j(\RR)$ reproduces polynomials up to degree $\lceil \alpha\rceil$, [*i.e.*]{} $x^d \in V^{(\alpha)}_j(\RR)$, $ 0 \le d \le \lceil \alpha\rceil$, while its approximation order is $\alpha +1$. We recall that the polynomial B-spline $B_n$ reproduces polynomials up to degree $n$ with approximation order $n+1$ [@UB00]. To solve boundary value problems we need to construct a multiresolution analysis on a finite interval. For the sake of simplicity, in the following we will consider the interval $I=[0,1]$. A simple approach is to restrict the basis $\{B_\alpha(2^j\, t -k)\}$ to the interval $I$, [*i.e.*]{} $$\label{eq:Vj_int} V^{(\alpha)}_j(I) = {\rm span} \,\{B_\alpha(2^j\, t -k), t\in I, -N \le k \le 2^j-1\}\,, \qquad j_0 \le j\,,$$ where $N$ is a suitable index, chosen so that the significant part of $B_\alpha$ is contained in $[0,N+1]$, and $j_0$ is the starting refinement level. The drawback of this approach is its numerical instability and the difficulty in fulfilling the boundary conditions, since there are $2N$ boundary functions, [*i.e.*]{} the translates of $B_\alpha$ having indexes $ -N\le k \le -1$ and $2^j-N\le k \le 2^j-1$, that are nonzero at the boundaries. More suitable refinable bases can be obtained by the procedure given in [@GPP04; @GP04]. In particular, for the polynomial B-spline $B_n$ a B-basis $\{\phi_{\alpha,j,k}(t)\}$ with optimal approximation properties can be constructed.
The internal functions $\phi_{\alpha,j,k}(t)=B_\alpha(2^j\, t -k)$, $0 \le k \le 2^j-1-n$, remain unchanged, while the $2n$ boundary functions fulfill the boundary conditions $$\begin{array}{llcc} \phi_{\alpha,j,-1}^{(\nu)}(0) = 1\,, & \phi_{\alpha,j,k}^{(\nu)}(0) = 0\,, &\hbox{for} & 0\le \nu \le -k-2\,, -n \le k \le -2\,,\\ \\ \phi_{\alpha,j,2^j-1}^{(\nu)}(1) = 1\,, & \phi_{\alpha,j,2^j+k}^{(\nu)}(1) = 0\,, & \hbox{for} & 0\le \nu \le -k-2\,, -n \le k \le -2\,.\\ \end{array}$$ Thus, the B-basis naturally fulfills Dirichlet boundary conditions.\ As we will show in the next section, the refinability of the fractional spline bases plays a crucial role in the construction of the collocation-Galerkin method. The fractional spline collocation-Galerkin method. {#sec:Galerkin} ================================================== In the collocation-Galerkin method proposed here, we look for an approximating function $u_{s,j}(t,x) \in V^{(\beta)}_s([0,T]) \otimes V^{(\alpha)}_j([0,1])$. Since only the ordinary first spatial derivative of $u_{s,j}$ is involved in the Galerkin method, we can assume $\alpha$ to be an integer and use as basis functions for the space $V^{(\alpha)}_j([0,1])$ the refinable B-basis $\{\phi_{\alpha,j,k}\}$, [*i.e.*]{} $$\label{uj} u_{s,j}(t,x) = \sum_{k \in {\cal Z}_j} \, c_{s,j,k}(t) \, \phi_{\alpha,j,k}(x)\,,$$ where the unknown coefficients $c_{s,j,k}(t)$ belong to $V^{(\beta)}_s([0,T])$.
Here, ${\cal Z}_j$ denotes the set of indexes $-n\le k \le 2^j-1$.\ The approximating function $u_{s,j}(t,x)$ solves the variational problem $$\label{varform} \left \{ \begin{array}{ll} \displaystyle \left ( D_t^\gamma u_{s,j},\phi_{\alpha,j,k} \right ) -\left ( \frac {\partial^2} {\partial x^2}\,u_{s,j},\phi_{\alpha,j,k} \right ) = \left ( f,\phi_{\alpha,j,k} \right )\,, & \quad k \in {\cal Z}_j\,, \\ \\ u_{s,j}(0, x) = 0\,, & x \in [0,1]\,, \\ \\ u_{s,j}(t, 0) = 0\,, \quad u_{s,j}(t,1) = 0\,, & t \in [0,T]\,, \end{array} \right.$$ where $(f,g)= \int_0^1 \, f\,g$.\ Now, writing (\[varform\]) in a weak form and using (\[uj\]) we get the system of fractional ordinary differential equations $$\label{fracODE} \left \{ \begin{array}{ll} M_j \, D_t^\gamma\,C_{s,j}(t) + L_j\, C_{s,j}(t) = F_j(t)\,, & \qquad t \in [0,T]\,, \\ \\ C_{s,j}(0) = 0\,, \end{array} \right.$$ where $C_{s,j}(t)=(c_{s,j,k}(t))_{k\in {\cal Z}_j}$ is the unknown vector. The connecting coefficients, i.e. the entries of the mass matrix $M_j = (m_{j,k,i})_{k,i\in{\cal Z}_j}$, of the stiffness matrix $L_j = (\ell_{j,k,i})_{k,i\in{\cal Z}_j}$, and of the load vector $F_j(t)=(f_{j,k}(t))_{k\in {\cal Z}_j}$, are given by $$m_{j,k,i} = \int_0^1\, \phi_{\alpha,j,k}\, \phi_{\alpha,j,i}\,, \qquad \ell_{j,k,i} = \int_0^1 \, \phi'_{\alpha,j,k} \, \phi'_{\alpha,j,i}\,,$$ $$f_{j,k}(t) = \int_0^1\, f(t,\cdot)\, \phi_{\alpha,j,k}\,.$$ The entries of $M_j$ and $L_j$ can be evaluated explicitly using (\[eq:Balpha\]) and (\[eq:diffrule\_tronc\]), respectively, while the entries of $F_j(t)$ can be evaluated by quadrature formulas especially designed for wavelet methods [@CMP15; @GGP00]. To solve the fractional differential system (\[fracODE\]) we use the collocation method introduced in [@PP16]. For an integer value of $T$, let $t_p = p/2^q$, $0\le p \le 2^q\,T$, where $q$ is a given nonnegative integer, be a set of dyadic nodes in the interval $[0,T]$. 
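A minimal sketch of the assembly of the mass and stiffness matrices described above, restricted to the interior translates only (the boundary-adapted B-basis functions of [@GPP04] are omitted) and using trapezoidal quadrature in place of the exact formulas; all function names are illustrative, and the first derivative is computed with the differentiation rule $D\,B_n = B_{n-1}(t) - B_{n-1}(t-1)$, the integer-order case of (\[eq:diffrule\_2\]).

```python
import numpy as np
from math import gamma

def bspline(n, t):
    """Cardinal B-spline of integer degree n: B_n(t) = Delta^{n+1} t_+^n / n!."""
    s = 0.0
    for k in range(n + 2):
        c = 1.0
        for i in range(k):
            c *= (n + 1 - i) / (i + 1)   # binomial coefficient C(n+1, k)
        if t - k > 0:
            s += (-1) ** k * c * (t - k) ** n
    return s / gamma(n + 1)

def dbspline(n, t):
    """First derivative via the finite-difference rule D B_n = B_{n-1}(t) - B_{n-1}(t-1)."""
    return bspline(n - 1, t) - bspline(n - 1, t - 1)

def assemble(n=3, j=4, npts=2001):
    """Mass and stiffness matrices for the interior translates B_n(2^j x - k),
    0 <= k <= 2^j - 1 - n, on [0,1]; trapezoidal quadrature on npts points."""
    x = np.linspace(0.0, 1.0, npts)
    w = np.full(npts, x[1] - x[0])       # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    ks = np.arange(0, 2 ** j - n)
    phi = np.array([[bspline(n, 2 ** j * xi - k) for xi in x] for k in ks])
    dphi = np.array([[2 ** j * dbspline(n, 2 ** j * xi - k) for xi in x] for k in ks])
    M = (phi * w) @ phi.T                # m_{k,i} = int_0^1 phi_k phi_i
    L = (dphi * w) @ dphi.T              # l_{k,i} = int_0^1 phi_k' phi_i'
    return M, L

M, L = assemble()
```

In practice the paper evaluates these entries exactly through (\[eq:Balpha\]) and (\[eq:diffrule\_tronc\]); the quadrature above is only a quick way to reproduce their structure (banded, symmetric) numerically.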
Now, assuming $$\label{ck} c_{s,j,k}(t) = \sum_{r\in {\cal R}_s} \, \lambda_{k,r}\,\chi_{\beta,s,r}(t) \,, \qquad k \in {\cal Z}_j\,,$$ where $\chi_{\beta,s,r}(t)=B_\beta(2^s\,t-r)$ with $B_\beta$ a fractional B-spline of fractional degree $\beta$, and collocating (\[fracODE\]) on the nodes $t_p$, we get the linear system $$\label{colllinearsys} (M_j\otimes A_s + L_j\otimes G_s) \,\Lambda_{s,j} =F_j\,,$$ where $\Lambda_{s,j}=(\lambda_{k,r})_{r\in {\cal R}_s,k\in {\cal Z}_j}$ is the unknown vector, $$\begin{array}{ll} A_s= \bigl( a_{p,r} \bigr)_{p\in {\cal P}_q,r\in {\cal R}_s}\,, & \qquad a_{p,r} = D_t^\gamma \, \chi_{\beta,s,r}(t_p)\,, \\ \\ G_s=\bigl(g_{p,r}\bigr)_{p \in {\cal P}_q,r\in {\cal R}_s}\,, & \qquad g_{p,r} = \chi_{\beta,s,r}(t_p)\,, \end{array}$$ are the collocation matrices, and $$F_j=(f_{j,k}(t_p))_{k\in{\cal Z}_j,p \in {\cal P}_q}\,,$$ is the known right-hand side. Here, ${\cal R}_s$ denotes the set of indexes $-\infty < r \le 2^s-1$ and ${\cal P}_q$ denotes the set of indexes $0<p\le 2^qT$. Since the fractional B-splines have fast decay, the series (\[ck\]) is well approximated by only a few terms, and the linear system (\[colllinearsys\]) has, in practice, finite dimension, so that the unknown vector $\Lambda_{s,j}$ can be recovered by solving (\[colllinearsys\]) in the least squares sense.\ We notice that the entries of $G_s$, which involve just the values of $\chi_{\beta,s,r}$ on the dyadic nodes $t_p$, can be evaluated explicitly by (\[eq:Balpha\]). On the other hand, we must pay special attention to the evaluation of the entries of $A_s$, since they involve the values of the fractional derivative $D_t^\gamma\chi_{\beta,s,r}(t_p)$. As shown in Section \[sec:fractBspline\], they can be evaluated efficiently by the differentiation rule (\[eq:diffrule\_2\]). In the following theorem we prove that the fractional spline collocation-Galerkin method is convergent.
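Before turning to the analysis, the evaluation of the entries of $A_s$ through the differentiation rule (\[eq:diffrule\_2\]), $D_t^\gamma B_\beta = \Delta^\gamma B_{\beta-\gamma}$, can be sketched as follows; the truncation lengths of the two series are assumptions, chosen large enough for the fast-decaying coefficients.

```python
from math import gamma

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k)."""
    c = 1.0
    for i in range(k):
        c *= (a - i) / (i + 1)
    return c

def frac_bspline(alpha, t, terms=80):
    """B_alpha(t) = Delta^{alpha+1} t_+^alpha / Gamma(alpha+1) (truncated series)."""
    s = 0.0
    for k in range(terms):
        if t - k > 0:
            s += (-1) ** k * gbinom(alpha + 1, k) * (t - k) ** alpha
    return s / gamma(alpha + 1)

def frac_deriv_bspline(beta, gam, t, terms=80):
    """D_t^gamma B_beta(t) = Delta^gamma B_{beta-gamma}(t), cf. the rule above.
    For integer gam the binomial series terminates and the rule is exact."""
    return sum((-1) ** k * gbinom(gam, k) * frac_bspline(beta - gam, t - k)
               for k in range(terms))

# For gamma = 1 the rule collapses to D B_3(t) = B_2(t) - B_2(t-1),
# which can be cross-checked against a centered finite difference of B_3.
d = frac_deriv_bspline(3, 1.0, 1.5)
```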
First of all, let us introduce the Sobolev space on a bounded interval $$H^\mu(I):= \{v \in L^2(I): \exists \, \tilde v \in H^\mu (\RR) \ \hbox{\rm such that} \ \tilde v|_I=v\}, \quad \mu\geq 0\,,$$ equipped with the norm $$\|v\|_{\mu,I} = \inf_{\tilde v \in H^\mu(\RR), \tilde v|_I=v} \|\tilde v\|_{\mu,\RR}\,,$$ where $$H^\mu(\RR):= \{v: v\in L^2(\RR) \mbox{ and } (1+|\omega|^2)^{\mu/2} {\cal F}(v)(\omega) \in L^2(\RR)\}, \quad \mu\geq 0\,,$$ is the usual Sobolev space with the norm $$\| v \| _{\mu,\RR} =\bigl \| (1+|\omega|^2)^{\mu/2} {\cal F}(v)(\omega) \bigr \| _{0,\RR}\,.$$ \[Convergence\] Let $$H^\mu(I;H^{\tilde \mu}(\Omega)):= \{v(t,x): \| v(t,\cdot)\|_{H^{\tilde \mu}(\Omega)} \in H^\mu(I)\}, \quad \mu, {\tilde \mu} \geq 0\,,$$ equipped with the norm $$\|v\|_{H^\mu(I;H^{\tilde \mu}(\Omega))} := \bigl \| \|v(t,\cdot)\|_{H^{\tilde \mu}(\Omega)} \bigr\|_{\mu,I}\,.$$ Assume $u$ and $f$ in (\[eq:fracdiffeq\]) belong to $H^{\mu}([0,T];H^{\tilde \mu}([0,1]))$, $0\le \mu$, $0\le \tilde \mu$, and $H^{\mu-\gamma}([0,T];$ $H^{{\tilde \mu}-2}([0,1]))$, $0\le \mu-\gamma$, $0\le \tilde \mu-2$, respectively. Then, the fractional spline collocation-Galerkin method is convergent, [*i.e.*]{}, $$\|u-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} \, \to 0 \quad \hbox{as} \quad s,j \to \infty\,.$$ Moreover, for $\gamma \le \mu \le \beta+1$ and $1 \le \tilde \mu \le \alpha +1$ the following error estimate holds: $$\begin{array}{lcl} \| u-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} &\leq & \left (\eta_1 \, 2^{-j\tilde \mu} + \eta_2 \, 2^{-s\mu} \right ) \| u\|_{H^\mu([0,T];H^{\tilde \mu}([0,1]))}\,, \end{array}$$ where $\eta_1$ and $\eta_2$ are two constants independent of $s$ and $j$. Let $u_j$ be the exact solution of the variational problem (\[varform\]). Following a classical line of reasoning (cf.
[@Th06; @FXY11; @DPS94]) we get $$\begin{array}{l} \| u-u_{j,s}\|_{H^0([0,T];H^0([0,1]))} \leq \\ \\ \rule{2cm}{0cm} \leq \|u-u_{j}\|_{H^0([0,T];H^0([0,1]))} + \| u_j-u_{j,s}\|_{H^0([0,T];H^0([0,1]))}\, \leq \\ \\ \rule{2cm}{0cm} \leq \eta_1 \, 2^{-j\tilde \mu}\, \| u\|_{H^0([0,T];H^{\tilde \mu}([0,1]))} + \eta_2 \, 2^{-s\mu} \, \| u\|_{H^\mu([0,T];H^0([0,1]))} \leq \\ \\ \rule{2cm}{0cm} \leq \left ( \eta_1 \, 2^{-j\tilde \mu} + \eta_2 \, 2^{-s\mu} \right ) \, \|u\|_{H^\mu([0,T];H^{\tilde \mu}([0,1]))}\,. \end{array}$$ Numerical tests. {#sec:numtest} ================ To show the effectiveness of the fractional spline collocation-Galerkin method we solved the fractional diffusion problem (\[eq:fracdiffeq\]) for two different source terms $f(t,x)$ taken from [@FXY11]. In all the numerical tests we used as approximating space for the Galerkin method the (polynomial) cubic spline space. The B-spline $B_3$, its first derivative $B_3'$ and the B-basis $\{\phi_{3,3,k}\}$ are displayed in Figure \[fig:Bcubic\]. We notice that since the cubic B-spline is centrally symmetric in the interval $[0,4]$, the B-basis is centrally symmetric, too. All the numerical tests were performed on a laptop using a Python environment. Each test takes a few minutes.

![Left panel: The cubic B-spline (red line) and its first derivative (blue line). Right panel: The B-basis $\{\phi_{3,3,k}(x)\}$.[]{data-label="fig:Bcubic"}](Fig_Bspline_n3.png "fig:"){width="45.00000%"} ![](OptBasis_alpha3.png "fig:"){width="45.00000%"}

Example 1 --------- In the first test we solved the time-fractional diffusion equation (\[eq:fracdiffeq\]) in the case when $$f(t,x)=\frac{2}{\Gamma(3-\gamma)}\,t^{2-\gamma}\, \sin(2\pi x)+4\pi^2\,t^2\, \sin(2\pi x)\,.$$ The exact solution is $$u(t,x)=t^2\,\sin(2\pi x).$$ We used the fractional B-spline $B_{3.5}$ as approximating function for the collocation method and solved the problem for $\gamma = 1, 0.75, 0.5, 0.25$. The fractional B-spline $B_{3.5}$, its first derivative and its fractional derivatives are shown in Figure \[fig:fract\_Basis\], along with the fractional basis $\{\chi_{3.5,3,r}\}$. The numerical solution $u_{s,j}(t,x)$ and the error $e_{s,j}(t,x) = u(t,x)-u_{s,j}(t,x)$ for $s=6$ and $j=6$ are displayed in Figure \[fig:numsol\_1\] for $\gamma = 0.5$. In all the numerical tests we set $q = s+1$.
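As a consistency check on the data of Example 1, one can verify numerically that the Caputo derivative of the time factor $t^2$ equals $\frac{2}{\Gamma(3-\gamma)}\,t^{2-\gamma}$, the fractional part of the source term above. The midpoint quadrature below treats the weak singularity of the Caputo integral crudely, so only a loose tolerance is expected; the function name and discretization are illustrative.

```python
from math import gamma

def caputo_t2(gam, t, n=100000):
    """Caputo derivative of u(t) = t^2 for 0 < gam < 1 by midpoint quadrature:
    D^gam u(t) = 1/Gamma(1-gam) * int_0^t u'(s) (t-s)^(-gam) ds, with u'(s) = 2s."""
    h = t / n
    s = 0.0
    for i in range(n):
        si = (i + 0.5) * h
        s += 2.0 * si * (t - si) ** (-gam) * h
    return s / gamma(1.0 - gam)

gam, t = 0.5, 1.0
exact = 2.0 / gamma(3.0 - gam) * t ** (2.0 - gam)  # time factor in Example 1's f
approx = caputo_t2(gam, t)
```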
![Left panel: The fractional B-spline $B_{3.5}$ (green line), its first derivative (red line) and its fractional derivatives of order $\gamma =0.75$ (blue line), 0.5 (cyan line), 0.25 (black line). Right panel: The fractional basis $\{\chi_{3.5,3,r}\}$.[]{data-label="fig:fract_Basis"}](Fig_Bspline_n3p5.png "fig:"){width="45.00000%"} ![](FractBasis_alpha3p5.png "fig:"){width="45.00000%"}

![Example 1. The numerical solution (left panel) and the error (right panel) for $j=6$ and $s=6$ when $\gamma = 0.5$.[]{data-label="fig:numsol_1"}](Fig_NumSol_jGK6_sref7_beta3p5_gamma0p5_ex1.png "fig:"){width="45.00000%"} ![](Fig_Error_jGK6_sref7_beta3p5_gamma0p5_ex1.png "fig:"){width="45.00000%"}

We analyze the behavior of the error as the degree of the fractional B-spline $B_\beta$ increases. Figure \[fig:L2\_error\_1\] shows the $L_2$-norm of the error as a function of $s$ for $\beta$ ranging from 2 to 4; the four panels in the figure refer to different values of the order of the fractional derivative. For these tests we set $j=5$. The figure shows that for $s \le 4$ the error provided by the polynomial spline approximations is lower than the error provided by the fractional spline approximations. Nevertheless, in the latter case the error decreases, reaching the same value as, or even a lower one than, the polynomial spline error when $s=5$. We notice that for $\gamma=1$ the errors provided by the polynomial spline approximations of different degrees have approximately the same values, while the error provided by the polynomial spline of degree 2 is lower in the case of fractional derivatives. In fact, it is well-known that fractional derivatives are better approximated by less smooth functions [@Po99].
![Example 1: The $L_2$-norm of the error as a function of $s$ for different values of $\gamma$. Each line corresponds to a spline of a different degree: solid lines correspond to the polynomial splines; non-solid lines correspond to fractional splines.[]{data-label="fig:L2_error_1"}](Fig_L2Error_gam1_ex1.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p75_ex1.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p5_ex1.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p25_ex1.png "fig:"){width="45.00000%"}

Then, we analyze the convergence of the method for increasing values of $j$ and $s$. Table \[tab:conv\_js\_fract\_1\] reports the $L_2$-norm of the error for different values of $j$ and $s$ when using the fractional B-spline $B_{3.5}$ and $\gamma = 0.5$. The number of degrees-of-freedom is also reported. The table shows that the error decreases when $j$ increases and $s$ is held fixed. We notice that the error decreases very slightly when $j$ is held fixed and $s$ increases, since for these values of $s$ we have already reached the accuracy level we can expect for that value of $j$ (cf. Figure \[fig:L2\_error\_1\]). The higher values of the error for $s=7$ and $j=5,6$ are due to the numerical instabilities of the basis $\{\chi_{3.5,s,r}\}$, which result in a high condition number of the discretization matrix. The error has a similar behavior even in the case when we used the cubic B-spline space as approximating space for the collocation method (cf. Table \[tab:conv\_js\_cubic\_1\]).
                                  $j=3$            $j=4$            $j=5$            $j=6$
-------------------------------- ---------------- ---------------- ---------------- ----------------
$\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
$s=5$                            0.02037 (369)    0.00449 (697)    0.00101 (1353)   0.00025 (2665)
$s=6$                            0.02067 (657)    0.00417 (1241)   0.00093 (2409)   0.00024 (4745)
$s=7$                            0.01946 (1233)   0.00381 (2329)   0.00115 (4521)   0.00117 (8905)

: Example 1: The $L_2$-norm of the error for increasing values of $s$ and $j$ when using the fractional B-spline of degree $\beta=3.5$. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma =0.5$.

\[tab:conv\_js\_fract\_1\]

                                  $j=3$            $j=4$            $j=5$            $j=6$
-------------------------------- ---------------- ---------------- ---------------- ----------------
$\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
$s=5$                            0.02121 (315)    0.00452 (595)    0.00104 (1155)   0.00025 (2275)
$s=6$                            0.02109 (603)    0.00443 (1139)   0.00097 (2211)   0.00023 (4355)
$s=7$                            0.02037 (1179)   0.00399 (2227)   0.00115 (4323)   0.00115 (8515)

: Example 1: The $L_2$-norm of the error for increasing values of $s$ and $j$ when using the cubic B-spline. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma =0.5$.

\[tab:conv\_js\_cubic\_1\]

Example 2 --------- In the second test we solved the time-fractional diffusion equation (\[eq:fracdiffeq\]) in the case when $$\begin{array}{lcl} f(t,x) & = & \displaystyle \frac{\pi t^{1-\gamma}}{2\Gamma(2-\gamma)} \left( \, {_1F_1}(1,2-\gamma,i\pi\,t) + \,{_1F_1}(1,2-\gamma,-i\pi\,t) \right) \, \sin(\pi\,x) \\ \\ & + & \pi^2 \, \sin(\pi\,t) \, \sin(\pi\,x)\,, \end{array}$$ where $_1F_1(\alpha,\beta,z)$ is Kummer’s confluent hypergeometric function, defined as $$_1F_1(\alpha,\beta, z) = \frac {\Gamma(\beta)}{\Gamma(\alpha)} \, \sum_{k\in \NN} \, \frac {\Gamma(\alpha+k)}{\Gamma(\beta+k)\, k!} \, z^k\,, \qquad \alpha \in \RR\,, \quad -\beta \notin \NN\,,$$ (cf. [@AS65 Chapter 13]). In this case the exact solution is $$u(t,x)=\sin(\pi t)\,\sin(\pi x).$$ We performed the same set of numerical tests as in Example 1.
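The Kummer function in the source term can be evaluated directly from its defining series; the sketch below uses a fixed truncation length (an assumption) and exploits the fact that the pair of conjugate arguments in $f$ produces a real sum.

```python
import cmath

def hyp1f1(a, b, z, terms=120):
    """Kummer's confluent hypergeometric function 1F1(a; b; z) summed from its
    power series sum_k (a)_k/(b)_k z^k/k!, valid when b is not a nonpositive
    integer; the series converges for all z, truncated here after `terms`."""
    term = 1.0 + 0.0j
    s = term
    for k in range(1, terms):
        term *= (a + k - 1) / ((b + k - 1) * k) * z   # ratio of consecutive terms
        s += term
    return s

# The pair 1F1(1, 2-gamma, i*pi*t) + 1F1(1, 2-gamma, -i*pi*t) appearing in
# Example 2's source term is real, being a sum of complex conjugates.
gam, t = 0.5, 0.7
pair = hyp1f1(1, 2 - gam, 1j * cmath.pi * t) + hyp1f1(1, 2 - gam, -1j * cmath.pi * t)
```

A convenient sanity check is the closed form $_1F_1(1,2,z) = (e^z-1)/z$, which follows directly from the series.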
The numerical solution $u_{s,j}(t,x)$ and the error $e_{s,j}(t,x)$ for $s=5$ and $j=6$ are displayed in Figure \[fig:numsol\_2\] in the case when $\gamma = 0.5$. Figure \[fig:L2\_error\_2\] shows the $L_2$-norm of the error as a function of $s$ for $\beta$ ranging from 2 to 4 and $j=5$; the four panels in the figure refer to different values of the order of the fractional derivative. Tables \[tab:conv\_js\_fract\_2\]-\[tab:conv\_js\_cubic\_2\] report the $L_2$-norm of the error for different values of $j$ and $s$ and $\beta = 3.5, 3$, respectively. The number of degrees-of-freedom is also reported.\ Figure \[fig:L2\_error\_2\] shows that the value of the error is higher than in the previous example, but it decreases as $s$ increases, showing a behavior very similar to that in Example 1. The values of the error in Tables \[tab:conv\_js\_fract\_2\]-\[tab:conv\_js\_cubic\_2\] are approximately the same as in Tables \[tab:conv\_js\_fract\_1\]-\[tab:conv\_js\_cubic\_1\].

![Example 2: The numerical solution (left panel) and the error (right panel) when $j=6$ and $s=5$. []{data-label="fig:numsol_2"}](Fig_NumSol_jGK6_sref6_beta3p5_gamma0p5_ex2.png "fig:"){width="45.00000%"} ![](Fig_Error_jGK6_sref6_beta3p5_gamma0p5_ex2.png "fig:"){width="45.00000%"}

![Example 2: The $L_2$-norm of the error as a function of $s$ for different values of $\gamma$. Each line corresponds to a spline of a different degree: solid lines correspond to the polynomial splines; non-solid lines correspond to fractional splines.[]{data-label="fig:L2_error_2"}](Fig_L2Error_gam1_ex2.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p75_ex2.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p5_ex2.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p25_ex2.png "fig:"){width="45.00000%"}

                                  $j=3$            $j=4$            $j=5$            $j=6$
-------------------------------- ---------------- ---------------- ---------------- ----------------
$\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
$s=5$                            0.01938 (369)    0.00429 (697)    0.00111 (1353)   0.00042 (2665)
$s=6$                            0.01809 (657)    0.00555 (1241)   0.00507 (2409)   0.00523 (4745)
$s=7$                            0.01811 (1233)   0.01691 (2329)   0.01822 (4521)   0.01858 (8905)

: Example 2: The $L_2$-norm of the error for increasing values of $s$ and $j$ for the fractional B-spline of degree $\beta=3.5$. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma=0.5$.
\[tab:conv\_js\_fract\_2\]

                                  $j=3$            $j=4$            $j=5$            $j=6$
-------------------------------- ---------------- ---------------- ---------------- ----------------
$\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
$s=5$                            0.01909 (315)    0.00404 (595)    0.00102 (1155)   0.00063 (2275)
$s=6$                            0.01810 (603)    0.00546 (1139)   0.00495 (2211)   0.00511 (4355)
$s=7$                            0.01805 (1179)   0.01671 (2227)   0.01801 (4323)   0.01838 (8515)

: Example 2: The $L_2$-norm of the error for increasing values of $s$ and $j$ for the cubic B-spline. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma=0.5$.

\[tab:conv\_js\_cubic\_2\]

Conclusion {#sec:concl} ========== We proposed a fractional spline collocation-Galerkin method to solve the time-fractional diffusion equation. The novelty of the method is the use of fractional spline spaces as approximating spaces, so that the fractional derivative of the approximating function can be evaluated easily by an explicit differentiation rule involving the generalized finite difference operator. The numerical tests show that the method has good accuracy, so that it can be effectively used to solve fractional differential problems. The numerical instabilities arising in the fractional basis when $s$ increases can be reduced following the approach in [@GPP04], which allows us to construct stable bases on the interval. Moreover, the ill-conditioning of the linear system (\[colllinearsys\]) can be reduced using iterative methods in Krylov spaces, such as the method proposed in [@CPSV17]. Finally, we notice that, following the procedure given in [@GPP04], fractional wavelet bases on a finite interval can be constructed, so that the proposed method can be generalized to fractional wavelet approximating spaces. [10]{} Milton Abramowitz and Irene A. Stegun. , volume 55. Dover Publications, 1965. Dumitru Baleanu, Kai Diethelm, Enrico Scalas, and Juan J. Trujillo. Fractional calculus: models and numerical methods. , 3:10–16, 2012. Francesco Calabr[ò]{}, Carla Manni, and Francesca Pitolli.
Computation of quadrature rules for integration with respect to refinable functions on assigned nodes. , 90:168–189, 2015. Daniela Calvetti, Francesca Pitolli, Erkki Somersalo, and Barbara Vantaggi. Bayes meets [K]{}rylov: preconditioning [CGLS]{} for underdetermined systems. , in press. Wolfgang Dahmen, Siegfried Pr[ö]{}ssdorf, and Reinhold Schneider. Wavelet approximation methods for pseudodifferential equations: I stability and convergence. , 215(1):583–620, 1994. Neville Ford, Jingyu Xiao, and Yubin Yan. A finite element method for time fractional partial differential equations. , 14(3):454–474, 2011. Walter Gautschi, Laura Gori, and Francesca Pitolli. Gauss quadrature for refinable weight functions. , 8(3):249–257, 2000. Laura Gori, Laura Pezza, and Francesca Pitolli. Recent results on wavelet bases on the interval generated by [GP]{} refinable functions. , 51(4):549–563, 2004. Laura Gori and Francesca Pitolli. Refinable functions and positive operators. , 49(3):381–393, 2004. Rudolf Hilfer. . World Scientific, 2000. Francesco Mainardi. . World Scientific, 2010. Arvet Pedas and Enn Tamme. Numerical solution of nonlinear fractional differential equations by spline collocation methods. , 255:216–230, 2014. Laura Pezza and Francesca Pitolli. A multiscale collocation method for fractional differential problems. , 147:210–219, 2018. Igor Podlubny. , volume 198. Academic Press, 1998. Larry L. Schumaker. . Cambridge University Press, 2007. Hari Mohan Srivastava and Juan J. Trujillo. . Elsevier, 2006. Vasily E. Tarasov. . Springer Science & Business Media, 2011. Vidar Thomée. . Springer-Verlag, 2006. Michael Unser and Thierry Blu. Fractional splines and wavelets. , 42(1):43–67, 2000. Mohsen Zayernouri and George Em Karniadakis. Fractional spectral collocation method. , 36(1):A40–A62, 2014. [^1]: [*Dept. SBAI, University of Roma ”La Sapienza”*]{}, Via A. Scarpa 16, 00161 Roma, Italy. e-mail: [laura.pezza@sbai.uniroma1.it]{} [^2]: [*Dept. 
SBAI, University of Roma ”La Sapienza”*]{}, Via A. Scarpa 16, 00161 Roma, Italy. e-mail:
Sun aims powerful flares at Earth Top: Two large sunspot groups are visible in this image of the sun obtained by the Solar and Heliospheric Observatory (SOHO). Below: This SOHO image shows a large filament eruption that occurred February 26. The disk in the center is a mask that blocks out direct sunlight. By Richard Stenger CNN Interactive Staff Writer March 1, 2000 Web posted at: 3:24 p.m. EST (2024 GMT) (CNN) -- The sun should place the Earth squarely in its sights this week as it aims its solar ray gun. Astronomers tell terrestrial dwellers not to sweat it too much, despite the fact that solar activity is approaching an 11-year peak. Two large sunspots moving across the surface of the sun are expected to directly face the Earth soon for up to several days, according to solar scientists. Such sunspots often herald powerful coronal mass ejections and solar flares, space storms that can disrupt weather and electrical systems on Earth. Solar flares are the largest explosions in the solar system. A typical one can release the energy equivalent of millions of 100-megaton hydrogen bombs exploding at once. Highly charged particles from large flares can overload power grids and damage satellites. In 1989, one space storm knocked out a major power plant in Canada, leaving millions without power for hours. Solar activity generally waxes and wanes during an 11-year cycle and astronomers expect it to peak either this or next year. But so far, the sun has produced only a "disappointing" level of fireworks, said Joseph Gurman, a solar physicist who analyzes data from the Solar and Heliospheric Observatory. Coronal mass ejections are much more likely to produce effects, Gurman said. Like flares, they send streams of highly charged particles, but they also can emit a billion tons of plasma, or ionized gas. Fortunately the Earth's magnetosphere usually bears the brunt of plasma particles. "If we were exposed to them, we literally would be fried," Gurman said.
package com.android.inputmethodcommon; class InputMethodSettingsInterface { } class InputMethodSettingsImpl { int mContext; int mImi; int mImm; int mSubtypeEnablerIcon; int mSubtypeEnablerIconRes; int mSubtypeEnablerTitle; int mSubtypeEnablerTitleRes; int mInputMethodSettingsCategoryTitle; int mInputMethodSettingsCategoryTitleRes; int mSubtypeEnablerPreference; } class InputMethodSettingsFragment { int mSettings; } class InputMethodSettingsActivity { int mSettings; }
No other appliance company has a wider scope of solutions, nor the experience to back them up, than Electrolux. Our long presence in people’s homes around the world means that no other appliance company ...

... Easy-Flo vacuums including parts and bags. Findlay's also offers sales and service for all makes and models of sewing machines and vacuums. Please contact us for more information about our products and services.

... house. The value of a property increases with the addition of a hydraulic elevator, electric home elevator, vacuum elevator or high end wheelchair lift. Hybrid Elevator Inc. makes residential elevators that look amazing in an ...
---
abstract: 'We study here properties of [*free Generalized Inverse Gaussian distributions*]{} (fGIG) in free probability. We show that in many cases the fGIG shares similar properties with the classical GIG distribution. In particular we prove that the fGIG is freely infinitely divisible, free regular and unimodal, and moreover we determine which distributions in this class are freely selfdecomposable. In the second part of the paper we prove that for free random variables $X,Y$, where $Y$ has a free Poisson distribution, one has $X\stackrel{d}{=}\frac{1}{X+Y}$ if and only if $X$ has a fGIG distribution for a special choice of parameters. We also point out that the free GIG distribution maximizes the same free entropy functional as the classical GIG does for the classical entropy.'
author:
- |
  Takahiro Hasebe\
  Department of Mathematics,\
  Hokkaido University\
  thasebe@math.sci.hokudai.ac.jp
- |
  Kamil Szpojankowski\
  Faculty of Mathematics and Information Science\
  Warsaw University of Technology\
  k.szpojankowski@mini.pw.edu.pl
title: On free generalized inverse Gaussian distributions
---

Introduction
============

Free probability was introduced by Voiculescu in [@Voi85] as a non-commutative probability theory where one defines a new notion of independence, the so-called freeness or free independence. Non-commutative probability is a counterpart of classical probability theory where one allows random variables to be non-commutative objects. Instead of defining a probability space as a triplet $(\Omega,\mathcal{F},\mathbb{P})$ we switch to a pair $(\mathcal{A},\varphi)$, where $\mathcal{A}$ is an algebra of random variables and $\varphi\colon\mathcal{A}\to\mathbb{C}$ is a linear functional; in the classical situation $\varphi=\mathbb{E}$. It is natural then to consider algebras $\mathcal{A}$ where random variables do not commute (for example $C^*$ or $W^*$–algebras). For bounded random variables independence can be equivalently understood as a rule of calculating mixed moments.
It turns out that while for commuting random variables only one such rule leads to a meaningful notion of independence, the non-commutative setting is richer and one can consider several notions of independence. Free independence seems to be the most important one. The precise definition of freeness is stated in Section 2 below. Free probability emerged from questions related to operator algebras; however, the development of this theory showed that it is surprisingly closely related to classical probability theory. The first evidence of such relations appeared with Voiculescu’s results about asymptotic freeness of random matrices. Asymptotic freeness, roughly speaking, states that (classically) independent, unitarily invariant random matrices become free when their size goes to infinity.\ Another link between free and classical probability goes via infinite divisibility. With a notion of independence in hand one can consider a convolution of probability measures related to this notion. For free independence such an operation is called the free convolution and is denoted by $\boxplus$. More precisely, for free random variables $X,Y$ with respective distributions $\mu,\nu$ the distribution of the sum $X+Y$ is called the free convolution of $\mu$ and $\nu$ and is denoted by $\mu\boxplus\nu$. The next natural step is to ask which probability measures are infinitely divisible with respect to this convolution. We say that $\mu$ is freely infinitely divisible if for any $n \geq 1$ there exists a probability measure $\mu_n$ such that $$\mu=\underbrace{\mu_n\boxplus\ldots\boxplus\mu_n}_{\text{$n$ times}}.$$ Here we come across another striking relation between free and classical probability: there exists a bijection between classically and freely infinitely divisible probability measures; this bijection was found in [@BP99] and is called the Bercovici-Pata (BP) bijection.
This bijection has a number of interesting properties; for example, measures in bijection have the same domains of attraction. In the free probability literature it is a standard approach to look for the free counterpart of a classical distribution via the BP bijection. For example, Wigner’s semicircle law plays the role of the Gaussian law in the free analogue of the Central Limit Theorem, and the Marchenko-Pastur distribution appears in the limit of the free version of the Poisson limit theorem and is often called the free Poisson distribution. While the BP bijection proved to be a powerful tool, it does not preserve all good properties of distributions. Consider for example the Lukacs theorem, which says that for classically independent random variables $X,Y$ the random variables $X+Y$ and $X/(X+Y)$ are independent if and only if $X,Y$ have Gamma distributions with the same scale parameter [@Luk55]. One can consider a similar problem in free probability and gets the following result (see [@Szp15; @Szp16]): for free random variables $X,Y$ the random variables $X+Y$ and $(X+Y)^{-1/2}X(X+Y)^{-1/2}$ are free if and only if $X,Y$ have Marchenko-Pastur (free Poisson) distributions with the same rate. From this example one can see our point: it is not the image of the Gamma distribution under the BP bijection (studied in [@PAS08; @HT14]) that has the Lukacs independence property in free probability; in this context the free Poisson distribution plays the role of the classical Gamma distribution. In [@Szp17] another free independence property was studied – a free version of the so-called Matsumoto-Yor property (see [@MY01; @LW00]). In classical probability this property says that for independent $X,Y$ the random variables $1/(X+Y)$ and $1/X-1/(X+Y)$ are independent if and only if $X$ has a Generalized Inverse Gaussian (GIG) distribution and $Y$ has a Gamma distribution. In the free version of this theorem (i.e.
the theorem where one replaces classical independence assumptions by free independence) it turns out that the role of the Gamma distribution is taken again by the free Poisson distribution and the role of the GIG distribution is played by a probability measure which appeared for the first time in [@Fer06]. We will refer to this measure as the free Generalized Inverse Gaussian distribution or fGIG for short. We give the definition of this distribution in Section 2. The main motivation of this paper is to study further properties of the fGIG distribution. The results from [@Szp17] suggest that in some sense (but not by means of the BP bijection) this distribution is the free probability analogue of the classical GIG distribution. It is natural then to ask if the fGIG distribution shares more properties with its classical counterpart. It is known that the classical GIG distribution is infinitely divisible (see [@BNH77]) and selfdecomposable (see [@Hal79; @SS79]). In [@LS83] the GIG distribution was characterized in terms of an equality in distribution: namely, if we take $X,Y_1,Y_2$ independent and such that $Y_1$ and $Y_2$ have Gamma distributions with suitable parameters and we assume that $$\begin{aligned} X\stackrel{d}{=}\frac{1}{Y_2+\frac{1}{Y_1+X}}\end{aligned}$$ then $X$ necessarily has a GIG distribution. A simpler version of this theorem characterizes a smaller class of GIG distributions by the equality $$\begin{aligned} \label{eq:char} X\stackrel{d}{=}\frac{1}{Y_1+X}\end{aligned}$$ for $X$ and $Y_1$ as described above. The overall result of this paper is that the two distributions GIG and fGIG indeed have many similarities. We show that the fGIG distribution is freely infinitely divisible and, even more, that it is free regular. Moreover the fGIG distribution can be characterized by the equality in distribution , where one has to replace the independence assumption by freeness and assume that $Y_1$ has a free Poisson distribution.
While only a few examples of freely selfdecomposable distributions are known, it is interesting to ask whether the fGIG distribution has this property. It turns out that selfdecomposability is the point where the symmetry between GIG and fGIG partially breaks down: not all fGIG distributions are freely selfdecomposable. We find conditions on the parameters of the fGIG family for which these distributions are freely selfdecomposable. Apart from the results mentioned above, we prove that the fGIG distribution is unimodal. We also point out that in [@Fer06] it was proved that fGIG maximizes a certain free entropy functional. An easy application of Gibbs’ inequality shows that the classical GIG maximizes the same functional of classical entropy. The paper is organized as follows: In Section 2 we briefly recall the basics of free probability and then study some properties of fGIG distributions. Section 3 is devoted to the study of free infinite divisibility, free regularity, free selfdecomposability and unimodality of the fGIG distribution. In Section 4 we show that the free counterpart of the characterization of the GIG distribution by  holds true, and we discuss entropy analogies between GIG and fGIG.

Free GIG distributions
======================

In this section we recall the definition of the free GIG distribution and study basic properties of this distribution. In particular we study in detail the $R$-transform of the fGIG distribution. Some of the properties established in this section will be crucial in the subsequent sections, where we study free infinite divisibility of the free GIG distribution and a characterization of the free GIG distribution. The free GIG distribution appeared for the first time (not under the name free GIG) as the almost sure weak limit of the empirical spectral distribution of GIG matrices (see [@Fer06]).
Basics of free probability
--------------------------

This paper deals mainly with properties of the free GIG distribution related to free probability, and in particular to free convolution. Therefore in this section we introduce the notions and tools that we need in this paper. The introduction is far from being detailed; a reader not familiar with free probability may find a very good introduction to the theory in [@VDN92; @NS06; @MS].

1. A $C^*$–probability space is a pair $(\mathcal{A},\varphi)$, where $\mathcal{A}$ is a unital $C^*$-algebra and $\varphi$ is a linear functional $\varphi\colon\mathcal{A}\to\mathbb{C}$, such that $\varphi(\mathit{1}_\mathcal{A})=1$ and $\varphi(aa^*)\geq 0$. Here by $\mathit{1}_\mathcal{A}$ we understand the unit of $\mathcal{A}$.

2. Let $I$ be an index set. A family of subalgebras $\left(\mathcal{A}_i\right)_{i\in I}$ is called free if $\varphi(X_1\cdots X_n)=0$ whenever $X_i\in \mathcal{A}_{j_i}$, $j_1\neq j_2\neq \ldots \neq j_n$ and $\varphi(X_i)=0$ for all $i=1,\ldots,n$ and $n=1,2,\ldots$. Similarly, self-adjoint random variables $X,\,Y\in\mathcal{A}$ are free (freely independent) when the subalgebras generated by $(X,\,\mathit{1}_\mathcal{A})$ and $(Y,\,\mathit{1}_\mathcal{A})$ are freely independent.

3. The distribution of a self-adjoint random variable is identified via moments, that is, for a random variable $X$ we say that a probability measure $\mu$ is the distribution of $X$ if $$\varphi(X^n)=\int t^n\,{{\rm d}}\mu(t),\,\mbox{for all } n=1,2,\ldots$$ Note that since we assume that our algebra $\mathcal{A}$ is a $C^*$–algebra, all random variables are bounded, and thus the sequence of moments indeed determines a unique probability measure.

4. The distribution of the sum $X+Y$ for free random variables $X,Y$ with respective distributions $\mu$ and $\nu$ is called the free convolution of $\mu$ and $\nu$, and is denoted by $\mu\boxplus\nu$.
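To make the notion of free convolution concrete, consider the standard example in which the semicircle law plays the role of the Gaussian law: the free convolution of two centered semicircle laws is again a semicircle law, with variances adding. This can be verified numerically through the Cauchy transform and its compositional inverse $K_\mu(w)=1/w+R_\mu(w)$, where $R_\mu(w)=\sigma^2 w$ for the centered semicircle law of variance $\sigma^2$ (a small self-contained sketch in Python; these are textbook facts, not results of the present paper):

```python
import cmath

def cauchy_semicircle(z, var):
    # Cauchy transform of the centered semicircle law of variance `var`;
    # the principal square root gives the branch with G(z) ~ 1/z at infinity
    return (z - cmath.sqrt(z * z - 4.0 * var)) / (2.0 * var)

def K(w, var):
    # right inverse of the Cauchy transform: K(w) = 1/w + R(w), R(w) = var * w
    return 1.0 / w + var * w

w = complex(0.05, -0.05)  # a point close to 0 in the lower half-plane

# G(K(w)) = w for the variance-1 semicircle law
err1 = abs(cauchy_semicircle(K(w, 1.0), 1.0) - w)

# R linearizes free convolution: R_1(w) + R_1(w) = 2w is the R-transform of the
# variance-2 semicircle law, so semicircle(1) boxplus semicircle(1) = semicircle(2)
err2 = abs(cauchy_semicircle(1.0 / w + 2.0 * w, 2.0) - w)

print(err1, err2)
```

Both errors are at the level of machine precision, which is exactly the statement that the $R$-transform (recalled in detail below) adds under $\boxplus$.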
Free GIG distribution --------------------- In this paper we are concerned with a specific family of probability measures which we will refer to as free GIG (fGIG) distributions. The free Generalized Inverse Gaussian (fGIG) distribution is a measure $\mu=\mu(\alpha,\beta,\lambda)$, where $\lambda\in\mathbb{R}$ and $\alpha,\beta>0$ which is compactly supported on the interval $[a,b]$ with the density $$\begin{aligned} \mu({{\rm d}}x)=\frac{1}{2\pi}\sqrt{(x-a)(b-x)} \left(\frac{\alpha}{x}+\frac{\beta}{\sqrt{ab}x^2}\right){{\rm d}}x, \end{aligned}$$ where $0<a<b$ are the solution of $$\begin{aligned} \label{eq1} 1-\lambda+\alpha\sqrt{ab}-\beta\frac{a+b}{2ab}=&0\\ \label{eq2} 1+\lambda+\frac{\beta}{\sqrt{ab}}-\alpha\frac{a+b}{2}=&0. \end{aligned}$$ Observe that the system of equations for coefficients for fixed $\lambda\in \mathbb{R}$ and $\alpha,\beta>0$ has a unique solution $0<a<b$. We can easily get the following \[prop:ab\] Let $\lambda\in{\mathbb{R}}$. Given $\alpha,\beta>0$, the system of equations , has a unique solution $(a,b)$ such that $$\label{unique} 0<a<b, \qquad |\lambda| \left(\frac{\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}\right)^2<1.$$ Conversely, given $(a,b)$ satisfying , the set of equations – has a unique solution $(\alpha,\beta)$, which is given by $$\begin{aligned} &\alpha = \frac{2}{(\sqrt{a}-\sqrt{b})^2}\left( 1 + \lambda \left(\frac{\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}\right)^2 \right) >0, \label{eq3}\\ &\beta = \frac{2 a b}{ (\sqrt{a} - \sqrt{b})^2}\left( 1 - \lambda \left(\frac{\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}\right)^2 \right)>0. \label{eq4}\end{aligned}$$ Thus we may parametrize fGIG distribution using parameters $(a,b,\lambda)$ satisfying instead of $(\alpha,\beta,\lambda)$. We will make it clear whenever we will use a parametrization different than $(\alpha,\beta,\lambda)$. It is useful to introduce another parameterization to describe the distribution $\mu(\alpha,\beta,\lambda)$. 
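The description above can be sanity-checked numerically: pick admissible endpoints $0<a<b$ and $\lambda$, recover $(\alpha,\beta)$ from the closed-form expressions  and  of Proposition \[prop:ab\], and verify that the defining system is solved and that the density integrates to one. The sketch below is an illustration, not part of the original argument; the values $a=1$, $b=4$, $\lambda=1/2$ are an arbitrary admissible choice.

```python
import math

def fgig_alpha_beta(a, b, lam):
    # closed-form (alpha, beta) for admissible endpoints 0 < a < b,
    # following the two displayed formulas of Proposition [prop:ab]
    r = ((math.sqrt(a) - math.sqrt(b)) / (math.sqrt(a) + math.sqrt(b))) ** 2
    assert abs(lam) * r < 1.0  # the admissibility condition on (a, b, lambda)
    c = 2.0 / (math.sqrt(a) - math.sqrt(b)) ** 2
    return c * (1.0 + lam * r), c * a * b * (1.0 - lam * r)

def fgig_density(x, a, b, alpha, beta):
    # density of mu(alpha, beta, lambda) on its support [a, b]
    return math.sqrt((x - a) * (b - x)) / (2.0 * math.pi) * (
        alpha / x + beta / (math.sqrt(a * b) * x * x))

a, b, lam = 1.0, 4.0, 0.5
alpha, beta = fgig_alpha_beta(a, b, lam)

# the recovered (alpha, beta) must solve the defining system for (a, b)
eq1 = 1.0 - lam + alpha * math.sqrt(a * b) - beta * (a + b) / (2.0 * a * b)
eq2 = 1.0 + lam + beta / math.sqrt(a * b) - alpha * (a + b) / 2.0

# total mass by the midpoint rule (the sqrt endpoint singularities are mild)
n = 200_000
h = (b - a) / n
mass = h * sum(fgig_density(a + (k + 0.5) * h, a, b, alpha, beta)
               for k in range(n))
print(eq1, eq2, mass)
```

Both residuals vanish to machine precision and the computed mass agrees with $1$ up to the quadrature error.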
Define $$\label{eq:AB} A=(\sqrt{b}-\sqrt{a})^2, \qquad B= (\sqrt{a}+\sqrt{b})^2,$$ observe that we have then $$\begin{aligned} \alpha =& \frac{2}{A}\left( 1 + \lambda \frac{A}{B} \right) >0, \qquad \beta = \frac{(B-A)^2}{8 A}\left( 1 - \lambda \frac{A}{B} \right)>0,\\ a =& \left(\frac{\sqrt{B}-\sqrt{A}}{2}\right)^2,\qquad b = \left(\frac{\sqrt{A}+\sqrt{B}}{2}\right)^2. \end{aligned}$$ The condition is equivalent to $$\label{eq:ABineq} 0<\max\{1,|\lambda|\}A<B.$$ Thus one can describe any measure $\mu(\alpha,\beta,\lambda)$ in terms of $\lambda,A,B$. $R$-transform of fGIG distribution {#sec:form} ---------------------------------- The $R$-transform of the measure $\mu(\alpha,\beta,\lambda)$ was calculated in [@Szp17]. Since the $R$-transform will play a crucial role in the paper we devote this section for a detailed study of its properties. We also point out some properties of fGIG distribution which are derived from properties of the $R$-transform. Before we present the $R$-transform of fGIG distribution let us briefly recall how the $R$-transform is defined and stress its importance for free probability. \[rem:Cauchy\] 1. For a probability measure $\mu$ one defines its Cauchy transform via $$G_\mu(z)=\int \frac{1}{z-x}{{\rm d}}\mu(x).$$ It is an analytic function on the upper-half plane with values in the lower half-plane. Cauchy transform determines uniquely the measure and there is an inversion formula called Stieltjes inversion formula, namely for $h_{\varepsilon}(t)=-\tfrac{1}{\pi}{\text{\normalfont Im}}\, G_\mu(t+i{\varepsilon})$ one has $${{\rm d}}\mu(t)=\lim_{{\varepsilon}\to 0^+} h_{\varepsilon}(t)\,{{\rm d}}t,$$ where the limit is taken in the weak topology. 2. 
For a compactly supported measure $\mu$ one can define, in a neighbourhood of the origin, the so-called $R$-transform by $$R_\mu(z)=G_\mu^{\langle -1 \rangle}(z)-\frac{1}{z},$$ where by $G_\mu^{\langle -1 \rangle}$ we denote the inverse under composition of the Cauchy transform of $\mu$.\ The relevance of the $R$-transform for free probability comes from the fact that it linearizes free convolution, that is, $R_{\mu\boxplus\nu}=R_\mu+R_\nu$ in a neighbourhood of zero. The $R$-transform of the fGIG distribution is given by $$\label{eq:R_F} \begin{split} r_{\alpha,\beta,\lambda}(z) &= \frac{-\alpha + (\lambda+1)z + \sqrt{f_{\alpha,\beta,\lambda}(z)}}{2z(\alpha-z)} \end{split}$$ in a neighbourhood of $0$, where the square root is the principal value, $$\label{eq:pol_fpar} f_{\alpha,\beta,\lambda}(z)=(\alpha+(\lambda-1)z)^2-4\beta z (z-\alpha)(z-\gamma),$$ and $$\begin{aligned} \gamma=\frac{\alpha^2 a b+\frac{\beta^2}{ab}-2\alpha\beta\left(\frac{a+b}{\sqrt{ab}}-1\right)-(\lambda-1)^2}{4\beta}.\end{aligned}$$ Note that $z=0$ is a removable singular point of $r_{\alpha,\beta,\lambda}$. Observe that in terms of $A,B$ defined by we have $$\begin{aligned} \gamma &= 2\frac{\lambda A^2 + A B - 2B^2}{B(B-A)^2}. \end{aligned}$$ It is straightforward to observe that implies $A (\lambda A+B)<2 A B<2B^2$, thus we have $\gamma<0$. The following remark was used in [@Szp17 Remark 2.1] without a proof. We give a proof here. \[rem:F\_lambda\_sym\] We have $f_{\alpha,\beta,\lambda}(z)=f_{\alpha,\beta,-\lambda}(z)$, where $\alpha,\beta>0,\lambda\in\mathbb{R}$. To see this one has to insert the definition of $\gamma$ into to obtain $$f_{\alpha,\beta,\lambda}(z)=\alpha z \lambda^2+\left(\left(ab\alpha^2-2\alpha\beta\frac{a+b}{\sqrt{ab}}+\frac{\beta^2}{ab}+2\alpha\beta\right)z-4 \beta z^2-\alpha\right) (z-\alpha),$$ where $a=a(\alpha,\beta,\lambda)$ and $b=b(\alpha,\beta,\lambda)$.
Thus it suffices to show that the quantity $g(\alpha,\beta,\lambda):=ab\alpha^2-2\alpha\beta\frac{a+b}{\sqrt{ab}}+\frac{\beta^2}{ab}$ does not depend on the sign of $\lambda$. To see this, observe from the system of equations and that $a(\alpha,\beta,-\lambda)=\frac{\beta}{\alpha b(\alpha,\beta,\lambda)}$ and $b(\alpha,\beta,-\lambda)=\frac{\beta}{\alpha a(\alpha,\beta,\lambda)}$. It is then straightforward to check that $ g(\alpha,\beta,-\lambda) = g(\alpha,\beta,\lambda). $ The $R$-transform of the measure $\mu(\alpha,\beta,\lambda)$ can be extended to a function (still denoted by $r_{\alpha,\beta,\lambda}$) which is analytic on $\mathbb{C}^{-}$ and continuous on $({\mathbb{C}}^- \cup{\mathbb{R}})\setminus\{\alpha\}$. A direct calculation shows that using parameters $A,B$ defined by the polynomial $f_{\alpha,\beta,\lambda}$ under the square root factors as $$f_{\alpha,\beta,\lambda}(z) = \frac{(B-A)^2(B-\lambda A)}{2 A B}\left[z +\frac{2(B+\lambda A)}{B(B-A)}\right]^2 \left[ \frac{2B}{A(B-\lambda A)} - z \right].$$ Thus we can write $$\label{eq:pol_par} f_{\alpha,\beta,\lambda}(z) = 4\beta (z-\delta)^2(\eta-z),$$ where $$\begin{aligned} &\delta = - \frac{2(B+\lambda A)}{B(B-A)}<0, \label{eq5}\\ &\eta = \frac{2B}{A(B-\lambda A)} >0. \label{eq6}\end{aligned}$$ It is straightforward to verify that implies $\eta \geq \alpha$ with equality valid only when $\lambda=0$.\ Calculating $f_{\alpha,\beta,\lambda}(0)$ using first and then we get $4 \beta \eta \delta^2 = \alpha^2$, since $\eta \geq \alpha$ we see that $\delta \geq -\sqrt{\alpha/(4\beta)}$ with equality only when $\lambda=0$. Since all roots of $f_{\alpha,\beta,\lambda}$ are real, the square root $\sqrt{f_{\alpha,\beta,\lambda}(z)}$ may be defined continuously on ${\mathbb{C}}^-\cup{\mathbb{R}}$ so that $\sqrt{f_{\alpha,\beta,\lambda}(0)}=\alpha$. 
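Remark \[rem:F\_lambda\_sym\] and the factored form of $f_{\alpha,\beta,\lambda}$ lend themselves to a quick floating-point check. Using the relations $a(\alpha,\beta,-\lambda)=\beta/(\alpha b(\alpha,\beta,\lambda))$ and $b(\alpha,\beta,-\lambda)=\beta/(\alpha a(\alpha,\beta,\lambda))$ from the proof above, the sketch below (an illustration with an arbitrary admissible parameter choice, not part of the paper) verifies both $f_{\alpha,\beta,\lambda}=f_{\alpha,\beta,-\lambda}$ and $f_{\alpha,\beta,\lambda}(z)=4\beta(z-\delta)^2(\eta-z)$ at a few sample points:

```python
import math

def fgig_alpha_beta(a, b, lam):
    # (alpha, beta) from admissible endpoints, as in Proposition [prop:ab]
    r = ((math.sqrt(a) - math.sqrt(b)) / (math.sqrt(a) + math.sqrt(b))) ** 2
    c = 2.0 / (math.sqrt(a) - math.sqrt(b)) ** 2
    return c * (1.0 + lam * r), c * a * b * (1.0 - lam * r)

def f_poly(z, a, b, lam):
    # f_{alpha,beta,lambda}(z) = (alpha+(lam-1)z)^2 - 4 beta z (z-alpha)(z-gamma)
    alpha, beta = fgig_alpha_beta(a, b, lam)
    gamma = (alpha ** 2 * a * b + beta ** 2 / (a * b)
             - 2.0 * alpha * beta * ((a + b) / math.sqrt(a * b) - 1.0)
             - (lam - 1.0) ** 2) / (4.0 * beta)
    return (alpha + (lam - 1.0) * z) ** 2 - 4.0 * beta * z * (z - alpha) * (z - gamma)

a, b, lam = 1.0, 4.0, 0.5
alpha, beta = fgig_alpha_beta(a, b, lam)
a2, b2 = beta / (alpha * b), beta / (alpha * a)  # endpoints for mu(alpha, beta, -lam)

A = (math.sqrt(b) - math.sqrt(a)) ** 2
B = (math.sqrt(a) + math.sqrt(b)) ** 2
delta = -2.0 * (B + lam * A) / (B * (B - A))  # double root of f
eta = 2.0 * B / (A * (B - lam * A))           # simple root of f

test_pts = (-0.5, 0.1, 0.3, 1.7)
sym_err = max(abs(f_poly(z, a, b, lam) - f_poly(z, a2, b2, -lam)) for z in test_pts)
fact_err = max(abs(f_poly(z, a, b, lam) - 4.0 * beta * (z - delta) ** 2 * (eta - z))
               for z in test_pts)
print(sym_err, fact_err)
```

Both discrepancies are at the level of rounding error, in agreement with the algebraic identities.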
As noted above $\delta<0$, and continuity of $f_{\alpha,\beta,\lambda}$ implies that we have $$\label{RRR} \sqrt{f_{\alpha,\beta,\lambda}(z)} = 2(z-\delta)\sqrt{\beta(\eta-z)},$$ where we take the principal value of the square root in the expression $\sqrt{4\beta(\eta-z)}$. Thus finally we arrive at the following form of the $R$-transform $$\label{R} \begin{split} r_{\alpha,\beta,\lambda}(z) &= \frac{-\alpha + (\lambda+1)z + 2(z-\delta)\sqrt{\beta(\eta-z)}}{2z(\alpha-z)} \end{split}$$ which is analytic in ${\mathbb{C}}^-$ and continuous in $({\mathbb{C}}^- \cup{\mathbb{R}})\setminus\{\alpha\}$ as required. Next we describe the behaviour of the $R$-transform around the singular point $z=\alpha$. If $\lambda>0$ then $$\label{alpha} \begin{split} r_{\alpha,\beta,\lambda}(z) = \frac{\lambda}{\alpha-z} -\frac{1}{2\alpha} \left(1+\lambda+\frac{\sqrt{\beta}(2\eta-3\alpha+\delta)}{\sqrt{\eta-\alpha}}\right) + o(1),\qquad\mbox{as } z\to\alpha. \end{split}$$ If $\lambda<0$ then $$\label{alpha2} r_{\alpha,\beta,\lambda}(z) = - \frac{1}{2\alpha}\left(1+\lambda+\frac{\sqrt{\beta}(2\eta-3\alpha+\delta)}{\sqrt{\eta-\alpha}}\right)+ o(1), \qquad \mbox{as } z\to\alpha.$$ In the remaining case $\lambda=0$ one has $$\label{alpha3} \begin{split} r_{\alpha,\beta,0}(z) &= \frac{-\alpha + z + 2(z-\delta)\sqrt{\beta(\alpha-z)}}{2z(\alpha-z)} = -\frac{1}{2z} + \frac{\sqrt{\beta}(z-\delta)}{z\sqrt{\alpha-z}}. \end{split}$$ By the definition we have $f_{\alpha,\beta,\lambda}(\alpha) = (\lambda\alpha)^2$, substituting this in the expression we obtain that $ \alpha |\lambda| = 2(\alpha-\delta)\sqrt{\beta (\eta-\alpha)}. 
$ Taking the Taylor expansion around $z=\alpha$ for $\lambda \neq0$ we obtain $$\label{Taylor} \sqrt{f_{\alpha,\beta,\lambda}(z)}=\alpha|\lambda| + \frac{\sqrt{\beta}(2\eta-3\alpha+\delta)}{\sqrt{\eta-\alpha}}(z-\alpha) + o(|z-\alpha|),\qquad \mbox{as }z\to \alpha.$$ This implies  and , and so $r_{\alpha,\beta,\lambda}$ may be extended to a continuous function on ${\mathbb{C}}^-\cup {\mathbb{R}}$. The case $\lambda=0$ follows from the fact that in this case we have $\eta=\alpha$. In the case $\lambda<0$ one can extend $r_{\alpha,\beta,\lambda}$ to an analytic function in ${\mathbb{C}}^-$ and continuous in ${\mathbb{C}}^- \cup{\mathbb{R}}$.

Some properties of fGIG distribution
------------------------------------

We study here further properties of the free GIG distribution. Some of them motivate Section 4, where we will characterize the fGIG distribution in a way analogous to the classical GIG distribution. The next remark recalls the definition and some basic facts about the free Poisson distribution, which will play an important role in this paper. \[rem:freePoisson\] 1. The Marchenko–Pastur (or free Poisson) distribution $\nu=\nu(\gamma, \lambda)$ is defined by the formula $$\begin{aligned} \nu=\max\{0,\,1-\lambda\}\,\delta_0+\tilde{\nu}, \end{aligned}$$ where $\gamma,\lambda> 0$ and the measure $\tilde{\nu}$, supported on the interval $(\gamma(1-\sqrt{\lambda})^2,\,\gamma(1+\sqrt{\lambda})^2)$, has the density (with respect to the Lebesgue measure) $$\tilde{\nu}({{\rm d}}x)=\frac{1}{2\pi\gamma x}\,\sqrt{4\lambda\gamma^2-(x-\gamma(1+\lambda))^2}\,{{\rm d}}x.$$ 2. The $R$-transform of the free Poisson distribution $\nu(\gamma,\lambda)$ is of the form $$r_{\nu(\gamma, \lambda)}(z)=\frac{\gamma\lambda}{1-\gamma z}.$$ The next proposition, which is the free counterpart of a convolution property of the classical Gamma and GIG distributions, was proved in [@Szp17 Remark 2.1]. The proof is a straightforward calculation of the $R$-transform with the help of Remark \[rem:F\_lambda\_sym\].
\[GIGPoissConv\] Let $X$ and $Y$ be free, $X$ free GIG distributed $\mu(\alpha,\beta,-\lambda)$ and $Y$ free Poisson distributed $\nu(1/\alpha,\lambda)$ respectively, for $\alpha,\beta,\lambda>0$. Then $X+Y$ is free GIG distributed $\mu(\alpha,\beta,\lambda)$. We also quote another result from [@Szp17 Remark 2.2], which is again the free analogue of a property of the classical GIG distribution. The proof is a simple calculation of the density. \[GIGInv\] If $X$ has the free GIG distribution $\mu(\alpha,\beta,\lambda)$ then $X^{-1}$ has the free GIG distribution $\mu(\beta,\alpha,-\lambda)$. The two propositions above imply some distributional properties of the fGIG distribution. In Section 4 we will study characterizations of the fGIG distribution related to these properties. \[rem:prop\] 1. Fix $\lambda,\alpha>0$. If $X$ has the fGIG distribution $\mu(\alpha,\alpha,-\lambda)$, $Y$ has the free Poisson distribution $\nu(1/\alpha,\lambda)$ and $X,Y$ are free, then $X\stackrel{d}{=}(X+Y)^{-1}$. Indeed, by Proposition \[GIGPoissConv\] we get that $X+Y$ has the fGIG distribution $\mu(\alpha,\alpha,\lambda)$ and now Proposition \[GIGInv\] implies that $(X+Y)^{-1}$ has the distribution $\mu(\alpha,\alpha,-\lambda)$. 2. One can easily generalize the above observation. Take $\alpha,\beta,\lambda>0$, and $X,Y_1,Y_2$ free, such that $X$ has the fGIG distribution $\mu(\alpha,\beta,-\lambda)$, $Y_1$ is free Poisson distributed $\nu(1/\beta,\lambda)$ and $Y_2$ is distributed $\nu(1/\alpha,\lambda)$; then $X\stackrel{d}{=}(Y_1+(Y_2+X)^{-1})^{-1}$. Similarly as before we have that $X+Y_2$ has distribution $\mu(\alpha,\beta,\lambda)$, then by Proposition \[GIGInv\] we get that $(X+Y_2)^{-1}$ has distribution $\mu(\beta,\alpha,-\lambda)$. Then we have that $Y_1+(Y_2+X)^{-1}$ has the distribution $\mu(\beta,\alpha,\lambda)$ and finally we get $(Y_1+(Y_2+X)^{-1})^{-1}$ has the desired distribution $\mu(\alpha,\beta,-\lambda)$. 3.
Both identities above can be iterated finitely many times, so that one obtains $X\stackrel{d}{=}\left(Y_1+\left(Y_2+\cdots\right)^{-1}\right)^{-1}$, where $Y_1,Y_2,\ldots$ are free, for $k$ odd $Y_k$ has the free Poisson distribution $\nu(1/\beta,\lambda)$ and for $k$ even $Y_k$ has the distribution $\nu(1/\alpha,\lambda)$. For the case described in $1^o$ one simply has to take $\alpha=\beta$. We are not sure if infinite continued fractions can be defined. Next we study the limits of the fGIG measure $\mu(\alpha,\beta,\lambda)$ as $\alpha\to 0$ and $\beta\to 0$. This was stated with a mistake in [@Szp17 Remark 2.3]. As $\beta\downarrow 0$ we have the following weak limits of the fGIG distribution $$\begin{aligned} \label{eq:limits} \lim_{\beta\downarrow 0}\mu(\alpha,\beta,\lambda) = \begin{cases} \nu(1/\alpha, \lambda), & \lambda \geq1, \\ \frac{1-\lambda}{2}\delta_0 + \frac{1+\lambda}{2}\nu(\frac{1+\lambda}{2\alpha},1), & |\lambda|<1,\\ \delta_0, & \lambda\leq -1. \end{cases} \end{aligned}$$ Taking into account Proposition \[GIGInv\] one can also describe the limits when $\alpha\downarrow 0$ for $\lambda \geq1$. This result reflects the fact that the GIG matrix generalizes the Wishart matrix for $\lambda \geq1$, but not for $\lambda <1$ (see [@Fer06] for the GIG matrix and [@HP00] for the Wishart matrix). We will find the limit by calculating limits of the $R$-transform, since convergence of the $R$-transform implies weak convergence. Observe that by Remark \[rem:F\_lambda\_sym\] we could consider only $\lambda\geq0$; however, we decided to present all cases, as the considerations will give the asymptotic behaviour of the support of the fGIG measure. In view of , the only non-trivial part is the limit of $\beta\gamma$ as $\beta\to0$.
Observe that if we define $F(a,b,\alpha,\beta,\lambda)$ by $$\begin{aligned} \left(1-\lambda+\alpha\sqrt{ab}-\beta\frac{a+b}{2ab},1+\lambda+\frac{\beta}{\sqrt{ab}}-\alpha\frac{a+b}{2}\right)^T &=\left(f(a,b,\alpha,\beta,\lambda),g(a,b,\alpha,\beta,\lambda)\right)^T\\ &=F(a,b,\alpha,\beta,\lambda), \end{aligned}$$ then the solutions to the system , are functions $(a(\alpha,\beta,\lambda),b(\alpha,\beta,\lambda))$ such that $F(a(\alpha,\beta,\lambda),b(\alpha,\beta,\lambda),\alpha,\beta,\lambda)=(0,0)$. Using the Implicit Function Theorem, by calculating the Jacobian with respect to $(a,b)$, we observe that $a(\alpha,\beta,\lambda)$ and $b(\alpha,\beta,\lambda)$ are continuous (even differentiable) functions of $\alpha,\beta>0$ and $\lambda\in \mathbb{R}$. **Case 1.** $\lambda>1$\ Observe that if we take $\beta=0$ then a real solution $0<a<b$ of the system , $$\begin{aligned} \label{Case1} 1-\lambda+\alpha\sqrt{ab}&=0\\ 1+\lambda-\alpha\frac{a+b}{2}&=0 \end{aligned}$$ still exists. Moreover, because the Jacobian is non-zero at $\beta=0$, the Implicit Function Theorem says that the solutions are continuous at $\beta=0$. Thus using we get $$\begin{aligned} \beta \gamma=\frac{\alpha^2 ab+\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)-(\lambda-1)^2}{4}=\frac{\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)}{4}. \end{aligned}$$ The above implies that $\beta\gamma\to 0$ as $\beta\to 0$, since $a,b$ have finite and non-zero limits as $\beta\to 0$, as explained above. **Case 2.** $\lambda<-1$\ In that case we see that setting $\beta=0$ in leads to an equation with no real solution for $(a,b)$. In this case the part $\beta\tfrac{a+b}{2ab}$ has a non-zero limit as $\beta\to 0$. To be precise, substitute $a=\beta a^\prime$ and $b=\beta b^\prime$ in , ; then we get $$\begin{aligned} 1-\lambda+\alpha\beta\sqrt{a^\prime b^\prime}-\frac{a^\prime+b^\prime}{2a^\prime b^\prime}=&0\\ 1+\lambda+\frac{1}{\sqrt{a^\prime b^\prime}}-\alpha\beta\frac{a^\prime+b^\prime}{2}=&0.
\end{aligned}$$ The above system is equivalent to the system , with $\alpha:=\alpha\beta$ and $\beta:=1$. If we set $\beta=0$ as in Case 1 we get $$\begin{aligned} 1-\lambda-\frac{a^\prime+b^\prime}{2a^\prime b^\prime}=&0\\ \label{Case2} 1+\lambda+\frac{1}{\sqrt{a^\prime b^\prime}}=&0. \end{aligned}$$ The above system has a solution $0<a^\prime<b^\prime$ for $\lambda<-1$. Calculating the Jacobian we see that it is non-zero at $\beta=0$, so the Implicit Function Theorem implies that $a^\prime$ and $b^\prime$ are continuous functions at $\beta=0$ in the case $\lambda<-1$.\ This implies that in the case $\lambda<-1$ the solutions of , are $a(\beta)= \beta a^\prime+o(\beta)$ and $b(\beta)= \beta b^\prime +o(\beta)$. Thus we have $$\begin{aligned} \lim_{\beta\to 0}\beta \gamma&=\frac{\alpha^2 ab+\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)-(\lambda-1)^2}{4}\\&=\frac{\tfrac{1}{a^\prime b^\prime}-(\lambda-1)^2 }{4}=\frac{(\lambda+1)^2-(\lambda-1)^2 }{4}=\lambda, \end{aligned}$$ where in the second-to-last equality we used . **Case 3.** $|\lambda|< 1$\ Observe that neither nor has a real solution in the case $|\lambda|< 1$. This is because in this case asymptotically $a(\beta)=a^\prime \beta +o(\beta)$ and $b$ has a finite positive limit as $\beta\to0$. Similarly as in Case 2, let us substitute $a=\beta a^\prime $ in , , which gives $$\begin{aligned} 1-\lambda+\alpha\sqrt{\beta a^\prime b}-\frac{\beta a^\prime+b}{2a^\prime b}&=0\\ 1+\lambda+\frac{\sqrt{\beta}}{\sqrt{a^\prime b}}-\alpha\frac{\beta a^\prime+b}{2}&=0. \end{aligned}$$ If we set $\beta=0$ we get $$\begin{aligned} 1-\lambda-\frac{1}{2a^\prime}&=0,\\ 1+\lambda-\alpha\frac{b}{2}&=0, \end{aligned}$$ which obviously has a positive solution $(a^\prime,b)$ when $|\lambda|<1$. As before the Jacobian is non-zero at $\beta=0$, so $a^\prime$ and $b$ are continuous at $\beta=0$.\ Now we go back to the limit $\lim_{\beta\to 0}\beta\gamma$.
We have $a(\beta)=\beta a^\prime+o(\beta)$, thus $$\begin{aligned} \lim_{\beta\to 0}\beta \gamma=\lim_{\beta\to 0}\frac{\alpha^2 ab+\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)-(\lambda-1)^2}{4}=-\frac{(\lambda-1)^2}{4}. \end{aligned}$$ **Case 4.** $|\lambda|= 1$\ An analysis similar to the above cases shows that in the case $\lambda=1$ we have $a(\beta)=a^\prime \beta^{2/3}+o(\beta^{2/3})$ and $b$ has a positive limit as $\beta\to 0$. In the case $\lambda=-1$ one gets $a(\beta)=a^\prime \beta+o(\beta)$ and $b(\beta)=b^\prime\beta^{1/3}+o(\beta^{1/3})$ as $\beta\to 0$. Thus we can calculate the limit of $f_{\alpha,\beta,\lambda}$ as $\beta\to 0$: $$\lim_{\beta\downarrow0} f_{\alpha,\beta,\lambda}(z) = \begin{cases} (\alpha+(\lambda-1)z)^2, & \lambda>1, \\ \alpha^2+(\lambda^2-1)\alpha z, & |\lambda| \leq 1, \\ (\alpha-(\lambda+1)z)^2, & \lambda<-1. \end{cases}$$ The above allows us to calculate the limiting $R$-transform and hence the Cauchy transform, which implies the claim. Considering the continuous dependence of the roots on the parameters, one obtains the following asymptotic behaviour of the double root $\delta<0$ and the simple root $\eta \geq \alpha$. 1. If $|\lambda|>1$ then $\delta\to\alpha/(1-|\lambda|)$ and $\eta\to +\infty$ as $\beta \downarrow0$. 2. If $|\lambda|<1$ then $\delta\to-\infty$ and $\eta \to \alpha/(1-\lambda^2)$ as $\beta \downarrow0$. 3. If $\lambda=\pm1$ then $\delta\to-\infty$ and $\eta \to +\infty$ as $\beta \downarrow0$. Regularity of fGIG distribution under free convolution ====================================================== In this section we study in detail the regularity properties of the fGIG distribution related to the operation of free additive convolution. In the next theorem we collect all the results proved in this section. The theorem contains several statements about free GIG distributions; each subsection of the present section proves a part of the theorem.
\[thm:sec3\] The following holds for the free GIG measure $\mu(\alpha,\beta,\lambda)$: 1. It is freely infinitely divisible for any $\alpha,\beta>0$ and $\lambda\in\mathbb{R}.$ 2. The free Lévy measure is of the form $$\label{FLM} \tau_{\alpha,\beta,\lambda}({{\rm d}}x)=\max\{\lambda,0\} \delta_{1/\alpha}({{\rm d}}x) + \frac{(1-\delta x) \sqrt{\beta (1-\eta x)}}{\pi x^{3/2} (1-\alpha x)} 1_{(0,1/\eta)}(x)\, {{\rm d}}x.$$ 3. It is free regular with zero drift for all $\alpha,\beta>0$ and $\lambda\in\mathbb{R}$. 4. It is freely self-decomposable for $\lambda \leq -\frac{B^{\frac{3}{2}}}{A\sqrt{9B-8A}}.$ 5. It is unimodal. Free infinite divisibility and free Lévy measure ------------------------------------------------ As we mentioned before, having the operation of free convolution defined, it is natural to study infinite divisibility with respect to $\boxplus$. We say that $\mu$ is freely infinitely divisible if for any $n\geq 1$ there exists a probability measure $\mu_n$ such that $$\mu=\underbrace{\mu_n\boxplus\ldots\boxplus\mu_n}_{\text{$n$ times}}.$$ It turns out that free infinite divisibility of compactly supported measures can be described in terms of analytic properties of the $R$-transform. In particular it was proved in [@Voi86 Theorem 4.3] that free infinite divisibility is equivalent to the inequality ${\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(z)) \leq0$ for all $z\in{\mathbb{C}}^-$. As in the classical case, for freely infinitely divisible probability measures, one can represent the free cumulant transform by a Lévy–Khintchine type formula.
For a probability measure $\mu$ on ${\mathbb{R}}$, the *free cumulant transform* is defined by $${\mathcal{C}^\boxplus}_\mu(z) = z r_\mu(z).$$ Then $\mu$ is FID if and only if ${\mathcal{C}^\boxplus}_\mu$ can be analytically extended to ${\mathbb{C}}^-$ via the formula $$\label{FLK2} {\mathcal{C}^\boxplus}_{\mu}(z)=\xi z+\zeta z^2+\int_{{\mathbb{R}}}\left( \frac{1}{1-z x}-1-z x \,1_{[-1,1] }(x) \right) \tau({{\rm d}}x) ,\qquad z\in {\mathbb{C}}^-,$$ where $\xi \in {\mathbb{R}},$ $\zeta\geq 0$ and $\tau$ is a measure on ${\mathbb{R}}$ such that $$\tau (\{0\})=0,\qquad \int_{{\mathbb{R}}}\min \{1,x^2\}\tau ({{\rm d}}x) <\infty.$$ The triplet $(\xi,\zeta,\tau)$ is called the *free characteristic triplet* of $\mu$, and $\tau$ is called the *free Lévy measure* of $\mu$. The formula is called the *free Lévy–Khintchine formula*. The above form of the free Lévy–Khintchine formula was obtained by Barndorff-Nielsen and Thorbj[ø]{}rnsen [@BNT02b] and it has a probabilistic interpretation (see [@Sat13]). Another form was obtained by Bercovici and Voiculescu [@BV93], which is more suitable for limit theorems. In order to prove that all fGIG distributions are freely infinitely divisible we will use the following lemma. \[lem:32\] Let $f\colon (\mathbb C^{-} \cup \mathbb R) \setminus \{x_0\} \to \mathbb C$ be a continuous function, where $x_0\in\mathbb{R}$. Suppose that $f$ is analytic in $\mathbb{C}^{-}$, $f(z)\to 0$ uniformly as $z\to \infty$ and ${\text{\normalfont Im}}(f(x))\leq 0$ for $x \in \mathbb R \setminus \{x_0\}$. Suppose moreover that ${\text{\normalfont Im}}(f(z))\leq 0$ for ${\text{\normalfont Im}}(z)\leq 0$ in a neighbourhood of $x_0$. Then ${\text{\normalfont Im}}(f(z))\leq 0$ for all $z\in \mathbb{C}^{-}$. Since $f$ is analytic, the function ${\text{\normalfont Im}}f$ is harmonic and thus satisfies the maximum principle. Fix ${\varepsilon}>0$. Since $f(z)\to 0$ uniformly as $z\to \infty$, we can choose $R>0$ such that ${\text{\normalfont Im}}f(z)<{\varepsilon}$ for $|z|\geq R$.
Consider a domain $D_{\varepsilon}$ with the boundary $$\partial D_{\varepsilon}=[-R,x_0-{\varepsilon}] \cup \{x_0+{\varepsilon}e^{{{\rm i}}\theta}: \theta \in[-\pi,0]\} \cup[x_0+{\varepsilon},R]\cup\{R e^{{{\rm i}}\theta}: \theta \in[-\pi,0]\}.$$ Observe that ${\text{\normalfont Im}}f(z)<{\varepsilon}$ on $\partial D_{\varepsilon}$ by the assumptions, and hence by the maximum principle we have ${\text{\normalfont Im}}f(z)<{\varepsilon}$ on the whole of $D_{\varepsilon}$. Letting ${\varepsilon}\to 0$ we get that ${\text{\normalfont Im}}f(z) \leq 0$ on $\mathbb{C}^{-}$. Next we proceed with the proof of free infinite divisibility of fGIG distributions. [**Case 1.**]{} $\lambda<0$. Observe that we have $$\label{hypo} {\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x)) \leq 0, \qquad x\in{\mathbb{R}}\setminus\{\alpha\}.$$ From we see that ${\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x))=0$ for $x \in(-\infty,\alpha) \cup (\alpha,\eta]$, and $$\label{r} {\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x)) = \frac{(x-\delta)\sqrt{\beta (x-\eta)}}{x (\alpha-x)}<0,\qquad x>\eta$$ since $\eta>\alpha>0>\delta$.\ Moreover observe that for ${\varepsilon}>0$ small enough we have ${\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(\alpha+{\varepsilon}e^{i \theta}))<0$ for $\theta\in[-\pi,0]$. Now Lemma \[lem:32\] implies that the free GIG distribution is freely ID in the case $\lambda<0$. [**Case 2.**]{} $\lambda>0$. In this case a similar argument shows that $\mu(\alpha,\beta,\lambda)$ is FID. Moreover by $\eqref{alpha2}$ the point $z=\alpha$ is a removable singularity and $r_{\alpha,\beta,\lambda}$ extends to a continuous function on ${\mathbb{C}}^-\cup{\mathbb{R}}$. Thus one does not need to take care of the behaviour around $z=\alpha$. [**Case 3.**]{} $\lambda=0$. For $\lambda=0$ one can adopt a similar argument. It also follows from the fact that the free GIG family $\mu(\alpha,\beta,\lambda)$ is weakly continuous with respect to $\lambda$.
Since free infinite divisibility is preserved by weak limits, the case $\lambda=0$ may be deduced from the previous two cases. Next we determine the free Lévy measure of the free GIG distribution $\mu(\alpha,\beta,\lambda)$. Let $(\xi_{\alpha,\beta,\lambda}, \zeta_{\alpha,\beta,\lambda},\tau_{\alpha,\beta,\lambda})$ be the free characteristic triplet of the free GIG distribution $\mu(\alpha,\beta,\lambda)$. By the Stieltjes inversion formula mentioned in Remark \[rem:Cauchy\], the absolutely continuous part of the free Lévy measure has the density $$-\lim_{{\varepsilon}\to0}\frac{1}{\pi x^2}{\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x^{-1}+{{\rm i}}{\varepsilon})), \qquad x \neq 0,$$ and atoms may be located at points $1/p~(p\neq0)$ for which the weight $$\tau_{\alpha,\beta,\lambda}(\{1/p\})=\lim_{z\to p} (p - z) r_{\alpha,\beta,\lambda}(z)$$ is non-zero, where $z$ tends to $p$ non-tangentially from ${\mathbb{C}}^-$. In our case the free Lévy measure does not have a singular continuous part since $r_{\alpha,\beta,\lambda}$ is continuous on $({\mathbb{C}}^-\cup{\mathbb{R}})\setminus\{\alpha\}$. Considering – and we obtain the free Lévy measure $$\tau_{\alpha,\beta,\lambda}({{\rm d}}x)=\max\{\lambda,0\} \delta_{1/\alpha}({{\rm d}}x) + \frac{(1-\delta x) \sqrt{\beta (1-\eta x)}}{\pi x^{3/2} (1-\alpha x)} 1_{(0,1/\eta)}(x)\, {{\rm d}}x.$$ Recall that $\eta \geq \alpha >0>\delta$ holds, and $\eta=\alpha$ if and only if $\lambda=0$. The other two parameters $\xi_{\alpha,\beta,\lambda}$ and $\zeta_{\alpha,\beta,\lambda}$ in the free characteristic triplet will be determined in Section \[sec:FR\]. Free regularity {#sec:FR} --------------- In this subsection we deal with a property stronger than free infinite divisibility, the so-called free regularity.\ Let $\mu$ be a FID distribution with the free characteristic triplet $(\xi,\zeta,\tau)$.
When the semicircular part $\zeta$ is zero and the free Lévy measure $\tau$ satisfies a stronger integrability property $\int_{{\mathbb{R}}}\min\{1,|x|\}\tau({{\rm d}}x) < \infty$, then the free Lévy-Khintchine representation reduces to $$\label{FR} {\mathcal{C}^\boxplus}_\mu(z)=\xi' z+\int_{{\mathbb{R}}}\left( \frac{1}{1-z x}-1\right) \tau({{\rm d}}x) ,\qquad z\in {\mathbb{C}}^-,$$ where $\xi' =\xi -\int_{[-1,1]}x \,\tau({{\rm d}}x) \in {\mathbb{R}}$ is called a *drift*. The distribution $\mu$ is said to be *free regular* [@PAS12] if $\xi'\geq0$ and $\tau$ is supported on $(0,\infty)$. A probability measure $\mu$ on ${\mathbb{R}}$ is free regular if and only if the free convolution power $\mu^{\boxplus t}$ is supported on $[0,\infty)$ for every $t>0$, see [@AHS13]. Examples of free regular distributions include positive free stable distributions, free Poisson distributions and powers of free Poisson distributions [@Has16]. A general criterion in [@AHS13 Theorem 4.6] shows that some boolean stable distributions [@AH14] and many probability distributions [@AHS13; @Has14; @AH16] are free regular. A recent result of Ejsmont and Lehner [@EL Proposition 4.13] provides a wide class of examples: given a nonnegative definite complex matrix $\{a_{ij}\}_{i,j=1}^n$ and free selfadjoint elements $X_1,\dots, X_n$ which have symmetric FID distributions, the polynomial $\sum_{i,j=1}^n a_{ij} X_i X_j$ has a free regular distribution with zero drift. For the free GIG distributions, the semicircular part can be found by $\displaystyle\zeta_{\alpha,\beta,\lambda}= \lim_{z\to \infty} z^{-1} r_{\alpha,\beta,\lambda}(z)=0$. The free Lévy measure satisfies $${{\rm supp}}(\tau_{\alpha,\beta,\lambda}) \subset (0,\infty), \qquad \int_0^\infty \min\{1,x\} \tau_{\alpha,\beta,\lambda}({{\rm d}}x)<\infty$$ and so we have the reduced formula . The drift is given by $\displaystyle \xi_{\alpha,\beta,\lambda}'=\lim_{u \to -\infty} r_{\alpha,\beta,\lambda}(u)=0$. 
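As a quick numerical sanity check of the integrability condition $\int_0^\infty \min\{1,x\}\,\tau_{\alpha,\beta,\lambda}({{\rm d}}x)<\infty$ used above, one can integrate $x$ against the absolutely continuous part of the free Lévy measure. The sketch below uses hypothetical parameter values chosen only to satisfy $\eta>\alpha>0>\delta$ (they are not derived from a particular $(\alpha,\beta,\lambda)$); the substitution $x=u^2$ removes the $x^{-3/2}$ singularity at the origin:

```python
import math

# Hypothetical parameters satisfying eta > alpha > 0 > delta (illustration only).
alpha, beta, delta, eta = 1.0, 1.0, -1.0, 2.0

def levy_density(x):
    # Density of the absolutely continuous part of the free Levy measure on (0, 1/eta).
    return (1 - delta * x) * math.sqrt(beta * (1 - eta * x)) / (math.pi * x ** 1.5 * (1 - alpha * x))

def integral_x_tau(n):
    # Midpoint rule for \int_0^{1/eta} x * tau(dx) after substituting x = u^2;
    # the new integrand is a bounded function of u on [0, 1/sqrt(eta)].
    top = 1 / math.sqrt(eta)
    h = top / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        x = u * u
        total += 2 * u * x * levy_density(x) * h  # dx = 2u du
    return total

coarse, fine = integral_x_tau(2000), integral_x_tau(4000)
print(coarse, fine)
```

The two discretizations agree closely, illustrating that the apparent blow-up of the density at $0$ is integrable against $x$.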
Free selfdecomposability ------------------------ The classical GIG distribution is selfdecomposable [@Hal79; @SS79] (more strongly, hyperbolically completely monotone [@Bon92 p. 74]), and hence it is natural to ask whether the free GIG distribution is freely selfdecomposable.\ A distribution $\mu$ is said to be *freely selfdecomposable* (FSD) [@BNT02a] if for any $c\in(0,1)$ there exists a probability measure $\mu_c$ such that $\mu= (D_c\mu) \boxplus \mu_c $, where $D_c\mu$ is the dilation of $\mu$, namely $(D_c\mu)(B)=\mu(c^{-1}B)$ for Borel sets $B \subset {\mathbb{R}}$. A distribution is FSD if and only if it is FID and its free Lévy measure is of the form $$\label{SD Levy} \frac{k(x)}{|x|}\, {{\rm d}}x,$$ where $k\colon {\mathbb{R}}\to [0,\infty)$ is non-decreasing on $(-\infty,0)$ and non-increasing on $(0,\infty)$. Unlike the free regular distributions, there are only a few known examples of FSD distributions: the free stable distributions, some free Meixner distributions, the classical normal distributions and a few other distributions (see [@HST Example 1.2, Corollary 3.4]). The free Poisson distribution is not FSD. In view of , the free GIG distribution $\mu(\alpha,\beta,\lambda)$ is not FSD if $\lambda > 0$. Suppose $\lambda\leq0$; then $\mu(\alpha,\beta,\lambda)$ is FSD if and only if the function $$k_{\alpha,\beta,\lambda}(x)=\frac{(1-\delta x) \sqrt{\beta (1-\eta x)}}{\pi \sqrt{x} (1-\alpha x)}$$ is non-increasing on $(0,1/\eta)$. The derivative is $$k_{\alpha,\beta,\lambda}'(x) = - \frac{\sqrt{\beta} [1+(\delta-3\alpha)x +(2 \alpha \eta -2\eta\delta + \alpha \delta )x^2]}{2\pi x^{3/2} (1-\alpha x)^2 \sqrt{1-\eta x}}.$$ Hence FSD is equivalent to $$g(x):=1+(\delta-3\alpha)x +(2 \alpha \eta -2\eta\delta + \alpha \delta )x^2 \geq 0,\qquad 0\leq x \leq 1/\eta.$$ Using $\eta \geq\alpha >0>\delta$, one can show that $2 \alpha \eta -2\eta\delta + \alpha \delta >0$ (indeed $2\alpha\eta-2\eta\delta+\alpha\delta=2\alpha\eta+\delta(\alpha-2\eta)$, and both terms are positive), and a straightforward calculation shows that the function $g$ takes a minimum at a point in $(0,1/\eta)$.
Thus FSD is equivalent to $$D:=(\delta-3\alpha)^2 - 4 (2 \alpha \eta -2\eta\delta + \alpha \delta ) \leq 0.$$ In order to determine when the above inequality holds, it is convenient to switch to the parameters $A,B$ defined by . Using formulas derived in Section \[sec:form\] we obtain $$D= \frac{4(B+\lambda A)(8\lambda^2 A^3 -9\lambda^2 A^2 B +B^3)}{A^2 B (A-B)^2 (B-\lambda A)}.$$ Calculating the values of $\lambda$ for which $D$ is non-positive we obtain $$\lambda\leq-\frac{B^{\frac{3}{2}}}{A\sqrt{9B-8A}}.$$ One can easily find that the maximum of the function $-\frac{B^{\frac{3}{2}}}{A\sqrt{9B-8A}}$ over $A,B> 0$ equals $-\frac{4}{9}\sqrt{3}$. Thus the set of parameters $(A,B)$ that give FSD distributions is nonempty if and only if $\lambda \leq -\frac{4}{9}\sqrt{3}$. In the critical case $\lambda = -\frac{4}{9}\sqrt{3}$ only the pairs $(A, \frac{4}{3}A)$, $A>0$, give FSD distributions. If one puts $A=12 t, B= 16 t$ then $a=(2-\sqrt{3})^2t,b=(2+\sqrt{3})^2t$, $\alpha = \frac{3-\sqrt{3}}{18 t}$, $\beta = \frac{3+\sqrt{3}}{18}t, \delta= -\frac{3-\sqrt{3}}{6t}=- 2\eta$. One can easily show that $\mu(\alpha,\beta,-1)$ is FSD if and only if $A< B\leq \frac{-1+\sqrt{33}}{2} A$. Finally note that the above result is in contrast to the fact that classical GIG distributions are all selfdecomposable. Unimodality ----------- Since relations between free infinite divisibility and free selfdecomposability were studied in the literature, we decided to determine whether measures from the free GIG family are unimodal. A measure $\mu$ is said to be *unimodal* if for some $c\in{\mathbb{R}}$ $$\label{UM} \mu({{\rm d}}x)=\mu(\{c\})\delta_c({{\rm d}}x)+f(x)\, {{\rm d}}x,$$ where $f\colon{\mathbb{R}}\to[0,\infty)$ is non-decreasing on $(-\infty,c)$ and non-increasing on $(c,\infty)$. In this case $c$ is called the *mode*. Hasebe and Thorbj[ø]{}rnsen [@HT16] proved that FSD distributions are unimodal. Since some free GIG distributions are not FSD, the result from [@HT16] does not apply in general.
However it turns out that free GIG measures are unimodal. Calculating the derivative of the density of $\mu(\alpha,\beta,\lambda)$ one obtains $$\frac{ x (a + b - 2 x)(x \alpha + \tfrac{\beta}{\sqrt{a b}}) - 2 (b - x) (x-a) (x \alpha + \tfrac{2 \beta}{\sqrt{ a b}})}{2 x^3 \sqrt{(b - x) (x-a)}}.$$ Denoting by $f(x)$ the quadratic polynomial in the numerator, one can easily see from the shape of the density that $f(a)>0>f(b)$, and hence the derivative vanishes at a unique point in $(a,b)$ (since $f$ is quadratic). Characterizations of the free GIG distribution ============================================== In this section we show that the fGIG distribution can be characterized similarly to the classical GIG distribution. In [@Szp17] the fGIG distribution was characterized in terms of a free independence property; the classical probability analogue of this result characterizes the classical GIG distribution. In this section we find two more instances where such an analogy holds true: one is a characterization by distributional properties related to continued fractions, the other is maximization of free entropy. Continued fraction characterization ----------------------------------- In this section we study a characterization of the fGIG distribution which is analogous to the characterization of the GIG distribution proved in [@LS83]. Our strategy is different from the one used in [@LS83]: we will not deal with continued fractions, but instead take advantage of subordination for free convolutions, which allows us to prove a simpler version of the “continued fraction” characterization of the fGIG distribution. \[thm:char\] Let $Y$ have the free Poisson distribution $\nu(1/\alpha,\lambda)$ and let $X$ be free from $Y$, where $\alpha,\lambda>0$ and $X>0$. Then we have $$\begin{aligned} \label{eq:distr_char} X\stackrel{d}{=}\left(X+Y\right)^{-1}\end{aligned}$$ if and only if $X$ has the free GIG distribution $\mu(\alpha,\alpha,-\lambda)$.
Observe that the “if” part of the above theorem is contained in Remark \[rem:prop\]. We only have to show that if $\eqref{eq:distr_char}$ holds, where $Y$ has the free Poisson distribution $\nu(1/\alpha,\lambda)$, then $X$ has the free GIG distribution. As mentioned above, our proof of the above theorem uses subordination of free convolution. This property of free convolution was first observed by Voiculescu [@Voi93] and then generalized by Biane [@Bia98]. Let us shortly recall what we mean by subordination of free additive convolution: for probability measures $\mu,\nu$ there exists an analytic function $\omega$ defined on $\mathbb{C}\setminus\mathbb{R}$ with the property $\omega(\overline{z})=\overline{\omega(z)}$, such that for $z\in\mathbb{C}^+$ we have ${\text{\normalfont Im}}\,\omega(z)\geq{\text{\normalfont Im}}\,z$ and $$G_{\mu\boxplus\nu}(z)=G_\mu(\omega(z)).$$ Now if we denote by $\omega_1$ and $\omega_2$ the subordination functions such that $G_{\mu\boxplus\nu}=G_\mu(\omega_1)$ and $G_{\mu\boxplus\nu}=G_\nu(\omega_2)$, then $\omega_1(z)+\omega_2(z)=1/G_{\mu\boxplus\nu}(z)+z$. Next we proceed with the proof of Theorem \[thm:char\], which is the main result of this section. First note that is equivalent to $$\frac{1}{X}\stackrel{d}{=}X+Y,$$ which may be equivalently stated in terms of Cauchy transforms of both sides as $$\begin{aligned} \label{eqn:CharCauch} G_{X^{-1}}(z)=G_{X+Y}(z). \end{aligned}$$ Subordination allows us to write the Cauchy transform of $X+Y$ in two ways: $$\begin{aligned} \label{Sub1} G_{X+Y}(z)&=G_X(\omega_X(z)),\\ \label{Sub2} G_{X+Y}(z)&=G_Y(\omega_Y(z)). \end{aligned}$$ Moreover $\omega_X$ and $\omega_Y$ satisfy $$\begin{aligned} \omega_X(z)+\omega_Y(z)=1/G_{X+Y}(z)+z. \end{aligned}$$ From the above we get $$\begin{aligned} \label{subrel} \omega_X(z)=1/G_{X+Y}(z)+z-\omega_Y(z), \end{aligned}$$ which together with and gives $$\begin{aligned} G_{X^{-1}}(z)&=G_X\left(\frac{1}{G_{X^{-1}}(z)}+z-\omega_Y(z)\right).
\end{aligned}$$ Since we know that $Y$ has the free Poisson distribution $\nu(1/\alpha,\lambda)$ we can calculate $\omega_Y$ in terms of $G_{X^{-1}}$ using . To do this one has to use the identity $G_Z^{\langle -1\rangle}(z)=r_Z(z)+1/z$, valid for any self-adjoint random variable $Z$, and the form of the $R$-transform of the free Poisson distribution recalled in Remark \[rem:freePoisson\]: $$\begin{aligned} \label{omegax} \omega_Y(z)=\frac{\lambda }{\alpha-G_{X^{-1}}(z) }+\frac{1}{G_{X^{-1}}(z)}. \end{aligned}$$ Now we can use , where we substitute $G_{X+Y}(z)=G_{X^{-1}}(z)$, to obtain $$\begin{aligned} \label{FE} G_{X^{-1}}(z)=G_{X}\left(\frac{\lambda }{G_{X^{-1}}(z)-\alpha }+z\right). \end{aligned}$$ Next we observe that we have $$\begin{aligned} \label{CauchInv} G_{X^{-1}}(z)=\frac{1}{z}\left(-\frac{1}{z}G_X\left(\frac{1}{z}\right)+1\right), \end{aligned}$$ which allows us to transform into an equation for $G_X$. It is enough to show that this equation has a unique solution. Indeed, from Remark \[rem:prop\] we know that the free GIG distribution $\mu(\alpha,\alpha,-\lambda)$ has the desired property, which in particular means that for $X$ distributed $\mu(\alpha,\alpha,-\lambda)$ equation is satisfied. Thus if there is a unique solution it has to be the Cauchy transform of the free GIG distribution. To prove uniqueness of the Cauchy transform of $X$, we will prove that the coefficients of the expansion of $G_X$ at a special “good” point are uniquely determined by $\alpha$ and $\lambda$. First we will determine the point at which we will expand the function. Observe that with our assumptions $G_{X^{-1}}$ is well defined on the negative half-line; moreover $G_{X^{-1}}(x)<0$ for any $x<0$, and we have $G_{X^{-1}}(x)\to0$ as $x\to-\infty$. On the other hand the function $f(x)=1/x-x$ is decreasing on the negative half-line, and negative for $x\in(-1,0)$.
Thus there exists a unique point $c\in(-1,0)$ such that $$\label{key eq} \frac{1}{c} = \frac{\lambda}{G_{X^{-1}}(c)-\alpha}+c.$$ Let us denote $$M(z):= G_X\left(\frac{1}{z}\right)$$ and $$\label{eqn:funcN} N(z):=\left(\frac{\lambda}{G_{X^{-1}}(z)-\alpha}+z\right)^{-1} = \frac{-z+ \alpha z^2 +M(z)}{-(1+\lambda)z^2 +\alpha z^3 + z M(z)},$$ where the last equality follows from .\ One has $N(c)=c$, and our functional equation may be rewritten (with the help of ) as $$\label{FE2} -M(z) +z = z^2 M(N(z)).$$ The functions $M$ and $N$ are analytic around any $x<0$. Consider the expansions $$\begin{aligned} M(z) &= \sum_{n=0}^\infty \alpha_n (z-c)^n, \\ N(z) &= \sum_{n=0}^\infty \beta_n (z-c)^n. \end{aligned}$$ Observe that $\beta_0=c$ since $N(c)=c$. Differentiating, we observe that every $\beta_n$, $n\geq1$, is a rational function of $\alpha, \lambda, c, \alpha_0,\alpha_1,\dots, \alpha_n$. Moreover every $\beta_n$, $n\geq 1$, is a degree one polynomial in $\alpha_n$. We have $$\begin{aligned} \beta_n = \frac{-\lambda}{[\alpha_0 -(1+\lambda)c+\alpha c^2]^2} \alpha_n + R_n, \end{aligned}$$ where $R_n$ is a rational function of $n+3$ variables evaluated at $(\alpha,\lambda,c,\alpha_0,\alpha_1,\dots, \alpha_{n-1})$, which does not depend on the distribution of $X$. For example $\beta_1$ is given by $$\label{eq:beta1} \begin{split} \beta_1 &=N'(c) =\left. \left(\frac{-z+ \alpha z^2 +M(z)}{-(1+\lambda)z^2 +\alpha z^3 + z M(z)}\right)' \right|_{z=c} \\ &=\frac{-\lambda c^2 \alpha_1 + c^2(-1-\lambda +2\alpha c -\alpha^2 c^2)+2c(1+\lambda-\alpha c)\alpha_0- \alpha_0^2 }{c^2[\alpha_0-(1+\lambda)c+\alpha c^2]^2}. \end{split}$$ Next we investigate some properties of $c, \alpha_0$ and $\alpha_1$.
Evaluating both sides of at $z=c$ yields $$-M(c)+c = c^2 M(N(c)) = c^2M(c),$$ and since $M(c)=\alpha_0$ we get $$\label{eq1c} \alpha_0=\frac{c}{1+c^2}.$$ Observe that $\alpha_0= M(c) = G_X(1/c)$ and $\alpha_1=M'(c)=-c^{-2} G_X'(1/c)$, hence we have $$\frac{1}{1+c^2}= \int_{0}^\infty \frac{1}{1-c x} {{\rm d}}\mu_X( x), \qquad \alpha_1 = \int_{0}^\infty \frac{1}{(1-c x)^2} {{\rm d}}\mu_X( x),$$ where $\mu_X$ is the distribution of $X$. Using the Schwarz inequality for the first estimate, and the simple observation that $0\leq1/(1-cx) \leq1$ for $x>0$ for the second, we obtain $$\label{eq:alpha1} \frac{1}{(1+c^2)^{2}}=\left(\int_{0}^\infty \frac{1}{1-c x} \mu_X({{\rm d}}x)\right)^2 \leq \int_{0}^\infty \frac{1}{(1-c x)^2} \mu_X({{\rm d}}x)= \alpha_1 \leq \frac{1}{1+c^2}.$$ The equation together with gives $$\label{eq2c} \frac{1}{c} = \frac{\lambda c^2}{-\alpha_0 + c - \alpha c^2} +c.$$ Substituting the expression for $\alpha_0$ into the last equation, after simple calculations we get $$\label{eq:c} \alpha c^4 - (1+\lambda)c^3+(1-\lambda)c -\alpha = 0.$$ We start by showing that $\alpha_0$ is determined only by $\alpha$ and $\lambda$; by , it suffices to show that the unique point $c$ depends only on $\alpha$ and $\lambda$. Since the polynomial $\alpha c^4 - (1+\lambda)c^3$ is non-negative for $c<0$ and has a root at $c=0$, while the polynomial $(\lambda-1)c +\alpha$ equals $\alpha>0$ at $c=0$, it follows that there is only one negative $c$ at which the two polynomials are equal, and thus the number $c$ is uniquely determined by $(\alpha,\lambda)$. From we see that $\alpha_0$ is also uniquely determined by $(\alpha,\lambda)$. Next we will prove that $\alpha_1$ only depends on $\alpha$ and $\lambda$.
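The uniqueness of the point $c$ can be sanity-checked numerically; the sketch below uses illustrative values $\alpha=1$, $\lambda=2$ (chosen only for demonstration), locates the negative root of the quartic $\alpha c^4-(1+\lambda)c^3+(1-\lambda)c-\alpha$ by bisection, and confirms that it satisfies the fixed-point relation for $c$ with $\alpha_0=c/(1+c^2)$:

```python
import math

alpha, lam = 1.0, 2.0  # illustrative parameters with alpha, lam > 0

def quartic(c):
    # alpha*c^4 - (1+lam)*c^3 + (1-lam)*c - alpha
    return alpha * c**4 - (1 + lam) * c**3 + (1 - lam) * c - alpha

# quartic(-1) > 0 > quartic(0), so bisection locates the root c in (-1, 0).
lo, hi = -1.0, 0.0
for _ in range(80):
    mid = (lo + hi) / 2
    if quartic(mid) > 0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
alpha0 = c / (1 + c * c)

# Residual of the fixed-point relation 1/c = lam*c^2/(-alpha0 + c - alpha*c^2) + c.
residual = 1 / c - lam * c**2 / (-alpha0 + c - alpha * c**2) - c
print(c, alpha0, residual)
```

The residual vanishes up to rounding, consistent with the algebra that produced the quartic from the fixed-point relation.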
Differentiating and evaluating at $z=c$ we obtain $$\label{eq3c} 1-\alpha_1 = 2 c \alpha_0 + c^2 \alpha_1 \beta_1.$$ Substituting the expressions for $\alpha_0$ and $\lambda$ obtained above and simplifying, we get $$\beta_1 = \frac{(1-c^4)\alpha_1 -1+2c^2-\alpha c^3 -\alpha c^5}{c(\alpha-c+\alpha c^2)},$$ and then equation may be expressed in the form $$\label{eq:alpha2} c(1+c^2)^2 \alpha_1^2 + (\alpha(1+c^2)^2-2c )(1+c^2)\alpha_1 -(\alpha-c + \alpha c^2) =0.$$ The above is a degree 2 polynomial in $\alpha_1$; denote this polynomial by $f$. We then have $$f(0) <0,\qquad f\left(\frac{1}{1+c^2}\right) = \alpha c^2 (1+c^2)>0,$$ where the first inequality follows from the fact that $c<0$. Since the leading coefficient $c(1+c^2)^2$ is negative, we conclude that $f$ has one root in the interval $(0,1/(1+c^2))$ and the other in $(1/(1+c^2),\infty)$. The inequality implies that $\alpha_1$ is the smaller root of $f$, which is a function of $ \alpha$ and $c$, and hence of $\alpha$ and $\lambda$. In order to prove that $\alpha_n$ depends only on $(\alpha,\lambda)$ for $n\geq2$, we first estimate $\beta_1$. Note that and imply that $$\label{eq4c} \beta_1 = \frac{1-c^2}{\alpha_1 c^2(1+c^2)} -\frac{1}{c^2}.$$ Combining this with the inequality we easily get that $$\label{eq:beta} -1 \leq \beta_1 \leq -c^2.$$ Now we prove by induction on $n$ that $\alpha_n$ only depends on $\alpha$ and $\lambda$. For $n\geq2$, differentiating $n$ times and evaluating at $z=c$ we arrive at $$\label{eq5c} -\alpha_n = c^2(\alpha_n \beta_1^n + \alpha_1 \beta_n) + Q_n,$$ where $Q_n$ is a universal polynomial (meaning that it does not depend on the distribution of $X$) in $2n+1$ variables evaluated at $(\alpha,\lambda,c,\alpha_1,\dots, \alpha_{n-1}, \beta_1,\dots, \beta_{n-1})$. By the inductive hypothesis, the values of $R_n$ and $Q_n$ depend only on $\alpha$ and $\lambda$.
We also have that $\beta_n=p \alpha_n + R_n$, where $$p := \frac{-\lambda}{[\alpha_0 -(1+\lambda)c+\alpha c^2]^2} = \frac{1-c^4}{c(\alpha-c+\alpha c^2)}.$$ The last formula is obtained by substituting $\alpha_0$ and $\lambda$ from and . The equation then becomes $$(1+c^2\beta_1^n + c^2 p \alpha_1)\alpha_n + c^2 \alpha_1 R_n + Q_n=0.$$ The inequalities and show that $$\begin{split} 1+c^2\beta_1^n + c^2 p \alpha_1 &\geq 1-c^2 +\frac{c^2(1-c^4)}{c(\alpha-c+\alpha c^2) (1+c^2)} =\frac{\alpha(1-c^4)}{\alpha-c+\alpha c^2}>0, \end{split}$$ thus $1+c^2\beta_1^n + c^2 p \alpha_1$ is non-zero. Therefore the number $\alpha_n$ is uniquely determined by $\alpha$ and $\lambda$. Thus we have shown that, if a random variable $X>0$ satisfies the functional equation for fixed $\alpha>0$ and $\lambda>0$, then the point $c$ and all the coefficients $\alpha_0,\alpha_1,\alpha_2,\dots$ of the series expansion of $M(z)$ at $z=c$ are determined only by $\alpha$ and $\lambda$. By analytic continuation, the Cauchy transform $G_X$ is determined uniquely by $\alpha$ and $\lambda$, so there is only one distribution of $X$ for which this equation is satisfied. Remarks on free entropy characterization ---------------------------------------- Féral [@Fer06] proved that the fGIG distribution $\mu(\alpha,\beta,\lambda)$ is the unique probability measure which maximizes the following free entropy functional with potential $$\begin{aligned} I_{\alpha,\beta,\lambda}(\mu)=\int\!\!\!\int \log|x-y|\, {{\rm d}}\mu(x) {{\rm d}}\mu(y)-\int V_{\alpha,\beta,\lambda}(x)\, {{\rm d}}\mu(x), \end{aligned}$$ among all the compactly supported probability measures $\mu$ on $(0,\infty)$, where $\alpha, \beta>0$ and $\lambda \in {\mathbb{R}}$ are fixed constants, and $$V_{\alpha,\beta,\lambda}(x)=(1-\lambda) \log x+\alpha x+\frac{\beta}{x}.$$ Here we point out the classical analogue.
The (classical) GIG distribution is the probability measure on $(0,\infty)$ with the density $$\label{C-GIG} \frac{(\alpha/\beta)^{\lambda/2}}{2K_\lambda(2\sqrt{\alpha\beta})} x^{\lambda-1} e^{-(\alpha x + \beta /x)}, \qquad \alpha,\beta>0, \lambda\in{\mathbb{R}},$$ where $K_\lambda$ is the modified Bessel function of the second kind. Note that this density is proportional to $\exp(-V_{\alpha,\beta,\lambda}(x))$. Kawamura and Iwase [@KI03] proved that the GIG distribution is the unique probability measure which maximizes the classical entropy with the same potential $$H_{\alpha,\beta,\lambda}(p) = - \int p(x) \log p(x)\, {{\rm d}}x -\int V_{\alpha,\beta,\lambda}(x)p(x)\, {{\rm d}}x$$ among all the probability density functions $p$ on $(0,\infty)$. This statement is slightly different from the original one [@KI03 Theorem 2], and for the reader’s convenience a short proof is given below. The proof is a straightforward application of the Gibbs inequality $$\label{Gibbs} -\int p(x)\log p(x)\,{{\rm d}}x \leq -\int p(x)\log q(x)\,{{\rm d}}x,$$ valid for all probability density functions $p$ and $q$, say on $(0,\infty)$. Taking $q$ to be the density of the classical GIG distribution and computing $\log q(x)$, we obtain the inequality $$\label{C-entropy} H_{\alpha,\beta,\lambda}(p) \leq -\log \frac{(\alpha/\beta)^{\lambda/2}}{2K_\lambda(2\sqrt{\alpha\beta})}.$$ Since the Gibbs inequality becomes an equality if and only if $p=q$, the equality in holds if and only if $p=q$ as well. From the above observation, it is tempting to investigate the map $$C e^{-V(x)}\, {{\rm d}}x \mapsto \text{~the maximizer $\mu_V$ of the free entropy functional $I_V$ with potential $V$},$$ where $C>0$ is a normalizing constant. Under some assumptions on $V$, the free entropy functional $I_V$ is known to have a unique maximizer (see [@ST97]) and so the above map is well defined.
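The Gibbs inequality underlying the argument above is easy to illustrate numerically. The sketch below checks it for two exponential densities on $(0,\infty)$ (chosen purely for demonstration, not tied to the GIG family): the entropy of $p$ never exceeds the cross-entropy against any other density $q$.

```python
import math

def exp_density(rate, x):
    # Density of the exponential distribution with the given rate on (0, infinity).
    return rate * math.exp(-rate * x)

def cross_entropy(p_rate, q_rate, n=200000, top=50.0):
    # Midpoint approximation of -\int_0^top p(x) log q(x) dx;
    # the tail beyond `top` is negligible for these rates.
    h = top / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total -= exp_density(p_rate, x) * math.log(exp_density(q_rate, x)) * h
    return total

entropy_p = cross_entropy(1.0, 1.0)   # -\int p log p; equals 1 for Exp(1)
cross_pq = cross_entropy(1.0, 2.0)    # -\int p log q for q = Exp(2); equals 2 - log 2
print(entropy_p, cross_pq)
```

As expected, the cross-entropy strictly exceeds the entropy, and equality would require $q=p$.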
Note that the density function $C e^{-V(x)}$ is the maximizer of the classical entropy functional with potential $V$, which follows from the same arguments as above. This map sends Gaussian to semicircle, gamma to free Poisson (when $\lambda \geq1$), and GIG to free GIG. More examples can be found in [@ST97]. Acknowledgement {#acknowledgement .unnumbered} =============== The authors would like to thank BIRS, Banff, Canada for hospitality during the workshop “Analytic versus Combinatorial in Free Probability” where we started to work on this project. TH was supported by JSPS Grant-in-Aid for Young Scientists (B) 15K17549. KSz was partially supported by the NCN (National Science Center) grant 2016/21/B/ST1/00005. [100]{} O. Arizmendi and T. Hasebe, Classical and free infinite divisibility for Boolean stable laws, Proc. Amer. Math. Soc. 142 (2014), no. 5, 1621–1632. O. Arizmendi and T. Hasebe, Classical scale mixtures of boolean stable laws, Trans. Amer. Math. Soc. 368 (2016), 4873–4905. O. Arizmendi, T. Hasebe and N. Sakuma, On the law of free subordinators, ALEA Lat. Am. J. Probab. Math. Stat. 10 (2013), no. 1, 271–291. O.E. Barndorff-Nielsen and Ch. Halgreen, Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions, Z. Wahrsch. Verw. Gebiete 38 (1977), no. 4, 309–311. O.E. Barndorff-Nielsen and S. Thorbj[ø]{}rnsen, Self-decomposability and Lévy processes in free probability, Bernoulli 8(3) (2002), 323–366. O.E. Barndorff-Nielsen and S. Thorbj[ø]{}rnsen, Lévy laws in free probability, Proc. Nat. Acad. Sci. 99 (2002), 16568–16575. H. Bercovici and V. Pata, Stable laws and domains of attraction in free probability theory. With an appendix by Philippe Biane, Ann. of Math. (2) 149 (1999), no. 3, 1023–1060. H. Bercovici and D. Voiculescu, Free convolution of measures with unbounded support, Indiana Univ. Math. J. 42, no. 3 (1993), 733–773. P. Biane, Processes with free increments, Math. Z. 227 (1998), no. 1, 143–174 L. 
Bondesson, Generalized gamma convolutions and related classes of distributions and densities, Lecture Notes in Stat. 76, Springer, New York, 1992. W. Ejsmont and F. Lehner, Sample variance in free probability, J. Funct. Anal. 273, Issue 7 (2017), 2488–2520. D. Féral, The limiting spectral measure of the generalised inverse Gaussian random matrix model, C. R. Math. Acad. Sci. Paris 342 (2006), no. 7, 519–522. U. Haagerup and S. Thorbj[ø]{}rnsen, On the free gamma distributions, Indiana Univ. Math. J. 63 (2014), no. 4, 1159–1194. C. Halgreen, Self-decomposability of the generalized inverse gaussian and hyperbolic distributions, Z. Wahrsch. verw. Gebiete 47 (1979), 13–17. T. Hasebe, Free infinite divisibility for beta distributions and related ones, Electron. J. Probab. 19, no. 81 (2014), 1–33. T. Hasebe, Free infinite divisibility for powers of random variables, ALEA Lat. Am. J. Probab. Math. Stat. 13 (2016), no. 1, 309–336. T. Hasebe, N. Sakuma and S. Thorbj[ø]{}rnsen, The normal distribution is freely self-decomposable, Int. Math. Res. Notices, available online. arXiv:1701.00409 T. Hasebe and S. Thorbj[ø]{}rnsen, Unimodality of the freely selfdecomposable probability laws, J. Theoret. Probab. 29 (2016), Issue 3, 922–940. F. Hiai and D. Petz, *The semicircle law, free random variables and entropy*, Mathematical Surveys and Monographs 77, American Mathematical Society, Providence, RI, 2000. T. Kawamura and K. Iwase, Characterizations of the distributions of power inverse Gaussian and others based on the entropy maximization principle, J. Japan Statist. Soc. 33, no. 1 (2003), 95–104. G. Letac and V. Seshadri, A characterization of the generalized inverse Gaussian distribution by continued fractions, Z. Wahrsch. Verw. Gebiete 62 (1983), 485-489. G. Letac and J. Weso[ł]{}owski, An independence property for the product of [GIG]{} and gamma laws, Ann. Probab. 28 (2000) 1371-1383. E. Lukacs, A characterization of the gamma distribution, Ann. Math. Statist. 
26 (1955) 319–324. H. Matsumoto and M. Yor, An analogue of [P]{}itman’s [$2M-X$]{} theorem for exponential [W]{}iener functionals. [II]{}. [T]{}he role of the generalized inverse [G]{}aussian laws, Nagoya Math. J. 162 (2001) 65–86. J. A. Mingo and R. Speicher, [*Free Probability and Random Matrices*]{}. Springer, 2017. A. Nica and R. Speicher, [*Lectures on the Combinatorics of Free Probability.*]{} London Mathematical Society Lecture Note Series, 335. Cambridge University Press, Cambridge, 2006. V. Pérez-Abreu and N. Sakuma, Free generalized gamma convolutions, Electron. Commun. Probab. 13 (2008), 526–539. V. Pérez-Abreu and N. Sakuma, Free infinite divisibility of free multiplicative mixtures of the Wigner distribution, J. Theoret. Probab. 25, No. 1 (2012), 100–121. E.B. Saff and V. Totic, [*Logarithmic Potentials with External Fields*]{}, Springer-Verlag, Berlin, Heidelberg, 1997. K. Sato, [*Lévy Processes and Infinitely Divisible Distributions*]{}, corrected paperback edition, Cambridge Studies in Advanced Math. 68, Cambridge University Press, Cambridge, 2013. D.N. Shanbhag and M. Sreehari, An extension of Goldie’s result and further results in infinite divisibility, Z. Wahrsch. verw. Gebiete 47 (1979), 19–25. K. Szpojankowski, On the Lukacs property for free random variables, Studia Math. 228 (2015), no. 1, 55–72. K. Szpojankowski, A constant regression characterization of the Marchenko-Pastur law. Probab. Math. Statist. 36 (2016), no. 1, 137–145. K. Szpojankowski, On the Matsumoto-Yor property in free probability, J. Math. Anal. Appl. 445(1) (2017), 374–393. D. Voiculescu, Symmetries of some reduced free product $C^\ast$-algebras, in: Operator Algebras and their Connections with Topology and Ergodic Theory, 556–588, Lecture Notes in Mathematics, Vol. 1132, Springer-Verlag, Berlin/New York, 1985. D. Voiculescu, Addition of certain noncommuting random variables, J. Funct. Anal. 66 (1986), no. 3, 323–346. D. Voiculescu. 
The analogues of entropy and of Fisher’s information measure in free probability theory, I. Comm. Math. Phys. 155 (1993), no. 1, 71–92. D. Voiculescu, K. Dykema and A. Nica, [*Free random variables*]{}. A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups. CRM Monograph Series, 1. American Mathematical Society, Providence, RI, 1992.
My new global trends book is out now: The Future of Almost Everything. But it takes at least 20 years to evaluate how good a trends analyst was / is – so what about forecasts made by me in previous books, about what to expect over the following decade or two or three? How did those forecasts measure up? I had to answer that question for myself by re-reading what I wrote in the past about the future, before writing my latest book. Read FREE SAMPLE of The Truth about Almost Everything. So you can judge for yourself, here are loads of predictions I made in the past - and the book in which they were made plus date. Every one of these is already a reality or looks like it soon could be, as of August 2015.... And yes, I got some things wrong - not many, fortunately. I am going to publish the entire text of Futurewise 1998 edition on this website for a complete picture - but of course on this site already are hundreds of posts and videos going back in some cases over 20 years. Here's one I got wrong: I thought video streaming would take off - but with more use of live video using smartphones than we have actually seen. Most personal video streaming is of course things like YouTube clips. Most forecasting errors in my experience are not over WHAT is going to happen, as the trend is usually fairly obvious, but over WHEN, with questions about real impact. So here goes: how did I see the future of banking, global economy, mobile, smartphones, Internet of Things, Big Data and so on...?

Banking

Banking as it is in 1998 will never survive and will fall in profitability. 1998 - Futurewise

Internet of Things and Smart Homes

All new homes in developed nations will be intelligent by 2010. Smart homes will boast 15-20% energy savings. 1998 - Futurewise

Every device with a power socket will be online. Washing machine will call engineer. Garage doors will open automatically, alarm will be turned off, lights will go on, coffee machine will begin to pour a coffee and so on.
1998 - Futurewise

Power generation in many homes from solar cells and wind in rural areas. 1998 – Futurewise

Retail and e-commerce

Basic shopping will be done online and the rest will become a recreational activity – so shopping centres will have to learn from theme parks to reinvent themselves as a leisure experience. 1998 - Futurewise

Loss of millions of small retailers by 2005 as huge chains expand market share. These global chains will increase own-brand sales from 15% to 30% by 2010. However many corner shops will survive because of convenience, parking restrictions and so on. We will see a big reversal of the trend to build more out-of-town superstores, with rapid growth of smaller outlets of the same chains. 1998 - Futurewise

Millions of people will buy and sell from each other directly at cut-throat prices for new and second-hand goods, by posting information online, with instant matching of buyers and sellers, creating virtual “street markets”. 1998 – Futurewise

Europe and global economy

Major economic disruptions will occur affecting many nations from a series of very low probability but very high impact events, with combined impact. 1998 - Futurewise

Expect increasingly complicated financial instruments to be developed, which will add to risks of economic instability. 1998 – Futurewise

The next global economic shock is likely to be triggered by events relating to complex financial products (derivatives) and hedge funds, overwhelming markets and governments. 2003 - Futurewise

Speed of change will be a fundamental and rapidly growing global risk, with sudden collapse of economies in different nations, related to loss of market confidence. 1998 – Futurewise

Massive future economic tensions in Eurozone in next two decades, which may threaten the Euro project. 1998 – Futurewise

Expect more rioting on the streets as workers unite to vent their anger and frustration at leaders, global institutions and wealthy ethnic minorities.
1998 – Futurewise

Interest rate targeting set at 2% will turn out to be too low because it leaves no room to manage the deflationary economic shocks that we are going to see without risk of tipping over into deflation. Expect central banks to begin relaxing such low targets. 2003 - Futurewise 2nd edition

Expect a growing backlash against globalization, blamed by workers in many nations for lack of jobs and economic decline. 1998 – Futurewise

Expect growing anger and resentment against market speculators, blamed for price instabilities of commodities and currencies, and for destabilizing entire nations by over-trading complex financial products, which very few people fully understand. 1998 – Futurewise

More governments will take refuge in larger trading blocs, with more grouped, linked or fused economies by 2020, particularly in Asia, but it will not be entirely effective as speculators also grow rapidly in global power. ASEAN will become stronger as part of this process. 1998 – Futurewise

Outsourcing of manufacturing and service jobs to China and India will go into reverse, as inflation in Asia wipes out the economic argument for doing so, and as companies look to become more agile and reduce risk. 2003 - Futurewise

Many people will be surprised at how rapidly China overtakes the US as the world’s largest economy. 1998 - Futurewise

Health

AIDS will become a global pandemic which will require massive community mobilization over more than 20 years. Prevention programmes will prove effective, but a vaccine will be almost impossible to make, and certainly will not be developed before 2003. 1987 – Truth about AIDS

Life expectancy will go on increasing rapidly with official forecasts revised every 12 months, over the next two decades and beyond, each of which will create added pressures on pension fund solvency. 1998 - Futurewise

Viruses will be used to treat or prevent diseases such as cancer or cystic fibrosis.
Viruses will be used to infect cancer cells and teach them to manufacture chemotherapy agents to poison cancer cells directly. 1993 – Genetic Revolution

We will repair damaged tissue using a person’s own cells, grown in the laboratory. 1993 – Genetic Revolution

GM foods – new crops and improved animals – will be important and widely grown. 1993 – Genetic Revolution

Genetically engineered animal and human cells will be used to manufacture next-generation pharma products including new types of vaccines. 1993 – Genetic Revolution

Huge numbers of human genes will be linked to patterns of disease, enabling very accurate predictions to be made about future medical problems in an individual. 1993 – Genetic Revolution

Monoclonal antibodies will become a very important treatment in future for cancer and other conditions. 1993 – Genetic Revolution

Technology to create human cloned embryos in the laboratory for research purposes will become routine. 1993 – Genetic Revolution

We will see many new virus threats emerge around the world by 2020, of which several will trigger global containment efforts. 1998 – Futurewise

Pensions crisis

Pensions crisis will hit Germany and Italy, while France will be shaken by riots and demonstrations on the streets when governments try to increase retirement age. 1998 – Futurewise

In 20 years’ time, many older people will carry on working to 75, or 85, or until they drop, with virtually no pension. 1998 – Futurewise

Calls to legalise euthanasia will grow far stronger, with many high-profile cases where doctors or family have taken the law into their own hands. 1998 – Futurewise

Personal pension plans and investment funds will be growth markets for those approaching retirement, with increasing questions about charges and performance of actively managed funds. 1998 – Futurewise

Future of Europe

Eurozone will not be sustainable without huge pain. Economic conditions that enable some countries to swim will cause other economies to drown.
1998 - Futurewise

Tribalism will be the downfall of Europe and will feed terrorism. 1998 - Futurewise

Many former Eastern Bloc nations will not stabilize economically until beyond 2008, and even then there will still be a huge gap compared to Western Europe. 1998 - Futurewise

It will take until beyond 2018 for new democratic traditions to take root in former Soviet bloc nations. Economic crisis in these nations may lead to riots, civil disobedience, internal military action or worse. Expect the EU to try to reduce these risks by early inclusion into an enlarged community. 1998 - Futurewise

Enlarging the EU from 15 to 25 nations will change it forever, adding to paralysis in decision-making. 1998 - Futurewise

People tribes will sometimes be very hostile to the emerging mega-state. 1998 - Futurewise

One of the final destructions of the United States of Europe will be high unemployment caused by rapidly changing economic conditions and labour force immobility. 1998 - Futurewise

However Europe will benefit in the short term from instability elsewhere. 1998 – Futurewise

Future of the UK

UK will continue to disintegrate in the final death pangs of the English imperialistic dream. Scots will look to Europe as a way of staying together in a broader alliance, rather than close rule from London. 1998 - Futurewise

The English will become increasingly resentful of rulings imposed by the EU. 1998 - Futurewise

As Scotland asserts its own right to govern itself, the English will become more strident about being English. 1998 - Futurewise

UK home ownership will prove a good long-term investment, despite market collapse and many pundits claiming the end of the sector as a sound investment. Web posts 2007-2008 on globalchange.com and YouTube.

London will remain very popular and powerful

London will continue to be one of the most popular cities in the world for the next 30 years, and will continue to be dominated by financial services despite aggressive global competition.
1998 - Futurewise

The City will keep top position, or near top, in cross-border lending, and will fight to remain the largest global centre for Forex. 1998 - Futurewise

Cities will be popular places to live

Hundreds of millions will migrate to large cities: city life will be increasingly popular, despite forecasts by some that cities will decay and die as wealthy people move out to escape crime, congestion, pollution and chaos. 1998 - Futurewise

Politics, tribalism, new patterns of war and rapid rise of new terrorist groups

Political whirlwinds will affect whole continents. 1998 – Futurewise

Rise of new, sinister radical people movements, which are totally convinced of their moral cause and use tribalism, social networking and terrorism – these groups will seize great powers. 1998 – Futurewise

Terrorist groups will multiply rapidly in the third millennium, taking advantage of new technologies to frighten, sabotage and attack for the sake of a cause. 1998 – Futurewise

Security forces will use ever more sophisticated tools for surveillance, violating privacy of hundreds of millions. 1998 – Futurewise

Traditional left-right political divides will be swept away in many nations by polarized debates over things like sustainability, or the application of Islamic laws, or whether a nation should be in or out of the Eurozone. 1998 - Futurewise

Ever-present video cameras will make large-scale traditional wars harder to fight, because horrors will be seen very clearly. 1998 – Futurewise

World military spending will fall and then rise – with investment in drones, cruise missiles etc. – but experience shows that wars are won by house-to-house fighting, not by remote-control smart weapons. 1998 – Futurewise

Sustainability and single issues

Left-right politics will give way to single-issue politics, eg should we be in or out of the Eurozone, or become independent from the UK, or spend more on carbon taxes.
1998 - Futurewise

The environment will be the number 1 dominant single issue for decades to come. 1998 - Futurewise

Marriage and children

Marriage will become less fashionable but the dominant household pattern for middle-aged people will still be having children and bringing them up together. 1998 - Futurewise

A new generation of teenagers will emerge in the early Third Millennium, the M Generation: more conservative than in the past – less sex, less drunkenness, less drugs, more study, more concerned about issues like environment. They will still follow traditional romantic dreams… of one day finding a wonderful partner for a very long-term relationship. 1998 - Futurewise

Drugs and smoking

We will see widespread drug testing – at work and in prisons – and greater investment in drug rehab. 1998 - Futurewise

We will see progressive criminalisation of smoking with ever stricter regulations on tobacco, and a bitter fight in some nations over decriminalization of cannabis. 1998 - Futurewise

We will see hundreds of new designer drugs that fall outside government legal powers, of which some will enhance memory and intelligence, becoming widely abused by students. 1998 - Futurewise

Feminisation

Feminisation of workplace and wider society, with men in retreat, labeled as testosterone addicts, dangerous, ill-behaved variants of the human species prone to violence and sexual predatory acts. 1998 – Futurewise
--- abstract: 'In this paper we review the recent results on strangeness production measured by HADES in the Ar+KCl system at a beam energy of 1.756 AGeV. A detailed comparison of the measured hadron yields with the statistical model is also discussed.' address: - | GSI Helmholtz Centre for Heavy Ion Research GmbH Planckstrasse 1,\ D-64291 Darmstadt, GERMANY - 'Excellence Cluster ’Universe’, Technische Universität München, Boltzmannstr. 2, D-85748 Garching, Germany.' author: - 'J. Pietraszko[^1] and L. Fabbietti (for the HADES collaboration)' title: Strangeness Production at SIS measured with HADES ---

Hadron production in Ar+KCl collisions
======================================

The study of nuclear matter properties at high densities and temperatures is one of the main objectives in relativistic heavy-ion physics. In this context, the paramount aim of measuring the particle yields emitted from heavy-ion collisions is to learn about the K-N potential, the production mechanisms of strange particles, or the nuclear equation of state. Strangeness production in relativistic heavy-ion collisions in the SIS/Bevalac energy range has been extensively studied by various groups, including experiments at the Bevalac[@bevelac_1] and at SIS, KaoS[@kaos_1], FOPI[@fopi_1], and in recent years also HADES.\ Recently, the HADES[@hades_nim] collaboration measured charged particle production in the Ar+KCl system[@hades_kaons] at 1.756 AGeV. Although HADES was designed primarily for di-electron measurements, it has also shown an excellent capability for the identification of a wide range of hadrons like $K^-$, $K^+$, $K^0$$\rightarrow$$\pi^+\pi^-$, $\Lambda$$\rightarrow$p$\pi^-$, $\phi$$\rightarrow$$K^+K^-$ and even $\Xi^-$$\rightarrow$$\Lambda\pi^-$. The measured yields and slopes of the transverse mass of kaons and $\Lambda$ particles have been found to be in good agreement with the results obtained by KaoS[@kaos_0] and FOPI[@fopi_0].
The dependencies of the measured $K^-/K^+$ ratio on the centrality and on the collision energy follow the systematics measured by KaoS [@hades_kaons] too.\ A combined and inclusive identification of $K^+K^-$ and $\phi$ mesons was performed for the first time in the same experimental setup at subthreshold beam energy for $K^-$ and $\phi$ production. The obtained $\phi/K^-$ ratio of 0.37$\pm$0.13 indicates that 18$\pm$7$\%$ of kaons stem from $\phi$ decays. Since the $\phi$ mesons reconstructed via the $K^+K^-$ channel are mainly those coming from decays happening outside the nuclear medium, this value should be considered as a lower limit. In addition, the non-resonant $K^+K^-$ production can contribute to the measured $K^-$ yield. Unfortunately, this part is not known in heavy-ion collisions, but it has been measured to be about 50$\%$ of the overall $K^+K^-$ yield in elementary p+p collisions[@anke_1]. In this view, the $K^-$ production in heavy-ion collisions at SIS energies cannot be explained exclusively by the strangeness exchange mechanism, and the processes mentioned above must also be taken into account to achieve a complete description.

Comparison to statistical models
================================

The yields of the reconstructed hadron species have been extrapolated to the full solid angle and compared to the result of a fit with the statistical model THERMUS[@thermus_1], as shown in Fig. \[fig:thermus\]. The measured yields nicely agree with the results of the model, except for the $\Xi^-$. One should note that in this approach a good description of the $\phi$ meson yield is obtained without assuming any strangeness suppression (the net strangeness content of the $\phi$ is S=0).
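The quoted 18$\pm$7$\%$ can be reproduced by simple arithmetic (a sketch assuming the PDG branching ratio BR($\phi\rightarrow K^+K^-$) $\approx$ 49%; the exact value used in the analysis may differ slightly):

```python
# Fraction of K- coming from phi decays: (phi/K-) yield ratio times BR(phi -> K+K-).
ratio, d_ratio = 0.37, 0.13   # measured phi/K- ratio and its uncertainty
br_kk = 0.489                 # approximate PDG value for BR(phi -> K+ K-)
frac = ratio * br_kk          # ~0.18
d_frac = d_ratio * br_kk      # ~0.06, i.e. 18 +/- 6-7 %
print(round(100 * frac), round(100 * d_frac))  # -> 18 6
```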
This is very different as compared to higher energies, where the $\phi$ meson does not behave as a strangeness-neutral object but rather as an object with net strangeness between 1 and 2 [@strgns_phi].

[Two figures: chemical freeze-out parameters obtained in the statistical thermal model (for details see [@cleymans_1]); the HADES point corresponds to Ar+KCl collisions at 1.756 AGeV.]{data-label="fig:statmodel"}

The $\Xi^-$ baryon yields measured in heavy-ion collisions above the production threshold at RHIC[@rhic_1], SPS[@sps_1] and AGS[@ags_1] nicely agree with the statistical model predictions. On the contrary, the result of the first $\Xi^-$ measurement below the production threshold, published by HADES[@hades_ksi], shows a deviation of about an order of magnitude from the calculations (Fig. \[fig:thermus\]). Using the measured hadron multiplicities, the statistical model predicts that the chemical freeze-out of the Ar+KCl collision at 1.756 AGeV occurs at a temperature of T=73$\pm$6 MeV and at a baryon chemical potential of $\mu$=770$\pm$43 MeV. The strangeness correlation radius[@hades_kaons] of R$_c$=2.4$\pm$0.8 fm was used, which is significantly smaller than the radius of the fireball, R$_{fireball}$=4.9$\pm$1.4 fm.\ This result nicely follows the striking regularity shown by particle yields at all beam energies [@cleymans_1], as presented in Fig. \[fig:statmodel\].
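As an illustrative back-of-the-envelope check (not from the paper), the quoted freeze-out temperature is consistent with the fixed-energy-per-particle criterion discussed below: for a non-relativistic Boltzmann gas of nucleons, the energy per particle is roughly the nucleon mass plus $3T/2$.

```python
m_N = 938.9  # nucleon mass in MeV
T = 73.0     # freeze-out temperature quoted above, in MeV
e_per_n = m_N + 1.5 * T  # non-relativistic Boltzmann estimate of <E>/<N>
print(e_per_n)  # ~1048 MeV, close to the <E>/<N> ~ 1 GeV freeze-out line
```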
For all available energies, starting from the highest at RHIC down to the lowest at SIS, the measured particle multiplicities are consistent with the assumption of chemical equilibrium which sets in at the end of the collision phase. Only two parameters (the temperature and the baryon chemical potential) are needed within a thermal-statistical model to describe particle yields in a very systematic way at a given collision energy [@cleymans_1]. As one can also see, all experimental results are in good agreement with a fixed-energy-per-particle condition $\langle E\rangle/\langle N\rangle\approx 1$ GeV, which is one of the available freeze-out criteria[@cleymans_1]. The new HADES results on strangeness production shed new light on the understanding of kaon production mechanisms in heavy-ion collisions; namely, they have provided compelling evidence that the contribution from $\phi$ decays to the $K^-$ yield also has to be taken into account. The measured hadron yields have been found in general to be in good agreement with statistical model predictions, besides the $\Xi^-$, which is produced far below the production threshold and shows a considerable deviation from the statistical model. The already performed experiments p+p at 3.5 GeV and p+Nb at 3.5 GeV, and further planned HADES experiments with heavier systems like Au+Au, will deliver new valuable data on strangeness production. The on-going upgrade of the HADES spectrometer will increase its performance and capability, and the installed Forward Wall detector will allow for reaction-plane reconstruction in all upcoming runs, making it possible to study kaon flow observables as well.

[9]{} S. Schnetzer et al., Phys. Rev. Lett. 49 (1982) 989; Phys. Rev. C 40 (1989) 640. C. Sturm et al. (KAOS), Phys. Rev. Lett. 86 (2001) 39. J. L. Ritman et al. (FOPI), Z. Phys. A 352 (1995) 355. G. Agakichiev et al. (HADES), Eur. Phys. J. A 41 (2009) 243. G. Agakichiev et al. (HADES), in press, Phys. Rev. C, and arXiv:0902.3487. A. Förster et al. (KAOS), Phys. Rev.
C 75 (2007) 024906. M. Merschmeyer et al. (FOPI), Phys. Rev. C 76 (2007) 024906. Y. Maeda et al. (ANKE), Phys. Rev. C 77 (2008) 015204. S. Wheaton and J. Cleymans, hep-ph/0407174\ S. Wheaton and J. Cleymans, J. Phys. G31 (2005) S1069. I. Kraus et al., Phys. Rev. C 76, 064903 (2007). J. Adams et al. (STAR), Phys. Rev. Lett. 98, 062301 (2007). F. Antinori et al. (NA57), Phys. Lett. B 595, 68 (2004)\ C. Alt et al. (NA49), Phys. Rev. C 78, 034918 (2008). P. Chung et al. (E895), Phys. Rev. Lett. 91, 202301 (2003). G. Agakishiev et al. (HADES), Phys. Rev. Lett. 103 (2009) 132301. J. Cleymans et al., Phys. Rev. C 73, 034905 (2006), and J. Cleymans, private communication. [^1]: e-mail: j.pietraszko@gsi.de
<?php

/**
 * Specialized implementation of hook_page_manager_task_tasks(). See api-task.html for
 * more information.
 */
function page_manager_contact_user_page_manager_tasks() {
  if (!module_exists('contact')) {
    return;
  }

  return array(
    // This is a 'page' task and will fall under the page admin UI.
    'task type' => 'page',
    'title' => t('User contact'),
    'admin title' => t('User contact'),
    'admin description' => t('When enabled, this overrides the default Drupal behavior for displaying the user contact form at <em>user/%user/contact</em>. If no variant is selected, the default Drupal user contact form will be used.'),
    'admin path' => 'user/%user/contact',

    // Callback to add items to the page manager task administration form:
    'task admin' => 'page_manager_contact_user_task_admin',

    'hook menu alter' => 'page_manager_contact_user_menu_alter',

    // This task uses 'context' handlers and must implement these to give the
    // handler the data it needs.
    'handler type' => 'context', // handler type -- misnamed
    'get arguments' => 'page_manager_contact_user_get_arguments',
    'get context placeholders' => 'page_manager_contact_user_get_contexts',

    // Allow this to be enabled or disabled:
    'disabled' => variable_get('page_manager_contact_user_disabled', TRUE),
    'enable callback' => 'page_manager_contact_user_enable',
  );
}

/**
 * Callback defined by page_manager_contact_user_page_manager_tasks().
 *
 * Alter the user contact menu item so that the contact form comes to us
 * rather than going through the normal contact page process.
 */
function page_manager_contact_user_menu_alter(&$items, $task) {
  if (variable_get('page_manager_contact_user_disabled', TRUE)) {
    return;
  }

  // Override the user contact page callback for our purpose.
  if ($items['user/%user/contact']['page callback'] == 'contact_user_page' || variable_get('page_manager_override_anyway', FALSE)) {
    $items['user/%user/contact']['page callback'] = 'page_manager_contact_user';
    $items['user/%user/contact']['file path'] = $task['path'];
    $items['user/%user/contact']['file'] = $task['file'];
  }
  else {
    // Automatically disable this task if it cannot be enabled.
    variable_set('page_manager_contact_user_disabled', TRUE);
    if (!empty($GLOBALS['page_manager_enabling_contact_user'])) {
      drupal_set_message(t('Page manager module is unable to enable user/%user/contact because some other module already has overridden with %callback.', array('%callback' => $items['user/%user/contact']['page callback'])), 'error');
    }
  }
}

/**
 * Entry point for our overridden user contact page.
 *
 * This function asks its assigned handlers who, if anyone, would like
 * to run with it. If no one does, it passes through to Drupal core's
 * user contact form, which is contact_user_page().
 */
function page_manager_contact_user($account) {
  // Load my task plugin:
  $task = page_manager_get_task('contact_user');

  // Load the account into a context.
  ctools_include('context');
  ctools_include('context-task-handler');
  $contexts = ctools_context_handler_get_task_contexts($task, '', array($account));

  $output = ctools_context_handler_render($task, '', $contexts, array($account->uid));
  if ($output !== FALSE) {
    return $output;
  }

  module_load_include('inc', 'contact', 'contact.pages');
  $function = 'contact_user_page';
  foreach (module_implements('page_manager_override') as $module) {
    $call = $module . '_page_manager_override';
    if (($rc = $call('contact_user')) && function_exists($rc)) {
      $function = $rc;
      break;
    }
  }

  // Otherwise, fall back.
  return $function($account);
}

/**
 * Callback to get arguments provided by this task handler.
 *
 * Since this is the user contact page and there is no UI on the arguments, we
 * create dummy arguments that contain the needed data.
 */
function page_manager_contact_user_get_arguments($task, $subtask_id) {
  return array(
    array(
      'keyword' => 'user',
      'identifier' => t('User being viewed'),
      'id' => 1,
      'name' => 'uid',
      'settings' => array(),
    ),
  );
}

/**
 * Callback to get context placeholders provided by this handler.
 */
function page_manager_contact_user_get_contexts($task, $subtask_id) {
  return ctools_context_get_placeholders_from_argument(page_manager_contact_user_get_arguments($task, $subtask_id));
}

/**
 * Callback to enable/disable the page from the UI.
 */
function page_manager_contact_user_enable($cache, $status) {
  variable_set('page_manager_contact_user_disabled', $status);

  // Set a global flag so that the menu routine knows it needs
  // to set a message if enabling cannot be done.
  if (!$status) {
    $GLOBALS['page_manager_enabling_contact_user'] = TRUE;
  }
}
Newbould

Newbould is a surname. Notable people with the name include:

Alfred Ernest Newbould (1873–1952), British cinematographer and politician
Brian Newbould (born 1936), British composer, conductor and author
Frank Newbould (1887–1951), English poster artist
Harry Newbould (1861–1928), English football manager
Julieanne Newbould (born 1957), Australian actress
Thomas Newbould (1880–1964), English rugby player

See also
Newbold (name)
Newbolt
Running Stat

Dinner with people is always better than eating alone, especially when the food is good. Good food tastes even better when enjoyed with people. Tonight Amy came over to try my second attempt at the Brussels Sprouts Veggie Soup, to which I have made some changes (see recipe below in previous post) for a better result, I believe. We were at the store earlier and saw some nice-looking haricots verts and heirloom tomatoes, so we decided to assemble a simple salad from those. Of course while I’m at the market, I can’t not get some five-peppercorn salami. Our simple dinner of soup, salami, bread, cheese, salad, and wine was on the table in 15 minutes.
---
"Missing document with catch":

  - do:
      catch: missing
      get:
          index: test_1
          id: 1

---
"Missing document with ignore":

  - do:
      get:
          index: test_1
          id: 1
          ignore: 404
Burry Port railway station

Burry Port railway station served the town of Burry Port (). It continued to serve the inhabitants of the area near Llanelli between 1909 and 1953 and was one of several basic halts opened on the Burry Port and Gwendraeth Valley Railway in Carmarthenshire, Wales ().

History

The station was opened as Burry Port in 1898, but regular passenger services began on 2 August 1909, run by the Burry Port and Gwendraeth Valley Railway on the Kidwelly and Burry Port section of the line. It was closed by the British Transport Commission in 1953, with the last passenger train running on Saturday 19 September 1953. It was on the southern section of the Burry Port and Gwendraeth Valley Railway, with Pembrey to the north and Burry Port as the terminus of the passenger line. The line had been built on the course of an old canal, with resulting tight curves, low bridge clearance and a tendency to flooding. The freight service continued for coal traffic on the Cwmmawr branch to Kidwelly until 1996, by which time the last of the local collieries had closed down; the washery closure followed. Pembrey and Burry Port on the West Wales line lies to the east.

Infrastructure

The station had a single short platform, a brick-built toilet block and a substantial corrugated-iron ticket office and waiting room with a canopy on the northern side of the single line. The station had a run-round passing loop and two carriage sidings, one of which also served a goods shed. Signalling was present. The Kidwelly route was used for coal trains, resulting in the lifting of track between Trimsaran Road and Burry Port by 2005. Burry Port railway station on the West Wales line stood close to the site of the old Burry Port and Gwendraeth Valley Railway.

Services

The station was open for use by the general public by 1909.

Remnants

The section of the old line between Burry Port and Craiglon Bridge Halt is now a footpath and the NCN 4 cyclepath. The station site is now part of a roundabout.
require(['gitbook'], function (gitbook) {
    gitbook.events.bind('page.change', function () {
        mermaid.init();
    });
});
{
  "images" : [
    { "idiom" : "watch", "scale" : "2x", "screen-width" : "<=145" },
    { "idiom" : "watch", "scale" : "2x", "screen-width" : ">161" },
    { "idiom" : "watch", "scale" : "2x", "screen-width" : ">145" },
    { "idiom" : "watch", "scale" : "2x", "screen-width" : ">183" }
  ],
  "info" : { "version" : 1, "author" : "xcode" }
}
IFUP-TH 2013/21 1.4truecm [**Background Field Method,**]{} .5truecm [**Batalin-Vilkovisky Formalism And**]{} .5truecm [**Parametric Completeness Of Renormalization**]{} 1truecm *Damiano Anselmi* .2truecm *Dipartimento di Fisica “Enrico Fermi”, Università di Pisa,* *and INFN, Sezione di Pisa,* *Largo B. Pontecorvo 3, I-56127 Pisa, Italy,* .2truecm damiano.anselmi@df.unipi.it 1.5truecm **Abstract** We investigate the background field method with the Batalin-Vilkovisky formalism, to generalize known results, study the parametric completeness of general gauge theories and achieve a better understanding of several properties. In particular, we study renormalization and gauge dependence to all orders. Switching between the background field approach and the usual approach by means of canonical transformations, we prove parametric completeness without making use of cohomological theorems; namely we show that if the starting classical action is sufficiently general all divergences can be subtracted by means of parameter redefinitions and canonical transformations. Our approach applies to renormalizable and nonrenormalizable theories that are manifestly free of gauge anomalies and satisfy the following assumptions: the gauge algebra is irreducible and closes off shell, the gauge transformations are linear functions of the fields, and closure is field independent. Yang-Mills theories and quantum gravity in arbitrary dimensions are included, as well as effective and higher-derivative versions of them, but several other theories, such as supergravity, are left out. 1truecm Introduction ============ The background field method [@dewitt; @abbott] is a convenient tool to quantize gauge theories and make explicit calculations, particularly when it is used in combination with the dimensional-regularization technique. It amounts to choosing a nonstandard gauge fixing in the conventional approach and, among its virtues, it keeps the gauge transformations intact under renormalization. 
However, it takes advantage of properties that only particular classes of theories have. The Batalin-Vilkovisky formalism [@bata] is also useful for quantizing general gauge theories, especially because it collects all ingredients of infinitesimal gauge symmetries in a single identity, the master equation, which remains intact through renormalization, at least in the absence of gauge anomalies. Merging the background field method with the Batalin-Vilkovisky formalism is not only an interesting theoretical subject *per se*, but can also offer a better understanding of known results, make us appreciate aspects that have been overlooked, generalize the validity of crucial theorems about the quantization of gauge theories and renormalization, and help us address open problems. For example, an important issue concerns the generality of the background field method. It would be nice to formulate a unique treatment for all gauge theories, renormalizable and nonrenormalizable, unitary and higher derivative, with irreducible or reducible gauge algebras that close off shell or only on shell. However, we will see that at this stage it is not possible to achieve that goal, due to some intrinsic features of the background field method. Another important issue that we want to emphasize more than has been done so far is the problem of *parametric completeness* in general gauge theories [@regnocoho]. To ensure renormalization-group (RG) invariance, all divergences must be subtracted by redefining parameters and making canonical transformations. When a theory contains all independent parameters necessary to achieve this goal, we say that it is parametrically complete. The RG-invariant renormalization of divergences may require the introduction of missing Lagrangian terms, multiplied by new physical constants, or even deform the symmetry algebra in nontrivial ways. 
However, in nonrenormalizable theories such as quantum gravity and supergravity it is not obvious that the action can indeed be adjusted to achieve parametric completeness. One way to deal with this problem is to classify the whole cohomology of invariants and hope that the solution satisfies suitable properties. This method requires lengthy technical proofs that must be done case by case [@coho], and therefore lacks generality. Another way is to let renormalization build the new invariants automatically, as shown in ref. [@regnocoho], with an algorithm that iteratively extends the classical action, converting divergences into finite counterterms. However, that procedure is mainly a theoretical tool, because although very general and conceptually minimal, it is computationally prohibitive. Among other things, it leaves open the possibility that renormalization may dynamically deform the gauge symmetry in physically observable ways. A third possibility is the one we treat here, taking advantage of the background field method. Where it applies, it makes cohomological classifications unnecessary and rules out the possibility that renormalization dynamically deforms the symmetry in observable ways. Because of the intrinsic properties of the background field method, the approach of this paper, although general enough, is not exhaustive. It is general enough because it includes the gauge symmetries we need for physical applications, namely Abelian and non-Abelian Yang-Mills symmetries, local Lorentz symmetry and invariance under general changes of coordinates. At the same time, it is not exhaustive because it excludes other potentially interesting symmetries, such as local supersymmetry. 
To be precise, our results hold for every gauge symmetry that satisfies the following properties: the algebra of gauge transformations ($i$) closes off shell and ($ii$) is irreducible; moreover ($iii$) there exists a choice of field variables where the gauge transformations $\delta _{\Lambda }\phi $ of the physical fields $\phi $ are linear functions of $\phi $ and the closure $[\delta _{\Lambda },\delta _{\Sigma }]=\delta _{[\Lambda ,\Sigma ]}$ of the algebra is $\phi $ independent. We expect that with some technical work it will be possible to extend our results to theories that do not satisfy assumption ($ii$), but our impression is that removing assumptions ($i$) and ($iii$) will be much harder, if not impossible. In this paper we also assume that the theory is manifestly free of gauge anomalies. Our results apply to renormalizable and nonrenormalizable theories that satisfy the assumptions listed so far, among which are QED, Yang-Mills theories, quantum gravity and Lorentz-violating gauge theories [@lvgauge], as well as effective [@weinberg], higher-derivative [@stelle] and nonlocal [@tombola] versions of such theories, in arbitrary dimensions, and extensions obtained including any set of composite fields. We recall that Stelle’s proof [@stelle] that higher-derivative quantum gravity is renormalizable was incomplete, because it assumed without proof a generalization of the Kluberg-Stern–Zuber conjecture [@kluberg] for the cohomological problem satisfied by counterterms. Even the cohomological analysis of refs. [@coho] does not directly apply to higher-derivative quantum gravity, because the field equations of higher-derivative theories are not equal to perturbative corrections of the ordinary field equations. These remarks show that our results are quite powerful, because they overcome a number of difficulties that otherwise need to be addressed case by case. 
Strictly speaking, our results, in their present form, do not apply to chiral theories, such as the Standard Model coupled to quantum gravity, where the cancellation of anomalies is not manifest. Nevertheless, since all other assumptions we have made concern just the forms of gauge symmetries, not the forms of classical actions, nor the limits around which perturbative expansions are defined, we expect that our results can be extended to all theories involving the Standard Model or Lorentz-violating extensions of it [@kostelecky; @LVSM]. However, to make derivations more easily understandable it is customary to first give proofs in the framework where gauge anomalies are manifestly absent, and later extend the results by means of the Adler-Bardeen theorem [@adlerbardeen]. We follow this tradition, and plan to devote a separate investigation to anomaly cancellation. Although some of our results are better understandings or generalizations of known properties, we do include them for the sake of clarity and self-consistency. We think that our formalism offers insight into the issues mentioned above and gives a more satisfactory picture. In particular, the fact that the background field method makes cohomological classifications unnecessary is something that apparently has not been appreciated enough so far. Moreover, our approach points out the limits of applicability of the background field method. To achieve parametric completeness we proceed in four basic steps. First, we study renormalization to all orders, subtracting divergences “as they come”, which means without worrying whether or not the theory contains enough independent parameters for RG invariance. Second, we study how the renormalized action and the renormalized $\Gamma $ functional depend on the gauge fixing, and work out how the renormalization algorithm maps a canonical transformation of the classical theory into a canonical transformation of the renormalized theory. 
Third, we renormalize the canonical transformation that continuously interpolates between the background field approach and the conventional approach. Fourth, comparing the two approaches we show that if the classical action $S_{c}(\phi ,\lambda )$ contains all gauge invariant terms determined by the starting gauge symmetry, then there exists a canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ such that $$S_{R\hspace{0.01in}\text{min}}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\tau (\lambda ))-\int R^{\alpha }(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{C})\hat{K}_{\alpha }, \label{key0}$$ where $S_{R\hspace{0.01in}\text{min}}$ is the renormalized action with the gauge-fixing sector switched off, $\Phi ^{\alpha }=\{\phi ,C\}$ are the fields ($C$ being the ghosts), $K_{\alpha }$ are the sources for the $\Phi^{\alpha}$ transformations $R^{\alpha }(\Phi )$, ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$ are the background fields, $\lambda $ are the physical couplings and $\tau (\lambda )$ are $\lambda $ redefinitions. Identity (\[key0\]) shows that all divergences can be renormalized by means of parameter redefinitions and canonical transformations, which proves parametric completeness. Power counting may or may not restrict the form of $S_{c}(\phi ,\lambda )$. Basically, under the assumptions we have made the background transformations do not renormalize, and the quantum fields $\phi $ can be switched off and then restored from their background partners ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$. Nevertheless, the restoration works only up to a canonical transformation, which gives (\[key0\]). The story is a bit more complicated than this, but this simplified version is enough to appreciate the main point. 
However, when the assumptions we have made do not hold, the argument fails, which shows how peculiar the background field method is. Besides giving explicit examples where the construction works, we address some problems that arise when the assumptions listed above are not satisfied. A somewhat different approach to the background field method in the framework of the Batalin-Vilkovisky formalism exists in the literature. In refs. [@quadri] Binosi and Quadri considered the most general variation $\delta {\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }=\Omega $ of the background gauge field ${\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }$ in Yang-Mills theory, and obtained a modified Batalin-Vilkovisky master equation that controls how the functional $\Gamma $ depends on ${\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu } $. Instead, here we introduce background copies of both physical fields and ghosts, which allows us to split the symmetry transformations into “quantum transformations” and “background transformations”. The master equation is split into the three identities (\[treide\]), which control invariances under the two types of transformations. The paper is organized as follows. In section 2 we formulate our approach and derive its basic properties, emphasizing the assumptions we make and why they are necessary. In section 3 we renormalize divergences to all orders, subtracting them “as they come”. In section 4 we derive the basic differential equations of gauge dependence and integrate them, which allows us to show how a renormalized canonical transformation emerges from its tree-level limit. In section 5 we derive (\[key0\]) and prove parametric completeness. In section 6 we give two examples, non-Abelian Yang-Mills theory and quantum gravity. In section 7 we make remarks about parametric completeness and recapitulate where we stand now on this issue. 
Section 8 contains our conclusions, while the appendix collects several theorems and identities that are used in the paper. We use the dimensional-regularization technique and the minimal subtraction scheme. Recall that the functional integration measure is invariant with respect to perturbatively local changes of field variables. Averages $\langle \cdots \rangle $ always denote the sums of *connected* Feynman diagrams. We use the Euclidean notation in theoretical derivations and switch to Minkowski spacetime in the examples. Background field method and Batalin-Vilkovisky formalism ======================================================== In this section we formulate our approach to the background field method with the Batalin-Vilkovisky formalism. To better appreciate the arguments given below it may be useful to jump back and forth between this section and section 6, where explicit examples are given. If the gauge algebra closes off shell, there exists a canonical transformation that makes the solution $S(\Phi ,K)$ of the master equation $(S,S)=0$ depend linearly on the sources $K$. We write $$S(\Phi ,K)=\mathcal{S}(\Phi )-\int R^{\alpha }(\Phi )K_{\alpha }. \label{solp}$$ The fields $\Phi ^{\alpha }=\{\phi ^{i},C^{I},\bar{C}^{I},B^{I}\}$ are made of physical fields $\phi ^{i}$, ghosts $C^{I}$ (possibly including ghosts of ghosts and so on), antighosts $\bar{C}^{I}$ and Lagrange multipliers $B^{I}$ for the gauge fixing. Moreover, $K_{\alpha }=\{K_{\phi }^{i},K_{C}^{I},K_{\bar{C}}^{I},K_{B}^{I}\}$ are the sources associated with the symmetry transformations $R^{\alpha}(\Phi)$ of the fields $\Phi ^{\alpha }$, while $$\mathcal{S}(\Phi )=S_{c}(\phi )+(S,\Psi )$$ is the sum of the classical action $S_{c}(\phi )$ plus the gauge fixing, which is expressed as the antiparenthesis of $S$ with a $K$-independent gauge fermion $\Psi (\Phi )$. 
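Off-shell closure of the gauge algebra, which is what allows the linear-in-$K$ form (\[solp\]), rests for Yang-Mills-type symmetries on the Jacobi identity of the structure constants $f^{abc}$. As a minimal numerical sanity check, outside the paper's formalism, one can verify the identity for su(2), where $f^{abc}=\varepsilon^{abc}$:

```python
import numpy as np

# su(2) structure constants: f^{abc} = epsilon^{abc} (totally antisymmetric)
f = np.zeros((3, 3, 3))
f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1.0
f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1.0

# Jacobi identity: f^{abe} f^{ecd} + f^{cae} f^{ebd} + f^{bce} f^{ead} = 0
jacobi = (np.einsum('abe,ecd->abcd', f, f)
          + np.einsum('cae,ebd->abcd', f, f)
          + np.einsum('bce,ead->abcd', f, f))

assert np.allclose(jacobi, 0.0)
```

The same check works for any Lie algebra once its structure constants are loaded into `f`.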
We recall that the antiparentheses are defined as $$(X,Y)=\int \left\{ \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\frac{\delta _{l}Y}{\delta K_{\alpha }}-\frac{\delta _{r}X}{\delta K_{\alpha }}\frac{\delta _{l}Y}{\delta \Phi ^{\alpha }}\right\} ,$$ where the summation over the index $\alpha $ is understood. The integral is over spacetime points associated with repeated indices. The non-gauge-fixed action $$S_{\text{min}}(\Phi ,K)=S_{c}(\phi )-\int R_{\phi }^{i}(\phi ,C)K_{\phi }^{i}-\int R_{C}^{I}(\phi ,C)K_{C}^{I}, \label{smin}$$ obtained by dropping antighosts, Lagrange multipliers and their sources, also solves the master equation, and is called the minimal solution. Antighosts $\bar{C}$ and Lagrange multipliers $B$ form trivial gauge systems, and typically enter (\[solp\]) by means of the gauge fixing $(S,\Psi )$ and a contribution $$\Delta S_{\text{nm}}=-\int B^{I}K_{\bar{C}}^{I}, \label{esto}$$ to $-\int R^{\alpha }K_{\alpha }$. Let $\mathcal{R}^{\alpha }(\Phi ,C)$ denote the transformations the fields $\Phi ^{\alpha }$ would have if they were matter fields. Each function $\mathcal{R}^{\alpha }(\Phi ,C)$ is a bilinear form of $\Phi ^{\alpha }$ and $C$. Sometimes, to be more explicit, we also use the notation $\mathcal{R}_{\bar{C}}^{I}(\bar{C},C)$ and $\mathcal{R}_{B}^{I}(B,C)$ for $\bar{C}$ and $B$, respectively. It is often convenient to replace (\[esto\]) with the alternative nonminimal extension $$\Delta S_{\text{nm}}^{\prime }=-\int \left( B^{I}+\mathcal{R}_{\bar{C}}^{I}(\bar{C},C)\right) K_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,C)K_{B}^{I}. 
\label{estobar}$$ For example, in Yang-Mills theories we have $$\Delta S_{\text{nm}}^{\prime }=-\int \left( B^{a}-gf^{abc}C^{b}\bar{C}^{c}\right) K_{\bar{C}}^{a}+g\int f^{abc}C^{b}B^{c}K_{B}^{a}$$ and in quantum gravity $$\Delta S_{\text{nm}}^{\prime }=-\int \left( B_{\mu }+\bar{C}_{\rho }\partial _{\mu }C^{\rho }-C^{\rho }\partial _{\rho }\bar{C}_{\mu }\right) K_{\bar{C}}^{\mu }+\int \left( B_{\rho }\partial _{\mu }C^{\rho }+C^{\rho }\partial _{\rho }B_{\mu }\right) K_{B}^{\mu }, \label{nmqg}$$ where $C^{\mu }$ are the ghosts of diffeomorphisms. Observe that (\[estobar\]) can be obtained from (\[esto\]) making the canonical transformation generated by $$F_{\text{nm}}(\Phi ,K^{\prime })=\int \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},C)K_{B}^{I\hspace{0.01in}\prime }.$$ Requiring that $F_{\text{nm}}$ indeed give (\[estobar\]) we get the identities $$\mathcal{R}_{B}^{I}(B,C)=-\int B^{J}\frac{\delta _{l}}{\delta \bar{C}^{J}}\mathcal{R}_{\bar{C}}^{I}(\bar{C},C),\qquad \int \left( R_{C}^{J}\frac{\delta _{l}}{\delta C^{J}}+\mathcal{R}_{\bar{C}}^{J}(\bar{C},C)\frac{\delta _{l}}{\delta \bar{C}^{J}}\right) \mathcal{R}_{\bar{C}}^{I}(\bar{C},C)=0, \label{iddo}$$ which can be easily checked both for Yang-Mills theories and gravity. In this paper the notation $R^{\alpha }(\Phi )$ refers to the field transformations of (\[smin\]) plus those of the nonminimal extension (\[esto\]), while $\bar{R}^{\alpha }(\Phi )$ refers to the transformations of (\[smin\]) plus (\[estobar\]). Background field action ----------------------- To apply the background field method, we start from the gauge invariance of the classical action $S_{c}(\phi )$, $$\int R_{c}^{i}(\phi ,\Lambda )\frac{\delta _{l}S_{c}(\phi )}{\delta \phi ^{i}}=0, \label{lif}$$ where $\Lambda $ are the arbitrary functions that parametrize the gauge transformations $\delta \phi ^{i}=R_{c}^{i}$. 
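Before introducing background fields, the master equation can be seen at work in a drastically simplified setting. The following sketch (a zero-dimensional toy with a global SO(2) symmetry and a single Grassmann ghost, not one of the paper's examples) verifies $(S,S)=0$ for $S=S_{c}-\int R^{\alpha }K_{\alpha }$; since every surviving monomial is linear in the ghost $C$, ordinary sympy derivatives suffice once $C^{2}$ is set to zero:

```python
import sympy as sp

phi1, phi2, K1, K2, C = sp.symbols('phi1 phi2 K1 K2 C')

# rotation-invariant "classical action" and linear SO(2) transformations
# delta phi_i = eps_{ij} phi_j C (the ghost C replaces the parameter Lambda)
Sc = sp.Rational(1, 4) * (phi1**2 + phi2**2)**2
R1, R2 = phi2 * C, -phi1 * C

# extended action S = S_c - R^alpha K_alpha (abelian algebra: R_C = 0)
S = Sc - R1 * K1 - R2 * K2

# antibracket (S,S) = 2 * sum_alpha (dS/dphi_alpha)(dS/dK_alpha) in this
# 0-dimensional toy; Grassmann signs are immaterial here because every
# surviving monomial is linear in C
bracket = 2 * (sp.diff(S, phi1) * sp.diff(S, K1)
               + sp.diff(S, phi2) * sp.diff(S, K2))
bracket = sp.expand(bracket).subs(C**2, 0)   # C is Grassmann: C^2 = 0

assert sp.simplify(bracket) == 0
```

The bracket collapses to $C\,(\phi_{1}\partial V/\partial\phi_{2}-\phi_{2}\partial V/\partial\phi_{1})$, which vanishes precisely because the potential is rotation invariant, mirroring (\[lif\]).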
Shifting the fields $\phi $ by background fields ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$, and introducing arbitrary background functions ${\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu }$ we can write the identity $$\int \left[ R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\Lambda )+X^{i}\right] \frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta \phi ^{i}}+\int \left[ R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })-X^{i}\right] \frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}}=0,$$ which is true for arbitrary functions $X^{i}$. If we choose $$X^{i}=R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })-R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu }),$$ the transformations of the background fields contain only background fields and coincide with $R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })$. 
We find $$\int \left[ R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\Lambda +{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })-R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })\right] \frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta \phi ^{i}}+\int R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })\frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}}=0. \label{bas}$$ Thus, denoting background quantities by means of an underlining, we are led to consider the action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }-K_{\alpha }), \label{sback}$$ which solves the master equation $\llbracket S,S\rrbracket =0$, where the antiparentheses are defined as $$\llbracket X,Y\rrbracket =\int \left\{ \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\frac{\delta _{l}Y}{\delta K_{\alpha }}+\frac{\delta _{r}X}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\frac{\delta _{l}Y}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}-\frac{\delta _{r}X}{\delta K_{\alpha }}\frac{\delta _{l}Y}{\delta \Phi ^{\alpha }}-\frac{\delta _{r}X}{\delta 
{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}\frac{\delta _{l}Y}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\right\} .$$ More directly, if $S(\Phi ,K)=S_{c}(\phi )-\int R^{\alpha }(\Phi )K_{\alpha } $ solves $(S,S)=0$, the background field can be introduced with a canonical transformation. Start from the action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi )-\int R^{\alpha }(\Phi )K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }, \label{sback0}$$ which obviously satisfies two master equations, one in the variables $\Phi ,K $ and the other one in the variables ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$. [*A fortiori*]{}, it also satisfies $\llbracket S,S\rrbracket =0$. Relabeling fields and sources with primes and making the canonical transformation generated by the functional $$F_{\text{b}}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int (\Phi ^{\alpha }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha })K_{\alpha }^{\prime }+\int {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }, \label{casbac}$$ we obtain (\[sback\]), and clearly preserve $\llbracket S,S\rrbracket =0$. The shift ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ is called background field, while $\Phi $ is called quantum field. We also have quantum sources $K$ and background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$. 
Finally, we have background transformations, those described by the background ghosts ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ or the functions ${\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu }$ in (\[bas\]), and quantum transformations, those described by the quantum ghosts $C$ and (\[esto\]) or the functions $\Lambda $ in (\[bas\]). The action (\[sback\]) is not the most convenient one to study renormalization. It is fine in the minimal sector (the one with antighosts and Lagrange multipliers switched off), but not in the nonminimal one. Now we describe the improvements we need to make. #### Non-minimal sector So far we have introduced background copies of all fields. Nevertheless, strictly speaking we do not need to introduce copies of the antighosts $\bar{C}$ and the Lagrange multipliers $B$, since we do not need to gauge-fix the background. Thus we drop ${\mkern2mu\underline{\mkern-2mu\smash{\bar{C}}\mkern-2mu}\mkern2mu }$, ${\mkern2mu\underline{\mkern-2mu\smash{B}\mkern-2mu}\mkern2mu }$ and their sources from now on, and define ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }=\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{I},0,0\}$, ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\{{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\phi }^{i},{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I},0,0\}$. Observe that then we have $R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })=\bar{R}^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })=\{R_{\phi }^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }),R_{C}^{I}({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }),0,0\}$. 
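The mechanics of the generating functional $F_{\text{b}}$ in (\[casbac\]) can be made explicit in a one-component caricature (a single commuting field and its background copy; an illustration of the shift, not the field-theoretic statement). The standard rules, new fields $=\delta F/\delta K^{\prime }$ and old sources $=\delta F/\delta \Phi $, reproduce exactly the structure described above:

```python
import sympy as sp

Phi, Phib = sp.symbols('Phi Phib')   # quantum field and background field
Kp, Kbp = sp.symbols('Kp Kbp')       # primed (new) sources

# one-component version of the generating functional F_b
F = (Phi + Phib) * Kp + Phib * Kbp

# canonical-transformation rules: new fields = dF/d(new sources),
# old sources = dF/d(old fields)
Phi_new = sp.diff(F, Kp)     # shifted quantum field
Phib_new = sp.diff(F, Kbp)   # background field unchanged
K_old = sp.diff(F, Phi)      # quantum source unchanged
Kb_old = sp.diff(F, Phib)    # background source absorbs the quantum one

assert Phi_new == Phi + Phib
assert Phib_new == Phib
assert K_old == Kp
assert Kb_old == Kp + Kbp
```

Substituting these relations into (\[sback0\]) is what turns $S_{c}(\phi )$ into the shifted $S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })$ of (\[sback\]).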
Let us compare the nonminimal sectors (\[esto\]) and (\[estobar\]). If we choose (\[esto\]), $\bar{C}$ and $B$ do not transform under background transformations. Since (\[esto\]) are the only terms that contain $K_{\bar{C}}$, they do not contribute to one-particle irreducible diagrams and do not receive radiative corrections. Moreover, $K_{B}$ does not appear in the action. Instead, if we choose the nonminimal sector (\[estobar\]), namely if we start from $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi )-\int \bar{R}^{\alpha }(\Phi )K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha } \label{sback1}$$ instead of (\[sback0\]), the transformation (\[casbac\]) gives the action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int (\bar{R}^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-\bar{R}^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }))K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. \label{sback2}$$ In particular, using the linearity of $\mathcal{R}_{\bar{C}}^{I}$ and $\mathcal{R}_{B}^{I}$ in $C$, we see that (\[estobar\]) is turned into itself plus $$-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I}. 
\label{bacca}$$ Because of these new terms, $\bar{C}$ and $B$ now transform as ordinary matter fields under background transformations. This is the correct background transformation law we need for them. On the other hand, the nonminimal sector (\[estobar\]) also generates nontrivial quantum transformations for $\bar{C}$ and $B$, which are renormalized and complicate our derivations. It would be better to have (\[estobar\]) in the background sector and (\[esto\]) in the nonbackground sector. To achieve this goal, we make the canonical transformation generated by $$F_{\text{nm}}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \Phi ^{\alpha }K_{\alpha }^{\prime }+\int {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I\hspace{0.01in}\prime } \label{casbacca}$$ on (\[sback\]). Using (\[iddo\]) again, the result is $$\begin{aligned} S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }) &=&S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int (R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }))K_{\alpha } \nonumber \\ &&-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I}-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. 
\label{sbacca}\end{aligned}$$ This is the background field action we are going to work with. It is straightforward to check that (\[sbacca\]) satisfies $\llbracket S,S\rrbracket =0$. #### Separating the background and quantum sectors Now we separate the background sector from the quantum sector. To do this properly we need to make further assumptions. First, we assume that there exists a choice of field variables where the functions $R^{\alpha }(\Phi )$ are at most quadratic in $\Phi $. We call this the *linearity assumption*. It is equivalent to assuming that the gauge transformations $\delta _{\Lambda }\phi ^{i}=R_{c}^{i}(\phi ,\Lambda )$ of (\[lif\]) are linear functions of the fields $\phi $ and that closure is expressed by $\phi $-independent identities $[\delta _{\Lambda },\delta _{\Sigma }]=\delta _{[\Lambda ,\Sigma ]}$. The linearity assumption is satisfied by all gauge symmetries of physical interest, such as those of QED, non-Abelian Yang-Mills theory, quantum gravity and the Standard Model. On the other hand, it is not satisfied by other important symmetries, among which is supergravity, where the gauge transformations either close only on shell or are not linear in the fields. Second, we assume that the gauge algebra is irreducible, which ensures that the set $\Phi $ contains only ghosts and not ghosts of ghosts. 
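The content of the linearity assumption can be illustrated with matrices (a generic linear representation; purely illustrative, with randomly generated parameters). If $\delta _{\Lambda }\phi =\Lambda \phi $, the commutator of two transformations acts through the $\phi $-independent bracket $[\Lambda ,\Sigma ]$:

```python
import numpy as np

rng = np.random.default_rng(0)
Lam, Sig = rng.standard_normal((2, 4, 4))   # two gauge parameters as matrices
phi = rng.standard_normal(4)                # a field in a linear representation

delta = lambda M, v: M @ v   # linear gauge transformation delta_Lambda phi = Lambda phi

# closure: [delta_Lambda, delta_Sigma] phi = delta_[Lambda,Sigma] phi,
# with a bracket that does not involve phi
lhs = delta(Lam, delta(Sig, phi)) - delta(Sig, delta(Lam, phi))
rhs = delta(Lam @ Sig - Sig @ Lam, phi)

assert np.allclose(lhs, rhs)
```

For transformations nonlinear in $\phi $, the analogous commutator generically produces field-dependent composite parameters, which is precisely what the assumption excludes.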
Under these assumptions, we make the canonical transformation generated by $$F_{\tau }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \Phi ^{\alpha }K_{\alpha }^{\prime }+\int {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+(\tau -1)\int {\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{I}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I\hspace{0.01in}\prime } \label{backghost}$$ on the action (\[sbacca\]). This transformation amounts to rescaling the background ghosts ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{I}$ by a factor $\tau $ and their sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I}$ by a factor $1/\tau $. Since we do not have background antighosts, (\[backghost\]) is the background-ghost-number transformation combined with a rescaling of the background sources. The action (\[sbacca\]) is not invariant under (\[backghost\]). Using the linearity assumption it is easy to check that the transformed action $S_{\tau }$ is linear in $\tau $. Writing $S_{\tau }=\hat{S}+\tau \bar{S}$ we can split the total action $S$ into the sum $\hat{S}+\bar{S}$ of a *quantum action* $\hat{S}$ and a *background action* $\bar{S}$. Precisely, the quantum action $\hat{S}$ does not depend on the background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and the background ghosts ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$, but only on the background copies ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$ of the physical fields. 
We have $$\hat{S}=\hat{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },C,\bar{C},B)K_{\alpha }. \label{deco}$$ Note that, in spite of the notation, the functions $R^{\alpha }(\Phi )$ are actually $\bar{C}$ independent. Moreover, we find $$\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }, \label{sbar}$$ where, for $\phi $ and $C$, $$\mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })=R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-R^{\alpha }(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },C,\bar{C},B). \label{batra}$$ These functions transform $\phi $ and $C$ as if they were matter fields and are of course linear in $\Phi $ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$. Note that formula (\[batra\]) does not hold for antighosts and Lagrange multipliers. In the end all quantum fields transform as matter fields under background transformations. The master equation $\llbracket S,S\rrbracket =0$ decomposes into the three identities $$\llbracket \hat{S},\hat{S}\rrbracket =\llbracket \hat{S},\bar{S}\rrbracket =\llbracket \bar{S},\bar{S}\rrbracket =0, \label{treide}$$ which we call *background field master equations*. 
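One way to see the decomposition (\[treide\]) quickly: since (\[backghost\]) is a canonical transformation, the transformed action $S_{\tau }=\hat{S}+\tau \bar{S}$ satisfies the master equation for every value of $\tau $, so $$0=\llbracket S_{\tau },S_{\tau }\rrbracket =\llbracket \hat{S},\hat{S}\rrbracket +2\tau \llbracket \hat{S},\bar{S}\rrbracket +\tau ^{2}\llbracket \bar{S},\bar{S}\rrbracket ,$$ and each power of $\tau $ must vanish separately.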
The quantum transformations are described by $\hat{S}$ and the background ones are described by $\bar{S}$. Background fields are inert under quantum transformations, because $\llbracket \hat{S},{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }\rrbracket =0$. Note that $$\llbracket \hat{S},\llbracket \bar{S},X\rrbracket \rrbracket +\llbracket \bar{S},\llbracket \hat{S},X\rrbracket \rrbracket =0, \label{uso}$$ where $X$ is an arbitrary local functional. This property follows from the Jacobi identity of the antiparentheses and $\llbracket \hat{S},\bar{S}\rrbracket =0$, and states that background and quantum transformations commute. #### Gauge-fixing Now we come to the gauge fixing. In the usual approach, the theory is typically gauge-fixed by means of a canonical transformation that amounts to replacing the action $S$ by$\ S+(S,\Psi )$, where $\Psi $ is a local functional of ghost number $-1$ and depends only on the fields $\Phi $. Using the background field method it is convenient to search for a ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent gauge-fixing functional $\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })$ that is also invariant under background transformations, namely such that $$\llbracket \bar{S},\Psi \rrbracket =0. \label{backgf}$$ Then we fix the gauge with the usual procedure, namely we make a canonical transformation generated by $$F_{\text{gf}}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }). 
\label{backgfgen}$$ Because of (\[backgf\]) the gauge-fixed action reads $$S_{\text{gf}}=\hat{S}+\bar{S}+\llbracket \hat{S},\Psi \rrbracket . \label{fgback}$$ Defining $\hat{S}_{\text{gf}}=\hat{S}+\llbracket \hat{S},\Psi \rrbracket $, identities (\[treide\]), (\[uso\]) and (\[backgf\]) give $\llbracket \hat{S}_{\text{gf}},\hat{S}_{\text{gf}}\rrbracket =\llbracket \hat{S}_{\text{gf}},\bar{S}\rrbracket =0$, so it is just like gauge-fixing $\hat{S}$. Since both $\hat{S}$ and $\Psi $ are ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, $\hat{S}_{\text{gf}}$ is also ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent. Observe that the canonical transformations (\[backghost\]) and (\[backgfgen\]) commute; therefore we can safely apply the transformation (\[backghost\]) to the gauge-fixed action. A gauge fixing satisfying (\[backgf\]) is called *background-preserving gauge fixing*. In some derivations of this paper the background field master equations (\[treide\]) are violated in intermediate steps; therefore we need to prove properties that hold more generally. 
Specifically, consider an action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)+\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }), \label{assu}$$ equal to the sum of a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent “quantum action” $\hat{S}$, plus a “background action” $\bar{S}$ that satisfies the following requirements: ($i$) it is a linear function of the quantum fields $\Phi $, ($ii$) it gets multiplied by $\tau $ when applying the canonical transformation (\[backghost\]), and ($iii$) $\delta _{l}\bar{S}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent. In particular, requirement ($ii$) implies that $\bar{S}$ vanishes at ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$. Since $\bar{S}$ is a linear function of $\Phi $, it does not contribute to one-particle irreducible diagrams. Since $\hat{S}$ does not depend on ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$, while $\bar{S}$ vanishes at ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$, $\bar{S}$ receives no radiative corrections. Thus the $\Gamma$ functional associated with the action (\[assu\]) satisfies $$\Gamma (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{\Gamma}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)+\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }). 
\label{becco}$$ Moreover, thanks to theorem \[thb\] of the appendix we have the general identity $$\llbracket \Gamma ,\Gamma \rrbracket =\langle \llbracket S,S\rrbracket \rangle , \label{univ}$$ under the sole assumption that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent. Applying the canonical transformation (\[backghost\]) to $\Gamma $ we find $\Gamma _{\tau }=\hat{\Gamma}+\tau \bar{S}$, so (\[univ\]) gives the identities $$\llbracket \hat{\Gamma},\hat{\Gamma}\rrbracket =\langle \llbracket \hat{S},\hat{S}\rrbracket \rangle ,\qquad \llbracket \bar{S},\hat{\Gamma}\rrbracket =\langle \llbracket \bar{S},\hat{S}\rrbracket \rangle . \label{give}$$ When $\llbracket S,S\rrbracket =0$ we have $$\llbracket \Gamma ,\Gamma \rrbracket =\llbracket \hat{\Gamma},\hat{\Gamma}\rrbracket =\llbracket \bar{S},\hat{\Gamma}\rrbracket =0. \label{msb}$$ Observe that, thanks to the linearity assumption, an $\bar{S}$ equal to (\[sbar\]) satisfies the requirements of formula (\[assu\]). Now we give details about the background-preserving gauge fixing we pick for the action (\[sbacca\]). It is convenient to choose gauge-fixing functions $G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}$ that are linear in the quantum fields $\phi $, where $G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )$ may contain derivative operators. Precisely, we choose the gauge fermion $$\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })=\int \bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}, \label{psiback}$$ and assume that it satisfies (\[backgf\]). 
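For example, in background field Yang-Mills theory a gauge fermion of the type (\[psiback\]) is obtained by taking the gauge-fixing function to be the background-covariant divergence of the quantum fluctuation (we write the background gauge field as $\underline{A}_{\mu }^{a}$ here for brevity; conventions may differ from those of (\[seeym\])): $$\Psi =\int \bar{C}^{a}\hspace{0.01in}\underline{D}^{\mu }A_{\mu }^{a},\qquad \underline{D}_{\mu }A_{\nu }^{a}\equiv \partial _{\mu }A_{\nu }^{a}+gf^{abc}\underline{A}_{\mu }^{b}A_{\nu }^{c}.$$ This $\Psi $ is linear in the quantum field $A_{\mu }^{a}$ and satisfies (\[backgf\]), because both $A_{\mu }^{a}$ and $\bar{C}^{a}$ transform as matter fields under background transformations.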
A more common choice would be (see (\[seeym\]) for Yang-Mills theory) $$\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })=\int \bar{C}^{I}\left( G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}+\xi _{IJ}B^{J}\right) ,$$ where $\xi _{IJ}$ are gauge-fixing parameters. In this case, when we integrate the $B$ fields out the expressions $G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}$ get squared. However, (\[psiback\]) is better for our purposes, because it makes the canonical transformations (\[casbacca\]) and (\[backgfgen\]) commute with each other. We call the choice (\[psiback\]) *regular Landau gauge*. The gauge-field propagators coincide with the ones of the Landau gauge. Nevertheless, while the usual Landau gauge (with no $B$’s around) is singular, here gauge fields are part of multiplets that include the $B$’s, therefore (\[psiback\]) is regular. In the regular Landau gauge, using (\[backgf\]) and applying (\[backgfgen\]) to (\[sbacca\]) we find $$S_{\text{gf}}=\hat{S}_{\text{gf}}+\bar{S}=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },C,\bar{C},B)\tilde{K}_{\alpha }-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }, \label{sbaccagf}$$ where the tilde sources $\tilde{K}_{\alpha }$ coincide with $K_{\alpha }$ apart from $\tilde{K}_{\phi }^{i}$ and $\tilde{K}_{\bar{C}}^{I}$, which are $$\tilde{K}_{\phi }^{i}=K_{\phi }^{i}-\bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial }),\qquad 
\tilde{K}_{\bar{C}}^{I}=K_{\bar{C}}^{I}-G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}. \label{chif}$$ Recalling that the functions $R^{\alpha }(\Phi )$ are $\bar{C}$ independent, we see that $\hat{S}_{\text{gf}}$ does not depend on $K_{\phi }^{i}$ and $\bar{C}$ separately, but only through the combination $\tilde{K}_{\phi }^{i}$. Every one-particle irreducible diagram with $\bar{C}^{I}$ external legs actually factorizes a $-\bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial })$ on those legs. Replacing one or more such objects with $K_{\phi }^{i}$s, we obtain other contributing diagrams. Conversely, replacing one or more $K_{\phi }^{i}$-external legs with $-\bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial })$ we also obtain contributing diagrams. Therefore, all radiative corrections, as well as the renormalized action $\hat{S}_{R}$ and the $\Gamma $ functionals $\hat{\Gamma}$ and $\hat{\Gamma}_{R}$ associated with the action (\[sbaccagf\]), do not depend on $K_{\phi }^{i}$ and $\bar{C}$ separately, but only through the combination $\tilde{K}_{\phi }^{i}$. The only $B$-dependent terms of $\hat{S}_{\text{gf}}$, provided by $\llbracket S,\Psi \rrbracket $ and (\[esto\]), are $$\Delta S_{B}\equiv -\int B^{I}\tilde{K}_{\bar{C}}^{I}=\int B^{I}\left( G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}-K_{\bar{C}}^{I}\right) , \label{chio}$$ and are quadratic or linear in the quantum fields. For this reason, no one-particle irreducible diagrams can contain external $B$ legs, therefore $\Delta S_{B}$ is nonrenormalized and goes into $\hat{S}_{R}$, $\hat{\Gamma}$ and $\hat{\Gamma}_{R}$ unmodified. 
We thus learn that using linear gauge-fixing functions we can set $\bar{C}=B=0$ and later restore the correct $\bar{C}$ and $B$ dependencies in $\hat{S}_{\text{gf}}$, $\hat{S}_{R}$, $\hat{\Gamma}$ and $\hat{\Gamma}_{R}$ just by replacing $K_{\phi }^{i}$ with $\tilde{K}_{\phi }^{i}$ and adding $\Delta S_{B}$. From now on when no confusion can arise we drop the subscripts of $S_{\text{gf}}$ and $\hat{S}_{\text{gf}}$ and assume that the background field theory is gauge-fixed in the way just explained. Background-preserving canonical transformations ----------------------------------------------- It is useful to characterize the most general canonical transformations $\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }\rightarrow \Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ that preserve the background field master equations (\[treide\]) and the basic properties of $\hat{S}$ and $\bar{S}$. By definition, all canonical transformations preserve the antiparentheses, so (\[treide\]) are turned into $$\llbracket \hat{S}^{\prime },\hat{S}^{\prime }\rrbracket ^{\prime }=\llbracket \hat{S}^{\prime },\bar{S}^{\prime }\rrbracket ^{\prime }=\llbracket \bar{S}^{\prime },\bar{S}^{\prime }\rrbracket ^{\prime }=0. \label{tretre}$$ Moreover, $\hat{S}^{\prime }$ should be ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{\prime }$ independent, while $\bar{S}$ should be invariant, because it encodes the background transformations. 
This means $$\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }). \label{thesisback}$$ We prove that a canonical transformation defined by a generating functional of the form $$F(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }), \label{cannonaback}$$ where $Q$ is a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent local functional such that $$\llbracket \bar{S},Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)\rrbracket =0, \label{assumback}$$ satisfies our requirements. Since $Q$ is ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, the background fields and the sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}$ do not transform: ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime }={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{\prime }={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}$. 
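Explicitly, from (\[cannonaback\]) we find (omitting the distinction between left and right derivatives, which only affects signs) $${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha \hspace{0.01in}\prime }=\frac{\delta F}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }}={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha },\qquad {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I}=\frac{\delta F}{\delta {\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{I}}={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I\hspace{0.01in}\prime },$$ because $Q$ contains neither ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ nor ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$.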
Moreover, the action $\hat{S}^{\prime }$ is clearly ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{\prime }$ independent, as desired, so we just need to prove (\[thesisback\]). For convenience, multiply $Q$ by a constant parameter $\zeta $ and consider the canonical transformations generated by $$F_{\zeta }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\zeta Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }). \label{fg}$$ Given a functional $X(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })$ it is often useful to work with the tilde functional $$\tilde{X}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=X(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }),{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })), \label{tildedfback}$$ obtained by expressing the primed sources in terms of unprimed fields and sources. Assumption (\[assumback\]) tells us that $Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)$ is invariant under background transformations.
Since $\Phi ^{\alpha }$ and $K_{\beta }$ transform as matter fields under such transformations, it is clear that $\delta Q/\delta K_{\alpha }$ and $\delta Q/\delta \Phi ^{\beta }$ transform precisely like them, as well as $\Phi ^{\alpha \hspace{0.01in}\prime }$ and $K_{\beta }^{\prime }$. Moreover, we have $\llbracket \bar{S},\tilde{Q}\rrbracket =0$ for every $\zeta $. Applying theorem \[theorem5\] to $\chi =\bar{S}$ we obtain $$\frac{\partial ^{\prime }\bar{S}^{\prime }}{\partial \zeta }=\frac{\partial \bar{S}}{\partial \zeta }-\llbracket \bar{S},\tilde{Q}\rrbracket =\frac{\partial \bar{S}}{\partial \zeta }, \label{tocback}$$ where $\partial ^{\prime }/\partial \zeta $ is taken at constant primed variables and $\partial /\partial \zeta $ is taken at constant unprimed variables. If we treat the unprimed variables as $\zeta $ independent, and the primed variables as functions of them and $\zeta $, the right-hand side of (\[tocback\]) vanishes. Varying $\zeta $ from 0 to 1 we get $$\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\bar{S}^{\prime }(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }),$$ where now the relations among primed and unprimed variables are those specified by (\[cannonaback\]). We call the canonical transformations just defined *background-preserving canonical transformations*.
We stress once again that they do not just preserve the background field (${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime }={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$), but also the background transformations ($\bar{S}^{\prime }=\bar{S}$) and the ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independence of $\hat{S}$. The gauge-fixing canonical transformation (\[backgfgen\]) is background preserving. Canonical transformations may convert the sources $K$ into functions of both fields and sources. However, the sources are external, while the fields are integrated over. Thus, canonical transformations must be applied at the level of the action $S$, not at the level of generating functionals. In the functional integral they must be understood as mere replacements of integrands. Nevertheless, we recall that there exists a way [@fieldcov; @masterf; @mastercan] to upgrade the formalism of quantum field theory and overcome these problems. The upgraded formalism allows us to implement canonical transformations as true changes of field variables in the functional integral, and closely track their effects inside generating functionals, as well as throughout the renormalization algorithm. Renormalization =============== In this section we give the basic algorithm to subtract divergences to all orders. As usual, we proceed by induction in the number of loops and use the dimensional-regularization technique and the minimal subtraction scheme. We assume that gauge anomalies are manifestly absent, i.e. that the background field master equations (\[treide\]) hold exactly at the regularized level. We first work on the classical action $S=\hat{S}+\bar{S}$ of (\[sbaccagf\]) and define a background-preserving subtraction algorithm. Then we generalize the results to non-background-preserving actions.
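As a toy illustration of the minimal subtraction scheme, far simpler than the gauge-theory setting of this paper, consider the factor $\Gamma (\varepsilon )(\mu ^{2}/m^{2})^{\varepsilon }$ that appears in dimensionally regularized one-loop tadpole integrals ($d=4-2\varepsilon $). The following sketch (all names illustrative) isolates the $1/\varepsilon $ pole and removes it, and nothing else:

```python
import sympy as sp

eps = sp.symbols('epsilon')
m, mu = sp.symbols('m mu', positive=True)

# Toy dimensionally regularized one-loop factor (d = 4 - 2*eps):
# Gamma(eps) * (mu^2/m^2)^eps, as in a scalar tadpole integral.
amplitude = sp.gamma(eps) * (mu**2 / m**2)**eps

# Laurent expansion around eps = 0: a simple 1/eps pole plus a finite part.
expansion = sp.series(amplitude, eps, 0, 1).removeO()

# Minimal subtraction: the counterterm is the pure pole part, nothing else.
pole_part = expansion.coeff(eps, -1) / eps

# The renormalized (finite) value at this order.
renormalized = sp.expand(expansion - pole_part)

print(pole_part)
print(renormalized)
```

The subtraction $S_{n+1}=S_{n}-\Gamma _{n\text{ div}}^{(n+1)}$ defined below works in the same way, order by order in $\hbar $, with local counterterms playing the role of the simple pole above.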
Call $S_{n}$ and $\Gamma _{n}$ the action and the $\Gamma $ functional renormalized up to $n$ loops included, with $S_{0}=S$, and write the loop expansion as $$\Gamma _{n}=\sum_{k=0}^{\infty }\hbar ^{k}\Gamma _{n}^{(k)}.$$ The inductive assumptions are that $S_{n}$ has the form (\[assu\]), with $\bar{S}$ given by (\[sbar\]), and $$\begin{aligned} S_{n} &=&S+\text{poles},\qquad \Gamma _{n}^{(k)}<\infty ~~\forall k\leqslant n, \label{assu1} \\ \llbracket S_{n},S_{n}\rrbracket &=&\mathcal{O}(\hbar ^{n+1}),\qquad \llbracket \bar{S},S_{n}\rrbracket =0, \label{assu2}\end{aligned}$$ where “poles” refers to the divergences of the dimensional regularization. Clearly, the assumptions (\[assu1\]) and (\[assu2\]) are satisfied for $n=0$. Using formulas (\[give\]) and recalling that $\llbracket S_{n},S_{n}\rrbracket $ is a local insertion of order $\mathcal{O}(\hbar ^{n+1})$, we have $$\llbracket \Gamma _{n},\Gamma _{n}\rrbracket =\langle \llbracket S_{n},S_{n}\rrbracket \rangle =\llbracket S_{n},S_{n}\rrbracket +\mathcal{O}(\hbar ^{n+2}),\qquad \llbracket \bar{S},\Gamma _{n}\rrbracket =\langle \llbracket \bar{S},S_{n}\rrbracket \rangle =0. \label{gnback2}$$ By $\llbracket S,S\rrbracket =0$ and the first of (\[assu1\]), $\llbracket S_{n},S_{n}\rrbracket $ is made of pure poles. Now, take the order $\hbar ^{n+1}$ of equations (\[gnback2\]) and then their divergent parts. The second of (\[assu1\]) tells us that all subdivergences are subtracted away, so the order-$\hbar ^{n+1}$ divergent part $\Gamma _{n\text{ div}}^{(n+1)}$ of $\Gamma _{n}$ is a local functional. We obtain $$\llbracket S,\Gamma _{n\text{ div}}^{(n+1)}\rrbracket =\frac{1}{2}\llbracket S_{n},S_{n}\rrbracket +\mathcal{O}(\hbar ^{n+2}),\qquad \llbracket \bar{S},\Gamma _{n\text{ div}}^{(n+1)}\rrbracket =0. \label{gn2back}$$ Define $$S_{n+1}=S_{n}-\Gamma _{n\text{ div}}^{(n+1)}.
\label{snp1back}$$ Since $S_{n}$ has the form (\[assu\]), $\Gamma _{n}$ has the form (\[becco\]), therefore both $\hat{\Gamma}_{n}$ and $\Gamma _{n\text{ div}}^{(n+1)}$ are ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, which ensures that $S_{n+1}$ has the form (\[assu\]) (with $\bar{S}$ given by (\[sbar\])). Moreover, the first inductive assumption of (\[assu1\]) is promoted to $S_{n+1}$. The diagrams constructed with the vertices of $S_{n+1} $ are the diagrams of $S_{n}$, plus new diagrams containing vertices of $-\Gamma _{n\text{ div}}^{(n+1)}$; therefore $$\Gamma _{n+1}^{(k)}=\Gamma _{n}^{(k)}<\infty ~~\forall k\leqslant n,\qquad \Gamma _{n+1}^{(n+1)}=\Gamma _{n}^{(n+1)}-\Gamma _{n\text{ div}}^{(n+1)}<\infty ,$$ which promotes the second inductive assumption of (\[assu1\]) to $n+1$ loops. Finally, formulas (\[gn2back\]) and (\[snp1back\]) give $$\llbracket S_{n+1},S_{n+1}\rrbracket =\llbracket S_{n},S_{n}\rrbracket -2\llbracket S,\Gamma _{n\text{ div}}^{(n+1)}\rrbracket +\mathcal{O}(\hbar ^{n+2})=\mathcal{O}(\hbar ^{n+2}),\qquad \llbracket \bar{S},S_{n+1}\rrbracket =0,$$ so (\[assu2\]) are also promoted to $n+1$ loops. We conclude that the renormalized action $S_{R}=S_{\infty }$ and the renormalized generating functional $\Gamma _{R}=\Gamma _{\infty }$ satisfy the background field master equations $$\llbracket S_{R},S_{R}\rrbracket =\llbracket \bar{S},S_{R}\rrbracket =0,\qquad \llbracket \Gamma _{R},\Gamma _{R}\rrbracket =\llbracket \bar{S},\Gamma _{R}\rrbracket =0. 
\label{finback}$$ For later convenience we write down the form of $S_{R}$, which is $$S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)+\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. \label{sr1}$$ In the usual (non-background field) approach the results just derived hold if we just ignore background fields and sources, as well as background transformations, and use the standard parentheses $(X,Y)$ instead of $\llbracket X,Y\rrbracket $. Then the subtraction algorithm starts with a classical action $S(\Phi ,K)$ that satisfies the usual master equation $(S,S)=0$ exactly at the regularized level and ends with a renormalized action $S_{R}(\Phi ,K)=S_{\infty }(\Phi ,K)$ and a renormalized generating functional $\Gamma _{R}(\Phi ,K)=\Gamma _{\infty }(\Phi ,K)$ that satisfy the usual master equations $(S_{R},S_{R})=(\Gamma _{R},\Gamma _{R})=0$. 
In the presence of background fields ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ and background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$, ignoring invariance under background transformations (encoded in the parentheses $\llbracket \bar{S},S\rrbracket $, $\llbracket \bar{S},S_{n}\rrbracket $, $\llbracket \bar{S},S_{R}\rrbracket $ and similar ones for the $\Gamma $ functionals), we can generalize the results found above to any classical action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that satisfies $\llbracket S,S\rrbracket =0$ at the regularized level and is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent. Indeed, these assumptions allow us to apply theorem \[thb\], instead of formulas (\[give\]), which is enough to go through the subtraction algorithm ignoring the parentheses $\llbracket \bar{S},X\rrbracket $. We have $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}\Gamma /\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}S_{n}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ for every $n$. 
Thus, we conclude that a classical action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that satisfies $\llbracket S,S\rrbracket =0$ at the regularized level and is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent gives a renormalized action $S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ and a $\Gamma $ functional $\Gamma _{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that satisfy $\llbracket S_{R},S_{R}\rrbracket =\llbracket \Gamma _{R},\Gamma _{R}\rrbracket =0$ and $\delta _{l}S_{R}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}\Gamma _{R}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$. The renormalization algorithm of this section is a generalization to the background field method of the procedure first given in ref. [@lavrov]. Since it subtracts divergences just as they come, as emphasized by formula (\[snp1back\]), we call it “raw” subtraction [@regnocoho], to distinguish it from algorithms where divergences are subtracted away at each step by means of parameter redefinitions and canonical transformations. The raw subtraction does not ensure RG invariance [@regnocoho], because it subtracts divergent terms even when there is no (running) parameter associated with them. For the same reason, it tells us very little about parametric completeness.
In power-counting renormalizable theories the raw subtraction is satisfactory, since we can start from a classical action $S_{c}$ that already contains all gauge-invariant terms that are generated back by renormalization. Nevertheless, in nonrenormalizable theories, such as quantum gravity, effective field theories and nonrenormalizable extensions of the Standard Model, in principle renormalization can modify the symmetry transformations in physically observable ways (see ref. [@regnocoho] for a discussion about this possibility). In section 5 we prove that this actually does not happen under the assumptions we have made in this paper; namely when gauge anomalies are manifestly absent, the gauge algebra is irreducible and closes off shell, and $R^{\alpha }(\Phi )$ are at most quadratic functions of the fields $\Phi $. Precisely, renormalization affects the symmetry only by means of canonical transformations and parameter redefinitions. Then, to achieve parametric completeness it is sufficient to include all gauge-invariant terms in the classical action $S_{c}(\phi )$, as classified by the starting gauge symmetry. The background field method is crucial to prove this result without invoking involved cohomological classifications. Gauge dependence ================ In this section we study the dependence on the gauge fixing and the renormalization of canonical transformations. We first derive the differential equations that govern gauge dependence; then we integrate them and finally use the outcome to describe the renormalized canonical transformation that switches between the background field approach and the conventional approach. These results will be useful in the next section to prove parametric completeness. The parameters of a canonical transformation are associated with changes of field variables and changes of gauge fixing. For brevity we call all of them “gauge-fixing parameters” and denote them with $\xi $.
Let (\[cannonaback\]) be a tree-level canonical transformation satisfying (\[assumback\]). We write $Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi )$ to emphasize the $\xi $ dependence of $Q$. We prove that for every gauge-fixing parameter $\xi $ there exists a local ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent functional $Q_{R,\xi }$ such that $$Q_{R,\xi }=\widetilde{Q_{\xi }}+\mathcal{O}(\hbar )\text{-poles},\qquad \langle Q_{R,\xi }\rangle <\infty , \label{babaoback}$$ and $$\frac{\partial S_{R}}{\partial \xi }=\llbracket S_{R},Q_{R,\xi }\rrbracket ,\qquad \llbracket \bar{S},Q_{R,\xi }\rrbracket =0,\qquad \frac{\partial \Gamma _{R}}{\partial \xi }=\llbracket \Gamma _{R},\langle Q_{R,\xi }\rangle \rrbracket , \label{backgind}$$ where $Q_{\xi }=\partial Q/\partial \xi $, $\widetilde{Q_{\xi }}$ is defined as shown in (\[tildedfback\]) and the average is calculated with the action $S_{R}$. We call the first and last equations of the list (\[backgind\]) *differential equations of* *gauge dependence*. They ensure that renormalized functionals depend on gauge-fixing parameters in a cohomologically exact way. Later we integrate equations (\[backgind\]) and move every gauge dependence inside a (renormalized) canonical transformation. A consequence is that physical quantities are gauge independent. We derive (\[backgind\]) proceeding inductively in the number of loops, as usual. 
The inductive assumption is that there exists a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent local functional $Q_{n,\xi }=\widetilde{Q_{\xi }}+\mathcal{O}(\hbar )$-poles such that $\langle Q_{n,\xi }\rangle $ is convergent up to the $n$th loop included (the average being calculated with the action $S_{n}$) and $$\frac{\partial S_{n}}{\partial \xi }=\llbracket S_{n},Q_{n,\xi }\rrbracket +\mathcal{O}(\hbar ^{n+1}),\qquad \llbracket \bar{S},Q_{n,\xi }\rrbracket =0. \label{inda1back}$$ Applying the identity (\[thesis\]), which here holds with the parentheses $\llbracket X,Y\rrbracket $, we easily see that $Q_{0,\xi }=\widetilde{Q_{\xi }}$ satisfies (\[inda1back\]) for $n=0$. Indeed, taking $\chi =S$ and noting that $\left. \partial S^{\prime }/\partial \xi \right| _{\Phi ^{\prime },K^{\prime }}=0$, since the parameter $\xi $ is absent before the transformation (a situation that we describe using primed variables), we get the first relation of (\[inda1back\]), without $\mathcal{O}(\hbar )$ corrections. Applying (\[thesis\]) to $\chi =\bar{S}$ and recalling that $\bar{S}$ is invariant, we get the second relation of (\[inda1back\]). Let $Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}$ denote the $\mathcal{O}(\hbar ^{n+1})$ divergent part of $\langle Q_{n,\xi }\rangle $. The inductive assumption ensures that all subdivergences are subtracted away, so $Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}$ is local. Define $$Q_{n+1,\xi }=Q_{n,\xi }-Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}. \label{refdback}$$ Clearly, $Q_{n+1,\xi }$ is ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent and equal to $\widetilde{Q_{\xi }}+\mathcal{O}(\hbar )$-poles. 
Moreover, by construction $\langle Q_{n+1,\xi }\rangle $ is convergent up to the $(n+1)$-th loop included, where the average is calculated with the action $S_{n+1}$. Now, corollary \[corolla\] tells us that $\llbracket \bar{S},Q_{n,\xi }\rrbracket =0$ and $\llbracket \bar{S},S_{n}\rrbracket =0$ imply $\llbracket \bar{S},\langle Q_{n,\xi }\rangle \rrbracket =0$. Taking the $\mathcal{O}(\hbar ^{n+1})$ divergent part of this formula we obtain $\llbracket \bar{S},Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}\rrbracket =0$; therefore the second formula of (\[inda1back\]) is promoted to $n+1$ loops. Applying corollary \[cora\] to $\Gamma _{n}$ and $S_{n}$, with $X=Q_{n,\xi }$, we have the identity $$\frac{\partial \Gamma _{n}}{\partial \xi }=\llbracket \Gamma _{n},\langle Q_{n,\xi }\rangle \rrbracket +\left\langle \frac{\partial S_{n}}{\partial \xi }-\llbracket S_{n},Q_{n,\xi }\rrbracket \right\rangle +\frac{1}{2}\left\langle \llbracket S_{n},S_{n}\rrbracket \hspace{0.01in}Q_{n,\xi }\right\rangle _{\Gamma }, \label{provef}$$ where $\left\langle AB\right\rangle _{\Gamma }$ denotes the one-particle irreducible diagrams with one $A$ insertion and one $B$ insertion. Now, observe that if $A=\mathcal{O}(\hbar ^{n_{A}})$ and $B=\mathcal{O}(\hbar ^{n_{B}})$ then $\left\langle AB\right\rangle _{\Gamma }=\mathcal{O}(\hbar ^{n_{A}+n_{B}+1})$, since the $A,B$ insertions can be connected only by loops. Let us take the $\mathcal{O}(\hbar ^{n+1})$ divergent part of (\[provef\]). By the inductive assumption (\[assu2\]), the last term of (\[provef\]) can be neglected. By the inductive assumption (\[inda1back\]) we can drop the average in the second-to-last term. 
We thus get $$\frac{\partial \Gamma _{n\ \text{div}}^{(n+1)}}{\partial \xi }=\llbracket \Gamma _{n\ \text{div}}^{(n+1)},Q_{0,\xi }\rrbracket +\llbracket S,Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}\rrbracket +\frac{\partial S_{n}}{\partial \xi }-\llbracket S_{n},Q_{n,\xi }\rrbracket +\mathcal{O}(\hbar ^{n+2}).$$ Using this fact, (\[snp1back\]) and (\[refdback\]) we obtain $$\frac{\partial S_{n+1}}{\partial \xi }=\llbracket S_{n+1},Q_{n+1,\xi }\rrbracket +\mathcal{O}(\hbar ^{n+2}), \label{sunpi}$$ which promotes the first inductive hypothesis of (\[inda1back\]) to order $\hbar ^{n+1}$. When $n$ is taken to infinity, the first two formulas of (\[backgind\]) follow, with $Q_{R,\xi }=Q_{\infty ,\xi }$. The third identity of (\[backgind\]) follows from the first one, using (\[provef\]) with $n=\infty $ and $\llbracket \hat{S}_{R},\hat{S}_{R}\rrbracket =0$. This concludes the derivation of (\[backgind\]). Integrating the differential equations of gauge dependence {#integrating} ---------------------------------------------------------- Now we integrate the first two equations of (\[backgind\]) and find the renormalized canonical transformation that corresponds to a tree-level transformation (\[cannonaback\]) satisfying (\[assumback\]). 
Specifically, we prove that There exists a background-preserving canonical transformation $$F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{A}K_{A}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{A}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{A}^{\prime }+Q_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi ), \label{finalcanback}$$ where $Q_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi )=Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi )+\mathcal{O}(\hbar )$ is a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent local functional, such that the transformed action $S_{f}(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu },\xi )$ is $\xi $ independent and invariant under background transformations: $$\frac{\partial S_{f}}{\partial \xi }=0,\qquad \llbracket \bar{S},S_{f}\rrbracket =0. \label{gindep2back}$$ *Proof*. To prove this statement we introduce a new parameter $\zeta $ multiplying the whole functional $Q$ of (\[cannonaback\]), as in (\[fg\]). We know that $\llbracket \bar{S},Q\rrbracket =0$ implies $\llbracket \bar{S},\tilde{Q}\rrbracket =0$. If we prove that the $\zeta $ dependence can be reabsorbed into a background-preserving canonical transformation we also prove the same result for every gauge-fixing parameter $\xi $ and also for all of them together. 
The differential equations of gauge dependence found above obviously apply with $\xi \rightarrow \zeta $. Specifically, we show that the $\zeta $ dependence can be reabsorbed in a sequence of background-preserving canonical transformations $S_{R\hspace{0.01in}n}\rightarrow S_{R\hspace{0.01in}n+1}$ (with $S_{R\hspace{0.01in}0}=S_{R}$), generated by $$F_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{A}K_{A}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{A}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{A}^{\prime }+H_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ), \label{fn}$$ where $H_{n}=\mathcal{O}(\hbar ^{n})$, and such that $$\frac{\partial S_{R\hspace{0.01in}n}}{\partial \zeta }=\llbracket S_{R\hspace{0.01in}n},T_{n}\rrbracket ,\qquad T_{n}=\mathcal{O}(\hbar ^{n}). \label{tn}$$ The functionals $T_{n}$ and $H_{n}$ are determined by the recursive relations $$\begin{aligned} T_{n+1}(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ) &=&T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K,\zeta )-\widetilde{\frac{\partial H_{n}}{\partial \zeta }}, \label{d1} \\ H_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ) &=&\int_{0}^{\zeta }d\zeta ^{\prime }\hspace{0.01in}T_{n,n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ^{\prime }), \label{d2}\end{aligned}$$ with the initial conditions $$T_{0}=Q_{R,\zeta },\qquad H_{0}=\zeta Q.$$ In formula (\[d1\]) the tilde operation (\[tildedfback\]) on $\partial H_{n}/\partial \zeta $ and the canonical transformation $\Phi ,K\rightarrow \Phi ^{\prime },K^{\prime }$ are the ones defined by $F_{n}$. 
In formula (\[d2\]) $T_{n,n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime })$ denotes the contributions of order $\hbar ^{n}$ to $T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }))$, the function $K(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime })$ also being determined by $F_{n}$. Note that for $n>0$ we have $T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }))=T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime })+\mathcal{O}(\hbar ^{n+1})$, therefore formula (\[d2\]), which determines $H_{n}$ (and so $F_{n}$), does not really need $F_{n}$ on the right-hand side. Finally, (\[d2\]) is self-consistent for $n=0$. Formula (\[thesis\]) of the appendix describes how the dependence on parameters is modified by a canonical transformation. Applying it to (\[tn\]), we get $$\frac{\partial S_{R\hspace{0.01in}n+1}}{\partial \zeta }=\frac{\partial S_{R\hspace{0.01in}n}}{\partial \zeta }-\llbracket S_{R\hspace{0.01in}n},\widetilde{\frac{\partial H_{n}}{\partial \zeta }}\rrbracket =\llbracket S_{R\hspace{0.01in}n},T_{n}-\widetilde{\frac{\partial H_{n}}{\partial \zeta }}\rrbracket ,$$ whence (\[d1\]) follows. For $n=0$ the first formula of (\[babaoback\]) gives $T_{0}=\widetilde{Q}+\mathcal{O}(\hbar )$, therefore $T_{1}=\mathcal{O}(\hbar )$. Then (\[d2\]) gives $H_{1}=\mathcal{O}(\hbar )$. For $n>0$ the order $\hbar ^{n}$ of $T_{n+1}$ vanishes by formula (\[d2\]); therefore $T_{n+1}=\mathcal{O}(\hbar ^{n+1})$ and $H_{n+1}=\mathcal{O}(\hbar ^{n+1})$, as desired. Consequently, $S_{f}\equiv S_{R\hspace{0.01in}\infty }$ is $\zeta $ independent, since (\[tn\]) implies $\partial S_{R\hspace{0.01in}\infty }/\partial \zeta =0$. 
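The order counting at the heart of this argument can be mimicked by a small sketch of ours, in which the tilde operation is approximated by the identity (its corrections are one order higher in $\hbar $, which is all the argument uses): each step of (\[d1\])-(\[d2\]) cancels the lowest surviving order of $T_{n}$, so $T_{\infty }=0$ and hence $\partial S_{R\hspace{0.01in}\infty }/\partial \zeta =0$.

```python
# Toy order counting for the recursion (d1)-(d2).  T_n is modeled as a
# list of zeta-independent coefficients indexed by the power of hbar;
# the tilde operation is approximated by the identity.

def step(T, n):
    """One step of (d1)-(d2): dH_n/dzeta = T_{n,n} cancels the
    order-hbar^n component of T_n."""
    out = list(T)
    out[n] = 0
    return out

T = [5.0, -3.0, 2.0, 1.0]   # made-up coefficients of hbar^0 .. hbar^3
for n in range(len(T)):
    # inductive hypothesis: T_n = O(hbar^n)
    assert all(c == 0 for c in T[:n])
    T = step(T, n)

print(T)  # every order has been removed: T_infinity = 0
```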
Observe that ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independence is preserved at each step. Finally, all operations defined by (\[d1\]) and (\[d2\]) are background preserving. We conclude that the canonical transformation $F_{R}$ obtained composing the $F_{n}$s solves the problem. Using (\[gindep2back\]) and (\[give\]) we conclude that in the new variables $$\frac{\partial \Gamma _{f}}{\partial \xi }=\left\langle \frac{\partial S_{f}}{\partial \xi }\right\rangle =0\qquad \llbracket \bar{S},\Gamma _{f}\rrbracket =0, \label{finalmenteback}$$ for all gauge-fixing parameters $\xi $. Non-background-preserving canonical transformations --------------------------------------------------- In the usual approach the results derived so far apply with straightforward modifications. It is sufficient to ignore the background fields and sources, as well as the background transformations, and use the standard parentheses $(X,Y)$ instead of $\llbracket X,Y\rrbracket $. Thus, given a tree-level canonical transformation generated by $$F(\Phi ,K^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+Q(\Phi ,K^{\prime },\xi ), \label{f1}$$ there exists a local functional $Q_{R,\xi }$ satisfying (\[babaoback\]) such that $$\frac{\partial S_{R}}{\partial \xi }=(S_{R},Q_{R,\xi }),\qquad \frac{\partial \Gamma _{R}}{\partial \xi }=(\Gamma _{R},\langle Q_{R,\xi }\rangle ), \label{br}$$ and there exists a renormalized canonical transformation $$F_{R}(\Phi ,K^{\prime })=\int \ \Phi ^{A}K_{A}^{\prime }+Q_{R}(\Phi ,K^{\prime },\xi ), \label{f2}$$ where $Q_{R}(\Phi ,K^{\prime },\xi )=Q(\Phi ,K^{\prime },\xi )+\mathcal{O}(\hbar )$ is a local functional, such that the transformed action $S_{f }(\Phi ^{\prime },K^{\prime })=S_{R}(\Phi ,K,\xi )$ is $\xi $ independent. 
Said differently, the entire $\xi $ dependence of $S_{R}$ is reabsorbed into the transformation: $$S_{R}(\Phi ,K,\xi )=S_{f}(\Phi ^{\prime }(\Phi ,K,\xi ),K^{\prime }(\Phi ,K,\xi )).$$ In the presence of background fields ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ and background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$, dropping assumption (\[assumback\]) and ignoring invariance under background transformations, encoded in the parentheses $\llbracket \bar{S},X\rrbracket $, the results found above can be easily generalized to any classical action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that solves $\llbracket S,S\rrbracket =0$ and is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent, and to any ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent canonical transformation. Indeed, these assumptions are enough to apply theorem \[thb\] and corollary \[cora\], and go through the derivation ignoring the parentheses $\llbracket \bar{S},X\rrbracket $. The tree-level canonical transformation is described by a generating functional of the form $$F(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },K^{\prime },\xi ). 
\label{fa1}$$ We still find the differential equations $$\frac{\partial S_{R}}{\partial \xi }=\llbracket S_{R},Q_{R,\xi }\rrbracket ,\qquad \frac{\partial \Gamma _{R}}{\partial \xi }=\llbracket \Gamma _{R},\langle Q_{R,\xi }\rangle \rrbracket , \label{eqw}$$ where $Q_{R,\xi }$ satisfies (\[babaoback\]). When we integrate the first of these equations with the procedure defined above we build a renormalized canonical transformation $$F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha}K_{\alpha}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha}^{\prime }+Q_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },K^{\prime },\xi ), \label{fra1}$$ where $Q_{R}=Q+\mathcal{O}(\hbar )$ is a local functional, such that the transformed action $S_{f}(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu },\xi )$ is $\xi $ independent. The only difference is that now $Q_{R,\xi }$, $\langle Q_{R,\xi }\rangle $, $T_{n}$, $H_{n}$ and $Q_{R}$ can depend on ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$, which does not disturb any of the arguments used in the derivation. Canonical transformations in $\Gamma$ ------------------------------------- We have integrated the first equation of (\[eqw\]), and shown that the $\xi$ dependence can be reabsorbed in the canonical transformation (\[fra1\]) on the renormalized action $S_{R}$, which gives the $\xi$-independent action $S_{\rm f}$.
We know that the generating functional $\Gamma_{\rm f}$ of one-particle irreducible Green functions determined by $S_{\rm f}$ is $\xi$ independent. We can also prove that $\Gamma_{\rm f}$ can be obtained applying a (non-local) canonical transformation directly on $\Gamma_{R}$. To achieve this goal we integrate the second equation of (\[eqw\]). The integration algorithm is the same as the one of subsection \[integrating\], with the difference that $Q_{R,\xi}$ is replaced by $\langle Q_{R,\xi}\rangle$. The canonical transformation on $\Gamma_{R}$ has a generating functional of the form $$F_{\Gamma}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha}K_{\alpha}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha}^{\prime }+Q_{\Gamma}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },K^{\prime },\xi ), \label{fra2}$$ where $Q_{\Gamma}$ is equal to $Q$ plus ${\cal O}(\hbar)$ (non-local) radiative corrections. The result just obtained is actually more general, and proves that if $S$ is any action that solves the master equation (it can be the classical action, the renormalized action, or any other action) canonical transformations on $S$ correspond to canonical transformations on the $\Gamma$ functional determined by $S$. See [@quadri] for a different derivation of this result in Yang-Mills theory.
Our line of reasoning can be recapitulated as follows: in the usual approach, ($i$) make a canonical transformation (\[f1\]) on $S$; ($ii$) derive the equations of gauge dependence for the action, which are $\partial S/\partial\xi=( S,Q_{\xi}) $; ($iii$) derive the equations of gauge dependence for the $\Gamma$ functional determined by $S$, which are $\partial \Gamma/\partial\xi=( \Gamma,\langle Q_{\xi}\rangle)$, and integrate them. The property just mentioned may sound obvious, and is often taken for granted, but actually needed to be proved. The reason is that the canonical transformations we are talking about are not true changes of field variables inside functional integrals, but mere replacements of integrands [@fieldcov]. Therefore, we cannot automatically infer how a transformation on the action $S$ affects the generating functionals $Z$, $W=\ln Z$ and $\Gamma$, and need to make some additional effort to get where we want. We recall that to skip this kind of supplementary analysis we need to use the formalism of the master functional, explained in refs. [@masterf; @mastercan]. Application ----------- An interesting application that illustrates the results of this section is the comparison between the renormalized action (\[sr1\]), which was obtained with the background field method and the raw subtraction procedure of section 3, and the renormalized action $S_{R}^{\prime }$ that can be obtained with the same raw subtraction in the usual non-background field approach. The usual approach is retrieved by picking a gauge fermion $\Psi ^{\prime }$ that depends on $\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, such as $$\Psi ^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })=\int \bar{C}^{I}G^{Ii}(0,\partial )(\phi ^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}). 
\label{gfno}$$ Making the canonical transformation generated by $$F_{\text{gf}}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\Psi ^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }) \label{backgfgen2}$$ on (\[sback\]) we find the classical action $$S^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K)+\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }), \label{sr2c}$$ where $$\hat{S}^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\bar{K}_{\alpha },\qquad \bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }-K_{\alpha }), \label{sr2gf}$$ and the barred sources $\bar{K}_{\alpha }$ coincide with $K_{\alpha }$ apart from $\bar{K}_{\phi }^{i}$ and $\bar{K}_{\bar{C}}^{I}$, which are $$\bar{K}_{\phi }^{i}=K_{\phi }^{i}-\bar{C}^{I}G^{Ii}(0,-\overleftarrow{\partial }),\qquad \bar{K}_{\bar{C}}^{I}=K_{\bar{C}}^{I}-G^{Ii}(0,\partial )(\phi 
^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}). \label{chif2}$$ Clearly, $\hat{S}^{\prime }$ is the gauge-fixed classical action of the usual approach, apart from the shift $\Phi \rightarrow \Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$. The radiative corrections are generated only by $\hat{S}^{\prime }$ and do not affect $\bar{S}^{\prime }$. Indeed, $\hat{S}^{\prime }$ as well as the radiative corrections are unaffected by setting ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }=0$ and then shifting $\Phi $ back to $\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, while $\bar{S}^{\prime }$ disappears doing this. Thus, $\bar{S}^{\prime }$ is nonrenormalized, and the renormalized action $S_{R}^{\prime }$ has the form $$S_{R}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}_{R}^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K)+\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }). \label{sr2}$$ Now we compare the classical action (\[sbaccagf\]) of the background field method with the classical action (\[sr2c\]) of the usual approach. We recapitulate how they are obtained with the help of the following schemes: $$\begin{tabular}{cccccccccc} (\ref{sback}) & $\stackrel{(\ref{casbacca})}{\mathrel{\scalebox{2.5}[1]{$\longrightarrow$}}}$ & (\ref{sbacca}) & $\stackrel{(\ref{backgfgen})}{\mathrel{\scalebox{2.5}[1]{$\longrightarrow$}}}$ & $S=$ (\ref{sbaccagf}) & \qquad \qquad & (\ref{sback}) & $\stackrel{(\ref{backgfgen2})}{\mathrel{\scalebox{2.5}[1]{$ \longrightarrow$}}}$ & $S^{\prime }=$ (\ref {sr2gf}). 
& \end{tabular}$$ Above the arrows we have put references to the corresponding canonical transformations, which are (\[casbacca\]), (\[backgfgen\]) and (\[backgfgen2\]) and commute with one another. We can interpolate between the classical actions (\[sbaccagf\]) and (\[sr2gf\]) by means of a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent non-background-preserving canonical transformation generated by $$F_{\xi }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\xi \Delta \Psi +\xi \int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I\hspace{0.01in}\prime }, \label{ds}$$ where $\xi $ is a gauge-fixing parameter that varies from 0 to 1, and $$\Delta \Psi =\int \bar{C}^{I}\left( G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}-G^{Ii}(0,\partial )(\phi ^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i})\right) .$$ Precisely, start from the non-background field theory (\[sr2c\]), and take its variables to be primed ones. We know that $\hat{S}^{\prime }$ depends on the combination $$\tilde{K}_{\phi }^{i\hspace{0.01in}\prime }=K_{\phi }^{i\hspace{0.01in}\prime }-\bar{C}^{I\hspace{0.01in}}G^{Ii}(0,-\overleftarrow{\partial }), \label{kip}$$ and we have $\bar{C}^{I\hspace{0.01in}}=\bar{C}^{I\hspace{0.01in}\prime }$.
Expressing the primed fields and sources in terms of the unprimed ones and $\xi $, we find the interpolating classical action $$S_{\xi }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\tilde{K}_{\alpha }(\xi )-\xi \int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\tilde{K}_{\bar{C}}^{I}(\xi )-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{\tilde{K}}\mkern-2mu}\mkern2mu }_{\alpha }(\xi )-\tilde{K}_{\alpha }(\xi )), \label{sx}$$ where $\tilde{K}_{C}^{I}(\xi )=K_{C}^{I}$, $$\tilde{K}_{\phi }^{i}(\xi )=K_{\phi }^{i}-\xi \bar{C}^{I\hspace{0.01in}}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial })-(1-\xi )\bar{C}^{I\hspace{0.01in}}G^{Ii}(0,-\overleftarrow{\partial }), \label{convex}$$ while the other $\xi $-dependent tilde sources have expressions that we do not need to report here. It suffices to say that they are $K_{\phi }^{i}$ independent, such that $\delta _{r}S_{\xi }/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=-R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })$, and linear in the quantum fields $\Phi $, apart from ${\mkern2mu\underline{\mkern-2mu\smash{\tilde{K}}\mkern-2mu}\mkern2mu }_{\phi }^{i}(\xi )$, which is quadratic. Thus the action $S_{\xi }$ and the transformation $F_{\xi }$ satisfy the assumptions that allow us to apply theorem \[thb\] and corollary \[cora\]. Actually, (\[ds\]) is of type (\[fa1\]); therefore we have the differential equations (\[eqw\]) and the renormalized canonical transformation (\[fra1\]). 
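As a quick consistency check, immediate from (\[convex\]) though not spelled out above, the source combination interpolates between the two approaches at the endpoints of the $\xi $ interval: $$\tilde{K}_{\phi }^{i}(0)=K_{\phi }^{i}-\bar{C}^{I}G^{Ii}(0,-\overleftarrow{\partial }),\qquad \tilde{K}_{\phi }^{i}(1)=K_{\phi }^{i}-\bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial }),$$ i.e. the combination of the usual approach (the unprimed analogue of (\[kip\])) at $\xi =0$ and the background-field combination at $\xi =1$.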
We want to better characterize the renormalized version $F_{R}$ of $F_{\xi }$. We know that the derivative of the renormalized $\Gamma $ functional with respect to $\xi $ is governed by the renormalized version of the average $$\left\langle \widetilde{\frac{\partial F_{\xi }}{\partial \xi }}\right\rangle =\langle \Delta \Psi \rangle +\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I}.$$ It is easy to see that $\langle \Delta \Psi \rangle $ is independent of ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$, $B$, $K_{\bar{C}}$ and $K_{B}$, since no one-particle irreducible diagrams with such external legs can be constructed. In particular, $K_{B}^{I\hspace{0.01in}\prime }=K_{B}^{I}$. Moreover, using the explicit form (\[sx\]) of the action $S_{\xi }$ and arguments similar to the ones that lead to formulas (\[chif\]), we easily see that $\langle \Delta \Psi \rangle $ is equal to $\Delta \Psi $ plus a functional that does not depend on $K_{\phi }^{i}$ and $\bar{C}^{I}$ separately, but only on the convex combination (\[convex\]). Indeed, the $\bar{C}$-dependent terms of (\[sx\]) that do not fit into the combination (\[convex\]) are $K_{\phi }^{i}$ independent and at most quadratic in the quantum fields, so they cannot generate one-particle irreducible diagrams that have either $K_{\phi }^{i}$ or $\bar{C}$ on the external legs. Clearly, the renormalization of $\langle \Delta \Psi \rangle $ also satisfies the properties just stated for $\langle \Delta \Psi \rangle $. Following the steps of the previous section we can integrate the $\xi $ derivative and reconstruct the full canonical transformation. However, formula (\[d2\]) shows that the integration over $\xi $ must be done by keeping fixed the unprimed fields $\Phi $ and the primed sources $K^{\prime } $. 
When we do this for the zeroth canonical transformation $F_{0}$ of (\[fn\]), the combination $\tilde{K}_{\phi }^{i}(\xi )$ is turned into (\[kip\]), which is $\xi $ independent. Every other transformation $F_{n}$ of (\[fn\]) preserves the combination (\[kip\]), so the integrated canonical transformation does not depend on $K_{\phi }^{i}$ and $\bar{C}^{I}$ separately, but only on the combination $\tilde{K}_{\phi }^{i\hspace{0.01in}\prime }$, and the generating functional of the renormalized version $F_{R}$ of $F_{\xi }$ has the form $$F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\xi \Delta \Psi +\xi \int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I\hspace{0.01in}\prime }+\Delta F_{\xi }(\phi ,C,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\tilde{K}_{\phi }^{\prime },K_{C}^{\prime },\xi ). \label{fr}$$ Using this expression we can verify [*a posteriori*]{} that indeed $\tilde{K}_{\phi }^{i}(\xi )$ depends just on (\[kip\]), not on $K_{\phi }^{i\hspace{0.01in}\prime }$ and $\bar{C}^{I}$ separately. Moreover, (\[fr\]) implies $${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha \hspace{0.01in}\prime }={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha},\qquad B^{I\hspace{0.01in}\prime }=B^{I}+\xi \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }),\qquad \bar{C}^{I\hspace{0.01in}\prime }=\bar{C}^{I},\qquad K_{B}^{I\hspace{0.01in}\prime }=K_{B}^{I}. 
\label{besid}$$ In the next section these results are used to achieve parametric completeness. Renormalization and parametric completeness =========================================== The raw renormalization algorithm of section 3 subtracts away divergences just as they come. It does not ensure, *per se*, RG invariance, for which it is necessary to prove parametric completeness, namely that all divergences can be subtracted by redefining parameters and making canonical transformations. We must show that we can include in the classical action all invariants that are generated back by renormalization, and associate an independent parameter with each of them. The purpose of this section is to show that the background field method allows us to prove parametric completeness in a rather straightforward way, making cohomological classifications unnecessary. We want to relate the renormalized actions (\[sr1\]) and (\[sr2\]). From the arguments of the previous section we know that these two actions are related by the canonical transformation generated by (\[fr\]) at $\xi =1$. We have $$\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }-K_{\alpha }^{\prime })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. 
\label{ei}$$ From (\[fr\]) we find the transformation rules (\[besid\]) at $\xi =1$ and $$\phi ^{\prime }=\phi +\frac{\delta \Delta F}{\delta \tilde{K}_{\phi }^{\prime }},\qquad K_{\bar{C}}^{I}=K_{\bar{C}}^{I\hspace{0.01in}\prime }+G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}-G^{Ii}(0,\partial )(\phi ^{i\hspace{0.01in}\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i})+\frac{\delta }{\delta \bar{C}^{I}}\int \mathcal{R}_{\bar{C}}^{J}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{J}, \label{beside}$$ where $\Delta F$ is $\Delta F_{\xi}$ at $\xi=1$. Here and below we sometimes understand indices when there is no loss of clarity. We want to express equation (\[ei\]) in terms of unprimed fields and primed sources, and then set $\phi =C={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }=0$. We denote this operation with a subscript 0. Keeping ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B$ and $K^{\prime }$ as independent variables, we get $$\begin{aligned} \hat{S}_{R}^{\prime }(\Phi _{0}^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime }) &=&\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)-\int B^{I\prime }(K_{\bar{C}}^{I\hspace{0.01in}\prime }-G^{Ii}(0,\partial )(\phi _{0}^{i\hspace{0.01in}\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i})) \nonumber \\ &&-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\frac{\delta }{\delta \bar{C}^{I}}\int \mathcal{R}_{\bar{C}}^{J}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{J\hspace{0.01in}\prime }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu 
})({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha 0}+K_{\alpha }^{\prime }). \label{uio}\end{aligned}$$ To derive this formula we have used $$\hat{S}_{R}(\{0,0,\bar{C},B\},{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)-\int B^{I}K_{\bar{C}}^{I}, \label{mina}$$ together with (\[iddo\]), (\[besid\]) and (\[beside\]). The reason why (\[mina\]) holds is that at $C=0$ there are no objects with positive ghost numbers inside the left-hand side of this equation; therefore we can drop every object that has a negative ghost number, which means $\bar{C}$ and all sources $K$ but $K_{\bar{C}}^{I}$. Since (\[chio\]) are the only $B$ and $K_{\bar{C}}^{I}$-dependent terms, and they are not renormalized, at $\phi =0$ we find (\[mina\]). Now, consider the canonical transformation $\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B\},\breve{K}\rightarrow \Phi ^{\prime \prime },K^{\prime }$ defined by the generating functional $$F(\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B\},K^{\prime })=\int {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }K_{\phi }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }K_{C}^{\prime }+F_{R}(\{0,0,\bar{C},B\},{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },0,1). 
\label{for}$$ It gives the transformation rules $$\begin{aligned} \Phi ^{\hspace{0.01in}\prime \prime } &=&{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }+\Phi _{0}^{\prime },\qquad \breve{K}_{\phi }=K_{\phi }^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\phi 0},\qquad \breve{K}_{C}=K_{C}^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C0}, \\ \breve{K}_{\bar{C}}^{I} &=&K_{\bar{C}}^{I\hspace{0.01in}\prime }-G^{Ii}(0,\partial )\phi ^{i\hspace{0.01in}\prime \prime }+\frac{\delta }{\delta \bar{C}^{I}}\int \mathcal{R}_{\bar{C}}^{J}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{J\hspace{0.01in}\prime },\qquad \breve{K}_{B}=K_{B}^{\prime },\end{aligned}$$ which turn formula (\[uio\]) into $$\begin{aligned} \hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime }) &=&\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)-\int R_{\phi }^{i}({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\breve{K}_{\phi }^{i}-\int R_{C}^{I}({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\breve{K}_{C}^{I} \nonumber \\ &&-\int B^{I}\breve{K}_{\bar{C}}^{I}-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\breve{K}_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\breve{K}_{B}^{I}. \label{fin0}\end{aligned}$$ Note that $(\hat{S}_{R}^{\prime },\hat{S}_{R}^{\prime })=0$ is automatically satisfied by (\[fin0\]). Indeed, we know that $\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)$ is invariant under background transformations, and so is $\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)$, because $\Phi $ and $K$ transform as matter fields. 
We can classify $\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)$ using its gauge invariance. Let $\mathcal{G}_{i}(\phi )$ denote a basis of gauge-invariant local functionals constructed with the physical fields $\phi $. Then $$\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)=\sum_{i}\tau _{i}\mathcal{G}_{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }), \label{comple}$$ for suitable constants $\tau _{i}$. Now we manipulate these results in several ways to make their consequences more explicit. To prepare the next discussion it is convenient to relabel $\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B\}$ as $\breve{\Phi}^{\alpha }$ and $K^{\prime }$ as $K^{\prime \prime }$. Then formulas (\[for\]) and (\[fin0\]) tell us that the canonical transformation $$F_{1}(\breve{\Phi},K^{\prime \prime })=\int \breve{\phi}K_{\phi }^{\prime \prime }+\int \ \breve{C}K_{C}^{\prime \prime }+F_{R}(\{0,0,{\breve{\bar{C}}},\breve{B}\},\{\breve{\phi},\breve{C}\},K^{\prime \prime },0,1) \label{forex}$$ is such that $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=\hat{S}_{R}(0,\breve{\phi},0)-\int \bar{R}^{\alpha }(\breve{\Phi})\breve{K}_{\alpha }. \label{fin0ex}$$ #### Parametric completeness Making the further canonical transformation $\breve{\Phi},\breve{K}\rightarrow \Phi ,K$ generated by $$F_{2}(\Phi ,\breve{K})=\int \Phi ^{\alpha }\breve{K}_{\alpha }+\int \bar{C}^{I}G^{Ii}(0,\partial )\phi ^{i}-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},C)\breve{K}_{B}^{I},$$ we get $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=\hat{S}_{R}(0,\phi ,0)-\int R^{\alpha }(\Phi )\bar{K}_{\alpha }, \label{keynb}$$ where the barred sources are the ones of (\[chif2\]) at ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }=0$. 
If we start from the most general gauge-invariant classical action, $$S_{c}(\phi ,\lambda )\equiv \sum_{i}\lambda _{i}\mathcal{G}_{i}(\phi ), \label{scgen}$$ where $\lambda _{i}$ are physical couplings (apart from normalization constants), identities (\[comple\]) and (\[keynb\]) give $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=S_{c}(\phi ,\tau (\lambda ))-\int R^{\alpha }(\Phi )\bar{K}_{\alpha }. \label{kk}$$ This result proves parametric completeness in the usual approach, because it tells us that the renormalized action of the usual approach is equal to the classical action $\hat{S}^{\prime }(\Phi ,K)$ (check (\[sr2c\])-(\[sr2gf\]) at ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }=0$), apart from parameter redefinitions $\lambda \rightarrow \tau $ and a canonical transformation. In this derivation the role of the background field method is just to provide the key tool to prove the statement. We can also describe parametric completeness in the background field approach. 
Making the canonical transformation $\breve{\Phi},\breve{K}\rightarrow \hat{\Phi},\hat{K}$ generated by $$F_{2}^{\prime }(\hat{\Phi},\breve{K})=\int \hat{\Phi}^{\alpha }\breve{K}_{\alpha }+\int {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}\breve{K}_{\phi }^{i}+\int {\hat{\bar{C}}}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\hat{\phi}^{i}-\int \mathcal{R}_{\bar{C}}^{I}({\hat{\bar{C}}},\hat{C})\breve{K}_{B}^{I}, \label{cancan}$$ formula (\[fin0ex\]) becomes $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=S_{c}(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\tau )-\int R^{\alpha }(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{C},{\hat{\bar{C}}},\hat{B})\widetilde{\hat{K}}_{\alpha }, \label{y0}$$ where the relations between tilde and nontilde sources are the hat versions of (\[chif\]). Next, we make a ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ translation on the left-hand side of (\[y0\]) applying the canonical transformation $\Phi ^{\prime \prime },K^{\prime \prime }$ $\rightarrow \Phi ^{\prime },K^{\prime }$ generated by $$F_{3}(\Phi ^{\prime },K^{\prime \prime })=\int (\Phi ^{\alpha \hspace{0.01in}\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha })K_{\alpha }^{\prime \prime }. \label{traslacan}$$ Doing so, $\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })$ is turned into $\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })$. At this point, we want to compare the result we have obtained with (\[ei\]). Recall that formula (\[ei\]) involves the canonical transformation (\[fr\]) at $\xi =1$. 
If we set ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }=0$ we project that canonical transformation onto a canonical transformation $\Phi ,K\rightarrow \Phi ^{\prime },K^{\prime }$ generated by $F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },0,1)$, where ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ is regarded as a spectator. Furthermore, it is convenient to set ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$, because then formula (\[ei\]) turns into $$\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K),$$ where now primed fields and sources are related to the unprimed ones by the canonical transformation generated by $F_{R}(\Phi ,\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0\},K^{\prime },0,1)$. Finally, recalling that $\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })$ and using (\[y0\]) we get the key formula we wanted, namely $$\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\tau (\lambda ))-\int R^{\alpha }(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{C},{\hat{\bar{C}}},\hat{B})\widetilde{\hat{K}}_{\alpha }. \label{key}$$ Observe that formula (\[key0\]) of the introduction is formula (\[key\]) with antighosts and Lagrange multipliers switched off. 
Checking (\[sbaccagf\]), formula (\[key\]) tells us that the renormalized background field action $\hat{S}_{R}$ is equal to the classical background field action $\hat{S}_{\text{gf}}$ up to parameter redefinitions $\lambda \rightarrow \tau $ and a canonical transformation. This proves parametric completeness in the background field approach. The canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ involved in formula (\[key\]) is generated by the functional $\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})$ obtained composing the transformations generated by $F_{R}(\Phi ,\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0\},K^{\prime },0,1)$, $F_{1}(\breve{\Phi},K^{\prime \prime })$, $F_{2}^{\prime }(\hat{\Phi},\breve{K})$ and $F_{3}(\Phi ^{\prime },K^{\prime \prime })$ of formulas (\[fr\]), (\[forex\]), (\[cancan\]) and (\[traslacan\]) (at ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$). Working out the composition it is easy to prove that $${\hat{\bar{C}}}=\bar{C},\qquad \hat{B}=B,\qquad \hat{K}_{B}=K_{B},\qquad \hat{K}_{\bar{C}}^{I}-G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\hat{\phi}^{i}=K_{\bar{C}}^{I}-G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i},$$ and therefore $\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})$ has the form $$\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})=\int \Phi ^{\alpha }\hat{K}_{\alpha }+\Delta \hat{F}(\phi ,C,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K}_{\phi }^{i}-{\hat{\bar{C}}}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial }),\hat{K}_{C}), \label{frhat}$$ where $\Delta \hat{F}=\mathcal{O}(\hbar )$-poles. 
Examples ======== In this section we give two examples, non-Abelian gauge field theories and quantum gravity, which are also useful to familiarize oneself with the notation and the tools used in the paper. We switch to Minkowski spacetime. The dimensional-regularization technique is understood. Yang-Mills theory ----------------- The first example is non-Abelian Yang-Mills theory with simple gauge group $G $ and structure constants $f^{abc}$, coupled to fermions $\psi ^{i}$ in some representation described by anti-Hermitian matrices $T_{ij}^{a}$. The classical action $S_{c}(\phi )$ can be restricted by power counting, or enlarged to include all invariants of (\[scgen\]). The nonminimal non-gauge-fixed action $S$ is the sum $\hat{S}+\bar{S}$ of (\[deco\]) and (\[sbar\]). We find $$\begin{aligned} \hat{S} &=&S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })+\int \hspace{0.01in}\left[ g(\bar{\psi}^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\bar{\psi}}\mkern-2mu}\mkern2mu }^{i})T_{ij}^{a}C^{a}K_{\psi }^{j}+g\bar{K}_{\psi }^{i}T_{ij}^{a}C^{a}(\psi ^{j}+{\mkern2mu\underline{\mkern-2mu\smash{\psi }\mkern-2mu}\mkern2mu }^{j})\right] \\ &&-\int \hspace{0.01in}\left[ ({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }C^{a}+gf^{abc}A_{\mu }^{b}C^{c})K^{\mu a}-\frac{1}{2}gf^{abc}C^{b}C^{c}K_{C}^{a}+B^{a}K_{\bar{C}}^{a}\right] ,\end{aligned}$$ and $$\begin{aligned} \bar{S} &=&gf^{abc}\int \hspace{0.01in}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{b}(A_{\mu }^{c}K^{\mu a}+C^{c}K_{C}^{a}+\bar{C}^{c}K_{\bar{C}}^{a}+B^{c}K_{B}^{a})+g\int \hspace{0.01in}\left[ \bar{\psi}^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}K_{\psi }^{j}+\bar{K}_{\psi }^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}\psi ^{j}\right] \nonumber \\ &&-\int \hspace{0.01in}\left[ ({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu 
}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\mu a}-\frac{1}{2}gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{b}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{c}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{a}-g{\mkern2mu\underline{\mkern-2mu\smash{\bar{\psi}}\mkern-2mu}\mkern2mu }^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\psi }^{j}-g{\mkern2mu\underline{\mkern-2mu\smash{\bar{K}}\mkern-2mu}\mkern2mu }_{\psi }^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}{\mkern2mu\underline{\mkern-2mu\smash{\psi }\mkern-2mu}\mkern2mu }^{j}\right] . \label{sbary}\end{aligned}$$ The covariant derivative ${\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }$ is the background one; for example ${\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }\Lambda ^{a}=\partial _{\mu }\Lambda ^{a}+gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{b}\Lambda ^{c}$. The first line of (\[sbary\]) shows that all quantum fields transform as matter fields under background transformations. It is easy to check that $\hat{S}$ and $\bar{S}$ satisfy $\llbracket \hat{S},\hat{S}\rrbracket =\llbracket \hat{S},\bar{S}\rrbracket =\llbracket \bar{S},\bar{S}\rrbracket =0$. 
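These closure relations ultimately rest on the Jacobi identity for the structure constants, $f^{abd}f^{dce}+f^{bcd}f^{dae}+f^{cad}f^{dbe}=0$. A minimal numerical verification (a toy check, not part of the derivation) can be performed for $G=SU(2)$, where $f^{abc}=\varepsilon ^{abc}$:

```python
import itertools

# su(2) structure constants: f^{abc} equals the Levi-Civita epsilon symbol
def f(a, b, c):
    if (a, b, c) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (a, b, c) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

# Jacobi identity: f^{abd} f^{dce} + f^{bcd} f^{dae} + f^{cad} f^{dbe} = 0
for a, b, c, e in itertools.product(range(3), repeat=4):
    jacobi = sum(f(a, b, d) * f(d, c, e)
                 + f(b, c, d) * f(d, a, e)
                 + f(c, a, d) * f(d, b, e) for d in range(3))
    assert jacobi == 0

print("Jacobi identity verified for all index choices")
```

The same check goes through for any Lie algebra once its structure constants are supplied.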
A common background-preserving gauge fermion is $$\Psi =\int \bar{C}^{a}\left( -\frac{\lambda }{2}B^{a}+{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}\right) , \label{seeym}$$ and the gauge-fixed action $\hat{S}_{\text{gf}}=\hat{S}+\llbracket \hat{S},\Psi \rrbracket $ reads $$\hat{S}_{\text{gf}}=\hat{S}-\frac{\lambda }{2}\int (B^{a})^{2}+\int B^{a}{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}-\int \bar{C}^{a}{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }C^{a}+gf^{abc}A_{\mu }^{b}C^{c}).$$ Since the gauge fixing is linear in the quantum fields, the action $\hat{S}$ depends on the combination $K_{\mu }^{a}+{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }\bar{C}^{a}$ and not on $K_{\mu }^{a}$ and $\bar{C}^{a}$ separately. From now on we switch matter fields off, for simplicity, and set $\lambda =0$. We describe renormalization using the approach of this paper. 
First we concentrate on the standard power-counting renormalizable case, where $$S_{c}(A,g)=-\frac{1}{4}\int F_{\mu \nu }^{a}(A,g)F^{\mu \nu \hspace{0.01in}a}(A,g),\qquad \qquad F_{\mu \nu }^{a}(A,g)=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c}.$$ The key formula (\[key\]) gives $$\begin{aligned} \hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },K) &=&-\frac{Z}{4}\int F_{\mu \nu }^{a}(\hat{A}+{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },g)F^{\mu \nu \hspace{0.01in}a}(\hat{A}+{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },g)+\int \hat{B}^{a}{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }\hat{A}_{\mu }^{a}-\int \hat{B}^{a}\hat{K}_{\bar{C}}^{a} \nonumber \\ &&+\int \hspace{0.01in}(\hat{K}^{\mu a}+{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }{\hat{\bar{C}}}^{a})({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }\hat{C}^{a}+gf^{abc}\hat{A}_{\mu }^{b}\hat{C}^{c})+\frac{1}{2}gf^{abc}\int \hat{C}^{b}\hat{C}^{c}\hat{K}_{C}^{a}, \label{sat}\end{aligned}$$ where $Z$ is a renormalization constant. The most general canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ that is compatible with power counting, global gauge invariance and ghost number conservation can be easily written down. 
Introducing unknown constants where necessary, we find that its generating functional has the form $$\begin{aligned} \hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },\hat{K}) &=&\int (Z_{A}^{1/2}A_{\mu }^{a}+{\mkern2mu\underline{\mkern-2mu\smash{Z}\mkern-2mu}\mkern2mu }_{A}^{1/2}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{a})\hat{K}^{\mu a}+\int Z_{C}^{1/2}C^{a}\hat{K}_{C}^{a}+\int Z_{\bar{C}}^{1/2}\bar{C}^{a}\hat{K}_{\bar{C}}^{a} \\ &&+\int (Z_{B}^{1/2}B^{a}+\alpha {\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}+\beta \partial ^{\mu }{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{a}+\gamma gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }^{\mu b}A_{\mu }^{c}+\delta gf^{abc}\bar{C}^{b}C^{c})\hat{K}_{B}^{a} \\ &&+\int \bar{C}^{a}(\zeta B^{a}+\xi {\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}+\eta \partial ^{\mu }{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{a}+\theta gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }^{\mu b}A_{\mu }^{c}+\chi gf^{abc}\bar{C}^{b}C^{c}) \\ &&+\int \sigma \hat{K}_{\bar{C}}^{a}\hat{K}_{B}^{a}+\int \tau gf^{abc}C^{a}\hat{K}_{B}^{b}\hat{K}_{B}^{c}.\end{aligned}$$ Inserting it in (\[sat\]) and using the nonrenormalization of the $B$ and $K_{\bar{C}}$-dependent terms, we find $\alpha =\beta =\gamma =\delta =\zeta =\theta =\chi =\sigma =\tau =0$ and $$\xi =1-Z_{\bar{C}}^{1/2}Z_{A}^{1/2},\qquad \eta =-Z_{\bar{C}}^{1/2}{\mkern2mu\underline{\mkern-2mu\smash{Z}\mkern-2mu}\mkern2mu }_{A}^{1/2},\qquad Z_{B}=Z_{\bar{C}}. \label{zeta}$$ It is easy to check that $Z_{\bar{C}}$ disappears from the right-hand side of (\[sat\]), so we can set $Z_{\bar{C}}=1$. 
Furthermore, we know that $\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },K)$ is invariant under background transformations ($\llbracket \hat{S}_{R},\bar{S}\rrbracket =0$), which requires ${\mkern2mu\underline{\mkern-2mu\smash{Z}\mkern-2mu}\mkern2mu }_{A}=0$. Finally, the canonical transformation just reads $$\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },\hat{K})=\int Z_{A}^{1/2}A_{\mu }^{a}\hat{K}^{\mu a}+\int Z_{C}^{1/2}C^{a}\hat{K}_{C}^{a}+\int \bar{C}^{a}\hat{K}_{\bar{C}}^{a}+\int B^{a}\hat{K}_{B}^{a}+(1-Z_{A}^{1/2})\int \bar{C}^{a}({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}),$$ which contains the right number of independent renormalization constants and is of the form (\[frhat\]). Defining $Z_{g}=Z^{-1/2}$ and $Z_{A}^{\prime }=ZZ_{A}$ we can describe renormalization in a more standard way. Writing $$\hat{S}_{R}(0,\hat{A}+{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },0)=-\frac{1}{4}\int F_{\mu \nu }^{a}(Z_{A}^{\prime \hspace{0.01in}1/2}A+Z_{g}^{-1}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },gZ_{g})F^{\mu \nu \hspace{0.01in}a}(Z_{A}^{\prime \hspace{0.01in}1/2}A+Z_{g}^{-1}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },gZ_{g}),$$ we see that $Z_{g}$ is the usual gauge-coupling renormalization constant, while $Z_{A}^{\prime }$ and $Z_{g}^{-2}$ are the wave-function renormalization constants of the quantum gauge field $A$ and the background gauge field ${\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }$, respectively. We remark that the local divergent canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ corresponds to a highly nontrivial, convergent but non-local canonical transformation at the level of the $\Gamma $ functional. If the theory is not power-counting renormalizable, then we need to consider the most general classical action, equal to the right-hand side of (\[scgen\]). 
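For concreteness, in the pure gauge sector the first elements of a basis $\mathcal{G}_{i}$ beyond the $F_{\mu \nu }^{a}F^{\mu \nu \hspace{0.01in}a}$ term are the standard dimension-six invariants (a representative choice; the precise basis is convention dependent, since terms proportional to the field equations can be traded for field redefinitions): $$\int f^{abc}F_{\mu }^{\hspace{0.01in}\nu \hspace{0.01in}a}F_{\nu }^{\hspace{0.01in}\rho \hspace{0.01in}b}F_{\rho }^{\hspace{0.01in}\mu \hspace{0.01in}c},\qquad \int (D_{\mu }F^{\mu \nu })^{a}(D^{\rho }F_{\rho \nu })^{a},$$ and so on with increasing dimensions, each multiplied by its own independent coupling.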
Counterterms include vertices with arbitrary numbers of external $\Phi $ and $K$ legs. Nevertheless, the key formula (\[key\]) ensures that the renormalized action $\hat{S}_{R}$ remains exactly the same, up to parameter redefinitions and a canonical transformation. The only difference is that now even the canonical transformation $\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})$ of (\[frhat\]) becomes nonpolynomial and highly nontrivial. Quantum gravity --------------- Having written detailed formulas for Yang-Mills theory, in the case of quantum gravity we can just outline the key ingredients. In particular, we stress that the linearity assumption is satisfied both in the first-order and second-order formalisms, both using the metric $g_{\mu \nu }$ and the vielbein $e_{\mu }^{a}$. For example, using the second-order formalism and the vielbein, the symmetry transformations are encoded in the expressions $$\begin{aligned} -\int R^{\alpha }(\Phi )K_{\alpha } &=&\int (e_{\rho }^{a}\partial _{\mu }C^{\rho }+C^{\rho }\partial _{\rho }e_{\mu }^{a}+C^{ab}e_{\mu b})K_{a}^{\mu }+\int C^{\rho }(\partial _{\rho }C^{\mu })K_{\mu }^{C} \\ &&+\int (C^{ac}\eta _{cd}C^{db}+C^{\rho }\partial _{\rho }C^{ab})K_{ab}^{C}-\int B_{\mu }K_{\bar{C}}^{\mu }-\int B_{ab}K_{\bar{C}}^{ab}, \\ -\int \bar{R}^{\alpha }(\Phi )K_{\alpha } &=&-\int R^{\alpha }(\Phi )K_{\alpha }-\int (\bar{C}_{\rho }\partial _{\mu }C^{\rho }-C^{\rho }\partial _{\rho }\bar{C}_{\mu })K_{\bar{C}}^{\mu }+\int \left( B_{\rho }\partial _{\mu }C^{\rho }+C^{\rho }\partial _{\rho }B_{\mu }\right) K_{B}^{\mu },\end{aligned}$$ in the minimal and nonminimal cases, respectively, where $C^{\mu }$ are the ghosts of diffeomorphisms, $C^{ab}$ are the Lorentz ghosts and $\eta _{ab}$ is the flat-space metric. We see that both $R^{\alpha }(\Phi )$ and $\bar{R}^{\alpha }(\Phi )$ are at most quadratic in $\Phi $. 
Matter fields are also fine, since vectors $A_{\mu }$, fermions $\psi $ and scalars $\varphi $ contribute with $$\begin{aligned} &&-\int (\partial _{\mu }C^{a}+gf^{abc}A_{\mu }^{b}C^{c}-C^{\rho }\partial _{\rho }A_{\mu }^{a}-A_{\rho }^{a}\partial _{\mu }C^{\rho })K_{A}^{\mu a}+\int \left( C^{\rho }\partial _{\rho }C^{a}+\frac{1}{2}gf^{abc}C^{b}C^{c}\right) K_{C}^{a} \\ &&\qquad +\int C^{\rho }(\partial _{\rho }\varphi )K_{\varphi }+\int C^{\rho }(\partial _{\rho }\bar{\psi})K_{\psi }-\frac{i}{4}\int \bar{\psi}\sigma ^{ab}C_{ab}K_{\psi }+\int K_{\bar{\psi}}C^{\rho }(\partial _{\rho }\psi )-\frac{i}{4}\int K_{\bar{\psi}}\sigma ^{ab}C_{ab}\psi ,\end{aligned}$$ where $\sigma ^{ab}=i[\gamma ^{a},\gamma ^{b}]/2$. Expanding around flat space, common linear gauge-fixing conditions for diffeomorphisms and local Lorentz symmetry are $\eta ^{\mu \nu }\partial _{\mu }e_{\nu }^{a}=\xi \eta ^{a\mu }\partial _{\mu }e_{\nu }^{b}\delta _{b}^{\nu }$, $e_{\mu }^{a}=e_{\nu }^{b}\eta _{b\mu }\eta ^{\nu a}$, respectively. In the first-order formalism we just need to add the transformation of the spin connection $\omega _{\mu }^{ab}$, encoded in $$\int (C^{\rho }\partial _{\rho }\omega _{\mu }^{ab}+\omega _{\rho }^{ab}\partial _{\mu }C^{\rho }-\partial _{\mu }C^{ab}+C^{ac}\eta _{cd}\omega _{\mu }^{db}-\omega _{\mu }^{ac}\eta _{cd}C^{db})K_{ab}^{\mu }.$$ Moreover, in this case we can also gauge-fix local Lorentz symmetry with the linear gauge-fixing condition $\eta ^{\mu \nu }\partial _{\mu }\omega _{\nu }^{ab}=0$, instead of $e_{\mu }^{a}=e_{\nu }^{b}\eta _{b\mu }\eta ^{\nu a}$. We see that all gauge symmetries that are known to have physical interest satisfy the linearity assumption, together with irreducibility and off-shell closure. On the other hand, more speculative symmetries (such as local supersymmetry) do not satisfy those assumptions in an obvious way. 
When auxiliary fields are introduced to achieve off-shell closure, some symmetry transformations (typically, those of auxiliary fields) are nonlinear [@superg]. The relevance of this issue is already known in the literature. For example, in ref. [@superspace] it is explained that in supersymmetric theories the standard background field method cannot be applied, precisely because the symmetry transformations are nonlinear. It is argued that the linearity assumption is tied to the linear splitting $\Phi \rightarrow \Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ between quantum fields $\Phi $ and background fields ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, and that the problem of supersymmetric theories can be avoided with a nonlinear splitting of the form $\Phi \rightarrow \Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ $+$ nonlinear corrections. Perhaps it is possible to generalize the results of this paper to supergravity following those guidelines. From our viewpoint, the crucial property is that the background transformations of the quantum fields are linear in the quantum fields themselves, because then they do not renormalize. An alternative strategy, not bound to supersymmetric theories, is that of introducing (possibly infinitely many) auxiliary fields, replacing every nonlinear term appearing in the symmetry transformations with a new field $N $, and then proceeding similarly with the $N$ transformations and the closure relations, till all functions $R^{\alpha }$ are at most quadratic. The natural framework for this kind of job is the one of refs. [@fieldcov; @masterf; @mastercan], where the fields $N$ are dual to the sources $L$ coupled to composite fields. 
Using that approach the Batalin-Vilkovisky formalism can be extended to the composite-field sector of the theory and all perturbative canonical transformations can be studied as true changes of field variables in the functional integral, instead of mere replacements of integrands. For reasons of space, though, we cannot pursue this strategy here. The quest for parametric completeness: Where we stand now ========================================================= In this section we make remarks about the problem of parametric completeness in general gauge theories and recapitulate where we stand now on this issue. To begin with, consider non-Abelian Yang-Mills theory as a deformation of its Abelian limit. The minimal solution $S(g)$ of the master equation $(S(g),S(g))=0$ reads $$S(g)=-\frac{1}{4}\int F_{\mu \nu }^{a}F^{\mu \nu \hspace{0.01in}a}+\int K^{\mu a}\partial _{\mu }C^{a}+gf^{abc}\int \left( K^{\mu a}A_{\mu }^{b}+\frac{1}{2}K_{C}^{a}C^{b}\right) C^{c}.$$ Differentiating the master equation with respect to $g$ and setting $g=0$, we find $$(S,S)=0,\qquad (S,\omega )=0,\qquad S=S(0),\qquad \omega =\left. \frac{\mathrm{d}S(g)}{\mathrm{d}g}\right| _{g=0}.$$ On the other hand, we can easily prove that there exists no local functional $\chi $ such that $\omega =(S,\chi )$. Thus, we can say that $\omega $ is a nontrivial solution of the cohomological problem associated with an Abelian Yang-Mills theory that contains a suitable number of photons [@regnocoho]. Nevertheless, renormalization cannot turn $\omega $ on as a counterterm, because $S(0)$ is a free field theory. Even if we couple the theory to gravity and assume that massive fermions are present (which allows us to construct dimensionless parameters multiplying masses with the Newton constant), radiative corrections cannot dynamically “un-Abelian-ize” the theory, namely convert an Abelian theory into a non-Abelian one. 
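Explicitly, differentiating the expression of $S(g)$ given above with respect to $g$ and then setting $g=0$, one finds $$\omega =-\int f^{abc}(\partial _{\mu }A_{\nu }^{a})A^{\mu b}A^{\nu c}+f^{abc}\int \left( K^{\mu a}A_{\mu }^{b}+\frac{1}{2}K_{C}^{a}C^{b}\right) C^{c},$$ where the first term comes from the variation of $F_{\mu \nu }^{a}$ inside the classical action. Note that every term of $\omega $ is cubic in the fields and sources.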
One way to prove this fact is to note that the dependence on gauge fields is even at $g=0$, but not at $g\neq 0$. The point is, however, that cohomology *per se* is unable to prove it. Other properties must be invoked, such as the discrete symmetry just mentioned. In general, we cannot rely on cohomology only, and the possibility that gauge symmetries may be dynamically deformed in nontrivial and observable ways remains open. In ref. [@regnocoho] the issue of parametric completeness was studied in general terms. In that approach, which applies to all theories that are manifestly free of gauge anomalies, renormalization triggers an automatic parametric extension till the classical action becomes parametrically complete. The results of ref. [@regnocoho] leave the door open to dynamically induced nontrivial deformations of the gauge symmetry. Instead, the results found here close that door in all cases where they apply, which means manifestly nonanomalous irreducible gauge symmetries that close off shell and satisfy the linearity assumption. The reason is – we stress it again – that by formulas (\[kk\]) and (\[key\]) all dynamically induced deformations can be organized into parameter redefinitions and canonical transformations. As far as we know now, gauge symmetries can still be dynamically deformed in observable ways in theories that do not satisfy the assumptions of this paper. Supergravities are natural candidates to provide explicit examples.

Conclusions
===========

The background field method and the Batalin-Vilkovisky formalism are convenient tools to quantize general gauge field theories. In this paper we have merged the two to rephrase and generalize known results about renormalization, and to study parametric completeness. Our approach applies when gauge anomalies are manifestly absent, the gauge algebra is irreducible and closes off shell, the gauge transformations are linear functions of the fields, and closure is field independent.
These assumptions are sufficient to include the gauge symmetries we need for physical applications, such as Abelian and non-Abelian Yang-Mills symmetries, local Lorentz symmetry and general changes of coordinates, but exclude other potentially interesting symmetries, such as local supersymmetry. Both renormalizable and nonrenormalizable theories are covered, such as QED, non-Abelian Yang-Mills theories, quantum gravity and Lorentz-violating gauge theories, as well as effective and higher-derivative models, in arbitrary dimensions, and also extensions obtained adding any set of composite fields. At the same time, chiral theories, and therefore the Standard Model, possibly coupled with quantum gravity, require the analysis of anomaly cancellation and the Adler-Bardeen theorem, which we postpone to a future investigation. The fact that supergravities are left out from the start, on the other hand, suggests that there should exist either a no-go theorem or a more advanced framework. At any rate, we are convinced that our formalism is helpful to understand several properties better and address unsolved problems. We have studied gauge dependence in detail, and renormalized the canonical transformation that continuously interpolates between the background field approach and the usual approach. Relating the two approaches, we have proved parametric completeness without making use of cohomological classifications. The outcome is that in all theories that satisfy our assumptions renormalization cannot hide any surprises; namely the gauge symmetry remains essentially the same throughout the quantization process. In the theories that do not satisfy our assumptions, instead, the gauge symmetry could be dynamically deformed in physically observable ways. It would be remarkable if we discovered explicit examples of theories where this sort of “dynamical creation” of gauge symmetries actually takes place. 
[**Acknowledgments**]{}

The investigation of this paper was carried out as part of a program to complete the book [@webbook], which will be available at [`Renormalization.com`](http://renormalization.com) once completed.

Appendix
========

In this appendix we prove several theorems and identities that are used in the paper. We use the Euclidean notation and the dimensional-regularization technique, which guarantees, in particular, that the functional integration measure is invariant under perturbatively local changes of field variables. The generating functionals $Z$ and $W$ are defined from $$Z(J,K,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\int [\mathrm{d}\Phi ]\hspace{0.01in}\exp (-S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })+\int \Phi ^{\alpha }J_{\alpha })=\exp W(J,K,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }), \label{defa}$$ and $\Gamma (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ $=-W+\int \Phi ^{\alpha }J_{\alpha }$ is the Legendre transform of $W$ with respect to $J$. Averages denote the sums of connected diagrams (e.g. $\langle A(x)B(y)\rangle =\langle A(x)B(y)\rangle _{\text{nc}}-\langle A(x)\rangle \langle B(y)\rangle $, where $\langle A(x)B(y)\rangle _{\text{nc}}$ includes disconnected diagrams).
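These definitions can be mimicked in a zero-dimensional toy model, where the functional integral collapses to an ordinary integral over one variable; the action, the coupling values and the numerical scheme below are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Zero-dimensional toy model (illustrative assumption): one field phi,
# S(phi) = m2*phi^2/2 + g*phi^4/24, weight exp(-S + J*phi), antifields and
# background fields suppressed.
m2, g = 1.0, 0.3

def W(J):
    # W(J) = log Z(J), with Z computed by direct quadrature
    Z = quad(lambda p: np.exp(-0.5*m2*p**2 - g*p**4/24.0 + J*p),
             -np.inf, np.inf)[0]
    return np.log(Z)

def Phi_of_J(J, h=1e-3):
    return (W(J + h) - W(J - h)) / (2*h)      # Phi = dW/dJ (central difference)

def Gamma(Phi0):
    # Legendre transform: Gamma(Phi) = -W(J) + J*Phi at the J solving dW/dJ = Phi
    J = brentq(lambda J: Phi_of_J(J) - Phi0, -10.0, 10.0)
    return -W(J) + J*Phi0, J

# Defining property of the Legendre transform: dGamma/dPhi = J
Phi0, h = 0.4, 1e-3
dGamma = (Gamma(Phi0 + h)[0] - Gamma(Phi0 - h)[0]) / (2*h)
J0 = Gamma(Phi0)[1]
assert abs(dGamma - J0) < 1e-3
```

The final assertion checks numerically that $\delta \Gamma /\delta \Phi =J$, the zero-dimensional analogue of the relation between $\Gamma $ and $W$ used throughout the appendix.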
Moreover, the average $\langle X\rangle $ of a local functional $X $ can be viewed as a functional of $\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu } $ (in which case it collects one-particle irreducible diagrams) or a functional of $J,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$. When we need to distinguish the two options we specify whether $\Phi $ or $J$ are kept constant in functional derivatives. First we work in the usual (non-background field) framework; then we generalize the results to the background field method. To begin with, we recall a property that is true even when the action $S(\Phi ,K)$ does not satisfy the master equation. The identity $(\Gamma ,\Gamma )=\langle (S,S)\rangle $ holds. \[th0\] *Proof*. Applying the change of field variables $$\Phi ^{\alpha }\rightarrow \Phi ^{\alpha }+\theta (S,\Phi ^{\alpha }) \label{chv}$$ to (\[defa\]), where $\theta $ is a constant anticommuting parameter, we obtain $$\theta \int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}S}{\delta \Phi ^{\alpha }}\right\rangle -\theta \int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle J_{\alpha }=0,$$ whence $$\frac{1}{2}\langle (S,S)\rangle =-\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}S}{\delta \Phi ^{\alpha }}\right\rangle =-\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle J_{\alpha }=\int \frac{\delta _{r}W}{\delta K_{\alpha }}\frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}=-\int \frac{\delta _{r}\Gamma }{\delta K_{\alpha }}\frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}=\frac{1}{2}(\Gamma ,\Gamma ).$$ Now we prove results for an action $S$ that satisfies the master equation $(S,S)=0$. If $(S,S)=0$ then $(\Gamma ,\langle X\rangle )=\langle (S,X)\rangle $ for every local functional $X$. 
\[theorem2\] *Proof*. Applying the change of field variables (\[chv\]) to $$\langle X\rangle =\frac{1}{Z(J,K)}\int [\mathrm{d}\Phi ]\hspace{0.01in}X\exp (-S+\int \Phi ^{\alpha }J_{\alpha }),$$ and using $(S,S)=0$ we obtain $$\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}X}{\delta \Phi ^{\alpha }}\right\rangle =(-1)^{\varepsilon _{X}+1}\int \left\langle X\frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle \frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}, \label{r1}$$ where $\varepsilon _{X}$ denotes the statistics of the functional $X$ (equal to 0 if $X$ is bosonic, 1 if it is fermionic, modulo 2). Moreover, we also have $$\int \left\langle \frac{\delta _{r}S}{\delta \Phi ^{\alpha }}\frac{\delta _{l}X}{\delta K_{\alpha }}\right\rangle =\int \frac{\delta _{r}\Gamma }{\delta \Phi ^{\alpha }}\left\langle \frac{\delta _{l}X}{\delta K_{\alpha }}\right\rangle , \label{r2}$$ which can be proved starting from the expression on the left-hand side and integrating by parts. In the derivation we use the fact that since $X$ is local, $\delta _{r}\delta _{l}X/(\delta \Phi ^{\alpha }\delta K_{\alpha })$ is set to zero by the dimensional regularization, which kills the $\delta (0) $s and their derivatives. Next, straightforward differentiations give $$\begin{aligned} \left. \frac{\delta _{l}\langle X\rangle }{\delta K_{\alpha }}\right| _{J} &=&\left\langle \frac{\delta _{l}X}{\delta K_{\alpha }}\right\rangle -\left\langle \frac{\delta _{l}S}{\delta K_{\alpha }}X\right\rangle \label{r3} \\ &=&\left. \frac{\delta _{l}\langle X\rangle }{\delta K_{\alpha }}\right| _{\Phi }-\int \left. \frac{\delta _{l}J_{\beta }}{\delta K_{\alpha }}\right| _{\Phi }\left. \frac{\delta _{l}\langle X\rangle }{\delta J_{\beta }}\right| _{K}. 
\label{r4}\end{aligned}$$ At this point, using (\[r1\])-(\[r4\]) and $(J_{\alpha},\Gamma )=0$ (which can be proved by differentiating $(\Gamma ,\Gamma )=0$ with respect to $\Phi ^{\alpha}$), we derive $(\Gamma ,\langle X\rangle )=\langle (S,X)\rangle $. If $(S,S)=0$ and $$\frac{\partial S}{\partial \xi }=(S,X), \label{bbug}$$ where $X$ is a local functional and $\xi $ is a parameter, then $$\frac{\partial \Gamma }{\partial \xi }=(\Gamma ,\langle X\rangle ). \label{pprove}$$ \[bbugc\] *Proof*. Using theorem \[theorem2\] we have $$\frac{\partial \Gamma }{\partial \xi }=-\frac{\partial W}{\partial \xi }=\langle \frac{\partial S}{\partial \xi }\rangle =\langle (S,X)\rangle =(\Gamma ,\langle X\rangle ).$$ Now we derive results that hold even when $S$ does not satisfy the master equation. \[blabla\]The identity $$(\Gamma ,\langle X\rangle )=\langle (S,X)\rangle -\frac{1}{2}\langle (S,S)X\rangle _{\Gamma } \label{prove0}$$ holds, where $X$ is a generic local functional and $\langle AB\cdots Z\rangle _{\Gamma }$ denotes the set of connected, one-particle irreducible diagrams with one insertion of $A$, $B$, $\ldots Z$. This theorem is a generalization of theorem \[theorem2\]. It is proved by repeating the derivation without using $\left( S,S\right) =0$. First, observe that formula (\[r1\]) generalizes to $$\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}X}{\delta \Phi ^{\alpha }}\right\rangle =(-1)^{\varepsilon _{X}+1}\int \left\langle X\frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle \frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}-\frac{1}{2}\langle (S,S)X\rangle . \label{r11}$$ On the other hand, formula (\[r2\]) remains the same, as well as (\[r3\]) and (\[r4\]). We have $$\left( \Gamma ,\langle X\rangle \right) =\langle (S,X)\rangle -\frac{1}{2}\langle (S,S)X\rangle +\int \frac{\delta _{r}\Gamma }{\delta \Phi ^{\alpha }}\left. \frac{\delta _{l}J_{\beta }}{\delta K_{\alpha }}\right| _{\Phi }\left. 
\frac{\delta _{l}\langle X\rangle }{\delta J_{\beta }}\right| _{K}-\int \frac{\delta _{r}\Gamma }{\delta K_{\alpha }}\left. \frac{\delta _{l}\langle X\rangle }{\delta \Phi ^{\alpha }}\right| _{K}.$$ Differentiating $(\Gamma ,\Gamma )$ with respect to $\Phi ^{\alpha}$ we get $$\frac{1}{2}\frac{\delta _{r}(\Gamma ,\Gamma )}{\delta \Phi ^{\alpha}}=\frac{1}{2}\frac{\delta _{l}(\Gamma ,\Gamma )}{\delta \Phi ^{\alpha}}=(J_{\alpha },\Gamma )=(-1)^{\varepsilon _{\alpha }}(\Gamma ,J_{\alpha }),$$ where $\varepsilon _{\alpha }$ is the statistics of $\Phi ^{\alpha }$. Using $(\Gamma ,\Gamma )=\langle (S,S)\rangle $ we finally obtain $$\left( \Gamma ,\langle X\rangle \right) =\langle (S,X)\rangle -\frac{1}{2}\langle (S,S)X\rangle +\frac{1}{2}\int (-1)^{\varepsilon _{\alpha }}\frac{\delta _{r}\langle (S,S)\rangle }{\delta \Phi ^{\alpha }}\left. \frac{\delta _{l}\langle X\rangle }{\delta J_{\alpha }}\right| _{K}. \label{allo}$$ The set of irreducible diagrams contained in $\langle A\hspace{0.01in}B\rangle $, where $A$ and $B$ are local functionals, is given by the formula $$\langle A\hspace{0.01in}B\rangle _{\Gamma }=\langle AB\rangle -\{\langle A\rangle ,\langle B\rangle \}, \label{oo}$$ where $\{X,Y\}$ are the “mixed brackets” [@BV2] $$\{X,Y\}\equiv \int \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\langle \Phi ^{\alpha }\Phi ^{\beta }\rangle \frac{\delta _{l}Y}{\delta \Phi ^{\beta }}=\int \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\frac{\delta _{r}\delta _{r}W}{\delta J_{\beta }\delta J_{\alpha }}\frac{\delta _{l}Y}{\delta \Phi ^{\beta }}=\int \left. \frac{\delta _{r}X}{\delta J_{\alpha }}\right| _{K}\frac{\delta _{l}Y}{\delta \Phi ^{\alpha }}, \label{mixed brackets}$$ $X$ and $Y$ being functionals of $\Phi $ and $K$. Indeed, $\{\langle A\rangle ,\langle B\rangle \}$ is precisely the set of diagrams in which the $A$ and $B$ insertions are connected in a one-particle reducible way. Thus, formula (\[allo\]) coincides with (\[prove0\]). 
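The subtraction in formula (\[oo\]) can be illustrated in a zero-dimensional Gaussian toy model (an assumption for illustration, with arbitrary values of the mass and the source): for $A=B=\phi ^{2}$ the connected correlator equals $4\Phi ^{2}v+2v^{2}$, with $v$ the propagator, the mixed brackets $\{\langle A\rangle ,\langle B\rangle \}=(2\Phi )^{2}v$ remove precisely the one-particle-reducible piece, and the leftover $2v^{2}$ is the $\Phi $-independent one-particle-irreducible value:

```python
import numpy as np
from scipy.integrate import quad

# Zero-dimensional Gaussian toy model: S(phi) = m2*phi^2/2, weight exp(-S + J*phi).
# Values of m2 and J are arbitrary illustrative choices.
m2, J = 1.3, 0.7

def moment(n):
    w = lambda p: np.exp(-0.5*m2*p**2 + J*p)
    return (quad(lambda p: p**n * w(p), -np.inf, np.inf)[0]
            / quad(w, -np.inf, np.inf)[0])

Phi = moment(1)                      # <phi> = dW/dJ
v = moment(2) - Phi**2               # propagator W''
AB_conn = moment(4) - moment(2)**2   # connected <phi^2 phi^2>
mixed = (2*Phi)**2 * v               # {<A>,<B>}; d<A>/dPhi = 2*Phi for a Gaussian
AB_1PI = AB_conn - mixed             # <AB>_Gamma per formula (oo)
assert abs(AB_1PI - 2*v**2) < 1e-6
```

Changing $J$ shifts both the connected correlator and the mixed brackets, but their difference stays at $2v^{2}$, as expected of a one-particle-irreducible quantity in a free theory.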
Using (\[prove0\]) we also have the identity $$\frac{\partial \Gamma }{\partial \xi }-\left( \Gamma ,\langle X\rangle \right) =\left\langle \frac{\partial S}{\partial \xi }-\left( S,X\right) \right\rangle +\frac{1}{2}\left\langle \left( S,S\right) \hspace{0.01in}X\right\rangle _{\Gamma }, \label{provee}$$ which generalizes corollary \[bbugc\]. Now we switch to the background field method. We begin by generalizing theorem \[th0\]. If the action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent, the identity $$\llbracket \Gamma ,\Gamma \rrbracket =\langle \llbracket S,S\rrbracket \rangle$$ holds. \[thb\] *Proof*. Since $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent we have $\delta _{l}\Gamma /\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$. 
Using theorem \[th0\] we find $$\llbracket \Gamma ,\Gamma \rrbracket =(\Gamma ,\Gamma )+2\int \frac{\delta _{r}\Gamma }{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\frac{\delta _{l}\Gamma }{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}=\langle (S,S)\rangle +2\int \langle \frac{\delta _{r}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\rangle \frac{\delta _{l}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}=\langle (S,S)\rangle +2\int \langle \frac{\delta _{r}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\frac{\delta _{l}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}\rangle =\langle \llbracket S,S\rrbracket \rangle .$$ Next, we mention the useful identity $$\left. \frac{\delta _{l}\langle X\rangle }{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\right| _{\Phi }=\left\langle \frac{\delta _{l}X}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\right\rangle -\left\langle \frac{\delta _{l}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}X\right\rangle _{\Gamma }, \label{dera}$$ which holds for every local functional $X$. It can be proved by taking (\[r3\])–(\[r4\]) with $K\rightarrow {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ and using (\[oo\])–(\[mixed brackets\]). Mimicking the proof of theorem \[thb\] and using (\[dera\]), it is easy to prove that theorem \[blabla\] implies the identity $$\llbracket \Gamma ,\langle X\rangle \rrbracket =\langle \llbracket S,X\rrbracket \rangle -\frac{1}{2}\langle \llbracket S,S\rrbracket X\rangle _{\Gamma }, \label{bb2}$$ for every ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent local functional $X$. Thus we have the following property. 
The identity $$\frac{\partial \Gamma }{\partial \xi }-\llbracket \Gamma ,\langle X\rangle \rrbracket =\left\langle \frac{\partial S}{\partial \xi }-\llbracket S,X\rrbracket \right\rangle +\frac{1}{2}\langle \llbracket S,S\rrbracket X\rangle _{\Gamma } \label{proveg}$$ holds for every action $S$ such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent, for every ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent local functional $X$ and for every parameter $\xi $. \[cora\] If the action $S$ has the form (\[assu\]) and $X$ is also ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, applying (\[backghost\]) to (\[bb2\]) we obtain $$\llbracket \hat{\Gamma},\langle X\rangle \rrbracket =\langle \llbracket \hat{S},X\rrbracket \rangle -\frac{1}{2}\langle \llbracket \hat{S},\hat{S}\rrbracket X\rangle _{\Gamma },\qquad \llbracket \bar{S},\langle X\rangle \rrbracket =\langle \llbracket \bar{S},X\rrbracket \rangle -\langle \llbracket \bar{S},\hat{S}\rrbracket X\rangle _{\Gamma },\qquad \langle \llbracket \bar{S},\bar{S}\rrbracket X\rangle _{\Gamma }=0, \label{blablaback}$$ which imply the following statement. If $S$ satisfies the assumptions of (\[assu\]), $\llbracket \bar{S},X\rrbracket =0$ and $\llbracket \bar{S},\hat{S}\rrbracket =0$, where $X$ is a ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent local functional, then $\llbracket \bar{S},\langle X\rangle \rrbracket =0$. \[corolla\] Finally, we recall a result derived in ref. [@removal]. 
If $\Phi ,K\rightarrow \Phi ^{\prime },K^{\prime }$ is a canonical transformation generated by $F(\Phi ,K^{\prime })$, and $\chi (\Phi ,K)$ is a functional behaving as a scalar (that is to say $\chi ^{\prime }(\Phi ^{\prime },K^{\prime })=\chi (\Phi ,K)$), then $$\frac{\partial \chi ^{\prime }}{\partial \varsigma }=\frac{\partial \chi }{\partial \varsigma }-(\chi ,\tilde{F}_{\varsigma }) \label{thesis}$$ for every parameter $\varsigma $, where $\tilde{F}_{\varsigma }(\Phi ,K)\equiv F_{\varsigma }(\Phi ,K^{\prime }(\Phi ,K))$ and $F_{\varsigma }(\Phi ,K^{\prime })=\partial F/\partial \varsigma $. \[theorem5\] *Proof*. When we do not specify the variables that are kept constant in partial derivatives, it is understood that they are the natural variables. Thus $F$, $\Phi ^{\prime }$ and $K$ are functions of $\Phi ,K^{\prime }$, while $\chi $ and $\tilde{F}_{\varsigma }$ are functions of $\Phi ,K$ and $\chi ^{\prime }$ is a function of $\Phi ^{\prime },K^{\prime }$. It is useful to write down the differentials of $\Phi ^{\prime }$ and $K$, which are [@vanproeyen] $$\begin{aligned} \mathrm{d}\Phi ^{\prime \hspace{0.01in}\alpha } &=&\int \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta \Phi ^{\beta }}\mathrm{d}\Phi ^{\beta }+\int \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta K_{\beta }^{\prime }}\mathrm{d}K_{\beta }^{\prime }+\frac{\partial \Phi ^{\prime \hspace{0.01in}\alpha }}{\partial \varsigma }\mathrm{d}\varsigma , \nonumber \\ \mathrm{d}K_{\alpha } &=&\int \mathrm{d}\Phi ^{\beta }\frac{\delta _{l}\delta F}{\delta \Phi ^{\beta }\delta \Phi ^{\alpha }}+\int \mathrm{d}K_{\beta }^{\prime }\frac{\delta _{l}\delta F}{\delta K_{\beta }^{\prime }\delta \Phi ^{\alpha }}+\frac{\partial K_{\alpha }}{\partial \varsigma }\mathrm{d}\varsigma . 
\label{differentials}\end{aligned}$$ Differentiating $\chi ^{\prime }(\Phi ^{\prime },K^{\prime })=\chi (\Phi ,K)$ with respect to $\varsigma $ at constant $\Phi ^{\prime }$ and $K^{\prime }$, we get $$\frac{\partial \chi ^{\prime }}{\partial \varsigma }=\frac{\partial \chi }{\partial \varsigma }+\int \frac{\delta _{r}\chi }{\delta \Phi ^{\alpha }}\left. \frac{\partial \Phi ^{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}+\int \frac{\delta _{r}\chi }{\delta K_{\alpha }}\left. \frac{\partial K_{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}. \label{sigmaprimosue2}$$ Formulas (\[differentials\]) allow us to write $$\frac{\partial \Phi ^{\prime \hspace{0.01in}\alpha }}{\partial \varsigma }=-\int \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta \Phi ^{\beta }}\left. \frac{\partial \Phi ^{\beta }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }},\qquad \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta \Phi ^{\beta }}=\left. \frac{\delta _{l}K_{\beta }}{\delta K_{\alpha }^{\prime }}\right| _{\Phi ,\varsigma },$$ and therefore we have $$\frac{\delta \tilde{F}_{\varsigma }}{\delta K_{\alpha }}=\int \left. \frac{\delta _{l}K_{\beta }^{\prime }}{\delta K_{\alpha }}\right| _{\Phi ,\varsigma }\frac{\partial \Phi ^{\prime \hspace{0.01in}\beta }}{\partial \varsigma }=-\int \left. \frac{\delta _{l}K_{\beta }^{\prime }}{\delta K_{\alpha }}\right| _{\Phi ,\varsigma }\left. \frac{\delta _{l}K_{\gamma }}{\delta K_{\beta }^{\prime }}\right| _{\Phi ,\varsigma }\left. \frac{\partial \Phi ^{\gamma }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}=-\left. \frac{\partial \Phi ^{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}. \label{div1}$$ Following analogous steps, we also find $$\frac{\delta \tilde{F}_{\varsigma }}{\delta \Phi ^{\alpha }}=\frac{\partial K_{\alpha }}{\partial \varsigma }+\int \left. 
\frac{\delta _{l}K_{\beta }^{\prime }}{\delta \Phi ^{\alpha }}\right| _{K,\varsigma }\frac{\partial \Phi ^{\prime \hspace{0.01in}\beta }}{\partial \varsigma },\qquad \frac{\partial K_{\alpha }}{\partial \varsigma }=\left. \frac{\partial K_{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}-\int \frac{\delta _{l}\delta F}{\delta \Phi ^{\alpha }\delta \Phi ^{\beta }}\left. \frac{\partial \Phi ^{\beta }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }},$$ whence $$\left. \frac{\partial K_{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}=\frac{\delta \tilde{F}_{\varsigma }}{\delta \Phi ^{\alpha }}+\int \left( \frac{\delta _{l}K_{\gamma }}{\delta \Phi ^{\alpha }}+\left. \frac{\delta _{l}K_{\beta }^{\prime }}{\delta \Phi ^{\alpha }}\right| _{K,\varsigma }\frac{\delta _{l}K_{\gamma }}{\delta K_{\beta }^{\prime }}\right) \left. \frac{\partial \Phi ^{\gamma }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}=\frac{\delta \tilde{F}_{\varsigma }}{\delta \Phi ^{\alpha }}. \label{div2}$$ This formula, together with (\[div1\]), allows us to rewrite (\[sigmaprimosue2\]) in the form (\[thesis\]). [99]{} B.S. De Witt, Quantum theory of gravity. II. The manifestly covariant theory, Phys. Rev. 162 (1967) 1195. L.F. Abbott, The background field method beyond one loop, Nucl. Phys. B 185 (1981) 189. I.A. Batalin and G.A. Vilkovisky, Gauge algebra and quantization, Phys. Lett. B 102 (1981) 27-31; I.A. Batalin and G.A. Vilkovisky, Quantization of gauge theories with linearly dependent generators, Phys. Rev. D 28 (1983) 2567, Erratum-ibid. D 30 (1984) 508; see also S. Weinberg, *The quantum theory of fields*, vol. II, Cambridge University Press, Cambridge 1995. D. Anselmi, Renormalization of gauge theories without cohomology, Eur. Phys. J. C73 (2013) 2508, [13A1 Renormalization.com](http://renormalization.com/13a1/), arXiv:1301.7577 \[hep-th\]. G. Barnich, F. Brandt, M. Henneaux, Local BRST cohomology in the antifield formalism. I. 
General theorems, Commun. Math. Phys. 174 (1995) 57 and arXiv:hep-th/9405109; G. Barnich, F. Brandt, M. Henneaux, Local BRST cohomology in the antifield formalism. II. Application to Yang-Mills theory, Commun. Math. Phys. 174 (1995) 116 and arXiv:hep-th/9405194; G. Barnich, F. Brandt, M. Henneaux, General solution of the Wess-Zumino consistency condition for Einstein gravity, Phys. Rev. D 51 (1995) R1435 and arXiv:hep-th/9409104; S.D. Joglekar and B.W. Lee, General theory of renormalization of gauge invariant operators, Ann. Phys. (NY) 97 (1976) 160. D. Anselmi and M. Halat, Renormalization of Lorentz violating theories, Phys. Rev. D 76 (2007) 125011 and arXiv:0707.2480 \[hep-th\]; D. Anselmi, Weighted power counting and Lorentz violating gauge theories. I. General properties, Ann. Phys. 324 (2009) 874, [08A2 Renormalization.com](http://renormalization.com/08a2/) and arXiv:0808.3470 \[hep-th\]; D. Anselmi, Weighted power counting and Lorentz violating gauge theories. II. Classification, Ann. Phys. 324 (2009) 1058, [08A3 Renormalization.com](http://renormalization.com/08a3/) and arXiv:0808.3474 \[hep-th\]. S. Weinberg, Ultraviolet divergences in quantum theories of gravitation, in *An Einstein centenary survey*, Edited by S. Hawking and W. Israel, Cambridge University Press, Cambridge 1979. K.S. Stelle, Renormalization of higher-derivative quantum gravity, Phys. Rev. D 16 (1977) 953. E.T. Tomboulis, Superrenormalizable gauge and gravitational theories, arXiv:hep-th/9702146; L. Modesto, Super-renormalizable quantum gravity, Phys. Rev. D86 (2012) 044005 and arXiv:1107.2403 \[hep-th\]; T. Biswas, E. Gerwick, T. Koivisto and A. Mazumdar, Towards singularity and ghost free theories of gravity, Phys. Rev. Lett. 108 (2012) 031101 and arXiv:1110.5249 \[gr-qc\]; L. Modesto, Finite quantum gravity, arXiv:1305.6741 \[hep-th\]. H. Kluberg-Stern and J.B. Zuber, Renormalization of nonabelian gauge theories in a background field gauge. 1. Green functions, Phys. Rev.
D12 (1975) 482; H. Kluberg-Stern and J.B. Zuber, Renormalization of nonabelian gauge theories in a background field gauge. 2. Gauge invariant operators, Phys. Rev. D 12 (1975) 3159. D. Colladay and V.A. Kostelecký, Lorentz-violating extension of the Standard Model, Phys. Rev. D58 (1998) 116002 and arXiv:hep-ph/9809521; D. Anselmi, Weighted power counting, neutrino masses and Lorentz violating extensions of the Standard Model, Phys. Rev. D79 (2009) 025017, [08A4 Renormalization.com](http://renormalization.com/08a4/) and arXiv:0808.3475 \[hep-ph\]; D. Anselmi, Standard Model without elementary scalars and high energy Lorentz violation, Eur. Phys. J. C65 (2010) 523, [09A1 Renormalization.com](http://renormalization.com/09a1/), and arXiv:0904.1849 \[hep-ph\]. S.L. Adler and W.A. Bardeen, Absence of higher-order corrections in the anomalous axial vector divergence, Phys. Rev. 182 (1969) 1517. D. Binosi and A. Quadri, Slavnov-Taylor constraints for nontrivial backgrounds, Phys. Rev. D84 (2011) 065017 and arXiv:1106.3240 \[hep-th\]; D. Binosi and A. Quadri, Canonical transformations and renormalization group invariance in the presence of nontrivial backgrounds, Phys. Rev. D85 (2012) 085020 and arXiv:1201.1807 \[hep-th\]; D. Binosi and A. Quadri, The background field method as a canonical transformation, Phys. Rev. D85 (2012) 121702 and arXiv:1203.6637 \[hep-th\]. D. Anselmi, A general field covariant formulation of quantum field theory, Eur. Phys. J. C73 (2013) 2338, [12A1 Renormalization.com](http://renormalization.com/12a1/) and arXiv:1205.3279 \[hep-th\]. D. Anselmi, A master functional for quantum field theory, Eur. Phys. J. C73 (2013) 2385, [12A2 Renormalization.com](http://renormalization.com/12a2/) and arXiv:1205.3584 \[hep-th\]. D. Anselmi, Master functional and proper formalism for quantum gauge field theory, Eur. Phys. J. C73 (2013) 2363, [12A3 Renormalization.com](http://renormalization.com/12a3/) and arXiv:1205.3862 \[hep-th\]. B.L. Voronov, P.M.
Lavrov and I.V. Tyutin, Canonical transformations and the gauge dependence in general gauge theories, Sov. J. Nucl. Phys. 36 (1982) 292 and Yad. Fiz. 36 (1982) 498. P. van Nieuwenhuizen, *Supergravity*, Phys. Rept. 68 (1981) 189. S.J. Gates, M.T. Grisaru, M. Rocek and W. Siegel, [*Superspace or one thousand and one lessons in supersymmetry*]{}, Front. Phys. 58 (1983) 1-548, arXiv:hep-th/0108200. D. Anselmi, *Renormalization*, to appear at [`renormalization.com`](http://renormalization.com). D. Anselmi, More on the subtraction algorithm, Class. Quant. Grav. 12 (1995) 319, [94A1 Renormalization.com](http://renormalization.com/94a1/) and arXiv:hep-th/9407023. D. Anselmi, Removal of divergences with the Batalin-Vilkovisky formalism, Class. Quant. Grav. 11 (1994) 2181-2204, [93A2 Renormalization.com](http://renormalization.com/93a2/) and arXiv:hep-th/9309085. W. Troost, P. van Nieuwenhuizen and A. Van Proeyen, Anomalies and the Batalin-Vilkovisky Lagrangian formalism, Nucl. Phys. B 333 (1990) 727.
List of spouses of Prime Ministers of Japan

The spouse of the Prime Minister of Japan is the wife or husband of the Prime Minister of Japan.

Role and duties

The role of the Prime Ministerial Consort is not an official office, and as such the spouse is not given a salary or official duties.

Spouse of the Prime Ministers of the Empire of Japan (1885–1947)

Spouse of the Prime Ministers during the Meiji period (1885–1912)

Under the Meiji Emperor

Spouse of the Prime Ministers during the Taishō period (1912–1926)

Under the Taishō Emperor

Spouse of the Prime Ministers during the Shōwa period (1926–1947)

Under the Shōwa Emperor

Spouse of the Prime Ministers of the State of Japan (1947–present)

Spouse of the Prime Ministers during the Shōwa period (1947–1989)

Under the Shōwa Emperor

Spouse of the Prime Ministers during the Heisei period (1989–present)

Under Emperor Akihito
Dominick & Dickerman

Dominick and Dickerman is an investment and merchant banking firm located in New York City. From 1899 through 2015, the firm was known as Dominick and Dominick. Following the sale of the wealth management business, the firm reverted to its original name, Dominick and Dickerman. The firm was founded in 1870 and is one of the oldest continuously operated financial services institutions in the United States. Dominick & Dickerman LLC serves its individual and corporate clients primarily through three business divisions: Private Wealth Management, Investment Banking and Institutional Sales. Private Wealth Management offers wealth management advice, including investment strategies, asset allocation, wealth and estate planning, insurance products and alternative investments. The Investment Banking team serves public and private corporations around the world by raising capital, developing and implementing strategic merger and acquisition plans, and advising senior management teams on a variety of governance, operations and growth issues. Institutional Sales recommends and executes trading strategies for institutional clients in the United States and abroad. Dominick & Dominick LLC is headquartered in New York City and has offices in Basel, Switzerland. Dominick & Dominick LLC is a member of the Securities Investor Protection Corporation and the Financial Industry Regulatory Authority.

History

Founding

The company was founded on June 15, 1870 as Dominick & Dickerman by William Gayer Dominick and Watson Bradley Dickerman. Dominick was born in Chicago and moved to New York as a child. In 1869, at the age of 25, he purchased a membership on the New York Stock Exchange. At the NYSE, Dominick met Connecticut-born Dickerman and they went into business, forming a stock brokerage firm. Dominick's brothers, George and Bayard Dominick, also joined the Exchange and became partners in the firm.
Dominick & Dickerman opened its first branch in 1889 in Cincinnati, where the firm was one of only two exchange members. A year later, Dickerman left the firm when he was elected president of the New York Stock Exchange. He would serve as president from 1890 to 1892, then return to the firm. His cofounder, William Dominick, died of typhoid fever in 1895 at the age of 50. Dickerman would retire in 1909, passing away at the age of 77 in 1923. Dominick & Dickerman changed its name in 1899 to Dominick & Dominick, adding several new partners including Milnor B. Dominick, Andrew V. Stout, J.A. Barnard, and Bernon S. Prentice. The firm was one of the original sponsors of closed-end funds, launching The Dominick Fund, Inc. in 1920 by selling 200,000 shares to raise $10 million. Despite the stock market crash in 1929, the fund survived and was listed on the NYSE in 1959 before it was merged with the Putnam Fund in 1973.

Expansion

In 1936 Dominick & Dominick expanded through acquisition, merging with A. Iselin & Co., also one of Wall Street's oldest firms. Several months earlier the patriarch of that firm, Adrian Iselin, had died at the age of 89. He had joined the firm, which his father formed, as a 22-year-old in 1868. At the time of the merger, Dominick & Dominick had 13 partners, including Gayer G. Dominick (senior partner since 1926), Bayard Dominick, and Gardner Dominick Stout. It next picked up several partners from Iselin & Co., as well as Iselin Securities Corporation, which brought with it an office in Paris, and the Iselin Corporation of Canada with its office in Montreal. Because Dominick & Dominick already maintained a London office, the London office of Iselin Securities was closed. Other European offices were subsequently opened, and Dominick & Dominick soon had a presence in all of the major cities in Europe.

World War II

A large number of the firm's employees and partners entered the military, including Gayer Dominick.
The firm was content to just keep its doors open and operate. Gayer Dominick had been with the firm since 1909 after graduating from Yale University. In 1935 he was elected a governor of the New York Stock Exchange and helped to hire the first paid president of the Exchange, at the behest of the new Securities and Exchange Commission (SEC). He then left the family firm in 1938 to enter public service, working for the Office of Price Administration in the Roosevelt administration. Post-War expansion After World War II came to an end and following a brief economic recession as the United States reverted to a peacetime footing, the economy enjoyed a long period of growth. Dominick & Dominick benefited from the country's prosperity. Some of the firm's most notable transactions during the postwar years involved Yonkers, New York-based Alexander Smith Carpet Company and Canada's Great Plains Oil. In the late 1950s Dominick & Dominick was also part of a banking syndicate that managed the initial public offering (IPO) of stock issued by Arvida Corporation, which was formed in Florida in 1958 to sell the real estate holdings of Arthur Vining Davis. The IPO gained attention because of objections raised by the SEC to the way the managers had announced the stock sale before filing a registration statement with the SEC, a violation of the law. Dominick & Dominick ended registration as a partnership, reorganizing as a corporation in 1964. The 1960s also saw the firm spread its operation across the country, taking advantage of a bull market to build up a domestic retail brokerage business. In 1962 an office in Chicago was opened. Dominick & Dominick gained a major presence in New England in 1966 by acquiring the firm of Townsend, Dabney, Tyson. Not only did the firm pick up a large Boston office but also another 15 offices throughout the Northeast. About 30 additional branch offices across the United States were opened by the end of the 1960s.
In 1970 Dominick & Dominick pursued a merger with Clark, Dodge & Co., Inc., a similarly sized firm, but called it off, electing instead to continue a program of opening new offices and pursuing the acquisition of smaller firms. This plan was also eventually terminated, however, as the stock market began to experience one of the worst bear markets in a generation, and Dominick & Dominick found that it had stretched itself far too thin. Dark period Strapped for cash, the firm sold four of its five seats on the New York Stock Exchange and one of two seats on the American Stock Exchange. It also sold a significant stake in the business for $7.25 million to an investment group led by Pierce National Life Insurance Company (now Liberty Corporation), which was in turn controlled by Joe L. Allbritton, founder of Allbritton Communications Company. While the infusion of capital was welcome, Dominick & Dominick still found itself in a difficult position and decided to exit the domestic retail brokerage business and to sell the bulk of its branch offices. The firm's chairman and chief executive, Peter M. Kennedy, explained to the New York Times that "a national retail structure is not right for a firm of our size. We either had to be bigger or smaller." He added, "We are not going out of business. We are just changing the nature of our business." Dominick & Dominick retained a modest retail business but mostly chose to focus on core strengths, including institutional business, money management, corporate finance, municipal bonds, and its international business. It was also in 1973 that The Dominick Fund, which had about $55 million in assets, was taken over by the Putnam Fund. Over the next 20 years, Dominick & Dominick reduced its work force and closed offices in an attempt to focus on more profitable financial services such as research.
The firm also became involved in the fixed income area, making corporate and municipal bonds, Eurobonds, and Treasury Notes available to its clients, and launched managed futures programs to participate in the global currency markets. The firm also did a healthy business providing clearing services to more than 100 National Association of Securities Dealers firms; its Dominick & Dominick Advisors unit provided investment and portfolio management services to high-net-worth investors and institutions in the United States, Europe, and Asia. 21st century By the start of the new century, Dominick & Dominick was in decline and needed an infusion of partner capital. In October 2003 the firm brought in a new president and CEO, hiring 58-year-old Michael J. Campbell, a former Marine who had 30 years of experience in the industry, including a 25-year tenure with Donaldson, Lufkin & Jenrette (DLJ) and a stint with Credit Suisse First Boston after Credit Suisse acquired DLJ. At DLJ, Campbell managed the private client group, expanding the firm's high-net-worth and midsized institutional investor brokerage business from 75 advisors to a network of more than 500 investment professionals. Campbell joined Dominick & Dominick in 2003, bringing senior management from DLJ and Credit Suisse First Boston. The new management relocated the firm headquarters from lower Manhattan to Midtown Manhattan, to an office on 52nd St. In addition, Campbell wasted little time in recruiting new advisors from Credit Suisse and other major financial firms. Dominick & Dominick's first branch office under Campbell's management opened in the fall of 2004, when an operation in Miami was established to focus on wealthy Central and South American residents. It was not an acquisition, as Dominick was absorbing the Miami office of Pennsylvania-based First Security Investments, which had been opened by another former DLJ employee, Alain O'Hayon, who stayed on to manage the office.
Campbell was very familiar with the potential of a Miami operation, having built up an office in the city for DLJ from just two brokers to more than 70. In 2006 another regional office was added in Atlanta. A year later, in 2007, Dominick & Dominick launched a new risk arbitrage group with the hope of developing synergies with the firm's existing institutional and high-net-worth client base. Stanford Financial Group receivership On November 13, 2009, the US District Court ordered the Brokerage Accounts of Stanford Financial Group Brokerage to be transferred to Dominick & Dominick LLC. The Stanford Group was the firm run by Allen Stanford. Both Stanford Group Company and Dominick & Dominick LLC use Pershing LLC as their clearing firm. The transfer became effective on January 20, 2010. References Notes Bibliography Allan, John H. "Two Wall Street Firms Undergo Changes." The New York Times 22 Feb. 1973: n. pag. Print. New York Times. "Anniversary Celebrated: Dominick & Dominick, Brokers, Observe Fiftieth Year in Business." The New York Times 15 June 1920: n. pag. Print. Staff. "Dominick Branches Sold to Other Firms." The New York Times 8 Aug. 1973: n. pag. Print. Staff. "Iselin Firm to End, Joining Dominicks." The New York Times 17 June 1973: n. pag. Print. Vartan, Vartanig G. "Dominick to Quit Retail Brokerage." The New York Times 31 July 1973: n. pag. Print. Dominick & Dominick LLC Homepage. "About US." Dominick & Dominick LLC. N.p., n.d. Web. 27 Apr. 2011. <http://www.dominickanddominick.com/>. External links Category:Financial services companies established in 1870 Category:Investment companies based in New York City Category:Companies based in New York City Category:Financial services companies based in New York City Category:Brokerage firms Category:1870 establishments in New York (state)
Vaganjac Vaganjac is a village in the municipality of Gornji Vakuf, Bosnia and Herzegovina. References Category:Populated places in Gornji Vakuf-Uskoplje
A Blog on India Connect The Dots In her first book Stay Hungry Stay Foolish, Rashmi Bansal profiled twenty-five entrepreneurs who were alumni of IIM – Ahmedabad. Many, including yours truly, had then wondered how important an MBA degree is to becoming an entrepreneur. Rashmi claims this inspired her to write Connect The Dots, the story of twenty-one entrepreneurs who don't have an MBA degree. The format of the book is the same as her last book. There are twenty chapters, one on each entrepreneur (Gaurav Rathore & Saurabh Vyas, who co-founded PoliticalEDGE, are covered in one chapter), and each chapter is based on one single interview. The book is divided in three sections: Jugaad, Junoon & Zubaan. Jugaadis are those who didn't get any formal training in business but learned by observing, experimenting and applying their mind. It includes someone like Kunwer Sachdev of Su-Kam who created a Rs 500 crore company from scratch; Ganesh Ram, who started what is today India's largest English training academy, VETA, when there were no BPOs and no one knew that English coaching would be as big a market as it is now. Junoonis, as the name suggests, are passionate about something that is ahead of its time. This was my favorite section in the book. Gaurav Rathore and Saurabh Vyas envisioned a consulting and research firm exclusively for politics and founded PoliticalEDGE; Satyajit Singh, founder of Shakti Sudha, not only created a new industry but also benefited thousands of farmers in rural Bihar; Chetan Maini, founder of Reva, designed a solar car and has been producing electric cars since the time when global warming was not so well known and creating electric cars seemed to make little sense. The third section Zubaan is about creative people like Paresh Mokashi, creator of Harishchandrachi Factory, India's official entry to the Oscars last year, or Krishna Reddy, whose Prince Dance Group, consisting of daily wage laborers, won India's Got Talent last year.
I had great hopes for the book as I loved Stay Hungry Stay Foolish. The first chapter on Prem Ganpathy is literally a rags-to-riches story of someone who came to Mumbai with no money and now owns Dosa Plaza, a fast food chain with 26 outlets in the country. The rest of the stories too are very encouraging. The book is replete with inspiring anecdotes and quotes. When I read the synopsis of the third section, i.e., Zubaan, I thought it would probably be the weak link in this book, as stories about creatives who had made it big in the field of art would be a misfit in this book about entrepreneurs. However, all these artists achieved commercial success by following their passion and this justifies their inclusion in this book about entrepreneurs. Entrepreneurship after all is about following your heart. Generally, when the first book is good and successful, authors fail to recreate the magic in their subsequent books, and that too in the same genre, as people have high expectations. In this case Rashmi Bansal definitely exceeded my expectations. A very good book and a must-read for someone who aspires to be an entrepreneur.
My Hero Academia Season 2- Episode 18 After last week's episode, I was really curious what they had in store for us this week, and how the heroes would come back to Earth after such a traumatic experience. And good thing for us, this episode is rightly named "The Aftermath of Hero Killer: Stain". My Hero Academia- Funimation We open with Izuku, Iida, and Todoroki all in the hospital. They are all recovering from their tremendous fight, but also reflecting on how lucky they are to still be alive. The door opens and we see Gran Torino and Pro Hero Manual. The first thing Gran Torino does, of course, is scold Midoriya. But before Gran Torino goes full instructor on him, he tells the boys that they have a visitor. My Hero Academia- Funimation A tall figure turns the corner wearing a professional business suit. It's Hosu's chief of police, Kenji Tsuragamae, who also happens to look like a dog (just go along with it I guess?). Kenji tells the boys that Stain is in custody and is being treated for several broken bones and serious burns. He also reminds them that what they have done was not okay on paper. Uncertified heroes using their Quirks against their instructors' orders is highly frowned upon. But Todoroki is not taking it. He tells the chief that if Iida hadn't stepped in, then Pro Hero Native would have been killed. And both of them would have been killed without Izuku's help. But Gran Torino tells Todoroki to hear the chief out. Kenji tells them that the punishment would only happen if this was made public. And the people would applaud their efforts anyway. But if the police kept it quiet, no one gets punished, but the boys don't get the praise they deserve. Instead Endeavor will get the praise from the masses. It would also explain Stain's burn scars. So they choose to not be celebrated as heroes and apologize anyways. But Kenji tells them he respects what they did and he thanks them for protecting the peace.
So it has hit the news that Endeavor has stopped hero killer Stain and the nomus from destroying Hosu City. It's all anyone is talking about. Meanwhile, we get a look at how everyone else from Class 1-A is doing in their internship programs. First we look at Bakugo, who is having a less than stellar time. The first thing he wants to do is go knock some heads around, but his mentor is not allowing him and says it will be business as usual. Hopefully that could help Bakugo control his temper. Kirishima finds out the reason why Midoriya sent him his location. Apparently he also reported the incident last night. Go Kirishima! Momo debuted in a commercial with her mentor and it seems obvious modelling is not what she wants to be doing. But her mentor is letting her go on patrol like she has wanted since the start of their training together. My Hero Academia- Funimation And finally there is Uraraka, who is on the phone with Midoriya. She tells him how glad she is that they are all safe. Midoriya of course apologizes for not contacting her sooner but she understands. In the midst of the conversation, Uraraka's mentor Gunhead reminds her that they are going to start their basic training. She then says bye to Midoriya, and Gunhead asks in a very cute way, "Your boyfriend?". And she immediately dismisses it. When Midoriya hangs up he gets all worked up that he talked to a girl on the phone. This scene was my favorite from this episode; it had me busting up! We get back to the guys in the hospital and Iida comes out and tells the two that he may have long-term damage in his hand. But he reflects on his actions from that night and regrets them greatly. He shouldn't have acted so swiftly and carelessly. But Midoriya doesn't let Iida beat himself up too much. He agrees that he and Iida should get stronger together. My Hero Academia- Funimation We cut to U.A. with All Might in the staff room. He gets a phone call from Gran Torino.
He tells All Might that he has had his teaching licence revoked for six months because of Midoriya's actions, but that there was no way of avoiding it and he has come to terms with it. But All Might is very ashamed of himself for letting down his former instructor. But this isn't the reason Gran Torino called. He really wants to talk about Stain. He says that in the few minutes he was with him, it had him trembling. It was because of how intimidating and obsessed he was about what he thinks a hero should be and what he will do to correct our society. Because this has hit so many news stations, Stain's ideology and opinions will be put on blast. People will become influenced by Stain's beliefs and become a plague. But All Might doesn't believe it will be a problem because they will probably show up sporadically and they will be taken out one by one. But this is where the League of Villains comes in. If they all combine their hatred and Shigaraki gives them an outlet to express and deal with their evil intentions, it will become a serious problem. Gran Torino then reminds All Might that he must tell Midoriya properly what is concerning him and One for All. Which I have no idea what that is all about. Apparently the quirk is "on the move"? I'm interested in what that might mean. This episode was mostly a lot of dialogue and context, but it was needed after such a shift in the story. It was refreshing getting some insight on how everyone else is doing too. I'm hoping next time they elaborate on what is concerning All Might and what is happening with his quirk? Only time will tell.
Philip Evans (headmaster) Dr Ian Philip Evans OBE FRSC (born 1948) is a British educationalist and a former Headmaster of Bedford School. Biography Born on 2 May 1948 and educated in North Wales at Ruabon Boys Grammar School, Dr Philip Evans read Natural Sciences at Churchill College, Cambridge and obtained a doctorate in inorganic chemistry from Imperial College London, working in the laboratory of Professor Sir Geoffrey Wilkinson. He taught chemistry at St Paul's School, London and, in 1991, he was appointed as Headmaster of Bedford School, a position which he held until the summer of 2008. He was also appointed as a government advisor on education, from which post he retired in 1999, and was subsequently awarded an OBE for his work. He is currently an appointed member of the council of the Royal Society of Chemistry. Publications I. P. Evans, A. Spencer, & G. Wilkinson "Dichlorotetrakis(dimethyl sulfoxide)ruthenium(II) and its use as a source material for new ruthenium(II) complexes" J. Chem. Soc., Dalton Trans. (1973) 204-209 References Category:Welsh schoolteachers Category:Alumni of Churchill College, Cambridge Category:Living people Category:1948 births Category:Alumni of Imperial College London Category:Headmasters of Bedford School Category:Officers of the Order of the British Empire Category:Fellows of the Royal Society of Chemistry
During my pregnancy, I tried to gather as much information on how painful labor might actually be. I would often hear “mine was horrible, but everyone’s pregnancy is different” or “it was the worst pain I’ve ever felt in my life!” I heard many horror stories which often ended with, “well, don’t worry. You’ll forget about the pain as soon as your child is born.” Not the most reassuring for a first-time mother, but something I definitely kept in mind the entire time. I had feared the unknown, but on the other hand, I knew there was no turning back and that my baby was coming one way or another! Two weeks before my due date, I noticed some blood. My water didn’t break and I saw no mucous plug, but it seemed that something was happening earlier than expected. Soon after, at 1 a.m. I woke up from a notably different type of cramping. It began to occur every 5 minutes. It wasn’t that painful (yet), but uncomfortable. I felt as if I had to go diarrhea every five minutes. If this is labor, I could handle it for sure I thought, but I knew this was only the beginning. My husband nervously drove us to the hospital as if the baby would pop out any second. I had to remind him to not worry. Things usually didn’t happen that fast for first-time moms (or at least I hoped it wouldn’t). I had to go by instinct although in the back of my mind, I wasn’t sure what would happen next. We finally got to the birthing center after an hour of driving and the nurses confirmed I wasn’t even dilated. I couldn’t believe it. We were turned away and had to find a hotel because returning home wasn’t an option. It would take two hours just to return again! The diarrhea-like cramps were painful and uncomfortable; I couldn’t sleep. I was bleeding slightly and started to actually have these cramps and stomach aches over a 10 hour period. I started googling my symptoms (never a good thing!) and discovered there are people who have this uncomfortable feeling for days and weeks! 
“Fake labor” would not be in my cards, I had hoped. Fortunately, I had an appointment with a midwife in the afternoon and was checked again for any cervical changes. I had finally dilated 3 cm and was 90% effaced. What a relief, I thought! I welcomed the pain because I wanted things to progress. I couldn't imagine having diarrhea cramps for weeks. However, 3 cm isn't enough to be admitted, we were told, so back to the hotel we went. “When your cramps become more regular, every 3 minutes apart, and you become more snippy, check in again” the midwife suggested. In the meantime, I tried to walk around, pausing multiple times to catch my breath. A couple hours later, I was FINALLY admitted. My husband kept asking me questions non-stop about what I wanted, needed, and more. All I could say was “if I need something, I'll let you know. Thanks.” I literally couldn't talk. I felt like vomiting and had heartburn for the first time in my life. As my labor progressed, I felt the urge to push before I was even 10 cm dilated. I would have a cramp, then a couple of minutes later, one that made me yell out in pain as it forced my body to push. A gush of blood would come out as this happened and I felt extremely uncomfortable because the pain was in my back and butt! It would take my breath away. However, the pain was still tolerable, believe it or not. I had a volunteer doula come in that night who helped me breathe, rubbed my back, and encouraged me. She helped me be aware of my voice and how I could use it to save energy and get through the pain. Unfortunately, she couldn't stay the whole night, but the time she spent with me truly made a difference. Even though labor was hard work and painful, the right breathing technique and support helped ease the pain. This is probably the number one thing that helped me get through labor!
As I started heading towards my second night of labor, I wondered how much longer I could go on … I questioned if it was even worth it to continue without an epidural? I went into labor without a plan. I wanted to go with the flow and make decisions as they came. I didn't want to be tied to a bed or deliver on my back or be disappointed if my perfect labor didn't come true, so I left any expectations open. But after my second sleepless night, I started to inquire about pain medications (although deep down inside I knew I could handle more because the pain was still manageable). I was exhausted and sleep would have been nice, especially if I didn't have to feel any pain with an epidural. There were no walking epidurals available though and I didn't want to take narcotics (which could make me dizzy), so I continued along, breathing away. A bath was an option too and this I requested and wanted. I was so uncomfortable as things progressed. I couldn't get in the shower to relax my muscles, but somehow a bath sounded soothing and worth the effort. As soon as the bath was ready, however, I suddenly felt a pop down below as if major pressure had been released from my insides. Immediately, there was a shift. The back and butt pressure/pain I felt was no longer there. It was time to push! I knew as soon as I felt it. As the baby descended, I felt the burning sensation of the baby's head crowning – a temporary stretching sting. The cramps were still there and I had no control over my own pushing. I let my body do its own work and took the breaks my body provided in between each wave of labor. I was standing up giving birth because I couldn't get onto the bed as I would have liked and was given a stool to put my right foot onto in order to widen my pelvis. Gravity certainly assisted me. However, I never expected to be standing for 50 minutes! My legs were becoming tired and shaky, but I couldn't move. My energy was sapped and I regretted not exercising more.
Standing up was the most comfortable thing to do though and I listened to my body's cues. I started to go along with my body's signals to push, but after a while I felt as if the baby would never come out because things weren't progressing fast enough. After his head came out, I thought it was all over until I heard my husband say "push, his body is stuck!" I ended up pushing as hard as I could and a gush of fluids came spewing onto the floor. It was the best sense of relief. The midwives held my baby from under me and told me to grab him. He was screaming, kicking, and punching his way into this life. He was so slippery, I was terrified to grab him. I had never held a baby before. He would be my first. I held my son and put him on my chest. I couldn't stop looking at him in awe. He was so beautiful to me and I felt overwhelmed with love and joy. When the umbilical cord finally stopped pulsating, which happened surprisingly quickly, my husband carefully snipped it. At this point, I'm glad my husband didn't pass out. I always joked that he would get queasy and faint, but my husband did amazing! While holding my son, I had to deliver my placenta, which did not hurt at all. In fact, I couldn't even feel much down below because of the adrenaline pumping throughout my veins. Looking into my son's eyes and holding him for the first time was the most incredible thing in the world. The pain that I felt earlier in labor vanished and I felt ecstatic to have made it through. It's true what they say … After your baby is born, you forget the pain of labor and birth. At least most of it.
id: dsq-747531936 date: 2010-04-05T22:49:24.0000000-07:00 name: DonSleza4e avatar: https://disqus.com/api/users/avatars/DonSleza4e.jpg message: <p>Awesome<br>Integrated lib with my <a href="http://asp.net" rel="nofollow noopener" title="asp.net">asp.net</a> mvc project ^^</p>
--- abstract: 'Using first-principles calculations, we investigate the positional dependence of trace elements such as O and Cu on the crystal field parameter $A_2^0$, proportional to the magnetic anisotropy constant $K_u$ of Nd ions placed at the surface of Nd$_2$Fe$_{14}$B grains. The results suggest the possibility that the $A_2^0$ parameter of Nd ions at the (001) surface of Nd$_2$Fe$_{14}$B grains exhibits a negative value when the O or Cu atom is located near the surface, closer than its equilibrium position. At the (110) surface, however, O atoms located at the equilibrium position provide a negative $A_2^0$, while for Cu additions $A_2^0$ remains positive regardless of Cu’s position. Thus, Cu atoms are expected to maintain a positive local $K_u$ of surface Nd ions more frequently than O atoms when they approach the grain surfaces in the Nd-Fe-B grains.' author: - Yuta Toga - Tsuneaki Suzuki - Akimasa Sakuma title: 'Effects of trace elements on the crystal field parameters of Nd ions at the surface of Nd$_2$Fe$_{14}$B grains' --- INTRODUCTION ============ Rapidly increasing demand for efficient electric motors motivates the development of high-performance Nd-Fe-B magnets. In sintered Nd-Fe-B magnets, Nd is frequently substituted with Dy, owing to Dy’s suppression of coercivity ($H_c$) degradation. However, Dy is expensive and decreases the magnetization of Nd-Fe-B magnets. To realize Dy-free high-performance Nd-Fe-B magnets, we must understand the $H_c$ mechanism of rare-earth (Re) permanent magnets. Although the $H_c$ mechanism has been discussed for sintered Nd-Fe-B magnets,[@r1; @r2; @r3] our understanding of $H_c$ is incomplete, requiring further examination. 
Recent papers emphasized that the development of high-coercivity Nd-Fe-B magnets requires understanding their microstructure, especially the grain boundary (GB) phase surrounding the Nd$_2$Fe$_{14}$B grains.[@r4; @r5] Researchers have reported that the intergranular Nd-rich phase includes neodymium oxides (NdO$_x$) with diverse crystal structures (e.g., fcc, hcp), suggesting that O atoms must exist near Nd atoms at the interface between the Nd-rich phase and Nd$_2$Fe$_{14}$B grains.[@r6; @r7] Recently, Amin [*et al.*]{}[@r8] confirmed the segregation of Cu to the NdO$_x$/Nd$_2$Fe$_{14}$B interface after annealing, suggesting that the Nd$_2$Fe$_{14}$B grains are lapped by a Cu-rich layer. Thus, O and Cu atoms are considered to stay around the Nd ions at the interfaces, playing a key role in the coercive force of Nd-Fe-B sintered magnets. From a theoretical viewpoint, it is widely accepted that the magnetic anisotropy of rare earth magnets is mainly dominated by the 4f electrons in the rare earth ions, because of their strongly anisotropic distribution due to their strong intra-atomic interactions such as electron correlation and $L$-$S$ coupling. The one-electron treatments based on the local density functional approximation have not yet successfully reproduced this feature at a quantitative level, in contrast to the 3d electronic systems. To overcome this problem, crystalline electric field theory combined with atomic many-body theory has been adopted by many workers to study the magnetic anisotropy. In 1988, Yamada [*et al.*]{}[@yamada] successfully reproduced magnetization curves reflecting the magnetic anisotropy of the series of Re$_2$Fe$_{14}$B, using the crystal field parameters $A_l^m$ as adjustable parameters. In 1992, the first-principles calculations to obtain the $A_l^m$ were performed by Richter [*et al.*]{}[@a20_1] for the ReCo$_5$ system and by Fähnle [*et al.*]{}[@a20_2] for the Re$_2$Fe$_{14}$B system.
Based on this concept, Moriya [*et al.*]{}[@r9] showed via first-principles calculations that the crystal field parameter $A_2^0$ on the Nd ion exhibits a negative value when the Nd ion in the (001) planes is exposed to vacuum. As shown by Yamada [*et al.*]{}[@yamada], the magnetic anisotropy constant $K_u$ originating from the rare earth ion is approximately proportional to $A_2^0$ when the exchange field acting on the 4f electrons in a rare earth ion is sufficiently strong. Since the proportional coefficient is positive for the Nd ion, a negative $A_2^0$ implies planar magnetic anisotropy. Furthermore, Mitsumata [*et al.*]{}[@r10] demonstrated that a single surface atomic layer with a negative $K_u$ value dramatically decreases $H_c$. In actual sintered magnets, however, the grain surfaces are not exposed to vacuum but instead face GB phases. Therefore, our next step is to study how interface elements in the GB phases adjacent to Nd ions affect the $A_2^0$ values at the Nd-Fe-B grain surface. However, note that, in an actual system, many atoms interact with surface Nd ions and many possible configurations exist near the interface between GB and Nd$_2$Fe$_{14}$B phases. In this case, it is important from a theoretical standpoint to provide separate information on the positional ($r, \theta$) dependence of individual atoms on the local $K_u$ (i.e., $A_2^0$) of Nd ions at the grain surface. These analyses may provide useful information for judging the factors dominating the coercive force during future experimental observation of atomic configurations in real systems. In this study, we investigate the influence of O and Cu atoms on the $A_2^0$ of the Nd ion placed at the surface of Nd$_2$Fe$_{14}$B grains via first-principles calculations. As the sign of the single-site $K_u$ of an Nd ion is the same as that of $A_2^0$, the evaluation of $A_2^0$ is useful to understand how trace elements affect $K_u$ at the surface of Nd${_2}$Fe${_{14}}$B.
We select O and Cu as trace elements on the basis of experimental results. COMPUTATIONAL DETAILS ===================== ![\[f1\]The geometric relationship between the Nd ion on the surface of Nd$_2$Fe$_{14}$B and the trace element for the (a) (001) and (b) (110) surface slab models. Here, $r$ indicates the distance between the Nd ion and the trace element, and $\theta$ indicates the angle between the c-axis of Nd$_2$Fe$_{14}$B and the direction of $r$. This figure was plotted using VESTA.[@vesta]](fig/rfig01.pdf){width="8.5cm"} Electronic structure calculations were performed within density functional theory using the Vienna ab initio simulation package (VASP 4.6). The 4f electrons in the Nd ions were treated as core electrons in the electronic structure calculations for the valence electrons, based on the concept mentioned in the previous section. The ionic potentials are described by the projector augmented-wave (PAW) method[@paw] and the exchange-correlation energy of the electrons is described within the generalized gradient approximation (GGA), using the functional of Perdew [*et al.*]{}[@GGA] We examined the (001) and (110) surfaces of Nd$_2$Fe$_{14}$B with the addition of the trace element using slab models. Figure \[f1\] shows the geometric relationship between the Nd ion at the surface of Nd$_2$Fe$_{14}$B and the trace element for the (001) and (110) surface models. We placed the trace element at various distances $r$ from the Nd ion at the surface, and at various angles $\theta$ between the c-axis of Nd$_2$Fe$_{14}$B and the direction of $r$. In the (001) surface model (Fig. \[f1\](a)), the unit cell has a vacuum layer equivalent to the thickness of the Nd$_2$Fe$_{14}$B unit cell along the c-axis (12.19[Å]{}) and consists of 12 Nd, 58 Fe, and six B atoms. In the (110) surface model (Fig. \[f1\](b)), we restructured the Nd$_2$Fe$_{14}$B unit cell to expose Nd atoms on the (110) surface. 
The (110) direction in the original unit cell corresponds to the a-axis in the restructured unit cell. The restructured unit cell consists of 12 Nd, 68 Fe, and six B atoms. The lattice constant of the restructured cell parallel to the (110) surface is $\sqrt{2}$ times, and that perpendicular to it $1/\sqrt{2}$ times, the corresponding constant of the original Nd$_2$Fe$_{14}$B unit cell. The (110) surface model has a vacuum layer with a thickness of $8.8/\sqrt{2}=6.22$[Å]{} along the (110) direction. The mesh for the numerical integration was generated by discrete Monkhorst-Pack $k$-point sampling. To investigate the magnetic anisotropy of this system, we calculated $A_2^0$ for the Nd ion adjacent to a trace element at the Nd$_2$Fe$_{14}$B surface. The magnetic anisotropy constant $K_u$ originating from the rare earth ion is approximately given by $K_u=-3J(J-1/2)\alpha\langle r^2 \rangle A_2^0$ when the exchange field acting on the 4f electrons in the rare earth ion is sufficiently strong. Here, $J$ is the angular momentum of the 4f electronic system, $\alpha$ is the Stevens factor characterizing the rare earth ion, and $\langle r^2 \rangle$ is the spatial average of $r^2$, as given by Eq. (\[eq3\]) below. For the Nd ion, $J=9/2$ and $\alpha$ is negative, so that positive $A_2^0$ leads to positive $K_u$. The physical role of $A_2^0$ is to reflect the electric field from the surrounding charge distribution acting on the 4f electrons, whose spatial distribution differs from a spherical one due to the strong $L$-$S$ coupling. 
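As an illustration of the sign convention in this relation, the expression for $K_u$ can be evaluated directly; in the sketch below, the Stevens factor and $\langle r^2\rangle$ are standard tabulated values for Nd$^{3+}$, quoted as assumptions for illustration rather than results of this work.

```python
# Sign check for K_u = -3 J (J - 1/2) alpha <r^2> A_2^0 applied to the Nd ion.
# J, alpha (Stevens factor) and <r^2> are standard tabulated Nd3+ values,
# used here as illustrative assumptions.
J = 9.0 / 2.0           # total angular momentum of the Nd3+ 4f shell
alpha = -7.0 / 1089.0   # Stevens factor (negative for Nd3+)
r2 = 1.001              # <r^2> of the 4f shell in units of a_B^2 (approximate)

def K_u(A20):
    """Single-ion anisotropy constant; same sign as A20 since alpha < 0."""
    return -3.0 * J * (J - 0.5) * alpha * r2 * A20

# Positive A_2^0 -> positive K_u (uniaxial); negative A_2^0 -> planar anisotropy.
assert K_u(100.0) > 0 and K_u(-100.0) < 0
```

Because $\alpha<0$ for Nd, the prefactor $-3J(J-1/2)\alpha\langle r^2\rangle$ is positive, so $K_u$ and $A_2^0$ always share the same sign, as stated in the text.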
Therefore, to obtain the value of $A_2^0$, one needs the charge distribution surrounding the 4f electronic system, with the following equations:[@a20_1; @a20_2] $$\begin{aligned} A_2^0&=&-\frac{e}{4\pi\epsilon_0}\frac{4\pi a}{5} \int d\bm{R} \rho(\bm{R})Z_2^0(\bm{R})\nonumber\\&& \times \int dr r^2\frac{r_<^2}{r_>^3}4\pi\rho_{4f}(r)/\langle r^2\rangle, \label{eq1}\\ % Z_2^0 (\bm{R})&=&a(3R_z^2-|\bm{R}|^2)/|\bm{R}|^2, \label{eq2}\\ % \langle r^2 \rangle &=&\int dr r^2 4\pi\rho_{4f}(r)r^2, \label{eq3}\end{aligned}$$ where $a=(1/4)(5/\pi)^{1/2}$, $r_<=\min(r,|\bm{R}|)$, and $r_>=\max(r,|\bm{R}|)$. Here, $\rho_{4f}(r)$ is the radial part of the 4f electron probability density in the Nd ion, and $\rho(\bm{R})$ is the charge density including the nuclei and electrons. The integration range of $\bm{R}$ in Eq. (\[eq1\]) is a sphere with a radius of 70[Å]{} centered on the surface Nd site. Here, the valence electron density was calculated by VASP 4.6, and $\rho_{4f}(r)$ was calculated for an isolated Nd atom. In $\rho(\bm{R})$, the core electrons (including the 4f electrons) forming the pseudo-potentials in VASP were treated as point charges, as were the nuclei. RESULTS AND DISCUSSION ====================== As shown in a previous paper,[@r12] our computational procedure produces $A_2^0$ values for bulk Nd$_2$Fe$_{14}$B reasonably consistent with those obtained using the full-potential linear-muffin-tin-orbital method.[@r13] In addition, our calculated $A_2^0$ for the Nd ion at the (001) surface of Nd$_2$Fe$_{14}$B takes a negative value around $-300\,\mathrm{K}/a_B^2$, where $a_B$ represents the Bohr radius. 
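The structure of the radial integral above can be illustrated numerically. The sketch below evaluates the factor $\int dr\, r^2 (r_<^2/r_>^3)\, 4\pi\rho_{4f}(r)/\langle r^2\rangle$ for a charge at distance $R$ from the Nd site, using a hydrogen-like model for $\rho_{4f}$ (an assumption for illustration; the actual $\rho_{4f}$ is computed for an isolated Nd atom).

```python
import numpy as np

# Radial factor of the A_2^0 integral for a single charge at distance R.
# The 4f density is a hydrogen-like model (illustrative assumption only).
r = np.linspace(1e-4, 40.0, 20000)
dr = r[1] - r[0]
rho = r**6 * np.exp(-2.0 * r)                        # model 4f radial density shape
rho /= np.sum(4.0 * np.pi * r**2 * rho) * dr         # normalize: int 4 pi r^2 rho dr = 1
r2_mean = np.sum(4.0 * np.pi * r**2 * rho * r**2) * dr   # <r^2>

def radial_factor(R):
    """int dr r^2 (r_<^2 / r_>^3) 4 pi rho(r) / <r^2> by a simple Riemann sum."""
    r_lt, r_gt = np.minimum(r, R), np.maximum(r, R)
    return np.sum(r**2 * (r_lt**2 / r_gt**3) * 4.0 * np.pi * rho) * dr / r2_mean

# Well outside the 4f shell the factor reduces to the classical 1/R^3 law, so a
# distant point charge contributes A_2^0 proportional to q (3 cos^2 theta - 1) / R^3.
assert abs(radial_factor(30.0) * 30.0**3 - 1.0) < 1e-2
```

The $1/R^3$ far-field limit is why the point-charge picture invoked later in the discussion is a useful qualitative guide, while for charges overlapping the 4f shell ($r_< \neq r$) the full integral must be kept.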
This agrees with the value calculated using the full-potential linearized augmented plane wave plus local orbitals (APW+lo) method implemented in the WIEN2k code.[@r9] Therefore, our numerical methods described in Section II should be sufficiently reliable to quantitatively evaluate the influence of a trace element on the $A_2^0$ acting on the Nd ion at the surface of Nd$_2$Fe$_{14}$B. ![\[f3\](a) The $r$ dependence of the crystal field parameter $A_2^0$ and the total energy for O addition on the (001) surface of Nd$_2$Fe$_{14}$B for $\theta=0^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b) The $\theta$ dependence of $A_2^0$ for $r=$ 2.2[Å]{}, 2.5[Å]{}, 2.7[Å]{}, and 3.0[Å]{}. ](fig/rfig02.pdf){width="8cm"} Figure \[f3\](a) shows the variations in $A_2^0$ and the electronic total energy as the O atom approaches the (001) surface Nd ion at angle $\theta=0^\circ$ (see Fig. \[f1\](a)). We confirm, as predicted by previous work,[@r9] that $A_2^0$ takes a negative value when O is positioned at $r>$ 3.5[Å]{} from the surface. In this situation, the total energy is confirmed to be almost equal to the sum of the energies of the separated (001) slab and O atom, which are $-561.838$ and $-1.826$ eV, respectively. Thus, deviations from the sum of these values represent the interaction energy between the slab and the O atom. When O nears the Nd ion at $\theta=0^\circ$, $A_2^0$ becomes positive and increases to a peak value of around $1000\,\mathrm{K}/a_B^2$ at $r=$ 2.4[Å]{}. Interestingly, with further decrease of $r$, $A_2^0$ abruptly drops to negative values. 
![\[f4\]Calculated valence electron density distributions for O addition on the (001) surface for $r=$ (a) 4.0[Å]{}, (b) 2.4[Å]{}, and (c) 1.6[Å]{}, together with schematics corresponding to these three cases.](fig/rfig03.pdf){width="8.5cm"} To understand these behaviors, we show in Fig. \[f4\] the calculated electron density distributions for three values of $r$, together with schematics corresponding to these cases. The increase of $A_2^0$ with decreasing $r$ for $r>2.4$[Å]{} can be explained by the change in the 5d electron distribution surrounding the 4f electrons of the Nd ion. That is, the 5d electron cloud extends towards the O atom through hybridization (Fig. \[f4\](b)), which repositions the 4f electron cloud within the c-plane so as to avoid the repulsive force from the 5d electrons; this produces positive values of $A_2^0$. The decreasing behavior for $r<2.2$[Å]{} is attributed to the influence of the positively charged nucleus of the O atom exceeding the hybridization effect of the valence electrons (Fig. \[f4\](c)); the attractive Coulomb force from the O nucleus can rotate the 4f electron cloud so as to minimize the electrostatic energy, resulting in negative values of $A_2^0$. The variation of the electronic total energy in Fig. \[f3\](a) indicates a stable distance around $r=$ 2.0[Å]{}, at which the value of $A_2^0$ is still positive. However, the value of $A_2^0$ readily becomes negative with only a 0.2[Å]{} decrease from the equilibrium position, at an energy cost of less than 1 eV. This deviation can take place due to stresses, defects, or deformations around grain boundaries in an actual system. This implies that the value of $A_2^0$ may be negative at real grain surfaces adjacent to GB phases, rather than to vacuum space. In Fig. \[f3\](b), we show the $\theta$-dependence of $A_2^0$ for various values of $r$. 
Starting from positive values at $\theta=0^\circ$, $A_2^0$ decreases monotonically with increasing $\theta$ and reaches negative values for $\theta>50^\circ$. It is meaningful to compare these behaviors with those of the point charge model. In the point charge model, $A_2^0$ is proportional to $Z_2^0\propto -eq(3\cos^2\theta-1)\ (e>0)$, where $q$ is the valence number of the point charge. Thus, the gross features of these results can be captured by the point charge model if one assumes a negative charge ($q<0$) for the O ion. Note, however, that we do not clearly identify a negative charge on the O atom when $r\geq 2.2$[Å]{}. Therefore, the oxygen atom should be regarded as influencing the redistribution of valence electrons within the Nd atomic sphere such that the point charge model is applicable as if O were negatively charged. This can be regarded as a sort of screening effect.[@r13_2] Similar explanations were proposed for the N effects in the Re$_2$Fe$_{17}$N$_3$[@r14] and ReFe$_{12}$N[@r15; @r16] systems, where the N atoms changed the magnetocrystalline anisotropy energies. ![\[f5\](a) The $r$ dependence of the crystal field parameter $A_2^0$ and the total energy for O addition on the (110) surface of Nd$_2$Fe$_{14}$B for $\theta=90^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b) The $\theta$ dependence of $A_2^0$ for $r=$ 2.2[Å]{}, 2.5[Å]{}, 2.7[Å]{}, and 3.0[Å]{}.](fig/rfig04.pdf){width="8cm"} Figure \[f5\](a) shows the $r$-dependence of $A_2^0$ and the electronic total energy as the O atom approaches the Nd ion in the (110) surface at an angle $\theta=90^\circ$ from the c-axis. Contrary to the case of the (001) surface, $A_2^0$ is positive when Nd ions in the (110) surface are exposed to vacuum and the O atom is far from the surface. In this case, since the thickness of the vacuum space is only 6.22[Å]{}, the total energy exhibits symmetric variation with respect to $r=3.1$[Å]{}. 
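The point-charge angular shape invoked above, $A_2^0 \propto -q(3\cos^2\theta-1)$, can be sketched directly; the effective charge $q$ below is an assumed illustrative value, used only for the qualitative comparison made in the text.

```python
import math

# Angular shape of A_2^0 in the point-charge picture: A_2^0 ∝ -q (3 cos^2 θ - 1).
# q < 0 mimics an effectively negative O ion, as argued in the text (assumption).
def a20_shape(theta_deg, q=-1.0):
    t = math.radians(theta_deg)
    return -q * (3.0 * math.cos(t) ** 2 - 1.0)

# For q < 0: positive along the c-axis (θ = 0°), negative in the basal plane
# (θ = 90°), with a sign change at the magic angle ≈ 54.7°.
magic = math.degrees(math.acos(math.sqrt(1.0 / 3.0)))
assert a20_shape(0.0) > 0 and a20_shape(90.0) < 0
assert abs(a20_shape(magic)) < 1e-9
```

The zero crossing at $\theta \approx 54.7^\circ$ is consistent with the computed sign change of $A_2^0$ near $\theta \approx 50^\circ$ for O on the (001) surface.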
For this reason, we present total energies only within the range $r\leq 3.1$[Å]{}. When the O atom approaches the Nd ion, the value of $A_2^0$ becomes negative for $r<$ 3.5[Å]{} and stabilizes around $r=2.0$[Å]{}, where the value remains negative. Notably, the value of $A_2^0$ becomes positive with only a 0.2[Å]{} decrease in $r$. This is due to the attraction of the O nucleus on the 4f electron cloud, which aligns the 4f moment with the direction of the c-axis. The energy cost of this reduction of $r$ is less than 1 eV. In Fig. \[f5\](b), we show the $\theta$-dependence of the values of $A_2^0$ for varying $r$. As shown in Fig. \[f5\](a), the values of $A_2^0$ for $\theta=90^\circ$ are negative in the range 2.2[Å]{}$<r<$3.0[Å]{}. As for the (001) surface in Fig. \[f3\](b), $A_2^0$ increases with decreasing $\theta$. This can also be understood from the O atom’s redistribution of the valence electrons within the Nd atomic sphere, as if a negative ion existed in the direction of the O atom. The reason for the slight decrease in $A_2^0$ at $\theta=0^\circ$ is not clear at this stage. ![\[f6\](a) The $r$ dependence of the crystal field parameter $A_2^0$ and the total energy for Cu addition on the (001) surface of Nd$_2$Fe$_{14}$B for $\theta=0^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b) The $\theta$ dependence of $A_2^0$ for $r=$ 2.7[Å]{}, 3.0[Å]{}, and 3.5[Å]{}.](fig/rfig05.pdf){width="8cm"} Next we proceed to the case of Cu addition. Figure \[f6\](a) shows the $r$-dependence of $A_2^0$ and the total energy for Cu addition with $\theta=0^\circ$. The behaviors of $A_2^0$ and the total energy are almost the same as those for O addition, shown in Fig. \[f3\](a). However, the variations are less dramatic than in the O case. This may reflect the weaker hybridization of the Nd atom with Cu than with O. 
The peak position in $r$ (around 3.0[Å]{}) is greater than that for O addition, which may be due to the larger atomic radius of Cu. In this case, a decrease of about 0.5[Å]{} in $r$ from the equilibrium position causes $A_2^0$ to become negative. This shift costs around 0.3 eV in energy, which is less than that for O addition. The $\theta$-dependence of $A_2^0$ for various values of $r$ is shown in Fig. \[f6\](b). The behavior can be explained through the hybridization between the valence electrons of the Nd and Cu atoms, as in the case of O addition.\ ![(a) The $r$ dependence of the crystal field parameter $A_2^0$ and the total energy for Cu addition on the (110) surface of Nd$_2$Fe$_{14}$B for $\theta=90^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b) The $\theta$ dependence of $A_2^0$ for $r=$ 2.7[Å]{}, 3.0[Å]{}, and 3.5[Å]{}.[]{data-label="f7"}](fig/rfig06.pdf){width="8cm"} Figure \[f7\](a) shows the $r$-dependence of $A_2^0$ and the electronic total energy for Cu addition to the (110) surface at angle $\theta=90^\circ$. Contrary to the case of O addition in Fig. \[f5\](a), $A_2^0$ maintains positive values for all $r$. This indicates weak interactions between the Nd and Cu atoms. The steep increase of $A_2^0$ with decreasing $r$ for $r<2.6$[Å]{} reflects the significant effect of the Cu nucleus on the 4f electron cloud. The weak hybridization between the Nd and Cu atoms for $r>2.6$[Å]{} can be seen in the $\theta$-dependence of $A_2^0$, shown in Fig. \[f7\](b). SUMMARY ======= Motivated by recent theoretical work[@r10] demonstrating that a surface atomic layer with a negative magnetic anisotropy constant $K_u$ can drastically decrease the coercivity $H_c$, we evaluated the influence of the trace elements O and Cu on the crystal field parameter $A_2^0$ of the Nd ion at both the (001) and (110) surfaces of the Nd$_2$Fe$_{14}$B grain. 
For both O and Cu addition to the (001) surface, decreasing the distance $r$ to the surface at constant $\theta=0^\circ$ changes $A_2^0$ from negative to positive values. At the equilibrium position, the value of $A_2^0$ is found to remain positive. With further decrease of $r$, $A_2^0$ abruptly becomes negative again, at an energy cost of less than 1 eV. This is due to the positive charge of the nucleus in O and Cu, whose influence grows as the distance from Nd decreases. Therefore, the value of $A_2^0$ may become negative due to stresses, defects, or deformations around grain boundaries adjacent to the GB phases in an actual system. The $\theta$-dependence of $A_2^0$ can be roughly expressed as $3\cos^2\theta-1$ for both O and Cu additions. This suggests that these elements redistribute the valence electrons within the Nd atomic sphere such that the negative point charge model is applicable, as if these species had a negative charge. We observed that the strength of these addition effects is larger for O than for Cu. This different behavior of the O and Cu atoms is clearly seen in the case of the (110) surface. For O addition, the $r$-dependence of $A_2^0$ is opposite to that for the (001) surface, as expected from geometrical effects. In fact, the surface $K_u$ potentially decreases to negative values at the equilibrium position of O. For Cu addition to the (110) surface, however, the variation is small compared with O addition, and $A_2^0$ remains positive for all $r$. Therefore, O is expected to produce negative interfacial $K_u$ more frequently than Cu when it approaches the Nd ion at the grain surface. The analysis of the total energy showed that locally stable positions of the trace element exist for the special configurations considered here. 
However, due to the complex interatomic interactions and local stresses in real multi-grain structures of Nd-Fe-B magnets, many possible configurations exist in the local crystalline structure near the interface between the GB and Nd$_2$Fe$_{14}$B phases. In this sense, the ($r$, $\theta$) dependences of the local $K_u$ (i.e., $A_2^0$) shown in this study may apply when we consider the effect of individual atoms adjacent to Nd ions at the interfaces of GBs. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== This work was supported by CREST-JST. [99]{} H. Kronmüller, K.-D. Durst, and G. Martinek, J. Magn. Magn. Mater. [**69**]{}, 149 (1987). J. F. Herbst, Rev. Mod. Phys. [**63**]{}, 819 (1991). A. Sakuma, S. Tanigawa, and M. Tokunaga, J. Magn. Magn. Mater. [**84**]{}, 52 (1990). K. Hono and H. Sepehri-Amin, Scripta Mater. [**67**]{}, 530 (2012). T. G. Woodcock, Y. Zhang, G. HrKac, G. Ciuta, N. M. Dempsey, T. Schrefl, O. Gutfleisch, and D. Givord, Scripta Mater. [**67**]{}, 536 (2012). M. Sagawa, S. Hirosawa, H. Yamamoto, S. Fujimura, and Y. Matsuura, Jpn. J. Appl. Phys. [**26**]{}, 785 (1987). J. Fidler and K. G. Knoch, J. Magn. Magn. Mater. [**80**]{}, 48 (1989). H. Sepehri-Amin, T. Ohkubo, T. Shima, and K. Hono, Acta Mater. [**60**]{}, 819 (2012). M. Yamada, H. Kato, H. Yamamoto, and Y. Nakagawa, Phys. Rev. B [**38**]{}, 620 (1988). M. Richter, P. M. Oppeneer, H. Eschrig, and B. Johansson, Phys. Rev. B [**46**]{}, 13919 (1992). M. Fähnle, K. Hummler, M. Liebs, and T. Beuerle, Appl. Phys. A [**57**]{}, 67 (1993). H. Moriya, H. Tsuchiura, and A. Sakuma, J. Appl. Phys. [**105**]{}, 07A740 (2009). C. Mitsumata, H. Tsuchiura, and A. Sakuma, Appl. Phys. Express [**4**]{}, 113002 (2011). P. E. Blöchl, Phys. Rev. B [**50**]{}, 17953 (1994). J. P. Perdew, J. A. Chevary, S. H. Vosko, K. A. Jackson, M. R. Pederson, D. J. Singh, and C. Fiolhais, Phys. Rev. B [**46**]{}, 6671 (1992). K. Momma and F. Izumi, J. Appl. Crystallogr. [**44**]{}, 1272 (2011). T. Suzuki, Y. 
Toga, and A. Sakuma, J. Appl. Phys. [**115**]{}, 17A703 (2014). K. Hummler and M. Fähnle, Phys. Rev. B [**53**]{}, 3290 (1996). R. Skomski and J. M. D. Coey, J. Magn. Magn. Mater. [**140**]{}, 965 (1995). M. Yamaguchi and S. Asano, J. Phys. Soc. Jpn. [**63**]{}, 1071 (1994). A. Sakuma, J. Phys. Soc. Jpn. [**61**]{}, 4119 (1992). T. Miyake, K. Terakura, Y. Harashima, H. Kino, and S. Ishibashi, J. Phys. Soc. Jpn. [**83**]{}, 043702 (2014).
Jabal Omar Jabal Omar (جبل عمر) is a neighbourhood located in Makkah, Saudi Arabia, south of the Al Haram district. Description Jabal Omar is named for the hill Mount Omar, which traditionally stood on the southern outskirts of Mecca, and currently consists of a group of old housing units that were built haphazardly over the years. The Jabal Omar area currently lacks basic facilities, particularly sanitation. However, in late 2006, a clearance program was begun in Jabal Omar to provide the necessary space for the establishment of the Jabal Omar project. Jabal Omar is in the Sub Municipality of Ajyad (بلدية أجياد). References Category:Neighborhoods of Mecca
Rajki Rajki is a village in the administrative district of Gmina Bielsk Podlaski, within Bielsk County, Podlaskie Voivodeship, in north-eastern Poland. It lies approximately south of Bielsk Podlaski and south of the regional capital Białystok. See also Béla Rajki, Hungarian swimming coach and water polo coach References Rajki
Neighbors (novel) Neighbors is a 1980 novel by American author Thomas Berger. It is a satire of manners and suburbia, and a comment on emotional alienation with echoes of the works of Franz Kafka. Earl Keese’s character and situation begin realistically but become increasingly fantastic. Keese is an Everyman whose life is swiftly turned upside down. As he scrambles to reclaim his sense of normalcy and dignity, he comes to think that everyone, including his family, is against him. Plot summary Earl Keese is a middle-aged, middle-class suburbanite with a wife, Enid, and teenage daughter, Elaine. Earl is content with his dull, unexceptional life, but this changes when a younger, less sophisticated couple, Harry and Ramona, move in next door. Harry is physically intimidating and vulgar; Ramona is sexually aggressive, and both impose themselves on the Keese household. Their free-spirited personalities and overbearing and boorish behavior endear them to Enid and Elaine, but Earl fears that he is losing control of his life and his family. Over the course of one night, the antagonism between Earl and his new neighbors escalates into suburban warfare. Analysis Berger's off-kilter tone blurs the line between paranoia and reality, defense and offense, action and intention, ally and adversary. Harry and Ramona seem to constantly undergo changes in their respective personalities and Enid and Elaine appear to choose sides against Earl at random, but Berger also implies that it is Earl’s sense of reality that is skewed and deluded. Earl is frustrated because he can never prove that Harry and Ramona are doing anything wrong on purpose, and the more he attempts to expose them, the more ridiculous he makes himself. Yet Earl comes to realize that Harry and Ramona have served as the crucible of his redemption: being forced out of his comfort zone of complacency and habit has provided him with an excitement he has never known before. 
As Earl comes to recognize value in his neighbors, he realizes that his wife is a distrustful alcoholic, his daughter is an underachiever and petty thief, and that his new neighbors can provide him with an escape from his existence of insignificance and emotional impotence. From a nightmare comes hope and a strengthened resolve to survive. In his study of Berger, writer Stanley Trachtenberg describes Neighbors as an existentialist parable in which "the loss of coherence between various aspects of self comically fragments the notion of identity and thus fictionalizes the existential concept of authenticity as a shaping condition of it." In a 1980 newspaper interview, Berger said of Neighbors, "As my 10th novel, begun at the close of my 20th year as a published novelist, it is appropriately a bizarre celebration of whatever gift I have, the strangest of all my narratives . . . the morality of this work, like that of all my other volumes, will be in doubt until the end of the narrative – and perhaps to the end of eternity, now that I think about it." Characters Earl Keese Enid Keese Elaine Keese Harry Ramona Adaptations A film version was released in 1981, starring John Belushi and Dan Aykroyd. It was also adapted into a play by Eve Summer, which premiered in Worcester, Massachusetts in 2007. References External links NPR.org | Tom Perrotta Hails Suburban Sendup 'Neighbors' Category:1980 American novels Category:American novels adapted into films Category:American novels adapted into plays Category:Novels by Thomas Berger (novelist)
--- abstract: 'Spermatozoa self-propel by propagating bending waves along an active elastic flagellum. The structure in the distal flagellum is likely incapable of actively bending, and as such is largely neglected. Through elastohydrodynamic modeling we show that an inactive distal region confers a significant propulsive advantage when compared with a fully active flagellum of the same length. The optimal inactive length, typically 2–5% (but up to 37% in extremes), depends on both wavenumber and viscous-elastic ratio. Potential implications in evolutionary biology and clinical assessment are discussed.' author: - 'Cara V. Neal' - 'Atticus L. Hall-McNair' - 'Meurig T. Gallagher' - 'Jackson Kirkman-Brown' - 'David J. Smith' date: October 2019 title: 'Doing more with less: the flagellar end piece enhances the propulsive effectiveness of spermatozoa' --- Spermatozoa, alongside their crucial role in sexual reproduction, are a principal motivating example of inertialess propulsion in the very low Reynolds number regime. The time-irreversible motion required for effective motility is achieved through the propagation of bending waves along the eukaryotic axoneme, which forms the active elastic internal core of the slender flagellum. While sperm morphology varies significantly between species [@austin1995evolution; @fawcett1975mammalian; @werner2008insect; @mafunda2017sperm; @nelson2010tardigrada; @anderson1975form], there are clear conserved features which can be seen in humans, most mammals, and also our evolutionary ancestors [@cummins1985mammalian]. 
In gross structural terms, sperm comprise (i) the head, which contains the genetic cargo; (ii) the midpiece of the flagellum, typically a few microns in length, containing the ATP-generating mitochondria; (iii) the principal piece of the flagellum, typically 40–50$\,\mu$m in length (although much longer in some species [@bjork2006intensity]), the core of which is a “9+2” axoneme, producing and propagating active bending waves through dynein-ATPase activity [@machin1958wave]; and (iv) the end piece, typically a few microns in length, which consists of singleton microtubules only [@zabeo2019axonemal]. Lacking the predominant “9+2” axonemal structure, it appears unlikely that the end piece is a site of molecular motor activity. Since the end piece is unactuated, we will refer to it as ‘inactive’, noting however that this does not mean it is necessarily ineffective. Correspondingly, the actuated principal piece will be referred to as ‘active’. A detailed review of human sperm morphology can be found in [@gaffney2011mammalian; @lauga2009hydrodynamics]. While the end piece can be observed through transmission electron and atomic force microscopy [@fawcett1975mammalian; @ierardi2008afm], live imaging to determine its role in propelling the cell is currently challenging. Furthermore, because the end piece has been assumed to play no role in propelling the cell, it has received relatively little attention. However, we know that the waveform has a significant impact on propulsive effectiveness, and moreover changes to the waveform have an important role in enabling cells to penetrate the highly viscous cervical mucus encountered in internal fertilization [@smith2009bend]. 
This leads us to ask: *does the presence of a mechanically inactive region at the end of the flagellum help or hinder the cell’s progressive motion?* The emergence of elastic waves on the flagellum can be described by a constitutively linear, geometrically nonlinear filament, with the addition of an active moment per unit length $m$, which models the internal sliding produced by dynein activity, and a hydrodynamic term $\bm{f}$ which describes the force per unit length exerted by the filament onto the fluid. Many sperm have approximately planar waveforms, especially upon approaching and collecting at surfaces [@gallagher2018casa; @woolley2003motility]. As such, their shape can be fully described by the angle made between the tangent and the head centreline, denoted $\theta$, as shown in Fig. \[fig:sperm-schematic\]. Following [@moreau2018asymptotic; @hall2019efficient] we parameterize the filament by arclength $s$, with $s=0$ corresponding to the head-flagellum joint and $s=L^{*}$ to the distal end of the flagellum, and apply force- and moment-free boundary conditions at $s=L^{*}$ to get $$E(s)\,\partial_s \theta(s,t) - \bm{e}_3\cdot\int_s^{L^{*}} \partial_{s'} \bm{X}(s',t) \times \left(\int_{s'}^{L^{*}} \bm{f}(s'',t) ds''\right) ds' - \int_s^{L^{*}} m(s',t)\,ds' =0, \label{eq:elasticity0}$$ with the elastic stiffness given, following [@gaffney2011mammalian], by $$E(s)= \begin{cases} (E_p^*-E_d^*)\left( \frac{s-s_d^*}{s_d^*}\right)^2 + E_d^* & s \leq s_d^*, \\ E_d^* & s>s_d^*, \end{cases}$$ where the parameters $E_p^* = 8\times10^{-21}$Nm$^2$, $E_d^*=2.2\times10^{-21}\,$Nm$^2$ and $s_d^*=39\,\mu$m$=3.9\times10^{-5}\,$m have been chosen to model the tapering structure of mammalian sperm flagella and to match experimental stiffness measurements [@Lesich2004; @Lesich2008]. 
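The tapering stiffness profile just defined is straightforward to sketch numerically; the following uses the parameter values quoted in the text.

```python
# Tapering flagellar bending stiffness E(s), with parameter values from the
# text: E_p = 8e-21 N m^2, E_d = 2.2e-21 N m^2, s_d = 39 um.
E_p, E_d, s_d = 8e-21, 2.2e-21, 39e-6

def E(s):
    """Bending stiffness (N m^2) at arclength s (m): quadratic taper, then constant."""
    if s <= s_d:
        return (E_p - E_d) * ((s - s_d) / s_d) ** 2 + E_d
    return E_d

assert abs(E(0.0) - E_p) < 1e-27           # proximal stiffness E_p at the head
assert E(s_d) == E_d and E(50e-6) == E_d   # constant distal stiffness beyond s_d
```

The quadratic form interpolates smoothly from the stiff proximal flagellum ($E_p^*$ at $s=0$) down to the compliant distal value $E_d^*$ reached at $s=s_d^*$, beyond which the stiffness is constant.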
The position vector $\bm{X}=\bm{X}(s,t)$ describes the flagellar waveform at time $t$, so that $\partial_s\bm{X}$ is the tangent vector, and $\bm{e}_3$ is a unit vector pointing perpendicular to the plane of beating. Integrating by parts leads to the elasticity integral equation $$E(s)\,\partial_s \theta(s,t) + \bm{e}_3\cdot\int_s^{L^{*}} (\bm{X}(s',t)-\bm{X}(s,t)) \times \bm{f}(s',t) \, ds' - \int_s^{L^{*}} m(s',t)\,ds' =0. \label{eq:elasticity}$$ The active moment density can be described to a first approximation by a sinusoidal traveling wave $m(s,t)=m_0^* \cos(k^*s-\omega^* t)$, where $k^*$ is wavenumber and $\omega^*$ is radian frequency. The inactive end piece can be modeled by taking the product with a Heaviside function, so that $m(s,t) = m_0^* \cos(k^*s-\omega^* t)H(\ell^* -s)$ where $0<\ell^*\leqslant L^*$ is the length of the active tail segment. At very low Reynolds number, neglecting non-Newtonian influences on the fluid, the hydrodynamics are described by the Stokes flow equations $$-\bm{\nabla}p + \mu^* \nabla^2\bm{u} = \bm{0}, \quad \bm{\nabla}\cdot \bm{u} = 0,$$ where $p=p(\bm{x},t)$ is pressure, $\bm{u}=\bm{u}(\bm{x},t)$ is velocity and $\mu^*$ is dynamic viscosity. These equations are augmented by the no-slip, no-penetration boundary condition ${\bm{u}(\bm{X}(s,t),t)=\partial_t\bm{X}(s,t)}$, i.e. the fluid in contact with the filament moves at the same velocity as the filament. A convenient and accurate numerical method to solve these equations for biological flow problems with deforming boundaries is based on the ‘regularized stokeslet’ [@cortez2001method; @cortez2005method], i.e. 
the solution to the exactly incompressible Stokes flow equations driven by a spatially-concentrated but smoothed force $$-\bm{\nabla}p + \mu^* \nabla^2\bm{u} + \psi_\varepsilon(\bm{x},\bm{y})\bm{e}_3 = 0, \quad \bm{\nabla}\cdot \bm{u} = 0,$$ where $\varepsilon\ll 1$ is a regularization parameter, $\bm{y}$ is the location of the force, $\bm{x}$ is the evaluation point and $\psi_\varepsilon$ is a smoothed approximation to a Dirac delta function. The choice $$\psi_\varepsilon(\bm{x},\bm{y})=15\varepsilon^4/r_\varepsilon^{7},$$ leads to the regularized stokeslet [@cortez2005method] $$S_{ij}^\varepsilon(\bm{x},\bm{y})=\frac{1}{8\pi\mu}\left(\frac{\delta_{ij}(r^2+2\varepsilon^2)+r_ir_j}{r_\varepsilon^3} \right) ,$$ where $r_i=x_i-y_i$, $r^2=r_i r_i$, $r_\varepsilon^2=r^2+\varepsilon^2$. The flow $u_j(\bm{x},t)$ produced by a filament $\bm{X}(s,t)$ exerting force per unit length $\bm{f}(s,t)$ is then given by the line integral $\int_0^{L^*} S_{jk}^\varepsilon(\bm{x},\bm{X}(s,t))f_k(s,t)\,ds$. The flow due to the surface of the sperm head $\partial H$, exerting force per unit area $\bm{\varphi}(\bm{Y},t)$ for $\bm{Y}\in\partial H$, is given by the surface integral $\iint_{\partial H} S_{jk}^\varepsilon(\bm{x},\bm{Y})\varphi_k(\bm{Y})\,dS_{\bm{Y}}$, yielding the boundary integral equation [@smith2009boundaryelement] for the hydrodynamics, namely $$u_j(\bm{x},t)=\int_0^{L^*} S^\varepsilon_{jk}(\bm{x},\bm{X}(s,t))f_k(s,t)\,ds+\iint_{\partial H} S_{jk}^\varepsilon(\bm{x},\bm{Y})\varphi_k(\bm{Y},t)\,dS_{\bm{Y}}. \label{eq:flow}$$ The position and shape of the cell can be described by the location $\bm{X}_0(t)$ of the head-flagellum join and the waveform $\theta(s,t)$, so that the flagellar curve is $$\bm{X}(s,t)=\bm{X}_0(t)+\int_0^s [\cos\theta(s',t),\sin\theta(s',t),0]^T \,ds'. 
\label{eq:geometry}$$ Differentiating with respect to time, the flagellar velocity is then given by $$\bm{u}(\bm{X}(s,t),t)=\dot{\bm{X}}_0(t)+\int_0^s\partial_t\theta(s',t)[-\sin\theta(s',t),\cos\theta(s',t),0]^T \,ds'. \label{eq:kinematic1}$$ Modeling the head as undergoing rigid body motion around the head-flagellum joint, the surface velocity of a point $\bm{Y}\in\partial H$ is given by $$\bm{u}(\bm{Y},t)=\dot{\bm{X}}_0(t)+\partial_t \theta(0,t)\,\bm{e}_3 \times (\bm{Y}-\bm{X}_0). \label{eq:kinematic2}$$ Eqs. (\[eq:kinematic1\]) and (\[eq:kinematic2\]) couple with fluid mechanics (Eq. (\[eq:flow\])), active elasticity (Eq. (\[eq:elasticity\])), and total force and moment balance across the cell to yield a model for the unknowns $\theta(s,t)$, $\bm{X}_0(t)$, $\bm{f}(s,t)$ and $\bm{\varphi}(\bm{Y},t)$. Non-dimensionalising with lengthscale $L^*$, timescale $1/\omega^*$ and force scale $\mu^*\omega^* {L^*}^2$ yields the equation in scaled variables (dimensionless variables denoted with $\,\hat{}\,$) $$\partial_{\hat{s}} \theta(\hat{s},\hat{t})+ \bm{e}_3\cdot\mathcal{S}^4\int_{\hat{s}}^1 (\hat{\bm{X}}(\hat{s}',\hat{t})\,-\hat{\bm{X}}(\hat{s},\hat{t})) \times \hat{\bm{f}}(\hat{s}',\hat{t}) \,d\hat{s}' - \mathcal{M}\int_{\hat{s}}^1 \cos(\hat{k}\hat{s}'-\hat{t})H(\ell-\hat{s}') \,d\hat{s}' =0, \label{eq:elasticityND}$$ where $\mathcal{S}=L^*(\mu^*\omega^*/E^{L})^{1/4}$ is a dimensionless group comparing viscous and elastic forces (related, but not identical to, the commonly-used ‘sperm number’), $\mathcal{M}=m_0^*{L^*}^2/E^{L}$ is a dimensionless group comparing active and elastic forces, and $\ell=\ell^*/L^*$ is the dimensionless length of the active segment. Here $E^L$ is the stiffness at the distal tip of the flagellum ($\hat{s}=1$) and the dimensionless wavenumber is $\hat{k}=k^*L^*$. The problem is numerically discretised as described by Hall-McNair *et al*. [@hall2019efficient], accounting for non-local hydrodynamics via the method of regularized stokeslets [@cortez2005method]. 
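The regularized stokeslet kernel underlying this discretisation has the closed form given earlier in the text, so it can be sketched compactly; the regularization parameter and viscosity below are arbitrary illustrative values, not those used in the simulations.

```python
import numpy as np

# Regularized stokeslet kernel (Cortez et al.), as written in the text:
# S_ij = [delta_ij (r^2 + 2 eps^2) + r_i r_j] / (8 pi mu r_eps^3),
# with r_eps^2 = r^2 + eps^2. eps and mu are illustrative values only.
def reg_stokeslet(x, y, eps=1e-2, mu=1.0):
    rv = np.asarray(x, float) - np.asarray(y, float)
    r2 = rv @ rv
    re3 = (r2 + eps**2) ** 1.5
    return (np.eye(3) * (r2 + 2.0 * eps**2) + np.outer(rv, rv)) / (8.0 * np.pi * mu * re3)

S = reg_stokeslet([1.0, 0.0, 0.0], [0.0, 0.0, 0.0])
assert S.shape == (3, 3) and np.allclose(S, S.T)           # symmetric 3x3 tensor
assert np.isfinite(reg_stokeslet([0.0]*3, [0.0]*3)).all()  # regular as r -> 0
```

Unlike the singular stokeslet, the kernel stays finite as $\bm{x}\to\bm{y}$, which is what allows the line and surface integrals in the boundary integral equation to be evaluated directly on the flagellum and head surface.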
This framework is modified to take into account the presence of the head via the nearest-neighbor discretisation of Gallagher & Smith [@gallagher2018meshfree]. The head-flagellum coupling is enforced via the dimensionless moment balance boundary condition $$\partial_{\hat{s}}\theta(0,\hat{t}\,)-\bm{e}_3\cdot\mathcal{S}^4\iint_{\partial \hat{H}}(\hat{\bm{Y}}(\hat{t}\,)-\hat{\bm{X}}_0(\hat{t}\,))\times\hat{\bm{\varphi}}(\hat{\bm{Y}},\hat{t}\,)\,dS_{\hat{\bm{Y}}}=0.$$ For the remainder of this letter we work with the dimensionless model but, for clarity, drop the $\,\hat{}\,$ notation. The initial value problem for the flagellar trajectory, discretised waveform and force distributions is solved in MATLAB using the built-in solver $\mathtt{ode15s}$. At any point in time, the sperm cell’s position and shape can be reconstructed completely from $\bm{X}_0(t)$ and $\theta(s,t)$ through equation (\[eq:geometry\]).\
*Results.* The impact of the length of the inactive end piece on propulsion is quantified by the swimming speed and efficiency. Velocity along a line (VAL) is used as a measure of swimming speed, calculated via $$\text{VAL}^{(j)} = \|\bm{X}_0^{(j)}-\bm{X}_0^{(j-1)}\| / T ,$$ where $T=2\pi$ is the period of the driving wave and $\bm{X}_0^{(j)}$ represents the position of the head-flagellum joint after $j$ periods. Lighthill efficiency [@lighthill1975mathematical] is calculated as $$\eta^{(j)} = \left(\text{VAL}^{(j)}\right)^2 / \,\overline{W}^{(j)},$$ where $\overline{W}^{(j)} = \left< \int_{0}^{1}\bm{u} \cdot \bm{f}\, ds' \right>$ is the average work done by the cell over the $j$^th^ period. In the following, $j$ is chosen sufficiently large that the cell has established a regular beat before its statistics are calculated ($j=3$ is sufficient for what follows).
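The two performance metrics can be sketched directly from their definitions. The following is a minimal illustration (hypothetical helper names, rectangle rule in $s$ and a simple time average for the work integral), not the code used to produce the letter's results:

```python
import numpy as np

def val(X0_prev, X0_curr, T=2.0 * np.pi):
    """VAL^(j) = ||X0^(j) - X0^(j-1)|| / T, the velocity along a line
    over one period of the driving wave."""
    d = np.asarray(X0_curr, dtype=float) - np.asarray(X0_prev, dtype=float)
    return np.linalg.norm(d) / T

def lighthill_efficiency(val_j, u, f, ds):
    """eta^(j) = (VAL^(j))^2 / Wbar^(j), with Wbar^(j) the period-averaged
    work rate <int_0^1 u.f ds'>; u, f are (n_t, n_s, 3) samples
    over one beat period."""
    work_rate = np.einsum('tsk,tsk->t', np.asarray(u, float),
                          np.asarray(f, float))      # int u.f ds at each time
    Wbar = np.mean(work_rate) * ds                   # time average
    return val_j**2 / Wbar
```

The normalization by $T=2\pi$ reflects the dimensionless timescale $1/\omega^*$, so a cell translating one flagellar length per beat has $\text{VAL}=1/2\pi$.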
![ The effect of the inactive end piece length on swimming speed and efficiency of propulsion at various wavenumbers, along with velocity-optimal waveforms and example data for human sperm. (column 1) Velocity along a line (VAL) versus active length $\ell$ for viscous-elastic parameter choices $\mathcal{S}=18,\,13.5,\,9$, and wavenumbers $k=3\pi,\,4\pi,\,5\pi$; (column 2) Lighthill efficiency versus active length $\ell$ for the same choices of $\mathcal{S}$ and $k$; (column 3) velocity-optimal cell waveforms for each $\mathcal{S}$ and $k$; (column 4) experimental data showing the instantaneous waveform of a human sperm in high viscosity medium, with centerline plotted in purple (tracked with FAST [@gallagher2019rapid]), scale bar denotes 5$\mu$m. []{data-label="fig:results-main"}](results_main_mtg.png){width="90.00000%"} The effects of varying the dimensionless active tail length on sperm swimming speed and efficiency for three choices of dimensionless wavenumber $k$ are shown in Fig. \[fig:results-main\]. Here $ \ell=1 $ corresponds to an entirely active flagellum and $ \ell = 0 $ to an entirely inactive flagellum. Values $ 0.5 \leqslant \ell \leqslant 1 $ are considered so that the resulting simulations produce cells that are likely to be biologically realistic. Higher wavenumbers are considered as they are typical of mammalian sperm flagella in higher viscosity media [@smith2009bend]. Results are calculated by taking $ m_0^* = 0.01 \,\mu^* \omega^* {L^{*}}^2 k / \mathcal{S} $ ($k$ dimensionless, $m_0^*$ dimensional) and hence $ \mathcal{M} = 0.01\, k \mathcal{S}^3 $, the effect of which is to produce waveforms of realistic amplitude across a range of values of $k$ and $\mathcal{S}$. Optimal active lengths for swimming speed, $\ell_{\text{VAL}}$, and efficiency, $\ell_{\eta}$, occur for each parameter pair $(\mathcal{S},k)$; crucially, in all cases considered in Fig. 
\[fig:results-main\], the optima are less than 1, indicating that by either measure some length of inactive flagellum is better than a fully active flagellum. Values of $ \ell_{\text{VAL}} $ and $ \ell_{\eta} $ for the $ (\mathcal{S},k) $ parameter pairs considered in Fig. \[fig:results-main\] are given in Table \[table:lopt\]. Typically $ \ell_{\text{VAL}} \neq \ell_{\eta} $ for a given swimmer. For each metric, the optimum active length remains approximately consistent regardless of the choice of $ \mathcal{S} $ when $ k=3\pi $ or $ 4\pi $. When $k=5\pi$, much higher variability in optimum active length is observed. In Fig. \[fig:results-colormaps\], the relationship between optimum active flagellum length and each of VAL and $ \eta $ is further investigated by simulating cells over a finer gradation in $ \mathcal{S} \in [9,18] $. When $ k=3\pi $ and $ 4\pi $, we again observe that a short inactive distal region is beneficial to the cell regardless of $ \mathcal{S} $. For $ k=5\pi $, there is a clear sensitivity of $ \ell_{\text{VAL}} $ to $ \mathcal{S} $, which is not observed for $ \ell_{\eta} $. In all cases, the optimum values $ \ell_{\text{VAL}} $ and $ \ell_{\eta} $ are strictly less than 1. ![ Normalized VAL (top row) and normalized Lighthill efficiency (bottom row) values for varying $ 0.5 \leqslant \ell \leqslant 1 $ and $ 9 \leqslant \mathcal{S} \leqslant 18 $, for three values of dimensionless wavenumber $ k $. Values in each subplot are normalized with respect to either the maximum VAL or maximum $ \eta $ for each $ k $. []{data-label="fig:results-colormaps"}](results_colormaps.pdf){width="65.00000%"} The waveform and velocity field associated with flagella that are fully-active and optimally-inactive for propulsion are shown in Fig. \[fig:results-streamlines\].
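The parameter-sweep post-processing described above (computing $\mathcal{M}=0.01\,k\mathcal{S}^3$ and extracting optimal active lengths from tabulated sweep values) can be sketched as follows; the function names are hypothetical and the sweep data themselves come from the simulations:

```python
import numpy as np

def active_moment_group(S, k):
    """Dimensionless active-moment group used in the sweeps:
    M = 0.01 * k * S^3, following m0* = 0.01 mu* w* L*^2 k / S
    together with E^L = mu* w* L*^4 / S^4."""
    return 0.01 * k * S**3

def optimal_active_length(ells, metric_values):
    """Return the active length ell maximizing a tabulated metric
    (VAL or eta) over a sweep, as used to extract ell_VAL and ell_eta."""
    ells = np.asarray(ells, dtype=float)
    metric_values = np.asarray(metric_values, dtype=float)
    return float(ells[np.argmax(metric_values)])
```

For a finer sweep, interpolating the tabulated metric before taking the maximum would sharpen the estimate; the argmax over tabulated values suffices for the gradations used here.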
The qualitative features of both waveform and the velocity field are similar; however, the optimally-inactive flagellar waveform has reduced curvature and tangent angle in the distal region, and increased velocity in both ‘oblique’ regions (i.e. where $\theta\approx\pm\pi/4$).\
![ Comparison of the normalized flow fields around the end of a simulated sperm for (a) a fully active flagellum, and (b) a flagellum featuring an inactive distal region of length $ 1-\ell_{\text{VAL}} $. The active part of the flagellum is drawn in red and the inactive region in black. Fluid velocity is scaled against the maximum velocity across both frames, with magnitude indicated by the colorbar and direction by the field lines. Here, $ k=4\pi $, $ \mathcal{S}=13.5 $ and $ \ell_{\text{VAL}} = 0.95$. []{data-label="fig:results-streamlines"}](results_streamlines.pdf){width="75.00000%"} *Discussion.* In simulations, we observe that spermatozoa which feature a short, inactive region at the end of their flagellum swim faster and more efficiently than those without. For $k=3\pi $ and $ k=4\pi $, cell motility is optimized when $ \approx 5\% $ of the distal flagellum length is inactive, regardless of $\mathcal{S}$. Experimental measurements of human sperm indicate an average combined length of the midpiece and principal piece of $ \approx 54\mu $m and an average end piece length of $ \approx 3\mu $m [@cummins1985mammalian], suggesting that the effects uncovered here are biologically important. Results for waveforms that are characteristic of those in higher viscosity fluids indicate that in some cases ($k=5\pi$) much longer inactive regions are optimal, up to $ \approx\SI{22}{\micro\meter}/\SI{57}{\micro\meter} $ or $\approx 37\%$ of the flagellum – substantially longer than the end piece observed in human spermatozoa.
Sperm move through a variety of fluids during migration, in particular encountering a step change in viscosity when penetrating the interface between semen and cervical mucus, and having to swim against physiological flows [@Tung2015]. Cells featuring an optimally-sized inactive end piece may form better candidates for fertilization, being able to swim faster and for longer when traversing the female reproductive tract [@holt2015sperm]. The basic mechanism by which the flagellar wave produces propulsion is through the interaction of segments of the filament moving obliquely through the fluid [@gray1955propulsion]. Analysis of the flow field (Fig. \[fig:results-streamlines\]) suggests that the lower curvature associated with the inactive end piece enhances the strength of the interaction between the obliquely moving region and the fluid. At high viscous-elastic ratio and wavenumber, a ‘normal’ flagellar waveform can be produced by a relatively large inactive region of around one wavelength (Fig. \[fig:results-main\]). This effect may have physiological relevance in respect of biochemical energy transport requirements from the mitochondria, which are situated in the midpiece. An inactive region of flagellum is not a feature unique to human gametes; its presence can also be observed in the sperm of other species [@fawcett1975mammalian], as well as in other microorganisms. In particular, the axonemal structures of the bi-flagellated alga *Chlamydomonas reinhardtii* deplete at the distal tips [@jeanneret2016], suggesting the presence of an inactive region. The contribution to swimming speed and cell efficiency due to the inactive distal section in these cases remains unknown. By contrast, the tip of the “9+2” cilium is a more organized “crown” structure [@kuhn1978structure], which will interact differently with the fluid than the flagellar end piece modeled here.
Understanding this distinction between cilia and flagella, as well as the role of the inactive region in other microorganisms, may provide further insight into underlying biological phenomena, such as chemotaxis [@ALVAREZ2014198] and synchronization [@guo2018bistability; @goldstein2016elastohydrodynamic]. Further work should investigate how this phenomenon changes when more detailed models of the flagellar ultrastructure are considered, taking into account the full “9+2” structure [@ishijima2019modulatory], sliding resistance associated with filament connections [@coy2017counterbend], and the interplay of these factors with biochemical signalling in the cell [@carichino2018]. The ability to qualitatively assess and model the inactive end piece of a human spermatozoon could have important clinical applications. In live imaging for diagnostic purposes, the end piece is often hard to resolve due to its depleted axonemal structure. Lacking more sophisticated imaging techniques, which are often expensive or impractical in a clinical environment, modeling of the end piece combined with flagellar tracking software, such as FAST [@gallagher2019rapid], could enable more accurate sperm analysis, and help improve cell selection in assisted reproductive technologies. Furthermore, knowledge of the function of an inactive distal region has wider applications across synthetic microbiology, particularly in the design and fabrication of artificial swimmers [@dreyfus2005microscopic] and flexible filament microbots used in targeted drug delivery [@montenegro2018microtransformers].\ *Summary and Conclusions.* In this letter, we have revealed the propulsive advantage conferred by an inactive distal region of a unipolar “pusher” actuated elastic flagellum, characteristic of mammalian sperm. The optimal inactive flagellum length depends on the balance between elastic stiffness and viscous resistance, and the wavenumber of actuation. 
The optimal inactive fraction mirrors that seen in human sperm ($\approx 3\,\mu$m$/57\,\mu$m, or $ \approx 5\% $). These findings have a range of potential applications. They motivate the development of new methodology for improving the analysis of flagellar imaging data; by model fitting the experimentally visible region it may be possible to resolve the difficult-to-image distal segment. Inclusion of an inactive region may be an interesting avenue to explore when improving the efficiency of artificial microswimmer design. Finally, important biological questions may now be posed, for example: does the presence of the inactive end piece confer an advantage to cells penetrating highly viscous cervical mucus?\
*Acknowledgments.* D.J.S. and M.T.G. acknowledge funding from the Engineering and Physical Sciences Research Council (EPSRC) Healthcare Technologies Award (EP/N021096/1). C.V.N. and A.L.H-M. acknowledge support from the EPSRC for funding via PhD scholarships (EP/N509590/1). J.C.K-B. acknowledges support from the National Institute for Health Research (NIHR) U.K. The authors also thank Hermes Gadêlha (University of Bristol, UK), Thomas D. Montenegro-Johnson (University of Birmingham, UK) for stimulating discussions around elastohydrodynamics and Gemma Cupples (University of Birmingham, UK) for the experimental still in Fig. \[fig:results-main\] (provided by a donor recruited at Birmingham Women’s and Children’s NHS Foundation Trust after giving informed consent). [10]{} C.R. Austin. Evolution of human gametes: spermatozoa. , 1995:1–19, 1995. D.W. Fawcett. The mammalian spermatozoon. , 44(2):394–436, 1975. M. Werner and L.W. Simmons. Insect sperm motility. , 83(2):191–208, 2008. P. S. Mafunda, L. Maree, A. Kotze, and G. van der Horst. Sperm structure and sperm motility of the African and rockhopper penguins with special reference to multiple axonemes of the flagellum. , 99:1–9, 2017. D. R. Nelson, R. Guidetti, and L. Rebecchi. Tardigrada.
In [*Ecology and classification of North American freshwater invertebrates*]{}, pages 455–484. Elsevier, 2010. W. A. Anderson, P. Personne, and A. BA. The form and function of spermatozoa: a comparative view. , pages 3–13, 1975. J.M. Cummins and P.F. Woodall. On mammalian sperm dimensions. , 75(1):153–175, 1985. A. Bjork and S. Pitnick. Intensity of sexual selection along the anisogamy–isogamy continuum. , 441(7094):742, 2006. K.E. Machin. Wave propagation along flagella. , 35(4):796–806, 1958. D. Zabeo, J.T. Croft, and J.L. H[ö]{}[ö]{}g. Axonemal doublet microtubules can split into two complete singlets in human sperm flagellum tips. , 593(9):892–902, 2019. E.A. Gaffney, H. Gad[ê]{}lha, D.J. Smith, J.R. Blake, and J.C. Kirkman-Brown. Mammalian sperm motility: observation and theory. , 43:501–528, 2011. E. Lauga and T.R. Powers. The hydrodynamics of swimming microorganisms. , 72(9):096601, 2009. V. Ierardi, A. Niccolini, M. Alderighi, A. Gazzano, F. Martelli, and R. Solaro. characterization of rabbit spermatozoa. , 71(7):529–535, 2008. D.J. Smith, E.A. Gaffney, H. Gad[ê]{}lha, N. Kapur, and J.C. Kirkman-Brown. Bend propagation in the flagella of migrating human sperm, and its modulation by viscosity. , 66(4):220–236, 2009. M.T. Gallagher, D.J. Smith, and J.C. Kirkman-Brown. : tracking the past and plotting the future. , 30(6):867–874, 2018. D.M. Woolley. Motility of spermatozoa at surfaces. , 126(2):259–270, 2003. C. Moreau, L. Giraldi, and H. Gad[ê]{}lha. The asymptotic coarse-graining formulation of slender-rods, bio-filaments and flagella. , 15(144):20180235, 2018. A.L. Hall-McNair, T.D. Montenegro-Johnson, H. Gad[ê]{}lha, D.J. Smith, and M.T. Gallagher. Efficient [I]{}mplementation of [E]{}lastohydrodynamics via [I]{}ntegral [O]{}perators. , 2019. K. Lesich and C. Lindemann. Direct measurement of the passive stiffness of rat sperm and implications to the mechanism of the calcium response. , 59:169–79, 11 2004. K. Lesich, D. Pelle, and C. Lindemann. 
Insights into the [M]{}echanism of [ADP]{} [A]{}ction on [F]{}lagellar [M]{}otility [D]{}erived from [S]{}tudies on [B]{}ull [S]{}perm. , 95:472–82, 08 2008. R. Cortez. The method of regularized [S]{}tokeslets. , 23(4):1204–1225, 2001. R. Cortez, L. Fauci, and A. Medovikov. The method of regularized [S]{}tokeslets in three dimensions: analysis, validation, and application to helical swimming. , 17(3):031504, 2005. D. J. Smith. A boundary element regularized stokeslet method applied to cilia- and flagella-driven flow. , 465(2112):3605–3626, 2009. M. T. Gallagher and D. J. Smith. Meshfree and efficient modeling of swimming cells. , 3(5):053101, 2018. J. Lighthill. . SIAM, 1975. M.T. Gallagher, G. Cupples, E.H. Ooi, J.C. Kirkman-Brown, and D.J. Smith. Rapid sperm capture: high-throughput flagellar waveform analysis. , 34(7):1173–1185, 06 2019. C-k. Tung, F. Ardon, A. Roy, D. L. Koch, S. Suarez, and M. Wu. Emergence of upstream swimming via a hydrodynamic transition. , 114:108102, 2015. W. V. Holt and A. Fazeli. Do sperm possess a molecular passport? mechanistic insights into sperm selection in the female reproductive tract. , 21(6):491–501, 2015. J. Gray and G.J. Hancock. The propulsion of sea-urchin spermatozoa. , 32(4):802–814, 1955. R. Jeanneret, M. Contino, and M. Polin. A brief introduction to the model microswimmer [*Chlamydomonas reinhardtii*]{}. , 225, 06 2016. C. Kuhn III and W. Engleman. The structure of the tips of mammalian respiratory cilia. , 186(3):491–498, 1978. L. Alvarez, B.M. Friedrich, G. Gompper, and U.B. Kaupp. The computational sperm cell. , 24(3):198 – 207, 2014. H. Guo, L. Fauci, M. Shelley, and E. Kanso. Bistability in the synchronization of actuated microfilaments. , 836:304–323, 2018. R. E. Goldstein, E. Lauga, A. I. Pesci, and M.R.E. Proctor. Elastohydrodynamic synchronization of adjacent beating flagella. , 1(7):073201, 2016. S. Ishijima.
Modulatory mechanisms of sliding of nine outer doublet microtubules for generating planar and half-helical flagellar waves. , 25(6):320–328, 2019. R. Coy and H. Gad[ê]{}lha. The counterbend dynamics of cross-linked filament bundles and flagella. , 14(130):20170065, 2017. L. Carichino and S. D. Olson. Emergent three-dimensional sperm motility: coupling calcium dynamics and preferred curvature in a kirchhoff rod model. , 2018. R. Dreyfus, J. Baudry, M.L. Roper, M. Fermigier, H.A. Stone, and J. Bibette. Microscopic artificial swimmers. , 437(7060):862, 2005. T.D. Montenegro-Johnson. Microtransformers: [C]{}ontrolled microscale navigation with flexible robots. , 3(6):062201, 2018.
Columbia Center (Troy) The Columbia Center is a pair of twin towers on Big Beaver Road in Troy, Michigan. Both buildings were designed by Minoru Yamasaki & Associates, designers of One Woodward Avenue and the now-destroyed World Trade Center. Both buildings stand 14 floors and are 193 ft (59 m) tall. At one time Northwest Airlines had a ticket office in Suite 115 of the complex. Columbia Center East Columbia Center East is located at 101 W. Big Beaver Road. Construction of the building began in 1998 and finished in 2000. It stands at 15 floors in height, with 14 above-ground floors and 1 basement floor. The high-rise is used as offices for a number of local and regional businesses. The building was designed in the modern architectural style, and uses mainly brick and glass. Columbia Center West Columbia Center West is located at 201 W. Big Beaver Road. The building was built in 1989, and has the same number of floors and basements as its younger twin. The high-rise is used for offices, restaurants, retail, and includes a fitness center. Like Columbia Center East, it was designed in the modern architectural style, and uses mainly brick and glass. References External links Category:Skyscrapers in Troy, Michigan Category:Skyscraper office buildings in Michigan Category:Office buildings completed in 2000 Category:2000 establishments in Michigan Category:Minoru Yamasaki buildings
#1 Free Stationery Download Site Kathy and I would like to formally welcome you to our new free holiday and special occasion stationery website. We are working hard to add as many new stationery papers as we can, as quickly as we can. We are adding new paper designs at least weekly when possible. We add them in Microsoft Word, PDF and JPG formats so that you can use them with just about any word processor, office suite or journaling program. Just as a note, our digital stationery papers can also be used as free scrapbook background papers. Registration is FREE to leave comments (required to keep out SPAM) and it doesn't put you on any lists of any kind. So, one more time... tell us what you think! What's on The #1 FREE-Stationery Download Site: Here, you'll have access to the best FREE digital computer stationery and scrapbook background paper site around, with literally hundreds of free downloadable holiday, letter, special occasion, business, newsletter and every other kind of stationery and template you can think of. No registration or other info needed... We also NOW have the tutorial for how to create your own stationery with MS Word. Along with the OpenOffice Suite digital stationery training that we already had for you, you can now learn to create your own digital stationery with MS Office (Microsoft Word (TM)) software. Tired of paying high prices for the same stationery that everyone else has? Now you don't have to. Just download any of our high quality stationery papers and print as many as you need anytime you like. Try our new revolutionary way to get quality stationery. No storing blank sheets of stationery in a desk drawer or box taking up valuable space. Instead, save it directly onto your computer. You can also use our stationery papers as free scrapbook background papers. You get to download many quality stationery papers whenever you want, all for FREE! New stationery added regularly, so bookmark us and check back often...
Printable PDF Stationery Our FREE PDF Stationery formats are perfect for those that want to add that personal touch to letters, notes, flyers or even sales letters. Just pick the file and download it. After you have downloaded the file, all you have to do is print it out and start handwriting on the page(s). You can even put it back in the printer and use any text editor or word processor to print your correspondence right on your newly created stationery... just set the margins to stay inside the borders.

- Save money
- Save that valuable storage space in the house
- Never run out of stationery pages
- Perfect way to personalize and make memorable
- Print and handwrite your letter
- Printable on any printer, like any pre-printed stationery you get in the stores

Visit our Services Page to see a few examples of our PDF and Word templates. Just select examples to see what we did with a few of them and then try them out for yourself. You can also print our free PDF stationery papers and use them as free scrapbook background papers. Printable Microsoft Word Stationery Our Microsoft Word (MS Word) templates are perfect for those of you that like to type your letters and other correspondence directly on your computer. Simply open the template right inside Microsoft Word (tm) or most other office-compatible programs that can read Microsoft Office (tm) files. Then just start typing. The text will automatically stay within the stationery borders and even go around any special graphics on the page. Now all you have to do is to print it out.

- Save 30% or more over the cost of pre-printed stationery
- Save that valuable storage space in the house and office
- Never run out of stationery pages when you need them most
- Always legible and ready for the copier
- See what flyers and letters look like before you hit print
- Typo or error? Just fix it then reprint - no hassles

Visit our Services Page to see a few examples of our PDF and Word templates in action.
See some of the things we found to do with them. Just select the example you want to see and then try it out for yourself. OpenOffice Suite Templates FREE-Stationery.com is your first stop to turn those plain boring letters into a fun, personal and memorable way to communicate with friends, family and business relations. Create your own FREE Custom Stationery for use with OpenOffice (OOo) 2.0 and better. FREE Tutorial on how to make that happen is just moments away here on the site.
XEMS-AM XEMS (branded as Radio Mexicana) is a Regional Mexican radio station that serves the Brownsville, Texas (United States) / Matamoros, Tamaulipas (Mexico) border area. History XEMS began broadcasting on 1500 kHz in 1952. It soon moved to 1490. External links radioavanzado.com raiostationworld.com; Radio stations in the Rio Grande Valley References Category:Spanish-language radio stations Category:Radio stations in Matamoros
Kralingse Zoom metro station Kralingse Zoom is a subway station on lines A, B, and C of the Rotterdam Metro, in the Kralingen neighbourhood of eastern Rotterdam. The station is located just west of the A16 motorway on the east side of Kralingse Zoom, the road it is named after. At Kralingse Zoom station, transfer is available to several bus lines, as well as to the ParkShuttle, a people mover to a nearby business district. Kralingse Zoom is an above-ground station and is located just to the east of the metro tunnel in which the trains cross the city center. The station has two centre platforms, each with two tracks running alongside them. For most of the day, only the inner two tracks are used. Kralingse Zoom is the metro stop to get to the Erasmus University and to the University of Applied Sciences (Economic Studies). References External links www.eur.nl www.hr.nl Category:Rotterdam Metro
Update: Sree Narasimha Jayanthi – May 17th 2019 This day signifies the appearance of Lord Narasimha on the planet. Lord Narasimha is the fourth and the greatest incarnation of Lord Vishnu. He is believed to have appeared to protect his devotee, Prahlada, from his father Hiranyakashyapu. If you listen to the song “Narashima Nembo Devana” posted on YouTube, there is a paragraph which explains how Lord Narasimha came from the pillar. Hiranyakashyapu pointed at a pillar in his palace and asked Prahlada whether Lord Vishnu was present in that pillar. Prahlada, who was a great devotee, said yes. Hiranyakashyapu then used his gadha to break open the pillar, and there emerged our Lord Narasimha, who then slew Hiranyakashyapu with his sharp claws. The day signifies the triumph of good over evil and the eagerness of the Lord to protect his devotees from evil. When my parents visited us last year, my father recited the “Bhagavatha Purana” for three weeks at my place. The way my dad explained this part of the Purana brings tears to my eyes every time I listen to “Narashima Nembo Devana”. The greatest mistake I made was to not record the Purana. Hopefully I will record it the next time my father visits us. About Narasimha Jayanthi: Ms. Lakshmi’s write-up summary from the book “ಭಾರತೀಯ ಹಬ್ಬ ಹರಿದಿನಗಳು” by ಶ್ರೀ ಶ್ರೀ ರಂಗಪ್ರಿಯ ಮಹಾದೇಷಿಕ ಸ್ವಾಮಿಗಳು. “The pillar in evil Hiranyakashipu’s royal court signifies the ṃērusthambha, the central backbone system. Through the central nervous system called ṣuśumna, the brilliant light emanated. The energy associated came out in its full fierce glory as Narasimha and later transformed into the peaceful form after the destruction of the evil force.” When is Narasimha Jayanthi celebrated? Narasimha Jayanthi is one of the important festivals of Vaiśnavas. It is celebrated in the vaiśāka māsam, on ṣukla pakśa chaturdaśi, after akśhaya thrithēya, svāti nakśatra, siddha Yōga, vanija karanam. The date is determined according to the chāndramāna system.
Since the inception of this avatāram is in the evening, the worship is done in the evening. It is considered to be more auspicious if this day coincides with sōmavasarē, Monday or ṣanivasarē, Saturday. It is a sacred day to remember Lord Narasimha’s sarva vyāpakatva- all pervading – spiritual knowledge, wealth, strength, valor, splendor, and compassion toward His devotees. Which are the major manifestations of Lord Narasimha? There are three major forms of Lord Narasimha. 1. Ugra-Narasimha – His fierce form: In this form He has conch – ṣanka, discus – chakra, mace – gada, bow – chāpa, bells – ghanta-, trident like – ankuśa, and his two hands ready to chisel the demon heart, as His āyudhams – weapons. This form is worshipped in the evenings. 2. Lakshmi Narasimha: In this form he is very peaceful – ṣanta – with His consort – Sri Lakshmi sitting on his left lap, with ādiseṣa on his head as umbrella, and Prahallāda standing in his front praying with his folded hands. This manifestation is followed after his fierce form– as a result, he is worshipped in the morning. 3. Yoga Narasimha – meditative: Here, He is in his Meditative form for those who aspire for results of Yoga. – Incense, lamplight, flowers and ṣrigandha or perfume.- tuḻasi or Holy Basil — Since he has the element of both viśnu and rudra- kamalam – Lotus, bilva daḻam – leaves of stone apple, japākusumam- red hibiscus can be offered to the Lord. What kinds of food can be offered to Lord Narasimha? Athirasa – fried cake made with jaggery and rice flour. Pāyasa – milk puddings made out of any of moong dal, channa dal, and – – – Pānaka – typically made with jaggery water and pepper, or fruit juice like lemonade. Any sātvik food can be offered with great devotion by chanting the Narasimha Mantra. sātvik food is that food creates an internal calmness and balance, when one consumes it. The lord graciously accepts devotees’ service and reverence. 
All worship should be followed by distribution of Prasāda or food offered to God. How do we worship Lord Narasimha? Lord Narasimha can be worshipped in two ways. The first is detailed below: Along with firm faith, worship of Narasimha is associated with rigid criteria. This is explained in detail below 1. ācharana ṣuddhi – Requires strict adherence to performance. 2. Cleanliness of ḍravya- physical things like utensils, lamps, ingredients. 3. Inner sanctity of five senses – body, intellect, mind, place, and act of worship 4. Chastity of Mind, speech and deeds unification, compassion, calmness of virtuous soul within. These are to prevent any glitches that might distract the mind while doing the worship. 5. Taking bath and starting with daily routine worship followed by pūja sankalpa-declaration to do worship, and completion without break with little or no food intake till then. 6. Bodily calmness in turn calms the mind. This helps to open up the channel that leads to internal visualization and realization of Narasimha. These are things that keep body, mind and intellect in balance to further the pursuit of (worship til)attaining perfection. 7. Those who seek Salvation, do fasting till worship is over in the evening.Those who seek all types of desire fulfillment do fasting till the next day morning worship and follow with chanting of Narasimha mantra. 8. To do further, people could do hōma – oblation and read “Nrsimhatāpini ” upaniśad. 9. An idol of Narasimha can be donated to someone who has great respect and potential to worship the idol. The second means of worship is detailed for those who find it difficult to adhere to the above strict rules, can do the following steps 1. ḍhyāna or meditation – along with imagining the worship of Narasimha with the above said flowers etc. 2. Acts of mind and soul offered to God is also oblation. This is equivalent to hōmam. 3.Various purifying actions performed during puja a. 
snānha – ablution, pāna – intake of water and food offered to God b. pādya – chant with clean and clear heart and intellect c. arghya – water offered with respect from the river of faith d. āchamana – water used for sipping same river e. abhiśeka – for the lord f. Mind filled with the flow of thoughts of God from the river of faith, ṣraddha – as the water for cleansing. 4. ātma – Offer sacred soul which is inside the body. 5. bow and perform salutation with state of equanimity 6. ṣaranāgati– self surrender to the Lord With pure mind and internal contemplation there is no need for external rituals to be performed. Pāramārthika or worship of realizing supreme alone with internal purity surpasses all other forms of worship. That is, true worship with internal purity is more powerful than the external rituals. As part of worship, singing, listening to discourse is recommended at the end of the worship. Insight about this incarnation of ṣri mahāviśnu: The brilliant form of Narasimha is described as comprising of three entities. 1. Brahma from feet to naval, 2. Vishnu–naval to neck 3. Rudra–neck unto head., From there onwards– it is the supreme Godhead, Parabrahma. Narasimha is personification of truth–perceived and experienced by those who are in the state of union with divine, and that, which is beyond senses – Yoga Samādhi. The brilliant form of Narasimha is also considered and viewed as an internal phenomenon. Dear Smt.Meera Raghu, Thanks for giving us a very comprehensive information on Narasimha Jayanti. Your efforts are always excellent, perfect and highly useful to all in Madhwa community.Rayaru will keep you and your family always happy hale and healthy to continue this sacred service. I just thought of sharing a very sacred stotra on Narasimha which I learnt from somebody (I dont remeber right now.) and chanting this stotra 5 time at any distressful/critical situation will clear the situation with wonderful solution to the problem causing the distress/crisis. 
I have reproduced this stotra below for the benefit of everyone. Hare Krsna dear Meera Mataji! Dandavat pranaam! Thank you for the wonderful article, but the word “Idol” is completely wrong; it should be “Deity”. The word “idol”, which means “an object of false worship”, was introduced by the British to destroy our Vedic culture. Thank you 🙂 About Narasimha Jayanthi: Here is my write-up summary from the book “ಭಾರತೀಯ ಹಬ್ಬ ಹರಿದಿನಗಳು” by ಶ್ರೀ ಶ್ರೀ ರಂಗಪ್ರಿಯ ಮಹಾದೇಷಿಕ ಸ್ವಾಮಿಗಳು: “The pillar in evil Hiranyakashipu’s royal court signifies the ṃērusthambha, the central backbone system. Through the central nervous system called ṣuśumna, the brilliant light emanated. The associated energy came out in its full fierce glory as Narasimha and later transformed into the peaceful form after the destruction of the evil force.” When is Narasimha Jayanthi celebrated? Narasimha Jayanthi is one of the important festivals of Vaiśnavas. It is celebrated in the vaiśāka māsam, on ṣukla pakśa chaturdaśi, after akśhaya thrithēya, under svāti nakśatra, siddha yōga and vanija karanam. The date is determined according to the chāndramāna system. Since the inception of this avatāram was in the evening, the worship is done in the evening. It is considered more auspicious if this day coincides with sōmavasarē (Monday) or ṣanivasarē (Saturday). It is a sacred day to remember Lord Narasimha’s sarva vyāpakatva – all-pervading spiritual knowledge, wealth, strength, valor, splendor, and compassion toward His devotees. Which are the major manifestations of Lord Narasimha? There are three major forms of Lord Narasimha. 1. Ugra-Narasimha – His fierce form: in this form He has conch (ṣanka), discus (chakra), mace (gada), bow (chāpa), bells (ghanta), a trident-like ankuśa, and His two hands ready to chisel the demon’s heart, as His āyudhams – weapons. This form is worshipped in the evenings. 2.
Lakshmi Narasimha: In this form He is very peaceful – ṣanta – with His consort Sri Lakshmi sitting on His left lap, ādiseṣa over His head as an umbrella, and Prahallāda standing in front of Him praying with folded hands. This manifestation follows His fierce form; as a result, He is worshipped in this form in the morning. 3. Yoga Narasimha – meditative: here He is in His meditative form, for those who aspire for the results of Yoga. Incense, lamplight, flowers and ṣrigandha or perfume, tuḻasi or Holy Basil, and – since He has elements of both viśnu and rudra – kamalam (lotus), bilva daḻam (leaves of stone apple) and japākusumam (red hibiscus) can be offered to the Lord. What kinds of food can be offered to Lord Narasimha? Athirasa – fried cake made with jaggery and rice flour. Pāyasa – milk puddings made out of any of moong dal, channa dal, and – – –. Pānaka – typically made with jaggery water and pepper, or a fruit juice like lemonade. Any sātvik food can be offered with great devotion while chanting the Narasimha mantra; sātvik food is food that creates internal calmness and balance when one consumes it. The Lord graciously accepts devotees’ service and reverence. All worship should be followed by distribution of Prasāda or food offered to God.
Purdy was chatting to her bezzie mate, who works at Colchester Hospital, last night and was impressed to hear that the Hospital wants more people to car share! Her mate, inspired by all the money she knows Purdy is saving, [...] Loveurcar: The Loveurcar campaign is brought to you by the Colchester Travel Plan Club, the Colchester Borough Council Air Quality Team and V102, as part of a Defra-funded project to encourage more sustainable driving for those journeys that have to be made by car.
--- abstract: 'The number $R(4,3,3)$ is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on *abstraction* and *symmetry breaking* that applies to solve hard graph edge-coloring problems. The utility of this methodology is demonstrated by using it to compute the value $R(4,3,3)=30$. Along the way it is required to first compute the previously unknown set ${{\cal R}}(3,3,3;13)$ consisting of 78[,]{}892 Ramsey colorings.' author: - Michael Codish - Michael Frank - Avraham Itzhakov - Alice Miller title: 'Computing the Ramsey Number R(4,3,3) using Abstraction and Symmetry breaking[^1]' --- Introduction {#sec:intro} ============ This paper introduces a general methodology that applies to solve graph edge-coloring problems and demonstrates its application in the search for Ramsey numbers. These are notoriously hard graph coloring problems that involve assigning colors to the edges of a complete graph. An $(r_1,\ldots,r_k;n)$ Ramsey coloring is a graph coloring in $k$ colors of the complete graph $K_n$ that does not contain a monochromatic complete sub-graph $K_{r_i}$ in color $i$ for each $1\leq i\leq k$. The set of all such colorings is denoted ${{\cal R}}(r_1,\ldots,r_k;n)$. The Ramsey number $R(r_1,\ldots,r_k)$ is the least $n>0$ such that no $(r_1,\ldots,r_k;n)$ coloring exists. In particular, the number $R(4,3,3)$ is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for more than 50 years. It is currently known that $30\leq R(4,3,3)\leq 31$. Kalbfleisch [@kalb66] proved in 1966 that $R(4,3,3)\geq 30$, Piwakowski [@Piwakowski97] proved in 1997 that $R(4,3,3)\leq 32$, and one year later Piwakowski and Radziszowski [@PR98] proved that $R(4,3,3)\leq 31$. We demonstrate how our methodology applies to computationally prove that $R(4,3,3)=30$. 
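To make the central definition concrete before the details: the Ramsey-coloring property can be checked by brute force on tiny instances. The following Python sketch is ours, not part of the paper's tool chain, and scales only to very small $n$:

```python
from itertools import combinations

def has_mono_clique(A, r, c):
    """True iff coloring A (an n x n matrix, colors 1..k off the diagonal)
    embeds a complete sub-graph K_r all of whose edges have color c."""
    n = len(A)
    return any(all(A[i][j] == c for i, j in combinations(I, 2))
               for I in combinations(range(n), r))

def is_ramsey_coloring(A, rs):
    """A is an (r_1, ..., r_k; n) Ramsey coloring iff no color c contains
    a monochromatic K_{r_c}."""
    return not any(has_mono_clique(A, r, c) for c, r in enumerate(rs, 1))

# Classic witness that R(3,3) > 5: 2-color K_5 with color 1 on edges at
# circular distance 1 (a pentagon) and color 2 on the rest (the pentagram).
K5 = [[0 if i == j else (1 if (i - j) % 5 in (1, 4) else 2)
      for j in range(5)] for i in range(5)]
```

Here `is_ramsey_coloring(K5, (3, 3))` holds, i.e. `K5` is a $(3,3;5)$ coloring; the same exhaustive check is what the SAT encodings below express symbolically.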
Our strategy to compute $R(4,3,3)$ is based on the search for a $(4,3,3;30)$ Ramsey coloring. If one exists, then $R(4,3,3)>30$ and, because $R(4,3,3)\leq 31$, it follows that $R(4,3,3) = 31$. Otherwise $R(4,3,3)\leq 30$ and, because $R(4,3,3)\geq 30$, it follows that $R(4,3,3) = 30$. In recent years, Boolean SAT solving techniques have improved dramatically. Today’s SAT solvers are considerably faster and able to manage larger instances than were previously possible. Moreover, encoding and modeling techniques are better understood and increasingly innovative. SAT is currently applied to solve a wide variety of hard and practical combinatorial problems, often outperforming dedicated algorithms. The general idea is to encode a (typically NP-hard) problem instance, $\mu$, to a Boolean formula, $\varphi_\mu$, such that the satisfying assignments of $\varphi_\mu$ correspond to the solutions of $\mu$. Given such an encoding, a SAT solver can be applied to solve $\mu$. Our methodology in this paper combines SAT solving with two additional concepts: *abstraction* and *symmetry breaking*. The paper is structured to let the application drive the presentation of the methodology in three steps. Section \[sec:prelim\] presents preliminaries on graph coloring problems, some general notation on graphs, and a simple constraint model for Ramsey coloring problems. Section \[sec:embed\] presents the first step in our quest to compute $R(4,3,3)$. We introduce a basic SAT encoding and detail how a SAT solver is applied to search for Ramsey colorings. Then we describe and apply a well-known embedding technique, which allows us to determine a set of partial solutions in the search for a $(4,3,3;30)$ Ramsey coloring such that if a coloring exists then it is an extension of one of these partial solutions. This may be viewed as a preprocessing step for a SAT solver, which then starts from a partial solution.
Applying this technique we conclude that if a $(4,3,3;30)$ Ramsey coloring exists then it must be ${\langle 13,8,8 \rangle}$ regular. Namely, each vertex in the coloring must have 13 edges in the first color, and 8 edges in each of the other two colors. This result is already considered significant progress in the research on Ramsey numbers, as stated in [@XuRad2015]. Further applying this technique to determine if there exists a ${\langle 13,8,8 \rangle}$ regular $(4,3,3;30)$ Ramsey coloring requires first computing the currently unknown set ${{\cal R}}(3,3,3;13)$. Sections \[sec:symBreak\]–\[sec:33313b\] present the second step: computing ${{\cal R}}(3,3,3;13)$. Section \[sec:symBreak\] illustrates how a straightforward approach, combining SAT solving with *symmetry breaking*, works for smaller instances but not for ${{\cal R}}(3,3,3;13)$. Then Section \[sec:abs\] introduces an *abstraction*, called degree matrices, Section \[sec:33313\] demonstrates how to compute degree matrices for ${{\cal R}}(3,3,3;13)$, and Section \[sec:33313b\] shows how to use the degree matrices to compute ${{\cal R}}(3,3,3;13)$. Section \[sec:433\_30\] presents the third step: re-examining the embedding technique described in Section \[sec:embed\], which, given the set ${{\cal R}}(3,3,3;13)$, applies to prove that there does not exist any $(4,3,3;30)$ Ramsey coloring which is also ${\langle 13,8,8 \rangle}$ regular. Section \[sec:conclude\] presents a conclusion. Preliminaries and Notation {#sec:prelim} ========================== In this paper, graphs are always simple, i.e. undirected and with no self loops. For a natural number $n$ let $[n]$ denote the set $\{1,2,\ldots,n\}$. A graph coloring, in $k$ colors, is a pair $(G,\kappa)$ consisting of a simple graph $G=(V,E)$ and a mapping $\kappa\colon E\to[k]$. When $G$ is clear from the context we refer to $\kappa$ as the graph coloring.
We typically represent $G=([n],E)$ as a (symmetric) $n\times n$ adjacency matrix, $A$, defined such that $$A_{i,j}= \begin{cases} \kappa((i,j)) & \mbox{if } (i,j) \in E\\ 0 & \mbox{otherwise} \end{cases}$$ Given a graph coloring $(G,\kappa)$ in $k$ colors with $G=(V,E)$, the set of neighbors of a vertex $u\in V$ in color $c\in [k]$ is $N_c(u) = {\left\{~v \left| \begin{array}{l}(u,v)\in E, \kappa((u,v))=c\end{array} \right. \right\}} $ and the color-$c$ degree of $u$ is $deg_{c}(u) = |N_c(u)|$. The color degree tuple of $u$ is the $k$-tuple $deg(u)={\langle deg_{1}(u),\ldots,deg_{k}(u) \rangle}$. The sub-graph of $G$ on the $c$ colored neighbors of $x\in V$ is the projection of $G$ to vertices in $N_c(x)$ defined by $G^c_x = (N_c(x),{\left\{~(u,v)\in E \left| \begin{array}{l}u,v\in N_c(x)\end{array} \right. \right\}})$. For example, take as $G$ the graph coloring depicted by the adjacency matrix in Figure \[embed\_12\_8\_8\] with $u$ the vertex corresponding to the first row in the matrix. Then, $N_1(u) = \{2,3,4,5,6,7,8,9,10,11,12,13\}$, $N_2(u) = \{14,15,16,17,18,19,20,21\}$, and $N_3(u)=\{22,23,24,25,26,27,28,29\}$. The subgraphs $G^1_u$, $G^2_u$, and $G^3_u$ are highlighted by the boldface text in Figure \[embed\_12\_8\_8\]. An $(r_1,\ldots,r_k;n)$ Ramsey coloring is a graph coloring in $k$ colors of the complete graph $K_n$ that does not contain a monochromatic complete sub-graph $K_{r_i}$ in color $i$ for each $1\leq i\leq k$. The set of all such colorings is denoted ${{\cal R}}(r_1,\ldots,r_k;n)$. The Ramsey number $R(r_1,\ldots,r_k)$ is the least $n>0$ such that no $(r_1,\ldots,r_k;n)$ coloring exists. In the multicolor case ($k>2$), the only known value of a nontrivial Ramsey number is $R(3,3,3)=17$. Prior to this paper, it was known that $30\leq R(4,3,3)\leq 31$. 
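The neighbor, degree and projection notation above is easy to operationalize; a small Python sketch (ours; it uses 0-based vertices, unlike the paper's $[n]$):

```python
def neighbors(A, u, c):
    """N_c(u): the vertices joined to u by an edge of color c."""
    return {v for v, col in enumerate(A[u]) if v != u and col == c}

def degree_tuple(A, u, k):
    """deg(u) = <deg_1(u), ..., deg_k(u)>, the color degree tuple of u."""
    return tuple(len(neighbors(A, u, c)) for c in range(1, k + 1))

def projection(A, u, c):
    """Adjacency matrix of G^c_u: A restricted to the c-neighbors of u."""
    N = sorted(neighbors(A, u, c))
    return [[A[i][j] for j in N] for i in N]

# Example: a 2-colored K_4; vertex 0 has two color-1 edges and one color-2.
K4 = [[0, 1, 1, 2],
      [1, 0, 2, 1],
      [1, 2, 0, 2],
      [2, 1, 2, 0]]
```

For this example, `degree_tuple(K4, 0, 2)` is `(2, 1)`, and `projection(K4, 0, 1)` is the adjacency matrix of the single color-2 edge joining the two color-1 neighbors of vertex 0.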
Moreover, while the sets of $(3,3,3;n)$ colorings were known for $14\leq n\leq 16$, the set of colorings for $n=13$ was never published.[^2] More information on recent results concerning Ramsey numbers can be found in the electronic dynamic survey by Radziszowski [@Rad]. $$\begin{aligned} \varphi_{adj}^{n,k}(A) &=& \hspace{-2mm}\bigwedge_{1\leq q<r\leq n} \left(\begin{array}{l} 1\leq A_{q,r}\leq k ~~\land~~ A_{q,r} = A_{r,q} ~~\land ~~ A_{q,q} = 0 \end{array}\right) \label{constraint:simple} \\ \varphi_{r}^{n,c}(A) &=& \bigwedge_{I\in \wp_r([n])} \bigvee {\left\{~A_{i,j}\neq c \left| \begin{array}{l}i,j \in I, i<j\end{array} \right. \right\}} \label{constraint:nok}\end{aligned}$$ $$\begin{aligned} \small \label{constraint:coloring} \varphi_{(r_1,\ldots,r_k;n)}(A) & = & \varphi_{adj}^{n,k}(A) \land \hspace{-2mm} \bigwedge_{1\leq c\leq k} \hspace{-1mm} \varphi_{r_c}^{n,c}(A)\end{aligned}$$ A graph coloring problem on $k$ colors is about the search for a graph coloring which satisfies a given set of constraints. Formally, it is specified as a formula, $\varphi(A)$, where $A$ is an $n\times n$ adjacency matrix of integer variables with domain $\{0\}\cup [k]$ and $\varphi$ is a constraint on these variables. A solution is an assignment of integer values to the variables in $A$ which satisfies $\varphi$ and determines both the graph edges and their colors. We often refer to a solution as an integer adjacency matrix and denote the set of solutions as $sol(\varphi(A))$. Figure \[fig:gcp\] presents the $k$-color graph coloring problems we focus on in this paper: $(r_1,\ldots,r_k;n)$ Ramsey colorings. Constraint (\[constraint:simple\]), $\varphi_{adj}^{n,k}(A)$, states that the graph represented by matrix $A$ has $n$ vertices, is $k$ colored, and is simple. Constraint (\[constraint:nok\]) $\varphi_{r}^{n,c}(A)$ states that the $n\times n$ matrix $A$ has no embedded sub-graph $K_r$ in color $c$. 
Each conjunct, one for each set $I$ of $r$ vertices, is a disjunction stating that one of the edges between vertices of $I$ is not colored $c$. Notation: $\wp_r(S)$ denotes the set of all subsets of size $r$ of the set $S$. Constraint (\[constraint:coloring\]) states that $A$ is a $(r_1,\ldots,r_k;n)$ Ramsey coloring. For graph coloring problems, solutions are typically closed under permutations of vertices and of colors. Restricting the search space for a solution modulo such permutations is crucial when trying to solve hard graph coloring problems. It is standard practice to formalize this in terms of graph (coloring) isomorphism. Let $G=(V,E)$ be a graph (coloring) with $V=[n]$ and let $\pi$ be a permutation on $[n]$. Then $\pi(G) = (V,{\left\{~ (\pi(x),\pi(y)) \left| \begin{array}{l} (x,y) \in E\end{array} \right. \right\}})$. Permutations act on adjacency matrices in the natural way: If $A$ is the adjacency matrix of a graph $G$, then $\pi(A)$ is the adjacency matrix of $\pi(G)$ and $\pi(A)$ is obtained by simultaneously permuting with $\pi$ both rows and columns of $A$. \[def:weak\_iso\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be $k$-color graph colorings with $G=([n],E_1)$ and $H=([n],E_2)$. We say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are weakly isomorphic, denoted $(G,{\kappa_1})\approx(H,{\kappa_2})$ if there exist permutations $\pi \colon [n] \to [n]$ and $\sigma \colon [k] \to [k]$ such that $(u,v) \in E_1 \iff (\pi(u),\pi(v)) \in E_2$ and $\kappa_1((u,v)) = \sigma(\kappa_2((\pi(u), \pi(v))))$. We denote such a weak isomorphism: $(G,{\kappa_1})\approx_{\pi,\sigma}(H,{\kappa_2})$. When $\sigma$ is the identity permutation, we say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are isomorphic. The following lemma emphasizes the importance of weak graph isomorphism as it relates to Ramsey numbers. Many classic coloring problems exhibit the same property. 
\[lemma:closed\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be graph colorings in $k$ colors such that $(G,\kappa_1) \approx_{\pi,\sigma} (H,\kappa_2)$. Then, $$(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff (H,\kappa_2) \in {{\cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$$ We make use of the following theorem from [@PR98]. \[thm:433\] $30\leq R(4,3,3)\leq 31$ and, $R(4,3,3)=31$ if and only if there exists a $(4,3,3;30)$ coloring $\kappa$ of $K_{30}$ such that: (1) For every vertex $v$ and $i\in\{2,3\}$, $5\leq deg_{i}(v)\leq 8$, and $13\leq deg_{1}(v)\leq 16$. (2) Every edge in the third color has at least one endpoint $v$ with $deg_{3}(v)=13$. (3) There are at least 25 vertices $v$ for which $deg_{1}(v)=13$, $deg_{2}(v)=deg_{3}(v)=8$. \[cor:degrees\] Let $G=(V,E)$ be a $(4,3,3;30)$ coloring, $v\in V$ a selected vertex, and assume without loss of generality that $deg_2(v)\geq deg_3(v)$. Then, $deg(v)\in{\left\{ \begin{array}{l}{\langle 13, 8, 8 \rangle},{\langle 14, 8, 7 \rangle},{\langle 15, 7, 7 \rangle},{\langle 15, 8, 6 \rangle},{\langle 16, 7, 6 \rangle},{\langle 16, 8, 5 \rangle}\end{array} \right\}}$. Consider a vertex $v$ in a $(4,3,3;n)$ coloring and focus on the three subgraphs induced by the neighbors of $v$ in each of the three colors. The following states that these must be corresponding Ramsey colorings. \[obs:embed\] Let $G$ be a $(4,3,3;n)$ coloring and $v$ be any vertex with $deg(v)={\langle d_1,d_2,d_3 \rangle}$. Then, $d_1+d_2+d_3=n-1$ and $G^1_v$, $G^2_v$, and $G^3_v$ are respectively $(3,3,3;d_1)$, $(4,2,3;d_2)$, and $(4,3,2;d_3)$ colorings. Note that by definition a $(4,2,3;n)$ coloring is a $(4,3;n)$ Ramsey coloring in colors 1 and 3 and likewise a $(4,3,2;n)$ Ramsey coloring is a $(4,3;n)$ coloring in colors 1 and 2. This is because the “2” specifies that the coloring does not contain a subgraph $K_2$ in the corresponding color and this means that it contains no edge with that color. 
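Definition \[def:weak\_iso\] can be tested naively on tiny complete-graph colorings by enumerating both permutation groups. This $n!\cdot k!$ brute-force sketch is ours and is only meant to make the definition concrete:

```python
from itertools import permutations

def weakly_isomorphic(A, B, k):
    """Search for a vertex permutation pi and a color permutation sigma
    with sigma(A[u][v]) == B[pi[u]][pi[v]] for every pair u != v.
    Assumes complete-graph colorings with colors 1..k off the diagonal."""
    n = len(A)
    for pi in permutations(range(n)):
        for image in permutations(range(1, k + 1)):
            sigma = dict(zip(range(1, k + 1), image))
            if all(B[pi[u]][pi[v]] == sigma[A[u][v]]
                   for u in range(n) for v in range(n) if u != v):
                return True
    return False
```

Swapping the two colors of a coloring always yields a weakly isomorphic one, which is exactly why Lemma \[lemma:closed\] permutes the clique bounds $r_i$ along with the colors.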
For $n\in\{14,15,16\}$, the sets ${{\cal R}}(3,3,3;n)$ are known and consist respectively of 115, 2, and 2 colorings. Similarly, for $n\in\{5,6,7,8\}$ the sets ${{\cal R}}(4,3;n)$ are known and consist respectively of 9, 15, 9, and 3 colorings. In this paper computations are performed using the CryptoMiniSAT [@Crypto] SAT solver. SAT encodings (CNF) are obtained using the finite-domain constraint compiler  [@jair2013]. The use of  facilitates applications to find a single (first) solution, or to find all solutions for a constraint, modulo a specified set of variables. When solving for all solutions, our implementation iterates with the SAT solver, adding so called *blocking clauses* each time another solution is found. This technique, originally due to McMillan [@McMillan2002], is simplistic but suffices for our purposes. All computations were performed on a cluster with a total of $228$ Intel E8400 cores clocked at 2 GHz each, able to run a total of $456$ parallel threads. Each of the cores in the cluster has computational power comparable to a core on a standard desktop computer. Each SAT instance is run on a single thread. Basic SAT Encoding and Embeddings {#sec:embed} ================================= Throughout the paper we apply a SAT solver to solve CNF encodings of constraints such as those presented in Figure \[fig:gcp\]. In this way it is straightforward to find a Ramsey coloring or prove its non-existence. Ours is a standard encoding to CNF. To this end: nothing new. For an $n$ vertex graph coloring problem in $k$ colors we take an $n\times n$ matrix $A$ where $A_{i,j}$ represents in $k$ bits the edge $(i,j)$ in the graph: exactly one bit is true indicating which color the edge takes, or no bit is true indicating that the edge $(i,j)$ is not in the graph. Already at the representation level, we use the same Boolean variables to represent the color in $A_{i,j}$ and in $A_{j,i}$ for each $1\leq i<j\leq n$. 
We further fix the variables corresponding to $A_{i,i}$ to ${\mathit{false}}$. The rest of the SAT encoding is straightforward. Constraint (\[constraint:simple\]) is encoded to CNF by introducing clauses stating that for each $A_{i,j}$ with $1\leq i<j\leq n$ at most one of the $k$ bits representing the color of the edge $(i,j)$ is true. In our setting typically $k=3$. For three colors, if $b_1,b_2,b_3$ are the bits representing the color of an edge, then three clauses suffice: $(\bar b_1\lor \bar b_2),(\bar b_1\lor \bar b_3),(\bar b_2\lor \bar b_3)$. Constraint (\[constraint:nok\]) is encoded by a single clause per set $I$ of $r$ vertices, expressing that at least one of the bits corresponding to an edge between vertices in $I$ does not have color $c$. Finally, Constraint (\[constraint:coloring\]) is a conjunction of constraints of the previous two forms. In Section \[sec:symBreak\] we will improve on this basic encoding by introducing symmetry breaking constraints (encoded to CNF). However, for now we note that, even with symmetry breaking constraints, using the basic encoding a SAT solver is currently not able to solve any of the open Ramsey coloring problems such as those considered in this paper. In particular, directly applying a SAT solver to search for a $(4,3,3;30)$ Ramsey coloring is hopeless. To facilitate the search for a $(4,3,3;30)$ Ramsey coloring using a SAT encoding, we apply a general approach where, when seeking an $(r_1,\ldots,r_k;n)$ Ramsey coloring, one selects a “preferred” vertex, call it $v_1$, and based on its degrees in each of the $k$ colors, embeds $k$ subgraphs which are corresponding smaller colorings. Using this approach, we apply Corollary \[cor:degrees\] and Observation \[obs:embed\] to establish that a $(4,3,3;30)$ coloring, if one exists, must be ${\langle 13,8,8 \rangle}$ regular. Specifically, all vertices must have 13 edges in the first color and 8 each in the second and third colors.
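Both clause families just described are mechanical to generate. The sketch below is our own illustration, not the paper's compiler: `at_most_one` produces the pairwise clauses over DIMACS-style positive integers, and `no_clique_clauses` produces one clause per $r$-subset, with each literal written as a tuple $(i,j,c)$ meaning “edge $(i,j)$ is not colored $c$”:

```python
from itertools import combinations

def at_most_one(bits):
    """Pairwise at-most-one: one binary clause (~b_p or ~b_q) per pair,
    i.e. k*(k-1)/2 clauses for k bits."""
    return [(-p, -q) for p, q in combinations(bits, 2)]

def no_clique_clauses(n, r, c):
    """The clauses of phi_r^{n,c}: for each r-subset I of [n], at least
    one pair i < j inside I must avoid color c."""
    return [[(i, j, c) for i, j in combinations(I, 2)]
            for I in combinations(range(1, n + 1), r)]
```

For three color bits, `at_most_one([1, 2, 3])` yields exactly the three binary clauses quoted above, and for a $(3,3,3;13)$ instance each color contributes $\binom{13}{3}=286$ clauses of three literals each.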
This regularity result is considered significant progress in the research on Ramsey numbers [@XuRad2015]. This “embedding” approach is often applied in the Ramsey number literature, where the process of completing (or trying to complete) a partial solution (an embedding) to a Ramsey coloring is called *gluing*. See for example the presentations in [@PiwRad2001; @FKRad2004; @PR98]. \[thm:regular\] Any $(4,3,3;30)$ coloring, if one exists, is ${\langle 13,8,8 \rangle}$ regular. By computation as described in the rest of this section. We seek a $(4,3,3;30)$ coloring of $K_{30}$, represented as a $30\times 30$ adjacency matrix $A$. Let $v_1$ correspond to the first row in $A$ with $deg(v_1)={\langle d_1,d_2,d_3 \rangle}$ as prescribed by Corollary \[cor:degrees\]. For each possible triplet ${\langle d_1,d_2,d_3 \rangle}$, except ${\langle 13,8,8 \rangle}$, we take each of the known corresponding colorings for the subgraphs $G^1_{v_1}$, $G^2_{v_1}$, and $G^3_{v_1}$ and embed them into $A$. We then apply a SAT solver to (try to) complete the remaining cells in $A$ to satisfy $\varphi_{4,3,3;30}(A)$ as defined by Constraint (\[constraint:coloring\]) of Figure \[fig:gcp\]. If the SAT solver fails, then no such completion exists. To illustrate the approach, consider the case where $deg(v_1)={\langle 14,8,7 \rangle}$. Figure \[embed\_14\_8\_7\] details one of the embeddings corresponding to this case. The first row and column of $A$ specify the colors of the edges of the 29 neighbors of $v_1$ (in bold). The symbol “$\_$” indicates an integer variable that takes a value between 1 and 3. The neighbors of $v_1$ in color 1 form a submatrix of $A$ embedded in rows (and columns) 2–15 of the matrix in the Figure. By Observation \[obs:embed\] this is a $(3,3,3;14)$ Ramsey coloring, and there are 115 possible such colorings modulo weak isomorphism. The Figure details one of them.
Similarly, there are 3 possible $(4,2,3;8)$ colorings which are subgraphs for the neighbors of $v_1$ in color 2. In Figure \[embed\_14\_8\_7\], rows (and columns) 16–23 detail one such coloring. Finally, there are 9 possible $(4,3,2;7)$ colorings which are subgraphs for the neighbors of $v_1$ in color 3. In Figure \[embed\_14\_8\_7\], rows (and columns) 24–30 detail one such coloring. To summarize, Figure \[embed\_14\_8\_7\] is a partially instantiated adjacency matrix. The first row determines the degrees of $v_1$ in the three colors, and 3 corresponding subgraphs are embedded. The uninstantiated values in the matrix must be completed to obtain a solution that satisfies $\varphi_{4,3,3;30}(A)$ as specified in Constraint (\[constraint:coloring\]) of Figure \[fig:gcp\]. This can be determined using a SAT solver. For the specific example in Figure \[embed\_14\_8\_7\], the CNF generated using our tool set consists of 33[,]{}959 clauses, involves 5[,]{}318 Boolean variables, and is shown to be unsatisfiable in 52 seconds of computation time. For the case where $v_1$ has degrees ${\langle 14,8,7 \rangle}$ in the three colors, this is one of $115\times 3\times 9 = 3105$ instances that need to be checked. Table \[table:regular\] summarizes the experiment which proves Theorem \[thm:regular\]. For each of the possible degrees of vertex 1 in a $(4,3,3;30)$ coloring as prescribed by Corollary \[cor:degrees\], except ${\langle 13,8,8 \rangle}$, and for each possible choice of colorings for the derived subgraphs $G^1_{v_1}$, $G^2_{v_1}$, and $G^3_{v_1}$, we apply a SAT solver to show that the instance $\varphi_{(4,3,3;30)}(A)$ of Constraint (\[constraint:coloring\]) of Figure \[fig:gcp\] cannot be satisfied. The table details, for each degree triple, the number of instances, their average size (number of clauses and Boolean variables), and the average and total times to show that the constraint is not satisfiable.

  $v_1$ degrees   \# instances       \# clauses (avg.)   \# vars (avg.)   unsat (avg.)   unsat (total)
  --------------- ------------------ ------------------- ---------------- -------------- ---------------
  (16,8,5)        54 = 2\*3\*9       32432               5279             51 sec.        0.77 hrs.
  (16,7,6)        270 = 2\*9\*15     32460               5233             420 sec.       31.50 hrs.
  (15,8,6)        90 = 2\*3\*15      33607               5450             93 sec.        2.32 hrs.
  (15,7,7)        162 = 2\*9\*9      33340               5326             1554 sec.      69.94 hrs.
  (14,8,7)        3105 = 115\*3\*9   34069               5324             294 sec.       253.40 hrs.

  : Proving that any $(4,3,3;30)$ Ramsey coloring is ${\langle 13,8,8 \rangle}$ regular (summary).[]{data-label="table:regular"}

All of the SAT instances described in the experiment summarized by Table \[table:regular\] are unsatisfiable: the solver reports “unsat”. To gain confidence in our implementation, we illustrate its application on a satisfiable instance: finding a $(4,3,3;29)$ coloring, which is known to exist. This experiment involves some reverse engineering. In 1966 Kalbfleisch [@kalb66] reported the existence of a circulant $(3,4,4;29)$ coloring. Encoding instance $\varphi_{(4,3,3;29)}(A)$ of Constraint (\[constraint:coloring\]) together with a constraint that states that the adjacency matrix $A$ is circulant results in a CNF with 146[,]{}506 clauses and 8[,]{}394 variables. Using a SAT solver, we obtain a corresponding $(4,3,3;29)$ coloring in less than two seconds of computation time. The solution is ${\langle 12,8,8 \rangle}$ regular and isomorphic to the adjacency matrix depicted as Figure \[embed\_12\_8\_8\]. Now we apply the embedding approach. We take the partial solution (the boldface elements) corresponding to the three subgraphs: $G^1_{v_1}$, $G^2_{v_1}$ and $G^3_{v_1}$ which are respectively $(3,3,3;12)$, $(4,2,3;8)$ and $(4,3,2;8)$ Ramsey colorings. Applying a SAT solver to complete this partial solution to a $(4,3,3;29)$ coloring satisfying Constraint (\[constraint:coloring\]) involves a CNF with 30[,]{}944 clauses and 4[,]{}736 variables and requires under two hours of computation time.
Figure \[embed\_12\_8\_8\] portrays the solution (the gray elements). To apply the embedding approach described in this section to determine if there exists a $(4,3,3;30)$ Ramsey coloring which is ${\langle 13,8,8 \rangle}$ regular would require access to the set ${{\cal R}}(3,3,3;13)$. We defer this discussion until after Section \[sec:33313b\] where we describe how we compute the set of all 78[,]{}892 $(3,3,3;13)$ Ramsey colorings modulo weak isomorphism. Symmetry Breaking: Computing ${{\cal R}}(r_1,\ldots,r_k;n)$ {#sec:symBreak} =========================================================== In this section we prepare the ground to apply a SAT solver to find the set of all $(r_1,\ldots,r_k;n)$ Ramsey colorings modulo weak isomorphism. The constraints are those presented in Figure \[fig:gcp\] and their encoding to CNF is as described in Section \[sec:embed\]. Our final aim is to compute the set of all $(3,3,3;13)$ colorings modulo weak isomorphism. Then we can apply the embedding technique of Section \[sec:embed\] to determine the existence of a ${\langle 13,8,8 \rangle}$ regular $(4,3,3;30)$ Ramsey coloring. Given Theorem \[thm:regular\], this will determine the value of $R(4,3,3)$. Solving hard search problems on graphs, and graph coloring problems in particular, relies heavily on breaking symmetries in the search space. When searching for a graph, the names of the vertices do not matter, and restricting the search modulo graph isomorphism is highly beneficial. When searching for a graph coloring, on top of graph isomorphism, solutions are typically closed under permutations of the colors: the names of the colors do not matter and the term often used is “weak isomorphism” [@PR98] (the equivalence relation is weaker because both node names and edge colors do not matter). When the problem is to compute the set of all solutions modulo (weak) isomorphism the task is even more challenging. 
Often one first attempts to compute all the solutions of the coloring problem, and to then apply one of the available graph isomorphism tools, such as `nauty` [@nauty] to select representatives of their equivalence classes modulo (weak) isomorphism. This is a *generate and test* approach. However, typically the number of solutions is so large that this approach is doomed to fail even though the number of equivalence classes itself is much smaller. The problem is that tools such as `nauty` apply after, and not during, generation. To this end, we follow [@CodishMPS14] where Codish [[*et al.*]{}]{} show that the symmetry breaking approach of [@DBLP:conf/ijcai/CodishMPS13] holds also for graph coloring problems where the adjacency matrix consists of integer variables. This is a *constrain and generate approach*. But, as symmetry breaking does not break all symmetries, it is still necessary to perform some reduction using a tool like `nauty`.[^3] This form of symmetry breaking is an important component in our methodology. **[@DBLP:conf/ijcai/CodishMPS13].** \[def:SBlexStar\] Let $A$ be an $n\times n$ adjacency matrix. Then, $$\label{eq:symbreak} {\textsf{sb}}^*_\ell(A) = \bigwedge{\left\{~A_{i}\preceq_{\{i,j\}}A_{j} \left| \begin{array}{l}i<j\end{array} \right. \right\}}$$ where $A_{i}\preceq_{\{i,j\}}A_{j}$ denotes the lexicographic order between the $i^{th}$ and $j^{th}$ rows of $A$ (viewed as strings) omitting the elements at positions $i$ and $j$ (in both rows). We omit the precise details of how Constraint (\[eq:symbreak\]) is encoded to CNF. In our implementation this is performed by the finite domain constraint compiler  and details can be found in [@jair2013]. Table \[tab:333n1\] illustrates the impact of the symmetry breaking Constraint (\[eq:symbreak\]) on the search for the Ramsey colorings required in the proof of Theorem \[thm:regular\]. 
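Read as a predicate on a concrete matrix, Constraint (\[eq:symbreak\]) is only a few lines of Python. This checker is our own sketch (the paper instead compiles the constraint to CNF):

```python
def sb_lex_star(A):
    """sb*_l(A): for every i < j, row i is lexicographically at most row j
    once the entries at positions i and j are dropped from both rows."""
    n = len(A)
    def drop(row, i, j):
        return [x for p, x in enumerate(row) if p not in (i, j)]
    return all(drop(A[i], i, j) <= drop(A[j], i, j)
               for i in range(n) for j in range(i + 1, n))
```

Within each weak-isomorphism class the search is thereby restricted to the adjacency matrices satisfying this ordering; as noted above, this does not break all symmetries, so a final reduction with `nauty` is still needed.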
The first four rows in the table portray the required instances of the forms $(4,3,2;n)$ and $(4,2,3;n)$ which by definition correspond to $(4,3;n)$ colorings (respectively in colors 1 and 3, and in colors 1 and 2). The next three rows correspond to $(3,3,3;n)$ colorings where $n\in\{14,15,16\}$. The last row illustrates our failed attempt to apply a SAT encoding to compute ${{\cal R}}(3,3,3;13)$. The first column in the table specifies the instance. The column headed by “\#${\setminus}_{\approx}$” specifies the known (except for the last row) number of colorings modulo weak isomorphism [@Rad]. The columns headed by “vars” and “clauses” indicate the numbers of variables and clauses in the corresponding CNF encodings of the coloring problems with and without the symmetry breaking Constraint (\[eq:symbreak\]). The columns headed by “time” indicate the time (in seconds) to find all colorings iterating with a SAT solver. The timeout assumed here is 24 hours. The column headed by “\#” specifies the number of colorings found by iterated SAT solving. In the first four rows, notice the impact of symmetry breaking which reduces the number of solutions by 1–3 orders of magnitude. In the next three rows the reduction is more acute. Without symmetry breaking the colorings cannot be computed within the 24 hour timeout. The sets of colorings obtained with symmetry breaking have been verified to reduce, using `nauty` [@nauty], to the known number of colorings modulo weak isomorphism indicated in the second column.

Abstraction: Degree Matrices for Graph Colorings {#sec:abs}
================================================

This section introduces an abstraction on graph colorings defined in terms of *degree matrices*. The motivation is to solve a hard graph coloring problem by first searching for its degree matrices. Degree matrices are to graph coloring problems as degree sequences [@ErdosGallai1960] are to graph search problems.
A degree sequence is a monotonic nonincreasing sequence of the vertex degrees of a graph. A graphic sequence is a sequence which can be the degree sequence of some graph. The idea underlying our approach is that when the combinatorial problem at hand is too hard, then possibly solving an abstraction of the problem is easier. In this case, a solution of the abstract problem can be used to facilitate the search for a solution of the original problem.

\[def:dm\] Let $A$ be a graph coloring on $n$ vertices with $k$ colors. The *degree matrix* of $A$, denoted $dm(A)$, is an $n\times k$ matrix $M$ such that $M_{i,j} = deg_j(i)$ is the degree of vertex $i$ in color $j$.

Figure \[fig:dm\] illustrates the degree matrix of the graph coloring given as Figure \[embed\_12\_8\_8\]. The three columns correspond to the three colors and the 29 rows to the 29 vertices. The degree matrix consists of 29 identical rows as the corresponding graph coloring is ${\langle 12,8,8 \rangle}$ regular. A degree matrix $M$ represents the set of graphs $A$ such that $dm(A)=M$. Due to properties of weak-isomorphism (vertices as well as colors can be reordered) we can exchange both rows and columns of a degree matrix without changing the set of graphs it represents. In the rest of our construction we adopt a representation in which the rows and columns of a degree matrix are sorted lexicographically. For an $n\times k$ degree matrix $M$ we denote by $lex(M)$ the smallest matrix with rows and columns in non-increasing lexicographic order obtained by permuting the rows and columns of $M$.

\[def:abs\] Let $A$ be a graph coloring on $n$ vertices with $k$ colors. The *abstraction* of $A$ to a degree matrix is $\alpha(A)=lex(dm(A))$. For a set ${{\cal A}}$ of graph colorings we denote $\alpha({{\cal A}}) = {\left\{~\alpha(A) \left| \begin{array}{l}A\in{{\cal A}}\end{array} \right. \right\}}$. Note that if $A$ and $A'$ are weakly isomorphic, then $\alpha(A)=\alpha(A')$.
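Definitions \[def:dm\] and \[def:abs\] can be prototyped in a few lines. The Python sketch below is our own illustration: the lex-minimum is computed by brute force over the $k!$ column permutations (feasible here since $k$ is small), sorting rows non-increasingly for each, and it also exhibits the invariance of $\alpha$ under weak isomorphism noted above.

```python
from itertools import permutations

def degree_matrix(A, k):
    """dm(A): entry (i, c-1) is the degree of vertex i in color c (colors 1..k)."""
    n = len(A)
    return [[sum(1 for j in range(n) if j != i and A[i][j] == c)
             for c in range(1, k + 1)] for i in range(n)]

def alpha(A, k):
    """alpha(A) = lex(dm(A)): the smallest candidate over all column
    permutations, with rows sorted non-increasingly for each choice."""
    M = degree_matrix(A, k)
    best = min(
        sorted((tuple(row[c] for c in p) for row in M), reverse=True)
        for p in permutations(range(k))
    )
    return [list(row) for row in best]
```

For example, relabeling the vertices of a coloring and swapping its colors leaves `alpha` unchanged, as the weak-isomorphism property requires.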
\[def:conc\] Let $M$ be an $n\times k$ degree matrix. Then, $\gamma(M) = {\left\{~A \left| \begin{array}{l}\alpha(A)=M\end{array} \right. \right\}}$ is the set of graph colorings represented by $M$. For a set ${{\cal M}}$ of degree matrices we denote $\gamma({{\cal M}}) = \cup{\left\{~\gamma(M) \left| \begin{array}{l}M\in{{\cal M}}\end{array} \right. \right\}}$. Let $\varphi(A)$ be a graph coloring problem in $k$ colors on an $n\times n$ adjacency matrix, $A$. Our strategy to compute ${{\cal A}}=sol(\varphi(A))$ is to first compute an over-approximation ${{\cal M}}$ of degree matrices such that $\gamma({{\cal M}})\supseteq{{\cal A}}$ and to then use ${{\cal M}}$ to guide the computation of ${{\cal A}}$. We denote the set of solutions of the graph coloring problem, $\varphi(A)$, which have a given degree matrix, $M$, by $sol_M(\varphi(A))$. Then $$\begin{aligned} \label{eq:approx} sol(\varphi(A)) &=& \bigcup_{M\in{{\cal M}}} sol_M(\varphi(A))\\ \label{eq:solM} sol_M(\varphi(A)) & = & sol(\varphi(A)\wedge\alpha(A){=}M)\end{aligned}$$ Equation (\[eq:approx\]) implies that we can compute the solutions to a graph coloring problem $\varphi(A)$ by computing the independent sets $sol_M(\varphi(A))$ for any over-approximation ${{\cal M}}$ of the degree matrices of the solutions of $\varphi(A)$. This facilitates the computation for two reasons: (1) the problem is now broken into a set of independent sub-problems for each $M\in{{\cal M}}$ which can be solved in parallel, and (2) the computation of each individual $sol_M(\varphi(A))$ is now directed using $M$. The constraint $\alpha(A){=}M$ on the right side of Equation (\[eq:solM\]) is encoded to SAT by introducing (encodings of) cardinality constraints. For each row of the matrix $A$ the corresponding row in $M$ specifies the number of elements with value $c$ (for $1\leq c\leq k$) that must be in that row. We omit the precise details of the encoding to CNF.
In our implementation this is performed by the finite domain constraint compiler and details can be found in [@jair2013]. When computing $sol_M(\varphi(A))$ for a given degree matrix we can no longer apply the symmetry breaking Constraint (\[eq:symbreak\]) as it might constrain the rows of $A$ in a way that contradicts the constraint $\alpha(A)=M$ on the right side of Equation (\[eq:solM\]). However, we can refine Constraint (\[eq:symbreak\]) to break symmetries on the rows of $A$ only when the corresponding rows in $M$ are equal. Then $M$ can be viewed as inducing an ordered partition of $A$ and Constraint (\[eq:sbdm\]) is, in the terminology of [@DBLP:conf/ijcai/CodishMPS13], a partitioned lexicographic symmetry break. In the following, $M_i$ and $M_j$ denote the $i^{th}$ and $j^{th}$ rows of matrix $M$. $$\label{eq:sbdm} {\textsf{sb}}^*_\ell(A,M) = \bigwedge_{i<j} \left(\begin{array}{l} \big(M_i=M_j\Rightarrow A_i\preceq_{\{i,j\}} A_j\big) \end{array}\right)$$ The following refines Equation (\[eq:solM\]) introducing the symmetry breaking predicate. $$\label{eq:scenario1} sol_M(\varphi(A)) = sol(\varphi(A)\wedge (\alpha(A){=}M) \wedge{\textsf{sb}}^*_\ell(A,M))$$ To justify that Equations (\[eq:solM\]) and (\[eq:scenario1\]) both compute $sol_M(\varphi(A))$, modulo weak isomorphism, we must show that if ${\textsf{sb}}^*_\ell(A,M)$ excludes a solution then there is another weakly isomorphic solution that is not excluded.

\[thm:sbl\_star\] Let $A$ be an adjacency matrix with $\alpha(A) = M$. Then, there exists $A'\approx A$ such that $\alpha(A')=M$ and ${\textsf{sb}}^{*}_\ell(A',M)$ holds.

Computing Degree Matrices for $R(3,3,3;13)$ {#sec:33313}
===========================================

This section describes how we compute a set of degree matrices that approximate those of the solutions of instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]). We apply a strategy mixing SAT solving with brute-force enumeration as follows.
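The partitioned symmetry break ${\textsf{sb}}^*_\ell(A,M)$ of Constraint (\[eq:sbdm\]), which this strategy relies on, compares rows of $A$ only when the corresponding rows of $M$ agree. A small Python illustration (again ours, not the paper's CNF encoding):

```python
def row_omitting(A, r, omit):
    """Row r of A with the columns in `omit` dropped."""
    return tuple(A[r][c] for c in range(len(A)) if c not in omit)

def sb_lex_star_partitioned(A, M):
    """Check sb*_l(A, M): rows i < j are lex-compared (omitting positions
    i and j) only when the degree-matrix rows M[i] and M[j] are equal."""
    n = len(A)
    return all(
        M[i] != M[j] or row_omitting(A, i, {i, j}) <= row_omitting(A, j, {i, j})
        for i in range(n) for j in range(i + 1, n)
    )
```

When all rows of $M$ coincide, the predicate degenerates to the full lexicographic break of Definition \[def:SBlexStar\].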
The computation of the degree matrices is summarized in Table \[tab:333\_computeDMs\]. In the first step, we compute bounds on the degrees of the nodes in any $R(3,3,3;13)$ coloring.

\[lemma:db\] Let $A$ be an $R(3,3,3;13)$ coloring. Then for every vertex $x$ in $A$, and color $c\in\{1,2,3\}$, $2\leq deg_{c}(x)\leq 5$.

By solving instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]), seeking a graph with some degree less than 2 or greater than 5. The CNF encoding is of size 13,672 clauses with 2,748 Boolean variables, takes under 15 seconds to solve, and yields an UNSAT result, which implies that such a graph does not exist. In the second step, we enumerate the degree sequences with values within the bounds specified by Lemma \[lemma:db\]. Recall that the degree sequence of an undirected graph is the non-increasing sequence of its vertex degrees. Not every non-increasing sequence of integers corresponds to a degree sequence. A sequence that corresponds to a degree sequence is said to be graphical. The number of degree sequences of graphs with 13 vertices is 836,315 (see Sequence number `A004251` of The On-Line Encyclopedia of Integer Sequences published electronically at <http://oeis.org>). However, when the degrees are bounded by Lemma \[lemma:db\] there are only 280.

\[lemma:ds\] There are 280 degree sequences with values between $2$ and $5$.

Straightforward enumeration using the algorithm of Erdős and Gallai [@ErdosGallai1960]. In the third step, we test the 280 degree sequences identified by Lemma \[lemma:ds\] to determine which of them might occur as the left column in a degree matrix.

\[lemma:ds2\] Let $A$ be an $R(3,3,3;13)$ coloring and let $M=\alpha(A)$. Then, (a) the left column of $M$ is one of the 280 degree sequences identified in Lemma \[lemma:ds\]; and (b) there are only 80 degree sequences from the 280 which are the left column of $\alpha(A)$ for some coloring $A$ in $R(3,3,3;13)$.
By solving instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]): for each degree sequence from Lemma \[lemma:ds\], we seek a solution with that degree sequence in the first color. This involves 280 instances with average CNF size: 10,861 clauses and 2,215 Boolean variables. The total solving time is 375.76 hours and the hardest instance required about 50 hours. Exactly 80 of these instances were satisfiable. In the fourth step we extend the 80 degree sequences identified in Lemma \[lemma:ds2\] to obtain all possible degree matrices.

\[lemma:dm\] Given the 80 degree sequences identified in Lemma \[lemma:ds2\] as potential left columns of a degree matrix, there are 11,933 possible degree matrices.

By enumeration. For a degree matrix: the rows and columns are lex sorted, the rows must sum to 12, and the columns must be graphical (when sorted). We enumerate all such degree matrices and then select their smallest representatives under permutations of rows and columns. The computation requires a few seconds. In the fifth step, we test the 11,933 degree matrices identified by Lemma \[lemma:dm\] to determine which of them are the abstraction of some $R(3,3,3;13)$ coloring.

\[lemma:dm2\] From the 11,933 degree matrices identified in Lemma \[lemma:dm\], 999 are $\alpha(A)$ for a coloring $A$ in ${{\cal R}}(3,3,3;13)$.

By solving instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]) together with a given degree matrix to test if it is satisfiable. This involves 11,933 instances with average CNF size: 7,632 clauses and 1,520 Boolean variables. The total solving time is 126.55 hours and the hardest instance required 0.88 hours.
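The enumerations behind Lemmas \[lemma:ds\] and \[lemma:dm\] are reproducible with a few lines of code. The Python sketch below is our own illustration: it implements the Erdős–Gallai graphicality test and recounts the 280 bounded degree sequences of Lemma \[lemma:ds\]; the predicate `candidate_dm` states the necessary conditions used in Lemma \[lemma:dm\] (rows lex-sorted, each row summing to 12, each column graphical), without performing the full extension to 11,933 matrices.

```python
from itertools import combinations_with_replacement

def is_graphical(seq):
    """Erdos-Gallai: a non-negative integer sequence is the degree sequence
    of a simple graph iff its sum is even and, with d sorted non-increasingly,
    sum(d[:r]) <= r(r-1) + sum(min(x, r) for x in d[r:]) for every r."""
    d = sorted(seq, reverse=True)
    if sum(d) % 2 == 1:
        return False
    return all(
        sum(d[:r]) <= r * (r - 1) + sum(min(x, r) for x in d[r:])
        for r in range(1, len(d) + 1)
    )

def bounded_degree_sequences(n=13, lo=2, hi=5):
    """Non-increasing graphical sequences of length n with values in [lo, hi]."""
    return [s for s in combinations_with_replacement(range(hi, lo - 1, -1), n)
            if is_graphical(s)]

def candidate_dm(M, row_sum=12):
    """Necessary conditions for a 13x3 degree matrix of a (3,3,3;13) coloring."""
    rows_sorted = all(M[i] >= M[i + 1] for i in range(len(M) - 1))
    return (rows_sorted
            and all(sum(row) == row_sum for row in M)
            and all(is_graphical(col) for col in zip(*M)))
```

Running `len(bounded_degree_sequences())` reproduces the count 280 of Lemma \[lemma:ds\].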
| Step | | Time | \#Vars | \#Clauses |
|------|--------------------------------------------------------------------------------------|----------------------------------|------------|------------|
| 1 | compute degree bounds (Lemma \[lemma:db\]) (1 instance, unsat) | | 2748 | 13672 |
| 2 | enumerate 280 possible degree sequences (Lemma \[lemma:ds\]) | | | |
| 3 | test degree sequences (Lemma \[lemma:ds2\]) (280 instances: 200 unsat, 80 sat) | 16.32 hrs. (hardest: 1.34 hrs) | 1215 (avg) | 7729 (avg) |
| 4 | enumerate 11,933 degree matrices (Lemma \[lemma:dm\]) | | | |
| 5 | test degree matrices (Lemma \[lemma:dm2\]) (11,933 instances: 10,934 unsat, 999 sat) | 126.55 hrs. (hardest: 0.88 hrs.) | 1520 (avg) | 7632 (avg) |

: Computing the degree matrices for ${{\cal R}}(3,3,3;13)$ step by step.[]{data-label="tab:333_computeDMs"}

Computing ${{\cal R}}(3,3,3;13)$ from Degree Matrices {#sec:33313b}
=====================================================

We describe the computation of the set ${{\cal R}}(3,3,3;13)$ starting from the 999 degree matrices identified in Lemma \[lemma:dm2\]. Table \[tab:333\_times\] summarizes the two step experiment.

| Step | | Time |
|------|-------------------------------------------------------------------------------------------------|--------------------------------------|
| 1 | compute all $(3,3,3;13)$ Ramsey colorings per degree matrix (999 instances, 129,188 solutions) | total: 136.31 hr., hardest: 4.3 hr. |
| 2 | reduce modulo $\approx$ (78,892 solutions) | |

: Computing ${{\cal R}}(3,3,3;13)$ step by step.[]{data-label="tab:333_times"}

#### **step 1:**

For each degree matrix we compute, using a SAT solver, all corresponding solutions of Equation (\[eq:scenario1\]), where $\varphi(A)=\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]) and $M$ is one of the 999 degree matrices identified in Lemma \[lemma:dm2\]. This generates in total 129,188 $(3,3,3;13)$ Ramsey colorings. Table \[tab:333\_times\] details the total solving time for these instances and the solving times for the hardest instance for each SAT solver.
The largest number of graphs generated by a single instance is 3720.

#### **step 2:**

The 129,188 $(3,3,3;13)$ colorings from step 1 are reduced modulo weak-isomorphism using `nauty` [@nauty]. This process results in a set with 78,892 graphs. We note that recently, the set ${{\cal R}}(3,3,3;13)$ has also been computed independently by Stanislaw Radziszowski, and independently by Richard Kramer and Ivan Livinsky [@stas:personalcommunication].

There is no ${\langle 13,8,8 \rangle}$ Regular $(4,3,3;30)$ Coloring {#sec:433_30}
====================================================================

In order to prove that there is no ${\langle 13,8,8 \rangle}$ regular $(4,3,3;30)$ coloring using the embedding approach of Section \[sec:embed\], we need to check that $78{,}892\times 3\times 3 = 710{,}028$ corresponding instances are unsatisfiable. These correspond to the elements in the cross product of ${{\cal R}}(3,3,3;13)$, ${{\cal R}}(4,2,3;8)$ and ${{\cal R}}(4,3,2;8)$. $\left\{ \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & 1 & 3 & 3 \\ 1 & 3 & 0 & 3 & 1 & 3 & 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & 3 & 1 & 1 \\ 3 & 1 & 1 & 3 & 0 & 3 & 3 & 1 \\ 3 & 1 & 3 & 3 & 3 & 0 & 1 & 1 \\ 3 & 3 & 1 & 1 & 3 & 1 & 0 & 3 \\ 3 & 3 & 3 & 1 & 1 & 1 & 3 & 0 \end{smallmatrix}\end{scriptsize}$}, \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & 3 & 3 & 3 \\ 1 & 3 & 0 & 3 & 3 & 1 & 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & 1 & 3 & 1 \\ 3 & 1 & 3 & 3 & 0 & 1 & 1 & 3 \\ 3 & 3 & 1 & 1 & 1 & 0 & 3 & 3 \\ 3 & 3 & 1 & 3 & 1 & 3 & 0 & 1 \\ 3 & 3 & 3 & 1 & 3 & 3 & 1 & 0 \end{smallmatrix}\end{scriptsize}$}, \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & 3 & 3 & 3 \\ 1 & 3 & 0 & 3 & 3 & 1 & 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & 1 & 3 & 1 \\ 3 & 1 & 3 & 3 & 0 & 1 & 3 & 3 \\ 3 & 3 & 1 & 1 & 1 & 0 & 3 & 3 \\ 3 & 3 & 1 & 3 & 3 & 3 & 0 & 1 \\ 3 & 3 & 3 & 1 & 3 & 3 & 1 & 0
\end{smallmatrix}\end{scriptsize}$}\right\} \subseteq \left\{ \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & {\mathtt{A}}& 3 & 3 \\ 1 & 3 & 0 & 3 & {\mathtt{A}}& {\mathtt{B}}& 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & {\mathtt{B}}& {\mathtt{A}}& 1 \\ 3 & 1 & {\mathtt{A}}& 3 & 0 & {\mathtt{B}}& {\mathtt{C}}& {\mathtt{A}}\\ 3 & {\mathtt{A}}& {\mathtt{B}}& {\mathtt{B}}& {\mathtt{B}}& 0 & {\mathtt{A}}& {\mathtt{A}}\\ 3 & 3 & 1 & {\mathtt{A}}& {\mathtt{C}}& {\mathtt{A}}& 0 & {\mathtt{B}}\\ 3 & 3 & 3 & 1 & {\mathtt{A}}& {\mathtt{A}}& {\mathtt{B}}& 0 \\ \end{smallmatrix}\end{scriptsize}$} \left| \begin{scriptsize}\begin{array}{l} {\tiny {\mathtt{A}},{\mathtt{B}},{\mathtt{C}}\in\{1,3\}} \\ {\mathtt{A}}\neq {\mathtt{B}}\end{array}\end{scriptsize} \right.\right\}$

To decrease the number of instances by a factor of $9$, we approximate the three $(4,2,3;8)$ colorings by a single description as demonstrated in Figure \[figsubsumer\]. The constrained matrix on the right has four solutions which include the three $(4,2,3;8)$ colorings on the left. We apply a similar approach for the $(4,3,2;8)$ colorings. So, in fact we have a total of only $78{,}892$ embedding instances to consider. In addition to the constraints in Figure \[fig:gcp\], we add constraints to specify that each row of the adjacency matrix has the prescribed number of edges in each color (13, 8 and 8). By application of a SAT solver, we have determined all $78{,}892$ instances to be unsatisfiable. The average size of an instance is 36,259 clauses with 5187 variables. The total solving time is 128.31 years (running in parallel on 456 threads). The average solving time is 14 hours while the median is 4 hours. Only 797 instances took more than one week to solve. The worst-case solving time is 96.36 days. The two hardest instances are detailed in Appendix \[apdx:hardest\].
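The subsumption claimed in Figure \[figsubsumer\] can be verified mechanically. Below is an illustrative Python check of our own (matrices transcribed from the figure): instantiating ${\mathtt{A}},{\mathtt{B}},{\mathtt{C}}\in\{1,3\}$ with ${\mathtt{A}}\neq{\mathtt{B}}$ in the constrained matrix yields exactly four colorings, among them the three $(4,2,3;8)$ colorings on the left.

```python
from itertools import product

# The three (4,2,3;8) colorings (left of Figure [figsubsumer]).
G1 = [[0,1,1,1,3,3,3,3],[1,0,3,3,1,1,3,3],[1,3,0,3,1,3,1,3],[1,3,3,0,3,3,1,1],
      [3,1,1,3,0,3,3,1],[3,1,3,3,3,0,1,1],[3,3,1,1,3,1,0,3],[3,3,3,1,1,1,3,0]]
G2 = [[0,1,1,1,3,3,3,3],[1,0,3,3,1,3,3,3],[1,3,0,3,3,1,1,3],[1,3,3,0,3,1,3,1],
      [3,1,3,3,0,1,1,3],[3,3,1,1,1,0,3,3],[3,3,1,3,1,3,0,1],[3,3,3,1,3,3,1,0]]
G3 = [[0,1,1,1,3,3,3,3],[1,0,3,3,1,3,3,3],[1,3,0,3,3,1,1,3],[1,3,3,0,3,1,3,1],
      [3,1,3,3,0,1,3,3],[3,3,1,1,1,0,3,3],[3,3,1,3,3,3,0,1],[3,3,3,1,3,3,1,0]]

# The constrained matrix (right of the figure); 'A', 'B', 'C' are placeholders.
T = [[0,1,1,1,3,3,3,3],[1,0,3,3,1,'A',3,3],[1,3,0,3,'A','B',1,3],
     [1,3,3,0,3,'B','A',1],[3,1,'A',3,0,'B','C','A'],
     [3,'A','B','B','B',0,'A','A'],[3,3,1,'A','C','A',0,'B'],
     [3,3,3,1,'A','A','B',0]]

def instantiate(T, a, b, c):
    """Substitute concrete colors for the placeholders A, B, C."""
    sub = {'A': a, 'B': b, 'C': c}
    return [[sub.get(x, x) for x in row] for row in T]

# All instantiations with A, B, C in {1, 3} and A != B.
solutions = [instantiate(T, a, b, c)
             for a, b, c in product([1, 3], repeat=3) if a != b]
```

The four instantiations correspond to $({\mathtt{A}},{\mathtt{B}},{\mathtt{C}})\in\{(1,3,1),(1,3,3),(3,1,1),(3,1,3)\}$, of which all but the first are the three colorings shown.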
Table \[hpi\] specifies, in the second column, the total number of instances that can be shown unsatisfiable within the time specified in the first column. The third column indicates the increment in percentage (within 10 hours we solve 71.46%, within 20 hours we solve an additional 12.11%, etc). The last rows in the table indicate that there are 4 instances which require between 1500 and 2000 hours of computation, and 2 that require between 2000 and 2400 hours.

Conclusion {#sec:conclude}
==========

We have applied SAT solving techniques together with a methodology using abstraction and symmetry breaking to construct a computational proof that the Ramsey number $R(4,3,3)=30$. Our strategy is based on the search for a $(4,3,3;30)$ Ramsey coloring, which we show does not exist. This implies that $R(4,3,3)\leq 30$ and hence, because of known bounds, that $R(4,3,3) = 30$. The precise value of $R(4,3,3)$ had remained unknown for almost 50 years. We expect this methodology, combining SAT solving with abstraction and symmetry breaking, to apply to a range of other hard graph coloring problems. The question of whether a computational proof constitutes a [*proper*]{} proof is a controversial one. Most famously, the issue caused much heated debate after publication of the computer proof of the Four Color Theorem [@appel76]. It is straightforward to justify an existence proof (i.e. a [*SAT*]{} result), as it is easy to verify that the witness produced satisfies the desired properties. Justifying an [*UNSAT*]{} result is more difficult. If nothing else, we are certainly required to add the proviso that our results are based on the assumption of a lack of bugs in the entire tool chain (constraint solver, SAT solver, C-compiler etc.) used to obtain them. Most modern SAT solvers support the option to generate a proof certificate for UNSAT instances (see e.g.
[@HeuleHW14]), in the DRAT format [@WetzlerHH14], which can then be checked by a theorem prover. This might be useful to prove the lack of bugs originating from the SAT solver but does not offer any guarantee concerning bugs in the generation of the CNF. Moreover, the DRAT certificates for an application like that described in this paper are expected to be of unmanageable size. Our proofs are based on two main “computer programs”. The first was applied to compute the set ${{\cal R}}(3,3,3;13)$ with its $78{,}892$ Ramsey colorings. The fact that at least two other groups of researchers (Stanislaw Radziszowski, and independently Richard Kramer and Ivan Livinsky) report having computed this set and quote [@stas:personalcommunication] the same number of elements is reassuring. The second program was applied to complete partially instantiated adjacency matrices, embedding smaller Ramsey colorings, to determine if they can be extended to Ramsey colorings. This program was applied to show the non-existence of a $(4,3,3;30)$ Ramsey coloring. Here we gain confidence from the fact that the same program does find Ramsey colorings when they are known to exist; for example, the $(4,3,3;29)$ coloring depicted in Figure \[embed\_12\_8\_8\]. All of the software used to obtain our results is publicly available, as well as the individual constraint models and their corresponding encodings to CNF. For details, see the appendix.

Acknowledgments {#acknowledgments .unnumbered}
---------------

We thank Stanislaw Radziszowski for his guidance and comments which helped improve the presentation of this paper. In particular, Stanislaw proposed to show that our technique is able to find the $(4,3,3;29)$ coloring depicted in Figure \[embed\_12\_8\_8\].

[10]{} K. Appel and W. Haken. Every map is four colourable. , 82:711–712, 1976. M. Codish, A. Miller, P. Prosser, and P. J. Stuckey. Breaking symmetries in graph representation. In F.
Rossi, editor, [*Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, China*]{}. IJCAI/AAAI, 2013. M. Codish, A. Miller, P. Prosser, and P. J. Stuckey. Constraints for symmetry breaking in graph representation. Full version of [@DBLP:conf/ijcai/CodishMPS13] (in preparation), 2014. P. Erdős and T. Gallai. Graphs with prescribed degrees of vertices (in Hungarian). , pages 264–274, 1960. Available from <http://www.renyi.hu/~p_erdos/1961-05.pdf>. S. E. Fettes, R. L. Kramer, and S. P. Radziszowski. An upper bound of 62 on the classical Ramsey number r(3, 3, 3, 3). , 72, 2004. M. Heule, W. A. H. Jr., and N. Wetzler. Bridging the gap between easy generation and efficient verification of unsatisfiability proofs. , 24(8):593–607, 2014. J. G. Kalbfleisch. . PhD thesis, University of Waterloo, January 1966. B. McKay. *nauty* user’s guide (version 1.5). Technical Report TR-CS-90-02, Australian National University, Computer Science Department, 1990. K. L. McMillan. Applying SAT methods in unbounded symbolic model checking. In E. Brinksma and K. G. Larsen, editors, [*Computer Aided Verification, 14th International Conference, Proceedings*]{}, volume 2404 of [*Lecture Notes in Computer Science*]{}, pages 250–264. Springer, 2002. A. Metodi, M. Codish, and P. J. Stuckey. Boolean equi-propagation for concise and efficient SAT encodings of combinatorial problems. , 46:303–341, 2013. K. Piwakowski. On Ramsey number r(4, 3, 3) and triangle-free edge-chromatic graphs in three colors. , 164(1-3):243–249, 1997. K. Piwakowski and S. P. Radziszowski. $30 \leq {R}(3,3,4) \leq 31$. , 27:135–141, 1998. K. Piwakowski and S. P. Radziszowski. Towards the exact value of the Ramsey number r(3, 3, 4). In [*Proceedings of the 33-rd Southeastern International Conference on Combinatorics, Graph Theory, and Computing*]{}, volume 148, pages 161–167. Congressus Numerantium, 2001. <http://www.cs.rit.edu/~spr/PUBL/paper44.pdf>. S. P.
Radziszowski. Personal communication. January, 2015. S. P. Radziszowski. Small Ramsey numbers. , 1994. Revision \#14: January, 2014. M. Soos. , v2.5.1. <http://www.msoos.org/cryptominisat2>, 2010. D. Stolee. Canonical labelings with nauty. Computational Combinatorics (Blog), Entry from September 20, 2012. <http://computationalcombinatorics.wordpress.com> (viewed October 2015). N. Wetzler, M. Heule, and W. A. H. Jr. Drat-trim: Efficient checking and trimming using expressive clausal proofs. In C. Sinz and U. Egly, editors, [*Theory and Applications of Satisfiability Testing, 17th International Conference, Proceedings*]{}, volume 8561 of [*Lecture Notes in Computer Science*]{}, pages 422–429. Springer, 2014. X. Xu and S. P. Radziszowski. On some open questions for Ramsey and Folkman numbers. , 2015. (to appear).

Selected Proofs {#proofs}
===============

**Lemma**  \[lemma:closed\].     \[**${{\cal R}}(r_1,r_2,\ldots,r_k;n)$ is closed under $\approx$**\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be graph colorings in $k$ colors such that $(G,\kappa_1) \approx_{\pi,\sigma} (H,\kappa_2)$. Then, $$(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff (H,\kappa_2) \in {{\cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$$ Assume that $(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n)$ and, in contradiction, that $(H,\kappa_2) \notin {{\cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$. Let $R$ denote a monochromatic clique of size $r_s$ in $H$ and $R^{-1}$ the inverse of $R$ in $G$. From Definition \[def:weak\_iso\], $(u,v) \in R \iff (\pi^{-1}(u), \pi^{-1}(v))\in R^{-1}$ and $\kappa_2(u,v) = \sigma^{-1}(\kappa_1(u,v))$. Consequently $R^{-1}$ is a monochromatic clique of size $r_s$ in $(G,\kappa_1)$, in contradiction to $(G,\kappa_1)$ $\in$ ${{\cal R}}(r_1,r_2,\ldots,r_k;n)$.

**Theorem**  \[thm:sbl\_star\].     \[**correctness of ${\textsf{sb}}^{*}_\ell(A,M)$**\] Let $A$ be an adjacency matrix with $\alpha(A) = M$.
Then, there exists $A'\approx A$ such that $\alpha(A')=M$ and ${\textsf{sb}}^{*}_\ell(A',M)$ holds. Let $C={\left\{~A' \left| \begin{array}{l}A'\approx A \wedge \alpha(A')=M\end{array} \right. \right\}}$. Obviously $C\neq \emptyset$ because $A\in C$, and therefore there exists an $A_{min}=min_{\preceq} C$. Therefore, $A_{min} \preceq A'$ for all $A' \in C$. Now we can view $M$ as inducing an ordered partition on $A$: vertices $u$ and $v$ are in the same component if and only if the corresponding rows of $M$ are equal. Relying on Theorem 4 from [@DBLP:conf/ijcai/CodishMPS13], we conclude that ${\textsf{sb}}^{*}_\ell(A_{min},M)$ holds.

The Two Hardest Instances {#apdx:hardest}
=========================

The following partial adjacency matrices are the two hardest instances described in Section \[sec:433\_30\], from the total 78,892. Both include the constraints: ${\mathtt{A}},{\mathtt{B}},{\mathtt{C}}\in\{1,3\}$, ${\mathtt{D}},{\mathtt{E}},{\mathtt{F}}\in\{1,2\}$, ${\mathtt{A}}\neq {\mathtt{B}}$, ${\mathtt{D}}\neq {\mathtt{E}}$. The corresponding CNF representations consist of 5204 Boolean variables (each), 36,626 clauses for the left instance and 36,730 for the right instance. SAT solving times to show these instances UNSAT are 8,325,246 seconds for the left instance and 7,947,257 for the right.

Making the Instances Available
==============================

The statistics from the proof that $R(4,3,3)=30$ are available from the domain:

> <http://cs.bgu.ac.il/~mcodish/Benchmarks/Ramsey334>.

Additionally, we have made a small sample (30) of the instances available. Here we provide instances with the degrees ${\langle 13,8,8 \rangle}$ in the three colors. The selected instances represent the varying hardness encountered during the search.
The instances numbered $\{27765$, $39710$, $42988$, $36697$, $13422$, $24578$, $69251$, $39651$, $43004$, $75280\}$ are the hardest, the instances numbered $\{4157$, $55838$, $18727$, $43649$, $26725$, $47522$, $9293$, $519$, $23526$, $29880\}$ are the median, and the instances numbered $\{78857$, $78709$, $78623$, $78858$, $28426$, $77522$, $45135$, $74735$, $75987$, $77387\}$ are the easiest. A complete set of both the  models and the DIMACS CNF files are available upon request. Note however that they weigh around 50GB when zipped. The files in [bee\_models.zip](bee_models.zip) detail constraint models, each one in a separate file. The file named `r433_30_Instance#.bee` contains a single Prolog clause of the form

> `model(Instance#,Map,ListOfConstraints) :- {...details...} .`

where `Instance#` is the instance number, `Map` is a partially instantiated adjacency matrix associating the unknown adjacency matrix cells with variable names, and `ListOfConstraints` are the finite domain constraints defining their values. The syntax is that of , however the interested reader can easily convert these to their favorite finite domain constraint language. Note that the Boolean values ${\mathit{true}}$ and ${\mathit{false}}$ are represented in  by the constants $1$ and $-1$. Figure \[fig:bee\] details the  constraints which occur in the above mentioned models.
| | Constraint | Meaning |
|-----|------------------------------------------------------|------------------------------------------------------------------|
| (1) | $\mathtt{new\_int(I,c_1,c_2)}$ | declare integer: $\mathtt{c_1\leq I\leq c_2}$ |
| (2) | $\mathtt{bool\_array\_or([X_1,\ldots,X_n])}$ | clause: $\mathtt{X_1 \vee X_2 \cdots \vee X_n}$ |
| (3) | $\mathtt{bool\_array\_sum\_eq([X_1,\ldots,X_n],~I)}$ | Boolean cardinality: $\mathtt{(\Sigma ~X_i) = I}$ |
| (4) | $\mathtt{int\_eq\_reif(I_1,I_2,~X)}$ | reified integer equality: $\mathtt{I_1 = I_2 \Leftrightarrow X}$ |
| (5) | $\mathtt{int\_neq(I_1,I_2)}$ | $\mathtt{I_1 \neq I_2}$ |
| (6) | $\mathtt{int\_gt(I_1,I_2)}$ | $\mathtt{I_1 > I_2}$ |

The files in [cnf\_models.zip](cnf_models.zip) correspond to CNF encodings for the constraint models. Each instance is associated with two files: `r433_30_instance#.dimacs` and `r433_30_instance#.map`. These consist, respectively, of a DIMACS file and a map file which associates the Booleans in the DIMACS file with the integer variables in a corresponding partially instantiated adjacency matrix. The map file specifies for each pair $(i,j)$ of vertices a triplet $[B_1,B_2,B_3]$ of Boolean variables (or values) specifying the presence of an edge in each of the three colors. Each such $B_i$ is either the name of a DIMACS variable, if it is greater than 1, or a truth value $1$ (${\mathit{true}}$), or $-1$ (${\mathit{false}}$).

[^1]: Supported by the Israel Science Foundation, grant 182/13.

[^2]: Recently, the set ${{\cal R}}(3,3,3;13)$ has also been computed independently by: Stanislaw Radziszowski, Richard Kramer and Ivan Livinsky [@stas:personalcommunication].

[^3]: Note that `nauty` does not handle edge colored graphs and weak isomorphism directly. We applied an approach called $k$-layering described by Derrick Stolee [@Stolee].
GREATER ....a study of the book of Hebrews A sermon series by Jay Lovelace....beginning February 18, 2018

ANNOUNCEMENTS AND HAPPENINGS... This week and next...

AWANA...February 21 - Regular Night! Begins at 6:30 p.m.

Small Groups…The Friday Small Groups will meet on February 23 at 6:30 p.m.; the Lovelace group at the Wilson’s home and the Barker group at the church.

AWANA Grand Prix Garage...Kids, if you want advice on your race car for the Grand Prix or time to test the ride, come to the church at 9:00 a.m. on Saturday, February 24. Volunteers will be there to help you with any finishing touches. (Note: the Grand Prix will be held on Saturday, March 3, beginning at 10:00 a.m.).

Prayer Meeting...February 25 at 4:30 p.m. at the church

Defending Your Faith...Join Marty Engel on Monday, February 26 for discussion time about everyday questions that Christians are asked by nonbelievers. Mark your calendars!
Tag: Eloy Casados Original US release date: December 5, 2008 Production budget: $25,000,000 Worldwide gross: $27,426,335 There are timely films and then there are films that are before their time. Ron Howard is probably seen by most as a director who frequently makes good or very good films and occasionally makes a great one. Most recently, a lot...
Stefan Priebe Stefan Priebe is a psychologist and psychiatrist of German and British nationality. He grew up in West-Berlin, studied in Hamburg, and was Head of the Department of Social Psychiatry at the Free University Berlin until 1997. He is Professor of Social and Community Psychiatry at Queen Mary, University of London, and Director of a World Health Organization collaborating centre, the only one specifically for Mental Health Services Development. He heads a research group in social psychiatry and has published more than 600 peer-reviewed scientific papers. References External links Category:1953 births Category:Living people Category:Place of birth missing (living people) Category:German psychologists Category:German psychiatrists Category:British psychologists Category:British psychiatrists Category:Free University of Berlin faculty Category:Academics of Queen Mary University of London Category:People from Berlin
{ "pile_set_name": "wikipedia_en" }
Liushan Liushan may refer to these places in China: Liushan Subdistrict (刘山街道), a subdistrict of Xinfu District, Fushun, Liaoning Towns Liushan, Guangxi (流山), in Liuzhou, Guangxi Liushan, Henan (留山), in Nanzhao County, Henan Liushan, Shandong (柳山), in Linqu County, Shandong See also Liu Shan (207–271), Shu Han emperor Liu Shan (Ming dynasty) (died 1427), Ming dynasty general
{ "pile_set_name": "wikipedia_en" }
The date is fast approaching for our spring rally. I have posted the reservation information in the Calendar section, and I will post more details there as they become available. If you have any questions please e-mail me at txjeff123@gmail.com.
{ "pile_set_name": "pile-cc" }
jOOQ on The ORM Foundation? I am the developer of jOOQ, a Java database abstraction framework. I was wondering whether jOOQ might be an interesting tool for discussion on your website, even if it is not exactly an ORM in the classic meaning (as in mapping objects to the relational world > ORM). Instead, jOOQ uses a reverse engineering paradigm (as in mapping relational entities to objects > "ROM"). Re: jOOQ on The ORM Foundation? Object Role Modeling (the original ORM) is not the same thing as Object/Relational Mapping. Object/Relational Mapping is still kind-of relevant and interesting to us, since Object Role Modeling is used to design databases (which then will require programmatic access). But there are probably better places to discuss it :] Your query DSL looks rather like some of the DSLs available for Ruby, such as through the Sequel gem, or Arel. Interesting to see how well that can work with a statically-typed language like Java. Maybe you or I should make a generator for ActiveFacts which generates your DSL from CQL queries? Re: jOOQ on The ORM Foundation? Sorry for my late reply. Apparently I had not really understood the ideas behind your foundation when I wrote my original post. I understand now that you are concerned with broader concepts than the "common ORM". I actually came across your group because of your linking to ORM Lite (where ORM does stand for Object/Relational Mapping, correct me if I'm wrong). Yes, I have seen some examples for Ruby's Sequel. I personally find statically-typed languages much better for DSL's as the syntax can be formally defined and checked by a compiler - with the limitations an OO language imposes, of course. So if I understand this correctly now, "Object Role Modeling" and CQL are actually a more general way of expressing what SQL calls DDL.
Since you can already transform CQL into SQL DDL statements (CREATE TABLE...), and jOOQ can reverse-engineer database schemata into jOOQ generated source code, I don't think there would be need for an additional generator. Does CQL also specify means of querying the data described by the Object Role Model? The examples I found here only seem to describe what SQL calls "constraints" (although with a much broader functionality-range than SQL): Re: jOOQ on The ORM Foundation? "common ORM". I actually came across your group because of your linking to ORM Lite (where ORM does stand for Object/Relational Mapping Object Role Modeling was named before Object Relational Mapping, but the latter is now the more common meaning, as you point out. But ORM Lite is actually so-named by Bryan because it is an implementation of Object Role Modeling, not because it is also an O/RM. Bryan was a student of Terry's at Neumont, where he learnt ORM. Regarding DSLs, I think internal DSLs only work well in very simple cases. I prefer external DSLs for anything complex, and that's where CQL came from. Even the extremely flexible syntax of Ruby wasn't up to the task. lukas.eder: I don't think there would be need for an additional generator The problem is that a huge amount of meaning is lost in the mapping to SQL. SQL is practically (though not theoretically) limited to representing physical models. These are almost always very different from the conceptual model, as many relationships have been condensed (absorbed) into attribute/column relationships, so the semantics of the original relationship are lost. In the process, nullable columns are usually introduced, which adds further to the confusion, as such things cannot easily be correctly constrained (uniqueness, etc) in SQL. 
So by reverse engineering from the relational form, you're losing most of the benefit of building a conceptual model from the start. This may be hard to see for someone used to O-O modeling, and who's authored an O/RM tool. The problem is that O-O suffers from many of the same problems of loss of semantics. The apparently clear notion of "attribute" breaks down when you look at it closely. O-O, although ostensibly behaviour-oriented, introduces attributes to store state, and this attribute orientation is the source of the problem in both cases. Fact-oriented modeling does not use attributes. Although it may seem obvious that, for example, my surname is an attribute of myself, if the system being modeled accrues the requirement to model families, suddenly surname becomes an attribute of family, and family becomes my attribute. This kind of instability is responsible for much of the rework that's required in evolving legacy systems, as well as many of the mistakes made when they were first modeled. If you want a further example of this loss of semantics, look at my Insurance example, and ask yourself why the VehicleIncident table has a DrivingBloodTestResult column. In fact, if VehicleIncident wasn't explicitly mapped separately, its fields would be in the Claim table. What's needed is not just yet another O/RM tool (which are tuppence a dozen anyhow - I personally have written three) but a tool which supports database programming using only the conceptual model, never exposing the physical model. Surprisingly, I can't think of a single tool which has done a good job of this, but it's where I'm heading with the ActiveFacts API. It's another O/RM, but using a purely conceptual object model that preserves the domain semantics, not a typical O-O one.
lukas.eder: Does CQL also specify means of querying the data described by the Object Role Model Yes, though the published implementation doesn't quite handle the full query syntax (aggregate functions are still missing), nor does it yet translate them to SQL. Some examples are given towards the end of the video presentation on the CQL Introduction page. Re: jOOQ on The ORM Foundation? Regarding DSLs, I think internal DSLs only work well in very simple cases. I prefer external DSLs for anything complex, and that's where CQL came from. Even the extremely flexible syntax of Ruby wasn't up to the task. Absolutely. The optimal way to implement SQL in Java would be by extending the Java language itself, such that SQL would be compiled natively by the java compiler, similar to Linq2SQL in C#, or PL/SQL in Oracle databases. So for the complexity of CQL, CQL is certainly the right solution. Clifford Heath: The problem is that a huge amount of meaning is lost in the mapping to SQL. SQL is practically (though not theoretically) limited to representing physical models. You are right. I guess though, that in everyday work, this limitation is not really a problem. Personally, I think if your business rules become so complex that you cannot map them to a relational model easily anymore, then maybe your business rules could be simplified before changing/extending technologies. But that depends on the business, of course. I guess with insurance companies' businesses, I'd be pretty lost, personally ;-) In any case, I don't see jOOQ as a means to solve modelling issues, or the O/R impedance mismatch (which is even bigger when it comes to mapping your understanding of ORM, with CQL). jOOQ should simply make using the full power of SQL in Java as simple as possible. In that way, jOOQ is not really an ORM because it does not map from objects to the relational world, or try to solve any other high-level abstraction issues. 
It's really a low-level tool to make a developer's life a lot easier, seeing that unfortunately, JPA CriteriaQuery didn't meet the community's expectations. Clifford Heath: What's needed is not just yet another O/RM tool (which are tuppence a dozen anyhow - I personally have written three) but a tool which supports database programming using only the conceptual model, never exposing the physical model. Surprisingly, I can't think of a single tool which has done a good job of this, but it's where I'm heading with the ActiveFacts API. It's another O/RM, but using a purely conceptual object model that preserves the domain semantics, not a typical O-O one. I think you're on the right track with this. I hope for you, that this will soon show nice results with a practical implementation. I'm curious to see how you'll tackle performance issues, too, with all the abstraction. Among all attempts to overcome the old and proven relational models (XML databases, NoSQL databases), this one seems the most promising and focused to me!
{ "pile_set_name": "pile-cc" }
Kaltbrunn railway station Kaltbrunn railway station is a railway station situated in the municipality of Kaltbrunn in the Swiss canton of St. Gallen. It is located on the Uznach to Wattwil line, close to the western portal of the long Ricken Tunnel. The station is served by hourly St. Gallen S-Bahn service S4, which operates in both directions around a loop via Wattwil, St. Gallen, Sargans, Ziegelbrücke and Uznach. References Category:Railway stations in the canton of St. Gallen Category:Swiss Federal Railways stations
{ "pile_set_name": "wikipedia_en" }
--- abstract: 'Healthcare is one of the largest business segments in the world and is a critical area for future growth. In order to ensure efficient access to medical and patient-related information, hospitals have invested heavily in improving clinical mobile technologies and spread their use among doctors. Notwithstanding the benefits of mobile technologies towards a more efficient and personalized delivery of care procedures, there are also indications that their use may have a negative impact on patient-centeredness and often places many cognitive and physical demands on doctors, making them prone to make medical errors. To tackle this issue, in this paper we present the main outcomes of the project TESTMED, which aimed at realizing a clinical system that provides operational support to doctors using mobile technologies for delivering care to patients, in a bid to minimize medical errors. The system exploits concepts from Business Process Management on how to manage a specific class of care procedures, called clinical guidelines, and how to support their execution and mobile orchestration among doctors. As a viable solution for doctors’ interaction with the system, we investigated the use of vocal and touch interfaces. User evaluation results indicate a good usability of the system.' author: - Andrea Marrella - Massimo Mecella - Mahmoud Sharf - Tiziana Catarci bibliography: - 'biblio.bib' date: 'Received: date / Accepted: date' subtitle: | Process-aware Enactment of Clinical Guidelines\ through Multimodal Interfaces title: The TESTMED Project Experience ---
{ "pile_set_name": "arxiv" }
Jalalabad, Ardabil Jalalabad (, also Romanized as Jalālābād) is a village in Shal Rural District, Shahrud District, Khalkhal County, Ardabil Province, Iran. At the 2006 census, its population was 82, in 14 families. References Category:Towns and villages in Khalkhal County
{ "pile_set_name": "wikipedia_en" }
Roosevelt High School (Roosevelt, New York) Roosevelt High School is a four-year public high school located in Roosevelt as part of the Roosevelt School District, serving students in grades 9 through 12. It is located in the hamlet of Roosevelt in the Town of Hempstead, Nassau County, New York, U.S. After years of failing test scores, Roosevelt High School is the first high school in New York to be taken over by the state. As of the 2014-15 school year, the school had an enrollment of 964 students and 56.6 classroom teachers (on an FTE basis), for a student–teacher ratio of 17.0:1. There were 247 students (25.6% of enrollment) eligible for free lunch and 46 (4.8% of students) eligible for reduced-cost lunch. Academics Roosevelt High School has a grading and promotion policy. In order for a student to be admitted to the ninth grade, a student must pass 3 of the 4 major subject areas each year: English Mathematics Science Social Studies The student can fail no more than the equivalent of 1 credit in minor subjects each year (i.e. Technology 1/2 credit, Home Career 1/2 credit, etc.) To be promoted from grade 9 to 10, a student must earn 4 units of credit. These units must include: 1 in English and 1 in Social Studies. To be promoted from grade 10 to grade 11, a student must have earned 9 units of credit. These units must include: 2 units in English, 2 units in Social Studies, 1 unit in Mathematics, 1 unit in Science. A student must receive a minimum grade of 70 in order to advance. 
Demographics The student body in the 2007-2008 school year consisted of: 2 American Indian or Alaska Native students or 0% of the student body 615 Black or African American students or 77% of the student body 181 Hispanic or Latino students or 23% of the student body 1 Asian or Native Hawaiian/Other Pacific Islander students or 0% of the student body 0 White students or 0% of the student body 0 Multiracial students or 0% of the student body Notable alumni Notable Roosevelt Junior-Senior High School alumni include: Chuck D, political activist and member of the hip hop group Public Enemy. Eddie Murphy, comedian and actor. Julius Erving, otherwise known as "Dr. J", member of the Basketball Hall of Fame who played for the Philadelphia 76ers until his retirement. Howard Stern, radio personality. Gabriel Casseus, Actor (New Jersey Drive, Fallen, Their Eyes Were Watching God), Writer & Producer (Takers). Melvyn M. Sobel, James V. Petrungaro and David D. Weinberg, Members of the Long Island Rock & Roll Band known as The Ravens, popular from 1965-1969. References External links Great Schools Web site information on Roosevelt High School Category:Public high schools in New York (state) Category:Schools in Nassau County, New York Category:Educational institutions established in 1956
{ "pile_set_name": "wikipedia_en" }
// Code generated by go-swagger; DO NOT EDIT.

package models

// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command

import (
	"github.com/go-openapi/errors"
	"github.com/go-openapi/strfmt"
	"github.com/go-openapi/swag"
	"github.com/go-openapi/validate"
)

// RegistrationViaAPIResponse The Response for Registration Flows via API
//
// swagger:model registrationViaApiResponse
type RegistrationViaAPIResponse struct {

	// identity
	// Required: true
	Identity *Identity `json:"identity"`

	// session
	Session *Session `json:"session,omitempty"`

	// The Session Token
	//
	// This field is only set when the session hook is configured as a post-registration hook.
	//
	// A session token is equivalent to a session cookie, but it can be sent in the HTTP Authorization
	// Header:
	//
	//     Authorization: bearer ${session-token}
	//
	// The session token is only issued for API flows, not for Browser flows!
	// Required: true
	SessionToken *string `json:"session_token"`
}

// Validate validates this registration via Api response
func (m *RegistrationViaAPIResponse) Validate(formats strfmt.Registry) error {
	var res []error

	if err := m.validateIdentity(formats); err != nil {
		res = append(res, err)
	}

	if err := m.validateSession(formats); err != nil {
		res = append(res, err)
	}

	if err := m.validateSessionToken(formats); err != nil {
		res = append(res, err)
	}

	if len(res) > 0 {
		return errors.CompositeValidationError(res...)
	}
	return nil
}

func (m *RegistrationViaAPIResponse) validateIdentity(formats strfmt.Registry) error {

	if err := validate.Required("identity", "body", m.Identity); err != nil {
		return err
	}

	if m.Identity != nil {
		if err := m.Identity.Validate(formats); err != nil {
			if ve, ok := err.(*errors.Validation); ok {
				return ve.ValidateName("identity")
			}
			return err
		}
	}

	return nil
}

func (m *RegistrationViaAPIResponse) validateSession(formats strfmt.Registry) error {

	if swag.IsZero(m.Session) { // not required
		return nil
	}

	if m.Session != nil {
		if err := m.Session.Validate(formats); err != nil {
			if ve, ok := err.(*errors.Validation); ok {
				return ve.ValidateName("session")
			}
			return err
		}
	}

	return nil
}

func (m *RegistrationViaAPIResponse) validateSessionToken(formats strfmt.Registry) error {

	if err := validate.Required("session_token", "body", m.SessionToken); err != nil {
		return err
	}

	return nil
}

// MarshalBinary interface implementation
func (m *RegistrationViaAPIResponse) MarshalBinary() ([]byte, error) {
	if m == nil {
		return nil, nil
	}
	return swag.WriteJSON(m)
}

// UnmarshalBinary interface implementation
func (m *RegistrationViaAPIResponse) UnmarshalBinary(b []byte) error {
	var res RegistrationViaAPIResponse
	if err := swag.ReadJSON(b, &res); err != nil {
		return err
	}
	*m = res
	return nil
}
{ "pile_set_name": "github" }
Alexander Bell Donald Alexander Bell Donald (18 August 1842–7 March 1922) was a New Zealand seaman, sailmaker, merchant and ship owner. He was born in Inverkeithing, Fife, Scotland on 18 August 1842. References Category:1842 births Category:1922 deaths Category:Scottish emigrants to New Zealand Category:People from Inverkeithing
{ "pile_set_name": "wikipedia_en" }
Sai Shan Sai Shan () is a hill behind Mayfair Gardens on Tsing Yi Island, Hong Kong. The hill is east of and beneath the northern peak of Tsing Yi Peak. A village, Sai Shan Village is in the valley between Sai Shan and Tsing Yi Peak. A road, Sai Shan Road between Mayfair Gardens and Hong Kong Institute of Vocational Education (Tsing Yi) is named after the hill. Category:Tsing Yi Category:Mountains, peaks and hills of Hong Kong
{ "pile_set_name": "wikipedia_en" }
--- abstract: | We use the method of thermal QCD sum rules to investigate the effects of temperature on the neutron electric dipole moment $d_n$ induced by the vacuum $\bar{\theta}$-angle. Then, we analyze and discuss the thermal behaviour of the ratio $\mid {d_n \over \bar{\theta}}\mid $ in connection with the restoration of the CP-invariance at finite temperature. author: - | M. Chabab$^{1.2}\thanks{e-mail: mchabab@ucam.ac.ma}$, N. El Biaze$^1$ and R. Markazi$^1$\ \ title: | \ \ Note on the Thermal Behavior of the Neutron Electric Dipole Moment from QCD Sum Rules --- Introduction ============ The CP symmetry is, without doubt, one of the fundamental symmetries in nature. Its breaking still carries a cloud of mystery in particle physics and cosmology. Indeed, CP symmetry is intimately related to theories of interactions between elementary particles and represents a cornerstone in constructing grand unified and supersymmetric models. It is also necessary to explain the matter-antimatter asymmetry observed in the universe. The first experimental evidence of CP violation was discovered in the $K-\bar{K}$ mixing and kaon decays [@C]. According to the CPT theorem, CP violation implies T violation. The latter is tested through the measurement of the neutron electric dipole moment (NEDM) $d_n$. The upper experimental limit gives confidence that the NEDM can be another manifestation of CP breaking. To investigate the CP violation phenomenon, many theoretical models were proposed. In the standard model of electroweak interactions, CP violation is parametrized by a single phase in the Cabibbo Kobayashi Maskawa (CKM) quark mixing matrix [@CKM].
Other models exhibiting a CP violation are given by extensions of the standard model; among them, the minimal supersymmetric standard model $MSSM$ includes in general soft complex parameters which provide new additional sources of CP violation [@DMV; @BU]. CP violation can also be investigated in the strong interactions context through the QCD framework. In fact, the QCD effective lagrangian contains an additional CP-odd four dimensional operator embedded in the following topological term: $$L_{\theta}=\theta {\alpha_s\over 8 \pi}G_{\mu\nu}\tilde{G}^{\mu\nu},$$ where $G_{\mu\nu}$ is the gluonic field strength, $\tilde{G}^{\mu\nu}$ is its dual and $\alpha_s$ is the strong coupling constant. The $G_{\mu\nu}\tilde{G}^{\mu\nu}$ quantity is a total derivative; consequently it can contribute to the physical observables only through non perturbative effects. The NEDM is related to the $\bar \theta$-angle by the following relation: $$d_n\sim {e\over M_n}({m_q\over M_n})\bar \theta \sim \left\{ \begin{array}{c} 2.7\times 10^{-16}\overline{\theta }\qquad \cite{Baluni}\\ 5.2\times 10^{-16}\overline{\theta }\qquad \cite{cvvw} \end{array} \right.$$ and consequently, according to the experimental measurements $d_n<1.1\times 10^{-25}ecm$ [@data], the $\bar \theta $ parameter must be less than $2\times10^{-10}$ [@peccei2]. The well known strong CP problem consists in explaining the smallness of $\bar{\theta}$. In this regard, several scenarios were suggested. The best known one was proposed by Peccei and Quinn [@PQ] and consists in implementing an extra $U_A(1)$ symmetry which permits a dynamical suppression of the undesired $\theta $-term. This is possible due to the fact that the axial current $J_5^\mu$ is related to the gluonic field strength through the following relation $\partial_\mu J_5^\mu={\alpha_s\over8 \pi}G_{\mu\nu}\tilde{G}^{\mu\nu}$. The breakdown of the $U_A(1)$ symmetry gives rise to a very light pseudo-Goldstone boson called axion.
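As a simple arithmetic check of the quoted limit: combining the second estimate above, $d_n \simeq 5.2\times 10^{-16}\,\overline{\theta}\ e\,$cm, with the experimental bound $d_n<1.1\times 10^{-25}\,e\,$cm gives $$\bar{\theta} \lesssim {1.1\times 10^{-25}\over 5.2\times 10^{-16}} \simeq 2\times 10^{-10},$$ which is the value cited in the text.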
This particle may well be important to the puzzle of dark matter and might constitute the missing mass of the universe [@LS]. Motivated by: (a) the direct relation between the $\bar \theta$-angle and NEDM $d_n $, as it was first demonstrated in [@cvvw] via the chiral perturbation theory and recently in [@PR; @PR1] within the QCD sum rules formalism; (b) the possibility to restore some broken symmetries by increasing the temperature; we shall use the QCD sum rules at $T\ne 0$ [@BS] to derive the thermal dependence of the ratio $\mid{ d_n\over \bar{\theta}}\mid$. Then we study its thermal behaviour at low temperatures and discuss the consequences of temperature effects on the restoration of the broken CP symmetry. This paper is organized as follows: Section 2 is devoted to the calculations of the NEDM induced by the $\bar {\theta}$ parameter from QCD sum rules. In section 3, we show how one introduces temperature in QCD sum rules calculations. We end this paper with a discussion and qualitative analysis of the thermal effects on the CP symmetry. NEDM from QCD sum rules ======================== Over the last two decades, QCD sum rules à la SVZ [@SVZ] were applied successfully to the investigation of hadronic properties at low energies. In order to derive the NEDM through this approach, many calculations were performed in the literature [@CHM; @KW]. One of them, which turns out to be more practical for our study, has been obtained recently in [@PR; @PR1]. It consists in considering a lagrangian containing the following P and CP violating operators: $$L_{P,CP}=-\theta_q m_* \sum_f \bar{q}_f i\gamma_5 q_f +\theta {\alpha_s\over 8 \pi}G_{\mu\nu}\tilde{G}^{\mu\nu}.$$ $\theta_q$ and $\theta$ are respectively two angles coming from the chiral and the topological terms and $m_*$ is the quark reduced mass given by $m_* = {m_u m_d \over m_u + m_d}$.
The authors of [@PR1] start from the two-point correlation function in a QCD background with a nonvanishing $\theta$ and in the presence of a constant external electromagnetic field $ F^{\mu\nu}$: $$\Pi(q^2) = i \int d^4x e^{iqx}<0|T\{\eta(x)\bar{\eta}(0)\}|0>_{\theta,F} .$$ $\eta(x)$ is the interpolating current which in the case of the neutron reads as [@I]: $$\eta =2\epsilon_{abc}\{(d^T_aC\gamma_5u_b)d_c+\beta(d^T_aCu_b)\gamma_5d_c\},$$ where $\beta$ is a mixing parameter. Using the operator product expansion (OPE), they have first performed the calculation of $\Pi(q^2)$ as a function of matrix elements and Wilson coefficients and then have confronted the QCD expression of $\Pi(q^2)$ with its phenomenological parametrisation. $\Pi(q^2)$ can be expanded in terms of the electromagnetic charge as [@CHM]: $$\Pi(q^2)=\Pi^{(0)}(q^2)+e \Pi^{(1)}(q^2,F^{\mu\nu})+ O(e^2).$$ The first term $\Pi^{(0)}(q^2)$ is the nucleon propagator which includes only the CP-even parameters [@SVZ1; @IS], while the second term $\Pi^{(1)}(q^2,F^{\mu\nu})$ is the polarization tensor which may be expanded through the Wilson OPE as: $\sum C_n<0|\bar{q}\Gamma q|0>_{\theta,F}$, where $\Gamma$ is an arbitrary Lorentz structure and $C_n$ are the Wilson coefficient functions calculable in perturbation theory [@SVZ1]. From this expansion, one keeps only the CP-odd contribution piece. By considering the anomalous axial current, one obtains the following $\theta$ dependence of the $<0|\bar{q}\Gamma q|0>_{\theta}$ matrix elements [@PR]: $$m_q <0|\bar{q}\Gamma q|0>_{\theta}= i m_*\theta <0|\bar{q}\Gamma q|0> ,$$ where $m_q$ and $m_*$ are respectively the quark and reduced masses.
The electromagnetic dependence of these matrix elements can be parametrized through the implementation of the $\kappa$, $\chi $ and $\xi$ susceptibilities defined as [@IS]:\ $$\begin{tabular}{lc} $<0|\bar{q}\sigma^{\mu\nu} q|0>_F= \chi F^{\mu\nu} <0|\bar{q}q|0>$ \\ $g<0|\bar{q}G^{\mu\nu} q|0>_{F}= \kappa F^{\mu\nu} <0|\bar{q}q|0> $ & \\ $2g<0|\bar{q}\tilde{G}^{\mu\nu} q|0>_{F}= \xi F^{\mu\nu} <0|\bar{q}q|0>. $ & \end{tabular}$$ Putting together the above ingredients and after a straightforward calculation [@PR1], the following expression of $\Pi^{(1)}(q^2,F^{\mu\nu})$ for the neutron is derived:\ $$\begin{aligned} \Pi(-q^2)&=&-{\bar{\theta}m_* \over {64\pi^2}}<0|\bar{q}q|0>\{\tilde{F}\sigma,\hat q\}[\chi(\beta+1)^2(4e_d-e_u) \ln({\Lambda^2\over -q^2})\nonumber\\ && -4(\beta-1)^2e_d(1+{1\over4} (2\kappa+\xi))(\ln({-q^2\over \mu_{IR}^2})-1){1\over -q^2}\nonumber\\ &&-{\xi\over 2}((4\beta^2-4\beta+2)e_d+(3\beta^2+2\beta+1)e_u){1\over -q^2}...],\end{aligned}$$ where $\bar{\theta}=\theta+\theta_q$ is the physical phase and $\hat q=q_\mu\gamma^\mu$.\ The QCD expression (2.7) will be confronted with the phenomenological parametrisation $\Pi^{Phen}(-q^2)$ written in terms of the neutron hadronic properties. The latter is given by:\ $$\Pi^{Phen}(-q^2)=\{\tilde{F}\sigma,\hat q\} ({\lambda^2d_nm_n\over(q^2-m_n^2)^2} +{A\over (q^2-m_n^2)}+...),$$ where $m_n$ is the neutron mass, $e_q$ is the quark charge. A and $\lambda^2$, which originate from the phenomenological side of the sum rule, represent respectively a constant of dimension 2 and the neutron coupling constant to the interpolating current $\eta(x)$. This coupling is defined via a spinor $v$ as $<0|\eta(x)|n>=\lambda v$. QCD sum rules at finite temperature ==================================== The introduction of finite temperature effects may provide more precision to the phenomenological values of hadronic observables.
Within the framework of QCD sum rules, the T-evolution of the correlation functions appears as a thermal average of the local operators in the Wilson expansion [@BS; @BC; @M]. Hence, at nonzero temperature and in the approximation of a noninteracting gas of bosons (pions), the vacuum condensates can be written as: $$<O^i>_T=<O^i>+\int{d^3p\over 2\epsilon(2\pi)^3}<\pi(p)|O^i|\pi(p)>n_B({\epsilon\over T})$$ where $\epsilon=\sqrt{p^2+m^2_\pi}$, $n_B={1\over{e^x-1}}$ is the Bose-Einstein distribution and $<O^i>$ is the standard vacuum condensate (i.e. at T=0). In the low temperature region, the effects of heavier resonances $(\Gamma= K, \eta, \ldots)$ can be neglected due to their distribution functions $\sim e^{- m_\Gamma \over T}$ [@K]. To compute the pion matrix elements, we apply the soft pion theorem given by: $$<\pi(p)|O^i|\pi(p)>=-{1\over f^2_\pi}<0|[F^a_5,[F^a_5,O^i]]|0>+ O({m^2_\pi \over \Lambda^2}),$$ where $ \Lambda$ is a hadron scale and $F^a_5$ is the isovector axial charge: $$F^a_5=\int d^3x \bar{q}(x)\gamma_0\gamma_5{\tau^a\over2}q(x).$$ Direct application of the above formula to the quark and gluon condensates shows that [@GL; @K]:\ (i) Only $<\bar{q}q>$ is sensitive to temperature. Its behaviour at finite T is given by: $$<\bar{q}q>_T\simeq (1-{\varphi(T)\over8})<\bar{q}q>,$$ where $\varphi(T)={T^2\over f^2_\pi}B({m_\pi\over T})$ with $B(z)= {6\over\pi^2}\int_z^\infty dy {\sqrt{y^2-z^2}\over{e^y-1}}$ and $f_\pi$ is the pion decay constant ($f_\pi\simeq 93 MeV$). The variation with temperature of the quark condensate $<\bar{q}q>_T$ results in two different asymptotic behaviours, namely:\ $<\bar{q}q>_T\simeq (1-{T^2\over {8f^2_\pi}})<\bar{q}q>$ for ${m_\pi\over T}\ll 1$, and $<\bar{q}q>_T\simeq (1-{T^2\over {8f^2_\pi}}e^{-m_\pi \over T})<\bar{q}q>$ for ${m_\pi\over T}\gg 1$.\ (ii) The gluon condensate is nearly constant at low temperature and a T dependence occurs only at order $T^8$.
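The chiral-limit behaviour quoted above can be verified directly: setting $m_\pi=0$ in $B$ reduces it to a standard Bose integral, $$B(0)={6\over\pi^2}\int_0^\infty {y\,dy\over e^y-1}={6\over\pi^2}\cdot{\pi^2\over 6}=1,$$ so that $\varphi(T)\to {T^2\over f^2_\pi}$ and $<\bar{q}q>_T\simeq (1-{T^2\over {8f^2_\pi}})<\bar{q}q>$, in agreement with the ${m_\pi\over T}\ll 1$ asymptotics.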
As usual, the determination of the ${d_n \over \bar{\theta}}$ sum rule at nonzero temperature is now easily performed in two steps. In the first step, we apply the Borel operator to both expressions of the neutron correlation function shown in Eqs. (2.7) and (2.8), where finite temperature effects were introduced as discussed above. In the second step, by invoking the quark-hadron duality principle, we deduce the following relation for the $\bar { \theta}$ induced NEDM: $${d_n\over \bar{\theta}}(T)=-{M^2m_* \over 16\pi^2}{1\over \lambda_n^2(T)M_n(T)}(1-{\varphi(T)\over 8})<\bar{q}q>[4\chi(4e_u-e_d)-{\xi\over 2M^2}(4e_u+8e_d)]e^{M_n^2 \over M^2},$$ where M represents the Borel parameter. Note that we have neglected the single pole contribution entering via the constant A, as suggested in [@PR].\ The expression (3.5) is derived with $\beta=1$ which is more appropriate for us since it suppresses the infrared divergences. In fact, the Ioffe choice $(\beta=-1)$, which is rather more useful for the CP even case, removes the leading order contribution in the sum rules (2.7). The coupling constant $\lambda_n^2(T)$ and the neutron mass $M_n(T)$ which appear in (3.5) were determined from the thermal QCD sum rules. For the former, we consider the $\hat q $ sum rules in [@K; @J] with $\beta=1$ and then we extract the following explicit expression of $\lambda_n^2(T)$: $$\lambda_n^2(T)=\{{3\over{8{(2\pi)}^4}}M^6+{3\over{16{(2\pi)}^2}}M^2<{\alpha _s\over\pi} G^2>\}\{1-(1+{g^2_{\pi NN}f^2_\pi\over M_n^2}){\varphi(T) \over 16}\}e^{M^2_n(0)\over M^2}$$ Within the pion gas approximation, Eletsky has demonstrated in [@E] that inclusion of the contribution coming from the pion-nucleon scattering in the nucleon sum rules is mandatory.
The latter enters Eq. (3.6) through the coupling constant $g_{\pi NN}$, whose values lie within the range 13.5-14.3 [@PROC].\ Numerical analysis is performed with the following input parameters: the Borel mass has been chosen within the values $M^2=0.55-0.7GeV^2$ which correspond to the optimal range (Borel window) in the $ d_n\over \bar{\theta}$ sum rule at $T=0$ [@PR1]. For the $\chi$ and $\xi$ susceptibilities we take $\chi=-5.7\pm 0.6 GeV^{-2}$ [@BK] and $\xi=-0.74\pm 0.2$ [@KW]. As to the vacuum condensates appearing in (3.5), we fix $<\bar{q}q>$ and $<G^2>$ to their standard values [@SVZ]. Discussion and Conclusion ========================= In the two preceding sections, we have established the relation between the NEDM and the $ \bar{\theta} $ angle at nonzero temperature from QCD sum rules. Since the ratio ${ d_ n \over \bar{\theta}}$ is expressed in terms of the pion parameters $f_\pi$, $m_\pi$ and of $g_{\pi NN}$, we briefly recall the main features of their thermal behaviour. Various studies performed either within the framework of the chiral perturbation theory and/or QCD sum rules at low temperature have shown the following features:\ (i) The existence of a QCD phase transition temperature $T_c$ which signals both QCD deconfinement and chiral symmetry restoration [@BS; @G].\ (ii) $f_\pi$ and $g_{\pi NN}$ have very small variation with temperature up to $T_c$. So, we shall treat them as constants below $T_c$. However, they vanish if the temperature passes through the critical value $T_c$ [@DVL].\ (iii) The thermal mass shift of the neutron and the pion is absent at order $O(T^2)$ [@K; @BC]. $\delta M_n$ shows up only at the next order $T^4$, but its value is negligible [@E].\ By taking into account the above properties, we plot the ratio defined in Eq. (3.5) as a function of T.
From the figure, we learn that the ratio $\mid{ d_ n \over \bar{\theta} }\mid$ survives at finite temperature and decreases smoothly with $T$ (about 16$\%$ variation for temperature values up to 200 MeV). This means that either the NEDM value decreases or $\bar{\theta}$ increases. Consequently, for a fixed value of $\bar{\theta}$ the NEDM decreases, but it does not exhibit any critical behaviour. Furthermore, if we start from a non vanishing $\bar{\theta}$ value at $T=0$, it is not possible to remove it at finite temperature. We also note that $\mid{d_n\over \bar{\theta}}\mid$ grows as $M^2$ or the $\chi$ susceptibility increases. It also grows as the quark condensate rises. However, this ratio is insensitive to both the $\xi$ susceptibility and the coupling constant $g_{\pi NN}$. We notice that for higher temperatures the curve $\mid{d_n\over \bar{\theta}}\mid=f({T\over T_c})$ exhibits an abrupt increase, explained by the fact that for temperatures beyond the critical value $T_c$, at which the chiral symmetry is restored, the constants $f_\pi$ and $g_{\pi NN}$ become zero and consequently, from Eq. (3.5), the ratio ${d_n\over \bar{\theta}}$ behaves as a non vanishing constant. The large difference between the values of the ratio for $T<T_c$ and $T>T_c$ may be a consequence of the fact that we have neglected other contributions to the spectral function, like the scattering process $N+\pi \to \Delta$. These contributions, which are of order $T^4$, are negligible in the low temperature region but become substantial for $T\ge T_c$. Moreover, this difference may also originate from the use of the soft pion approximation, which is valid essentially for low $T$ ($T< T_c$). Therefore it is clear from this qualitative analysis, based on the soft pion approximation, that the temperature does not play a fundamental role in the suppression of the undesired $\theta$-term, and hence the broken CP symmetry is not restored, as expected.
This is not strange: indeed, it was shown that more heat does not automatically imply more symmetry [@MS; @DMS]. Moreover, some exact symmetries can be broken by increasing the temperature [@W; @MS]. The symmetry non restoration phenomenon, which means that a symmetry broken at $T=0$ remains broken even at high temperature, is essential for discrete symmetries, CP symmetry in particular. Indeed, symmetry non restoration allows us to avoid the domain walls inherited from the phase transition [@ZKO] and to explain the baryogenesis phenomenon in cosmology [@S]. Furthermore, it can be very useful for solving the monopole problem in grand unified theories [@DMS]. [**Acknowledgments**]{} We are deeply grateful to T. Lhallabi and E. H. Saidi for their encouragements and stimulating remarks. N. B. would like to thank the Abdus Salam ICTP for hospitality and Prof. Goran Senjanovic for very useful discussions.\ This work is supported by the program PARS-PHYS 27.372/98 CNR and the convention CNPRST/ICCTI 340.00. [99]{} J. H. Christenson, J. W. Cronin, V. L. Fitch and R. Turlay, Phys. Rev. Lett. [**13**]{} (1964) 138. N. Cabibbo, Phys. Rev. Lett. [**10**]{} (1963) 531;\ M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{} (1973) 652. D. A. Demir, A. Masiero and O. Vives, Phys. Rev. [**D61**]{} (2000) 075009. I. Bigi and N. G. Ural'tsev, Sov. Phys. JETP [**73**]{}(2) (1991) 198;\ I. Bigi, Surveys in High Energy Physics [**12**]{} (1998) 269. V. Baluni, Phys. Rev. [**D19**]{} (1979) 2227. R. Crewther, P. di Vecchia, G. Veneziano and E. Witten, Phys. Lett. [**B88**]{} (1979) 123. R. M. Barnett et al., Phys. Rev. [**D54**]{} (1996) 1. R. D. Peccei, hep-ph/9807516. R. Golub and S. K. Lamoreaux, hep-ph/9907282. R. D. Peccei and H. R. Quinn, Phys. Rev. [**D16**]{} (1977) 1791. G. Lazarides and Q. Shafi, "Monopoles, Axions and Intermediate Mass Dark Matter", hep-ph/0006202. M. Pospelov and A. Ritz, Nucl. Phys. [**B558**]{} (1999) 243. M. Pospelov and A. Ritz, Phys. Rev. Lett. [**83**]{} (1999) 2526. M.
A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B147**]{} (1979) 385. V. M. Khatsimovsky, I. B. Khriplovich and A. S. Yelkhovsky, Ann. Phys. [**186**]{} (1988) 1;\ C. T. Chan, E. M. Henley and T. Meissner, "Nucleon Electric Dipole Moments from QCD Sum Rules", hep-ph/9905317. B. L. Ioffe, Nucl. Phys. [**B188**]{} (1981) 317;\ Y. Chung, H. G. Dosch, M. Kremer and D. Schall, Phys. Lett. [**B102**]{} (1981) 175; Nucl. Phys. [**B197**]{} (1982) 55. M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B166**]{} (1980) 493. B. L. Ioffe and A. V. Smilga, Nucl. Phys. [**B232**]{} (1984) 109. M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B166**]{} (1980) 493. A. I. Bochkarev and M. E. Shaposhnikov, Nucl. Phys. [**B268**]{} (1986) 220. R. Barducci, R. Casalbuoni, S. de Curtis, R. Gatto and G. Pettini, Phys. Lett. [**B244**]{} (1990) 311. J. Gasser and H. Leutwyler, Phys. Lett. [**B184**]{} (1987) 83;\ H. Leutwyler, in the Proceedings of QCD 20 Years Later, Aachen, 1992, ed. P. M. Zerwas and H. A. Kastrup (World Scientific, Singapore, 1993). Y. Koike, Phys. Rev. [**D48**]{} (1993) 2313;\ C. Adami and I. Zahed, Phys. Rev. [**D45**]{} (1992) 4312;\ T. Hatsuda, Y. Koike and S. H. Lee, Nucl. Phys. [**B394**]{} (1993) 221. S. Mallik and K. Mukherjee, Phys. Rev. [**D58**]{} (1998) 096011;\ S. Mallik, Phys. Lett. [**B416**]{} (1998). H. G. Dosch, M. Jamin and S. Narison, Phys. Lett. [**B220**]{} (1989) 251;\ M. Jamin, Z. Phys. [**C37**]{} (1988) 625. V. L. Eletsky, Phys. Lett. [**B245**]{} (1990) 229; Phys. Lett. [**B352**]{} (1995) 440. Proceedings of the workshop "A critical issue in the determination of the pion nucleon coupling constant", ed. Jan Blomgren, Phys. Scripta [**T87**]{} (2000) 53. V. M. Belyaev and Y. I. Kogan, Sov. J. Nucl. Phys. [**40**]{} (1984) 659. I. I. Kogan and D. Wyler, Phys. Lett. [**B274**]{} (1992) 100. C. A. Dominguez, C. Van Gend and M. Loewe, Phys. Lett. [**B429**]{} (1998) 64;\ V. L. Eletsky and I. I. Kogan, Phys. Rev. [**D49**]{} (1994) 3083. S.
Gupta, hep-lat/0001011; A. Ali Khan et al., hep-lat/0008011. S. Weinberg, Phys. Rev. [**D9**]{} (1974) 3357. R. N. Mohapatra and G. Senjanovic, Phys. Rev. [**D20**]{} (1979) 3390;\ G. Dvali, A. Melfo and G. Senjanovic, Phys. Rev. [**D54**]{} (1996) 7857 and references therein. Ya. B. Zeldovich, I. Yu. Kobzarev and L. B. Okun, JETP [**40**]{} (1974) 1;\ T. W. Kibble, J. Phys. [**A9**]{} (1976) 1387; Phys. Rep. [**67**]{} (1980) 183. A. Sakharov, JETP Lett. [**5**]{} (1967) 24. G. Dvali, A. Melfo and G. Senjanovic, Phys. Rev. Lett. [**75**]{} (1995) 4559.

**Figure Captions** {#figure-captions .unnumbered}
===================

Figure: Temperature dependence of the ratio $\mid{d_n \over \bar{\theta}}\mid$
--- abstract: 'The aim of this paper is to establish a global asymptotic equivalence between the experiments generated by the discrete (high frequency) or continuous observation of a path of a Lévy process and a Gaussian white noise experiment observed up to a time $T$, with $T$ tending to $\infty$. These approximations are given in the sense of the Le Cam distance, under some smoothness conditions on the unknown Lévy density. All the asymptotic equivalences are established by constructing explicit Markov kernels that can be used to reproduce one experiment from the other.' address: - '*Laboratoire LJK, Université Joseph Fourier UMR 5224 51, Rue des Mathématiques, Saint Martin d'Hères BP 53 38041 Grenoble Cedex 09*' - 'Corresponding Author, Ester.Mariucci@imag.fr' author: - Ester Mariucci bibliography: - 'refs.bib' title: Asymptotic equivalence for pure jump Lévy processes with unknown Lévy density and Gaussian white noise --- Nonparametric experiments, Le Cam distance, asymptotic equivalence, Lévy processes. 62B15, (62G20, 60G51).

Introduction
============

Lévy processes are a fundamental tool in modelling situations, like the dynamics of asset prices and weather measurements, where sudden changes in values may happen. For that reason they are widely employed, among many other fields, in mathematical finance. To name a simple example, the price of a commodity at time $t$ is commonly given as an exponential function of a Lévy process. In general, exponential Lévy models are proposed for their ability to take into account several empirical features observed in the returns of assets, such as heavy tails, high kurtosis and asymmetry (see [@tankov] for an introduction to financial applications). From a mathematical point of view, Lévy processes are a natural extension of the Brownian motion which preserves the tractable statistical properties of its increments, while relaxing the continuity of paths.
The jump dynamics of a Lévy process is dictated by its Lévy density, say $f$. If $f$ is continuous, its value at a point $x_0$ determines how frequently jumps of size close to $x_0$ occur per unit time. Concretely, if $X$ is a pure jump Lévy process with Lévy density $f$, then the function $f$ is such that $$\int_Af(x)dx=\frac{1}{t}{\ensuremath {\mathbb{E}}}\bigg[\sum_{s\leq t}{\ensuremath {\mathbb{I}}}_A(\Delta X_s)\bigg],$$ for any Borel set $A$ and $t>0$. Here, $\Delta X_s\equiv X_s-X_{s^-}$ denotes the magnitude of the jump of $X$ at time $s$ and ${\ensuremath {\mathbb{I}}}_A$ is the indicator function of the set $A$. Thus, the Lévy measure $$\nu(A):=\int_A f(x)dx,$$ is the average number of jumps (per unit time) whose magnitudes fall in the set $A$. Understanding the jump behavior therefore requires estimating the Lévy measure. Several recent works have treated this problem; see e.g. [@bel15] for an overview. When the available data consist of the whole trajectory of the process during a time interval $[0,T]$, the problem of estimating $f$ may be reduced to estimating the intensity function of an inhomogeneous Poisson process (see, e.g., [@fig06; @rey03]). However, a continuous-time sampling is never available in practice and thus the relevant problem is that of estimating $f$ based on discrete sample data $X_{t_0},\dots,X_{t_n}$ during a time interval $[0,T_n]$. In that case, the jumps are latent (unobservable) variables and that clearly adds to the difficulty of the problem. From now on we will place ourselves in a high-frequency setting, that is, we assume that the sampling interval $\Delta_n=t_i-t_{i-1}$ tends to zero as $n$ goes to infinity. Such a high-frequency based statistical approach has played a central role in the recent literature on nonparametric estimation for Lévy processes (see e.g. [@fig09; @comte10; @comte11; @bec12; @duval12]).
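The identity relating $f$ to the expected number of jumps can be illustrated by a short Monte Carlo sketch. The setting below is purely illustrative and not taken from the paper: we take a compound Poisson process with the (assumed) Lévy density $f\equiv 2$ on $[0,1]$, so that $\nu(A)=2\,\mathrm{Leb}(A)$ and, e.g., $\nu([0,1/2])=1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pure-jump Levy process: compound Poisson with Levy density
# f(x) = 2 on [0, 1] (an illustrative assumption), so nu(A) = 2 * Leb(A).
t, n_paths = 1.0, 20000
lam = 2.0                                   # total mass nu([0, 1])

counts = np.zeros(n_paths)
for i in range(n_paths):
    n_jumps = rng.poisson(lam * t)          # number of jumps up to time t
    jumps = rng.uniform(0.0, 1.0, n_jumps)  # jump sizes, density f / lam
    counts[i] = np.sum(jumps <= 0.5)        # jumps falling in A = [0, 1/2]

# (1/t) * E[ sum_{s <= t} 1_A(Delta X_s) ] should approximate nu(A) = 1.
nu_A_estimate = counts.mean() / t
```

With 20000 simulated paths the empirical average is within a few standard errors of $\nu([0,1/2])=1$, matching the displayed identity.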
Moreover, in order to make consistent estimation possible, we will also require the observation time $T_n$ to tend to infinity, allowing the identification of the jump part in the limit. Our aim is to prove that, under suitable hypotheses, estimating the Lévy density $f$ is equivalent to estimating the drift of an adequate Gaussian white noise model. In general, asymptotic equivalence results for statistical experiments provide a deeper understanding of statistical problems and allow one to single out their main features. The idea is to pass via asymptotic equivalence to another experiment which is easier to analyze. By definition, two sequences of experiments ${\ensuremath {\mathscr{P}}}_{1,n}$ and ${\ensuremath {\mathscr{P}}}_{2,n}$, defined on possibly different sample spaces, but with the same parameter set, are asymptotically equivalent if the Le Cam distance $\Delta({\ensuremath {\mathscr{P}}}_{1,n},{\ensuremath {\mathscr{P}}}_{2,n})$ tends to zero. For ${\ensuremath {\mathscr{P}}}_{i}=({\ensuremath {\mathscr{X}}}_i,{\ensuremath {\mathscr{A}}}_i, \big(P_{i,\theta}:\theta\in\Theta)\big)$, $i=1,2$, $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$ is the symmetrization of the deficiency $\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$ where $$\delta({\ensuremath {\mathscr{P}}}_{1},{\ensuremath {\mathscr{P}}}_{2})=\inf_K\sup_{\theta\in\Theta}\big\|KP_{1,\theta}-P_{2,\theta}\big\|_{TV}.$$ Here the infimum is taken over all randomizations from $({\ensuremath {\mathscr{X}}}_1,{\ensuremath {\mathscr{A}}}_1)$ to $({\ensuremath {\mathscr{X}}}_2,{\ensuremath {\mathscr{A}}}_2)$ and $\| \cdot \|_{TV}$ denotes the total variation distance. Roughly speaking, the Le Cam distance quantifies how much one fails to reconstruct (with the help of a randomization) a model from the other one and vice versa.
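On finite sample spaces these notions become completely concrete: a randomization is simply a stochastic matrix. The following toy computation (not from the paper; the two experiments are invented for illustration) upper-bounds the deficiency of a three-point experiment with respect to a binary one by exhibiting a single kernel $K$:

```python
import numpy as np

def tv(p, q):
    # Total variation distance between two discrete probability vectors.
    return 0.5 * np.abs(p - q).sum()

# Experiment P1 on {0, 1, 2}, indexed by a two-point parameter set.
P1 = {"a": np.array([0.7, 0.2, 0.1]), "b": np.array([0.1, 0.2, 0.7])}
# Experiment P2 on {0, 1}: a coarser, binary observation.
P2 = {"a": np.array([0.9, 0.1]), "b": np.array([0.3, 0.7])}

# A Markov kernel K (rows: points of the first sample space, columns:
# points of the second) that merges the outcomes 0 and 1 of P1.
K = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# sup_theta || K P_{1,theta} - P_{2,theta} ||_TV upper-bounds the
# deficiency delta(P1, P2), which is an infimum over all kernels K.
bound = max(tv(P1[th] @ K, P2[th]) for th in ("a", "b"))
```

Here the bound is zero: $\mathscr{P}_2$ is exactly a randomization of $\mathscr{P}_1$, so $\delta(\mathscr{P}_1,\mathscr{P}_2)=0$, while the reverse deficiency need not vanish.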
Therefore, $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=0$ can be interpreted as saying that "the models ${\ensuremath {\mathscr{P}}}_1$ and ${\ensuremath {\mathscr{P}}}_2$ contain the same amount of information about the parameter $\theta$." The general definition of randomization is quite involved but, in the most frequent examples (namely when the sample spaces are Polish and the experiments dominated), it reduces to that of a Markov kernel. One of the most important features of the Le Cam distance is that it can also be interpreted in terms of statistical decision theory (see [@lecam; @LC2000]; a short review is presented in the Appendix). As a consequence, saying that two statistical models are equivalent means that any statistical inference procedure can be transferred from one model to the other in such a way that the asymptotic risk remains the same, at least for bounded loss functions. Also, as soon as two models, ${\ensuremath {\mathscr{P}}}_{1,n}$ and ${\ensuremath {\mathscr{P}}}_{2,n}$, that share the same parameter space $\Theta$ are proved to be asymptotically equivalent, the same result automatically holds for the restrictions of both ${\ensuremath {\mathscr{P}}}_{1,n}$ and ${\ensuremath {\mathscr{P}}}_{2,n}$ to a smaller subclass of $\Theta$. Historically, the first results of asymptotic equivalence in a nonparametric context date from 1996 and are due to [@BL] and [@N96]. The first two authors have shown the asymptotic equivalence of nonparametric regression and a Gaussian white noise model, while the third showed that of density estimation and white noise. Over the years many generalizations of these results have been proposed, such as [@regression02; @GN2002; @ro04; @C2007; @cregression; @R2008; @C2009; @R2013; @schmidt14] for nonparametric regression or [@cmultinomial; @j03; @BC04] for nonparametric density estimation models. Another very active field of study is that of diffusion experiments.
The first result of equivalence between diffusion models and the Euler scheme was established in 1998, see [@NM]. In later papers generalizations of this result have been considered (see [@C14; @esterdiffusion]). Among others we can also cite equivalence results for generalized linear models [@GN], time series [@GN2006; @NM], diffusion models [@D; @CLN; @R2006; @rmultidimensionale], GARCH models [@B], functional linear regression [@M2011], spectral density estimation [@GN2010] and volatility estimation [@R11]. Negative results are somewhat harder to come by; the most notable among them are [@sam96; @B98; @wang02]. There is, however, a lack of equivalence results concerning processes with jumps. A first result in this sense is [@esterESAIM], in which global asymptotic equivalences between the experiments generated by the discrete or continuous observation of a path of a Lévy process and a Gaussian white noise experiment are established. More precisely, in that paper we have shown that estimating the drift function $h$ from a continuously or discretely (high frequency) observed time inhomogeneous jump-diffusion process: $$\label{ch4X} X_t=\int_0^th(s)ds+\int_0^t\sigma(s)dW_s +\sum_{i=1}^{N_t}Y_i,\quad t\in[0,T_n],$$ is asymptotically equivalent to estimating $h$ in the Gaussian model: $$ dy_t=h(t)dt+\sigma(t)dW_t, \quad t\in[0,T_n].$$ Here we try to push the analysis further and we focus on the case in which the considered parameter is the Lévy density and $X=(X_t)$ is a pure jump Lévy process (see [@carr02] for the interest of such a class of processes when modelling asset returns). In more detail, we consider the problem of estimating the Lévy density (with respect to a fixed, possibly infinite, Lévy measure $\nu_0$ concentrated on $I\subseteq {\ensuremath {\mathbb{R}}}$) $f:=\frac{d\nu}{d\nu_0}:I\to {\ensuremath {\mathbb{R}}}$ from a continuously or discretely observed pure jump Lévy process $X$ with possibly infinite Lévy measure.
Here $I\subseteq {\ensuremath {\mathbb{R}}}$ denotes a possibly infinite interval and $\nu_0$ is supposed to be absolutely continuous with respect to Lebesgue with a strictly positive density $g:=\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}$. In the case where $\nu$ is of finite variation one may write: $$\label{eqn:ch4Levy} X_t=\sum_{0<s\leq t}\Delta X_s$$ or, equivalently, $X$ has a characteristic function given by: $${\ensuremath {\mathbb{E}}}\big[e^{iuX_t}\big]=\exp\bigg(-t\bigg(\int_{I}(1-e^{iuy})\nu(dy)\bigg)\bigg).$$ We suppose that the function $f$ belongs to some a priori set ${\ensuremath {\mathscr{F}}}$, nonparametric in general. The discrete observations are of the form $X_{t_i}$, where $t_i=T_n\frac{i}{n}$, $i=0,\dots,n$ with $T_n=n\Delta_n\to \infty$ and $\Delta_n\to 0$ as $n$ goes to infinity. We will denote by ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ the statistical model associated with the continuous observation of a trajectory of $X$ until time $T_n$ (which is supposed to go to infinity as $n$ goes to infinity) and by ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ the one associated with the observation of the discrete data $(X_{t_i})_{i=0}^n$. The aim of this paper is to prove that, under adequate hypotheses on ${\ensuremath {\mathscr{F}}}$ (for example, $f$ must be bounded away from zero and infinity; see Section \[subsec:ch4parameter\] for a complete definition), the models ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ are both asymptotically equivalent to a sequence of Gaussian white noise models of the form: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}}\frac{dW_t}{\sqrt{g(t)}},\quad t\in I.$$ As a corollary, we then get the asymptotic equivalence between ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$. The main results are precisely stated as Theorems \[ch4teo1\] and \[ch4teo2\]. 
A particular case of special interest arises when $X$ is a compound Poisson process, $\nu_0\equiv {\ensuremath{\textnormal{Leb}}}([0,1])$ and ${\ensuremath {\mathscr{F}}}\subseteq {\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ where, for fixed $\gamma\in (0,1]$ and $K,\kappa, M$ strictly positive constants, ${\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ is a class of continuously differentiable functions on $I$ defined as follows: $$\label{ch4:fholder} {\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I=\Big\{f: \kappa\leq f(x)\leq M, \ |f'(x)-f'(y)|\leq K|x-y|^{\gamma},\ \forall x,y\in I\Big\}.$$ In this case, the statistical models ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ are both equivalent to the Gaussian white noise model: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}}dW_t,\quad t\in [0,1].$$ See Example \[ex:ch4CPP\] for more details. By a theorem of Brown and Low in [@BL], we obtain, a posteriori, an asymptotic equivalence with the regression model $$Y_i=\sqrt{f\Big(\frac{i}{T_n}\Big)}+\frac{1}{2\sqrt{T_n}}\xi_i, \quad \xi_i\sim{\ensuremath {\mathscr{Nn}}}(0,1), \quad i=1,\dots, [T_n].$$ Note that a similar form of a Gaussian shift was found to be asymptotically equivalent to a nonparametric density estimation experiment, see [@N96]. Let us mention that we also treat some explicit examples where $\nu_0$ is neither finite nor compactly supported (see Examples \[ch4ex2\] and \[ex3\]). Without entering into any detail, we remark here that the methods are very different from those in [@esterESAIM]. In particular, since $f$ belongs to the discontinuous part of a Lévy process, rather than its continuous part, the Girsanov-type changes of measure are irrelevant here. We thus need new instruments, like the Esscher changes of measure.
Our proof is based on the construction, for any given Lévy measure $\nu$, of two adequate approximations $\hat \nu_m$ and $\bar \nu_m$ of $\nu$: the idea of discretizing the Lévy density already appeared in an earlier work with P. Étoré and S. Louhichi, [@etore13]. The present work is also inspired by the papers [@cmultinomial] (for a multinomial approximation), [@BC04] (for passing from independent Poisson variables to independent normal random variables) and [@esterESAIM] (for a Bernoulli approximation). This method allows us to construct explicit Markov kernels that lead from one model to the other; these may be applied in practice to transfer minimax estimators. The paper is organized as follows: Sections \[subsec:ch4parameter\] and \[subsec:ch4experiments\] are devoted to make the parameter space and the considered statistical experiments precise. The main results are given in Section \[subsec:ch4mainresults\], followed by Section \[sec:ch4experiments\] in which some examples can be found. The proofs are postponed to Section \[sec:ch4proofs\]. The paper includes an Appendix recalling the definition and some useful properties of the Le Cam distance as well as of Lévy processes. Assumptions and main results ============================ The parameter space {#subsec:ch4parameter} ------------------- Consider a (possibly infinite) Lévy measure $\nu_0$ concentrated on a possibly infinite interval $I\subseteq{\ensuremath {\mathbb{R}}}$, admitting a density $g>0$ with respect to Lebesgue. The parameter space of the experiments we are concerned with is a class of functions ${\ensuremath {\mathscr{F}}}={\ensuremath {\mathscr{F}}}^{\nu_0,I}$ defined on $I$ that form a class of Lévy densities with respect to $\nu_0$: For each $f\in{\ensuremath {\mathscr{F}}}$, let $\nu$ (resp. $\hat \nu_m$) be the Lévy measure having $f$ (resp. $\hat f_m$) as a density with respect to $\nu_0$ where, for every $f\in{\ensuremath {\mathscr{F}}}$, $\hat f_m(x)$ is defined as follows. 
Suppose first $x>0$. Given a positive integer depending on $n$, $m=m_n$, let $J_j:=(v_{j-1},v_j]$ where $v_1=\varepsilon_m\geq 0$ and $v_j$ are chosen in such a way that $$\label{eq:ch4Jj} \mu_m:=\nu_0(J_j)=\frac{\nu_0\big((I\setminus[0,\varepsilon_m])\cap {\ensuremath {\mathbb{R}}}_+\big)}{m-1},\quad \forall j=2,\dots,m.$$ In the sequel, for the sake of brevity, we will only write $m$ without making explicit the dependence on $n$. Define $x_j^*:=\frac{\int_{J_j}x\nu_0(dx)}{\mu_m}$ and introduce a sequence of functions $0\leq V_j\leq \frac{1}{\mu_m}$, $j=2,\dots,m$ supported on $[x_{j-1}^*, x_{j+1}^*]$ if $j=3,\dots,m-1$, on $[\varepsilon_m, x_3^*]$ if $j=2$ and on $(I\setminus [0,x_{m-1}^*])\cap {\ensuremath {\mathbb{R}}}_+$ if $j=m$. The $V_j$’s are defined recursively in the following way. - $V_2$ is equal to $\frac{1}{\mu_m}$ on the interval $(\varepsilon_m, x_2^*]$ and on the interval $(x_2^*,x_3^*]$ it is chosen so that it is continuous (in particular, $V_2(x_2^*)=\frac{1}{\mu_m}$), $\int_{x_2^*}^{x_3^*}V_2(y)\nu_0(dy)=\frac{\nu_0((x_2^*, v_2])}{\mu_m}$ and $V_2(x_3^*)=0$. - For $j=3,\dots,m-1$ define $V_j$ as the function $\frac{1}{\mu_m}-V_{j-1}$ on the interval $[x_{j-1}^*,x_j^*]$. On $[x_j^*,x_{j+1}^*]$ choose $V_j$ continuous and such that $\int_{x_j^*}^{x_{j+1}^*}V_j(y)\nu_0(dy)=\frac{\nu_0((x_j^*,v_j])}{\mu_m}$ and $V_j(x_{j+1}^*)=0$. - Finally, let $V_m$ be the function supported on $(I\setminus [0,x_{m-1}^*]) \cap {\ensuremath {\mathbb{R}}}_+$ such that $$\begin{aligned} V_m(x)&=\frac{1}{\mu_m}-V_{m-1}(x), \quad\text{for } x \in [x_{m-1}^*,x_m^*],\\ V_m(x)&=\frac{1}{\mu_m}, \quad\text{for } x \in (I\setminus [0,x_m^*])\cap {\ensuremath {\mathbb{R}}}_+.\end{aligned}$$ (It is immediate to check that such a choice is always possible). 
Observe that, by construction, $$\sum_{j=2}^m V_j(x)\mu_m=1, \quad \forall x\in (I\setminus[0,\varepsilon_m])\cap {\ensuremath {\mathbb{R}}}_+ \quad \textnormal{and} \quad \int_{(I\setminus[0,\varepsilon_m])\cap {\ensuremath {\mathbb{R}}}_+}V_j(y)\nu_0(dy)=1.$$ Analogously, define $\mu_m^-=\frac{\nu_0\big((I\setminus[-\varepsilon_m,0])\cap {\ensuremath {\mathbb{R}}}_-\big)}{m-1}$ and $J_{-m},\dots,J_{-2}$ such that $\nu_0(J_{-j})=\mu_m^-$ for all $j$. Then, for $x<0$, $x_{-j}^*$ is defined as $x_j^*$ by using $J_{-j}$ and $\mu_m^-$ instead of $J_j$ and $\mu_m$ and the $V_{-j}$’s are defined with the same procedure as the $V_j$’s, starting from $V_{-2}$ and proceeding by induction. Define $$\label{eq:ch4hatf} \hat f_m(x)={\ensuremath {\mathbb{I}}}_{[-\varepsilon_m,\varepsilon_m]}(x)+\sum_{j=2}^m \bigg(V_j(x)\int_{J_j} f(y)\nu_0(dy)+V_{-j}(x)\int_{J_{-j}} f(y)\nu_0(dy)\bigg).$$ The definitions of the $V_j$’s above are modeled on the following example: \[ex:Vj\] Let $\nu_0$ be the Lebesgue measure on $[0,1]$ and $\varepsilon_m=0$. Then $v_j=\frac{j-1}{m-1}$ and $x_j^*=\frac{2j-3}{2m-2}$, $j=2,\dots,m$. The standard choice for $V_j$ (based on the construction by [@cmultinomial]) is given by the piecewise linear functions interpolating the values in the points $x_j^*$ specified above: The function $\hat f_m$ has been defined in such a way that the rate of convergence of the $L_2$ norm between the restriction of $f$ and $\hat f_m$ on $I\setminus[-\varepsilon_m,\varepsilon_m]$ is compatible with the rate of convergence of the other quantities appearing in the statements of Theorems \[ch4teo1\] and \[ch4teo2\]. For that reason, as in [@cmultinomial], we have not chosen a piecewise constant approximation of $f$ but an approximation that is, at least in the simplest cases, a piecewise linear approximation of $f$. 
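The construction of the $V_j$'s in the Lebesgue case can be made concrete with a short numeric sketch. It is purely illustrative and assumes the setting of Example \[ex:Vj\]: $\nu_0={\ensuremath{\textnormal{Leb}}}([0,1])$, $\varepsilon_m=0$, $m=6$, and the standard triangular/trapezoidal choice of $V_j$; the check is the partition-of-unity property $\sum_{j=2}^m V_j(x)\mu_m=1$ on $(0,1]$:

```python
import numpy as np

m = 6
mu = 1.0 / (m - 1)                             # mu_m = nu_0(J_j)
xstar = {j: (2 * j - 3) / (2 * m - 2) for j in range(2, m + 1)}

def V(j, x):
    # Standard triangular/trapezoidal V_j with peak height 1/mu at x_j^*.
    if j == 2 and x <= xstar[2]:
        return 1.0 / mu                        # flat part of V_2 on (0, x_2^*]
    if j == m and x > xstar[m]:
        return 1.0 / mu                        # flat part of V_m on (x_m^*, 1]
    lo = xstar.get(j - 1, 0.0)
    hi = xstar.get(j + 1, 1.0)
    if lo < x <= xstar[j]:
        return (x - lo) / (xstar[j] - lo) / mu     # rising edge
    if xstar[j] < x <= hi:
        return (hi - x) / (hi - xstar[j]) / mu     # falling edge
    return 0.0

# Partition of unity: sum_j V_j(x) * mu_m = 1 for every x in (0, 1].
xs = np.linspace(0.001, 1.0, 200)
dev = max(abs(sum(V(j, x) for j in range(2, m + 1)) * mu - 1.0) for x in xs)
```

At every grid point two neighbouring hat functions overlap so that their weighted sum is exactly one, which is the property used when averaging $f$ over the cells $J_j$.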
Such a choice allows us to gain an order of magnitude on the convergence rate of $\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[-\varepsilon_m,\varepsilon_m]}})}$ at least when ${\ensuremath {\mathscr{F}}}$ is a class of sufficiently smooth functions. We now explain the assumptions we will need to make on the parameter $f \in {\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{\nu_0, I}$. The superscripts $\nu_0$ and $I$ will be suppressed whenever this can lead to no confusion. We require that: 1. There exist constants $\kappa, M >0$ such that $\kappa\leq f(y)\leq M$, for all $y\in I$ and $f\in {\ensuremath {\mathscr{F}}}$. For every integer $m=m_n$, we can consider $\widehat{\sqrt{f}}_m$, the approximation of $\sqrt{f}$ constructed as $\hat f_m$ above, i.e. $\widehat{\sqrt{f}}_m(x)=\displaystyle{{\ensuremath {\mathbb{I}}}_{[-\varepsilon_m,\varepsilon_m]}(x)+\sum_{\substack{j=-m,\dots,m\\ j\neq -1,0,1}}V_j(x)\int_{J_j} \sqrt{f(y)}\nu_0(dy)}$, and introduce the quantities: $$\begin{aligned} A_m^2(f)&:= \int_{I\setminus \big[-\varepsilon_m,\varepsilon_m\big]}\Big(\widehat{\sqrt {f}}_m(y)-\sqrt{f(y)}\Big)^2\nu_0(dy),\\ B_m^2(f)&:= \sum_{\substack{j=-m,\dots,m\\ j\neq -1,0,1}}\bigg(\int_{J_j}\frac{\sqrt{f(y)}}{\sqrt{\nu_0(J_j)}}\nu_0(dy)-\sqrt{\nu(J_j)}\bigg)^2,\\ C_m^2(f)&:= \int_{-\varepsilon_m}^{\varepsilon_m}\big(\sqrt{f(t)}-1\big)^2\nu_0(dt). \end{aligned}$$ The conditions defining the parameter space ${\ensuremath {\mathscr{F}}}$ are expressed by asking that the quantities introduced above converge quickly enough to zero. To state the assumptions of Theorem \[ch4teo1\] precisely, we will assume the existence of sequences of discretizations $m = m_n\to\infty$, of positive numbers $\varepsilon_m=\varepsilon_{m_n}\to 0$ and of functions $V_j$, $j = \pm 2, \dots, \pm m$, such that: 1.
\[cond:ch4hellinger\] $\lim\limits_{n \to \infty}n\Delta_n\sup\limits_{f \in{\ensuremath {\mathscr{F}}}}\displaystyle{\int_{I\setminus(-\varepsilon_m,\varepsilon_m)}}\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx) = 0$. 2. \[cond:ch4ABC\] $\lim\limits_{n \to \infty}n\Delta_n\sup\limits_{f \in{\ensuremath {\mathscr{F}}}} \big(A_m^2(f)+B_m^2(f)+C_m^2(f)\big)=0$. Remark in particular that Condition (C\[cond:ch4ABC\]) implies the following: 1. $\displaystyle \sup_{f\in{\ensuremath {\mathscr{F}}}}\int_I (\sqrt{f(y)}-1)^2 \nu_0(dy) \leq L,$ where $L = \sup_{f \in {\ensuremath {\mathscr{F}}}} \int_{-\varepsilon_m}^{\varepsilon_m} (\sqrt{f(x)}-1)^2\nu_0(dx) + (\sqrt{M}+1)^2\nu_0\big(I\setminus (-\varepsilon_m, \varepsilon_m)\big)$, for any choice of $m$ such that the quantity in the limit appearing in Condition (C\[cond:ch4ABC\]) is finite. Theorem \[ch4teo2\] has slightly stronger hypotheses, defining possibly smaller parameter spaces: We will assume the existence of sequences $m_n$, $\varepsilon_m$ and $V_j$, $j = \pm 2, \dots, \pm m$ (possibly different from the ones above) such that Condition (C1) is verified and the following stronger version of Condition (C2) holds: 1. $\lim\limits_{n \to \infty}n\Delta_n\sup\limits_{f \in{\ensuremath {\mathscr{F}}}} \big(A_m^2(f)+B_m^2(f)+nC_m^2(f)\big)=0$. Finally, some of our results have a more explicit statement under the hypothesis of finite variation, which we state as: - $\int_I (|x|\wedge 1)\nu_0(dx)<\infty$. Condition (C1) and those involving the quantities $A_m(f)$ and $B_m(f)$ all concern similar but slightly different approximations of $f$. In concrete examples, they may all be expected to have the same rate of convergence but, to keep the greatest generality, we preferred to state them separately. On the other hand, conditions on the quantity $C_m(f)$ are purely local around zero, requiring the parameters $f$ to converge quickly enough to 1.
\[ex:ch4esempi\] To get a grasp on Conditions (C1), (C2) we analyze here three different examples according to the different behavior of $\nu_0$ near $0\in I$. In all of these cases the parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ will be a subclass of ${\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ defined as in . Recall that the conditions (C1), (C2) and (C2’) depend on the choice of sequences $m_n$, $\varepsilon_m$ and functions $V_j$. For the first two of the three examples, where $I = [0,1]$, we will make the standard choice for $V_j$ of triangular and trapezoidal functions, similarly to those in Example \[ex:Vj\]. Namely, for $j = 3, \dots, m-1$ we have $$\label{eq:ch4vj} V_j(x) = {\ensuremath {\mathbb{I}}}_{(x_{j-1}^*, x_j^*]}(x) \frac{x-x_{j-1}^*}{x_j^*-x_{j-1}^*} \frac{1}{\mu_m} + {\ensuremath {\mathbb{I}}}_{(x_{j}^*, x_{j+1}^*]}(x) \frac{x_{j+1}^*-x}{x_{j+1}^*-x_{j}^*} \frac{1}{\mu_m};$$ the two extremal functions $V_2$ and $V_m$ are chosen so that $V_2 \equiv \frac{1}{\mu_m}$ on $(\varepsilon_m, x_2^*]$ and $V_m \equiv \frac{1}{\mu_m}$ on $(x_m^*, 1]$. In the second example, where $\nu_0$ is infinite, one is forced to take $\varepsilon_m > 0$ and to keep in mind that the $x_j^*$ are not uniformly distributed on $[\varepsilon_m,1]$. Proofs of all the statements here can be found in Section \[subsec:esempi\]. **1. The finite case:** $\nu_0\equiv {\ensuremath{\textnormal{Leb}}}([0,1])$. In this case we are free to choose ${\ensuremath {\mathscr{F}}}^{{\ensuremath{\textnormal{Leb}}}, [0,1]} = {\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,1]}$. Indeed, as $\nu_0$ is finite, there is no need to single out the first interval $J_1=[0,\varepsilon_m]$, so that $C_m(f)$ does not enter in the proofs and the definitions of $A_m(f)$ and $B_m(f)$ involve integrals on the whole of $[0,1]$. Also, the choice of the $V_j$’s as in guarantees that $\int_0^1 V_j(x) dx = 1$. 
Then, the quantities $\|f-\hat f_m\|_{L_2([0,1])}$, $A_m(f)$ and $B_m(f)$ all have the same rate of convergence, which is given by: $$\sqrt{\int_0^1\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx)}+A_m(f)+B_m(f)=O\Big(m^{-\gamma-1}+m^{-\frac{3}{2}}\Big),$$ uniformly on $f$. See Section \[subsec:esempi\] for a proof. **2. The finite variation case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-1}{\ensuremath {\mathbb{I}}}_{[0,1]}(x)$. In this case, the parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, [0,1]}$ is a proper subset of ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,1]}$. Indeed, as we are obliged to choose $\varepsilon_m > 0$, we also need to impose that $C_m(f) = o\big(\frac{1}{n\sqrt{\Delta_n}}\big)$, with uniform constants with respect to $f$, that is, that all $f \in {\ensuremath {\mathscr{F}}}$ converge to 1 quickly enough as $x \to 0$. Choosing $\varepsilon_m = m^{-1-\alpha}$, $\alpha> 0$ we have that $\mu_m=\frac{\ln (\varepsilon_m^{-1})}{m-1}$, $v_j =\varepsilon_m^{\frac{m-j}{m-1}}$ and $x_j^* =\frac{(v_{j}-v_{j-1})}{\mu_m}$. In particular, $\max_j|v_{j-1}-v_j|=|v_m-v_{m-1}|=O\Big(\frac{\ln m}{m}\Big)$. Also in this case one can prove that the standard choice of $V_j$ described above leads to $\int_{\varepsilon_m}^1 V_j(x) \frac{dx}{x} = 1$. Again, the quantities $\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[0,\varepsilon_m]}})}$, $A_m(f)$ and $B_m(f)$ have the same rate of convergence given by: $$\label{eq:ch4ex2} \sqrt{\int_{\varepsilon_m}^1\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx)} +A_m(f)+B_m(f)=O\bigg(\bigg(\frac{\ln m}{m}\bigg)^{\gamma+1} \sqrt{\ln (\varepsilon_m^{-1})}\bigg),$$ uniformly on $f$. The condition on $C_m(f)$ depends on the behavior of $f$ near $0$. For example, it is ensured if one considers a parametric family of the form $f(x)=e^{-\lambda x}$ with a bounded $\lambda > 0$. See Section \[subsec:esempi\] for a proof. **3. 
The infinite variation, non-compactly supported case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-2}{\ensuremath {\mathbb{I}}}_{{\ensuremath {\mathbb{R}}}_+}(x)$. This example involves significantly more computations than the preceding ones, since the classical triangular choice for the functions $V_j$ would not have integral equal to 1 (with respect to $\nu_0$), and the support is not compact. The parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, [0, \infty)}$ can still be chosen as a proper subclass of ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,\infty)}$, again by imposing that $C_m(f)$ converges to zero quickly enough (more details about this condition are discussed in Example \[ex3\]). We divide the interval $[0, \infty)$ in $m$ intervals $J_j = [v_{j-1}, v_j)$ with: $$v_0 = 0; \quad v_1 = \varepsilon_m; \quad v_j = \frac{\varepsilon_m(m-1)}{m-j};\quad v_m = \infty; \quad \mu_m = \frac{1}{\varepsilon_m(m-1)}.$$ To deal with the non-compactness problem, we choose some “horizon” $H(m)$ that goes to infinity slowly enough as $m$ goes to infinity and we bound the $L_2$ distance between $f$ and $\hat f_m$ for $x > H(m)$ by $2\sup\limits_{x\geq H(m)}\frac{f(x)^2}{H(m)}$. We have: $$\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[0,\varepsilon_m]}})}^2+A_m^2(f)+B_m^2(f)=O\bigg(\frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2+2\gamma}}+\sup_{x\geq H(m)}\frac{f(x)^2}{H(m)}\bigg).$$ In the general case where the best estimate for $\displaystyle{\sup_{x\geq H(m)}f(x)^2}$ is simply given by $M^2$, an optimal choice for $H(m)$ is $\sqrt{\varepsilon_m m}$, that gives a rate of convergence: $$\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[0,\varepsilon_m]}})}^2+A_m^2(f)+B_m^2(f) =O\bigg( \frac{1}{\sqrt{\varepsilon_m m}}\bigg),$$ independently of $\gamma$. See Section \[subsec:esempi\] for a proof. 
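The grid just introduced has a simple structural property that explains the choice of $\mu_m$: since $\nu_0((a,b]) = \int_a^b x^{-2}dx = \frac{1}{a}-\frac{1}{b}$, the intervals $J_j = [v_{j-1}, v_j)$, $j \geq 2$, all carry the same $\nu_0$-mass $\mu_m$. A minimal check (the values of $m$ and $\varepsilon_m$ are illustrative):

```python
import math

# The grid v_j = eps_m (m-1)/(m-j) of Example 3 splits (eps_m, infinity)
# into intervals of equal nu_0-mass mu_m, because
# nu_0((a,b]) = int_a^b x^{-2} dx = 1/a - 1/b.
m, eps_m = 50, 1e-2
mu_m = 1.0 / (eps_m * (m - 1))
v = [eps_m * (m - 1) / (m - j) for j in range(1, m)] + [math.inf]  # v_1, ..., v_m
masses = [1.0 / v[i] - 1.0 / v[i + 1] for i in range(len(v) - 1)]  # nu_0(J_j), j = 2, ..., m
```

Indeed, $\frac{1}{v_{j-1}} - \frac{1}{v_j} = \frac{(m-j+1)-(m-j)}{\varepsilon_m(m-1)} = \mu_m$ for every $j$, including $j=m$ where $1/v_m = 0$.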
Definition of the experiments {#subsec:ch4experiments} ----------------------------- Let $(x_t)_{t\geq 0}$ be the canonical process on the Skorokhod space $(D,{\ensuremath {\mathscr{D}}})$ and denote by $P^{(b,0,\nu)}$ the law induced on $(D,{\ensuremath {\mathscr{D}}})$ by a Lévy process with characteristic triplet $(b,0,\nu)$. We will write $P_t^{(b,0,\nu)}$ for the restriction of $P^{(b,0,\nu)}$ to the $\sigma$-algebra ${\ensuremath {\mathscr{D}}}_t$ generated by $\{x_s:0\leq s\leq t\}$ (see \[sec:ch4levy\] for the precise definitions). Let $Q_t^{(b,0,\nu)}$ be the marginal law at time $t$ of a Lévy process with characteristic triplet ${(b,0,\nu)}$. In the case where $\int_{|y|\leq 1}|y|\nu(dy)<\infty$ we introduce the notation $\gamma^{\nu}:=\int_{|y|\leq 1}y\nu(dy)$; then, Condition (H2) guarantees the finiteness of $\gamma^{\nu-\nu_0}$ (see Remark 33.3 in [@sato] for more details). Recall that we introduced the discretization $t_i=T_n\frac{i}{n}$ of $[0,T_n]$ and denote by $\textbf Q_n^{(\gamma^{\nu-\nu_0},0,\nu)}$ the laws of the $n+1$ marginals of $(x_t)_{t\geq 0}$ at times $t_i$, $i=0,\dots,n$. 
We will consider the following statistical models, depending on a fixed, possibly infinite, Lévy measure $\nu_0$ concentrated on $I$ (clearly, the models with the subscript $FV$ are meaningful only under the assumption (FV)): $$\begin{aligned} {\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0}&=\bigg(D,{\ensuremath {\mathscr{D}}}_{T_n},\Big\{P_{T_n}^{(\gamma^{\nu},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg),\\ {\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0}&=\bigg({\ensuremath {\mathbb{R}}}^{n+1},{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}}^{n+1}),\Big\{ \textbf Q_{n}^{(\gamma^{\nu},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg),\\ {\ensuremath {\mathscr{P}}}_{n}^{\nu_0}&=\bigg(D,{\ensuremath {\mathscr{D}}}_{T_n},\Big\{P_{T_n}^{(\gamma^{\nu-\nu_0},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg),\\ {\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}&=\bigg({\ensuremath {\mathbb{R}}}^{n+1},{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}}^{n+1}),\Big\{\textbf Q_{n}^{(\gamma^{\nu-\nu_0},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg). \end{aligned}$$ Finally, let us introduce the Gaussian white noise model that will appear in the statement of our main results. For that, let us denote by $(C(I),{\ensuremath {\mathscr{C}}})$ the space of continuous mappings from $I$ into ${\ensuremath {\mathbb{R}}}$ endowed with its standard filtration, by $g$ the density of $\nu_0$ with respect to the Lebesgue measure. We will require $g>0$ and let $\mathbb W_n^f$ be the law induced on $(C(I),{\ensuremath {\mathscr{C}}})$ by the stochastic process satisfying: $$\begin{aligned} \label{eqn:ch4Wf} dy_t=\sqrt{f(t)}dt+\frac{dW_t}{2\sqrt{T_n}\sqrt{g(t)}}, \quad t\in I,\end{aligned}$$ where $(W_t)_{t\in{\ensuremath {\mathbb{R}}}}$ denotes a Brownian motion on ${\ensuremath {\mathbb{R}}}$ with $W_0=0$. 
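The stochastic differential equation above is straightforward to discretize with an Euler scheme. The sketch below uses purely illustrative choices ($I=[0,1]$, $f\equiv g\equiv 1$, $T_n = 10^6$); it only illustrates that the drift $\int_0^t\sqrt{f(s)}ds$ dominates the $O(T_n^{-1/2})$ noise for large $T_n$.

```python
import numpy as np

# Euler discretisation of dy_t = sqrt(f(t)) dt + dW_t / (2 sqrt(T_n) sqrt(g(t))).
# Illustrative choice: I = [0,1], f = g = 1.
rng = np.random.default_rng(0)
T_n, N = 1e6, 10_000
dt = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)
f = lambda s: 1.0          # hypothetical parameter, f = dnu/dnu_0
g = lambda s: 1.0          # density of nu_0 w.r.t. Lebesgue
dy = np.sqrt(f(t[:-1])) * dt \
   + rng.normal(0.0, np.sqrt(dt), N) / (2.0 * np.sqrt(T_n) * np.sqrt(g(t[:-1])))
y = np.concatenate([[0.0], np.cumsum(dy)])
print(y[-1])  # close to int_0^1 sqrt(f(s)) ds = 1, the noise having std 1/(2 sqrt(T_n))
```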
Then we set: $${\ensuremath {\mathscr{W}}}_n^{\nu_0}=\Big(C(I),{\ensuremath {\mathscr{C}}},\{\mathbb W_n^{f}:f\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\}\Big).$$ Observe that when $\nu_0$ is a finite Lévy measure, then ${\ensuremath {\mathscr{W}}}_n^{\nu_0}$ is equivalent to the statistical model associated with the continuous observation of a process $(\tilde y_t)_{t\in I}$ defined by: $$\begin{aligned} d\tilde y_t=\sqrt{f(t)g(t)}dt+\frac{d W_t}{2\sqrt{T_n}}, \quad t\in I.\end{aligned}$$ Main results {#subsec:ch4mainresults} ------------ Using the notation introduced in Section \[subsec:ch4parameter\], we now state our main results. For brevity of notation, we will denote by $H(f,\hat f_m)$ (resp. $L_2(f,\hat f_m)$) the Hellinger distance (resp. the $L_2$ distance) between the Lévy measures $\nu$ and $\hat\nu_m$ restricted to $I\setminus{[-\varepsilon_m,\varepsilon_m]}$, i.e.: $$\begin{aligned} H^2(f,\hat f_m)&:=\int_{I\setminus{[-\varepsilon_m,\varepsilon_m]}}\Big(\sqrt{f(x)}-\sqrt{\hat f_m(x)}\Big)^2 \nu_0(dx),\\ L_2(f,\hat f_m)^2&:=\int_{I\setminus{[-\varepsilon_m,\varepsilon_m]}}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy).\end{aligned}$$ Observe that Condition (H1) implies (see Lemma \[lemma:ch4hellinger\]) $$\frac{1}{4M}L_2(f,\hat f_m)^2\leq H^2(f,\hat f_m)\leq \frac{1}{4\kappa}L_2(f,\hat f_m)^2.$$ \[ch4teo1\] Let $\nu_0$ be a known Lévy measure concentrated on a (possibly infinite) interval $I\subseteq {\ensuremath {\mathbb{R}}}$ and having strictly positive density with respect to the Lebesgue measure. Let us choose a parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ such that there exist a sequence $m = m_n$ of integers, functions $V_j$, $j = \pm 2, \dots, \pm m$ and a sequence $\varepsilon_m \to 0$ as $m \to \infty$ such that Conditions [(H1), (C1), (C2)]{.nodecor} are satisfied for ${\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{\nu_0, I}$. 
Then, for $n$ big enough we have: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0}) &= O\bigg(\sqrt{n\Delta_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\Big(A_m(f)+B_m(f)+C_m(f)\Big)\bigg) \nonumber \\ & +O\bigg(\sqrt{n\Delta_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}L_2(f, \hat f_m)+\sqrt{\frac{m}{n\Delta_n}\Big(\frac{1}{\mu_m}+\frac{1}{\mu_m^-}\Big)}\bigg). \label{eq:teo1}\end{aligned}$$ \[ch4teo2\] Let $\nu_0$ be a known Lévy measure concentrated on a (possibly infinite) interval $I\subseteq {\ensuremath {\mathbb{R}}}$ and having strictly positive density with respect to the Lebesgue measure. Let us choose a parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ such that there exist a sequence $m = m_n$ of integers, functions $V_j$, $j = \pm 2, \dots, \pm m$ and a sequence $\varepsilon_m \to 0$ as $m \to \infty$ such that Conditions [(H1), (C1), (C2’)]{.nodecor} are satisfied for ${\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{\nu_0, I}$. Then, for $n$ big enough we have: $$\begin{aligned} \Delta({\ensuremath {\mathscr{Q}}}_n^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0})& = O\bigg( \nu_0\Big(I\setminus[-\varepsilon_m,\varepsilon_m]\Big)\sqrt{n\Delta_n^2}+\frac{m\ln m}{\sqrt{n}}+\sqrt{n\sqrt{\Delta_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}C_m(f)}\bigg) \nonumber \\ &+O\bigg(\sqrt{n\Delta_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}\Big(A_m(f)+B_m(f)+H(f,\hat f_m)\Big)\bigg).\label{eq:teo2}\end{aligned}$$ \[cor:ch4generale\] Let $\nu_0$ be as above and let us choose a parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ so that there exist sequences $m_n'$, $\varepsilon_m'$, $V_j'$ and $m_n''$, $\varepsilon_m''$, $V_j''$ such that: - Conditions (H1), (C1) and (C2) hold for $m_n'$, $\varepsilon_m'$, $V_j'$, and $\frac{m'}{n\Delta_n}\Big(\frac{1}{\mu_{m'}}+\frac{1}{\mu_{m'}^-}\Big)$ tends to zero. 
- Conditions (H1), (C1) and (C2’) hold for $m_n''$, $\varepsilon_m''$, $V_j''$, and $\nu_0\Big(I\setminus[-\varepsilon_{m''},\varepsilon_{m''}]\Big)\sqrt{n\Delta_n^2}+\frac{m''\ln m''}{\sqrt{n}}$ tends to zero. Then the statistical models ${\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}$ are asymptotically equivalent: $$\lim_{n\to\infty}\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\nu_0})=0.$$ If, in addition, the Lévy measures have finite variation, i.e. if we assume (FV), then the same results hold replacing ${\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}$ by ${\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0}$, respectively (see Lemma \[ch4LC\]). Examples {#sec:ch4experiments} ======== We will now analyze three different examples, underlining the different behaviors of the Lévy measure $\nu_0$ (respectively, finite, infinite with finite variation and infinite with infinite variation). The three chosen Lévy measures are ${\ensuremath {\mathbb{I}}}_{[0,1]}(x) dx$, ${\ensuremath {\mathbb{I}}}_{[0,1]}(x) \frac{dx}{x}$ and ${\ensuremath {\mathbb{I}}}_{{\ensuremath {\mathbb{R}}}_+}(x)\frac{dx}{x^2}$. In all three cases we assume the parameter $f$ to be uniformly bounded and with uniformly $\gamma$-Hölder derivatives: we will describe adequate subclasses ${\ensuremath {\mathscr{F}}}^{\nu_0, I} \subseteq {\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^I$ defined as in . It seems very likely that the results highlighted in these examples hold true for more general Lévy measures; however, we limit ourselves to these examples in order to be able to explicitly compute the quantities involved ($v_j$, $x_j^*$, etc.) and hence estimate the distance between $f$ and $\hat f_m$ as in Examples \[ex:ch4esempi\].
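Since all parameter classes considered here impose $\kappa \leq f \leq M$, the Hellinger and $L_2$ distances recalled in Section \[subsec:ch4mainresults\] are equivalent up to constants, $\frac{1}{4M}L_2^2 \leq H^2 \leq \frac{1}{4\kappa}L_2^2$. A quick numerical illustration (the values of $\kappa$, $M$ and the two densities are illustrative, not taken from the examples):

```python
import numpy as np

# Numerical check of (1/(4M)) L2^2 <= H^2 <= (1/(4 kappa)) L2^2 for two
# toy densities with values in [kappa, M], nu_0 = Leb([0,1]).
kappa, M = 0.5, 3.0
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]
f = kappa + (M - kappa) * x**2                        # hypothetical density f
f_hat = kappa + (M - kappa) * np.abs(np.sin(3 * x))   # hypothetical approximation
L2_sq = float(np.sum((f - f_hat) ** 2) * dx)                      # squared L2 distance
H_sq = float(np.sum((np.sqrt(f) - np.sqrt(f_hat)) ** 2) * dx)     # squared Hellinger distance
print(L2_sq / (4 * M) <= H_sq <= L2_sq / (4 * kappa))  # True
```

The inequality holds pointwise since $(\sqrt f + \sqrt{\hat f})^2 \in [4\kappa, 4M]$, and hence after integration.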
In the first of the three examples, where $\nu_0$ is the Lebesgue measure on $I=[0,1]$, we are considering the statistical models associated with the discrete and continuous observation of a compound Poisson process with Lévy density $f$. Observe that ${\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}}$ reduces to the statistical model associated with the continuous observation of a trajectory from: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}}dW_t,\quad t\in [0,1].$$ In this case we have: \[ex:ch4CPP\](Finite Lévy measure). Let $\nu_0$ be the Lebesgue measure on $I=[0,1]$ and let ${\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{{\ensuremath{\textnormal{Leb}}}, [0,1]}$ be any subclass of ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,1]}$ for some strictly positive constants $K$, $\kappa$, $M$ and $\gamma\in(0,1]$. Then: $$\lim_{n\to\infty}\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=0 \ \textnormal{ and } \ \lim_{n\to\infty}\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=0.$$ More precisely, $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=\begin{cases}O\Big((n\Delta_n)^{-\frac{\gamma}{4+2\gamma}}\Big)\quad \textnormal{if } \ \gamma\in\big(0,\frac{1}{2}\big],\\ O\Big((n \Delta_n)^{-\frac{1}{10}}\Big)\quad \textnormal{if } \ \gamma\in\big(\frac{1}{2},1\big]. 
\end{cases}$$ In the case where $\Delta_n = n^{-\beta}$, $\frac{1}{2} < \beta < 1$, an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})$ is $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=\begin{cases} O\Big(n^{-\frac{\gamma+\beta}{4+2\gamma}}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big(0,\frac{1}{2}\big) \text{ and }\frac{2+2\gamma}{3+2\gamma} \leq \beta < 1,\\ O\Big(n^{\frac{1}{2}-\beta}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big(0,\frac{1}{2}\big) \text{ and } \frac{1}{2} < \beta < \frac{2+2\gamma}{3+2\gamma},\\ O\Big(n^{-\frac{2\beta+1}{10}}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big[\frac{1}{2},1\big] \text{ and } \frac{3}{4} \leq \beta < 1,\\ O\Big(n^{\frac{1}{2}-\beta}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big[\frac{1}{2},1\big] \text{ and } \frac{1}{2} < \beta < \frac{3}{4}. \end{cases}$$ See Section \[subsec:ch4ex1\] for a proof. \[ch4ex2\](Infinite Lévy measure with finite variation). Let $X$ be a truncated Gamma process with (infinite) Lévy measure of the form: $$\nu(A)=\int_A \frac{e^{-\lambda x}}{x}dx,\quad A\in{\ensuremath {\mathscr{B}}}([0,1]).$$ Here ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ is a 1-dimensional parametric family in $\lambda$, assuming that there exists a known constant $\lambda_0$ such that $0<\lambda\leq \lambda_0<\infty$, $f(t) = e^{-\lambda t}$ and $d\nu_0(x)=\frac{1}{x}dx$. In particular, the $f$ are Lipschitz, i.e. ${\ensuremath {\mathscr{F}}}^{\nu_0, [0,1]} \subset {\ensuremath {\mathscr{F}}}_{(\gamma = 1, K, \kappa, M)}^{[0,1]}$. 
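The two defining features of this truncated Gamma example, infinite mass but finite variation, can be checked numerically; the value of $\lambda$ below is an illustrative choice, and the crude Riemann sums are only meant as a sanity check.

```python
import numpy as np

# Truncated Gamma example: nu(dx) = e^{-lam x}/x dx on (0,1].
# The measure has infinite mass (nu([eps,1]) grows like ln(1/eps))
# but finite variation: int_0^1 x nu(dx) = (1 - e^{-lam})/lam < infinity.
lam = 1.0  # hypothetical value of the parameter, 0 < lam <= lam_0

def nu_mass(eps, n=100_000):
    # integrate e^{-lam x}/x over [eps, 1] via the substitution x = e^u
    u = np.linspace(np.log(eps), 0.0, n)
    return float(np.sum(np.exp(-lam * np.exp(u))) * (u[1] - u[0]))

def variation(n=100_000):
    x = np.linspace(0.0, 1.0, n)
    return float(np.sum(np.exp(-lam * x)) * (x[1] - x[0]))

masses = [nu_mass(eps) for eps in (1e-2, 1e-4, 1e-6)]
print(variation(), masses)  # the variation stays near 1 - e^{-1}; the masses keep growing
```

Between $\varepsilon = 10^{-4}$ and $\varepsilon = 10^{-6}$ the mass increases by roughly $\ln 100 \approx 4.6$, the logarithmic divergence announced above.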
The discrete or continuous observations (up to time $T_n$) of $X$ are asymptotically equivalent to ${\ensuremath {\mathscr{W}}}_n^{\nu_0}$, the statistical model associated with the observation of a trajectory of the process $(y_t)$: $$dy_t=\sqrt{f(t)}dt+\frac{\sqrt tdW_t}{2\sqrt{T_n}},\quad t\in[0,1].$$ More precisely, in the case where $\Delta_n = n^{-\beta}$, $\frac{1}{2} < \beta < 1$, an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0})$ is $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2}-\beta} \ln n\big) & \text{if } \frac{1}{2} < \beta \leq \frac{9}{10}\\ O\big(n^{-\frac{1+2\beta}{7}} \ln n\big) & \text{if } \frac{9}{10} < \beta < 1. \end{cases}$$ Concerning the continuous setting we have: $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\Big(n^{\frac{\beta-1}{6}} \big(\ln n\big)^{\frac{5}{2}}\Big) = O\Big(T_n^{-\frac{1}{6}} \big(\ln T_n\big)^\frac{5}{2}\Big).$$ See Section \[subsec:ch4ex2\] for a proof. \[ex3\](Infinite Lévy measure, infinite variation). Let $X$ be a pure jump Lévy process with infinite Lévy measure of the form: $$\nu(A)=\int_A \frac{2-e^{-\lambda x^3}}{x^2}dx,\quad A\in{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}}^+).$$ Again, we are considering a parametric family in $\lambda > 0$, assuming that the parameter stays bounded by a known constant $\lambda_0$. Here, $f(t) =2- e^{-\lambda t^3}$, hence $1\leq f(t)\leq 2$, for all $t\geq 0$, and $f$ is Lipschitz, i.e. ${\ensuremath {\mathscr{F}}}^{\nu_0, {\ensuremath {\mathbb{R}}}_+} \subset {\ensuremath {\mathscr{F}}}_{(\gamma = 1, K, \kappa, M)}^{{\ensuremath {\mathbb{R}}}_+}$.
The discrete or continuous observations (up to time $T_n$) of $X$ are asymptotically equivalent to the statistical model associated with the observation of a trajectory of the process $(y_t)$: $$dy_t=\sqrt{f(t)}dt+\frac{tdW_t}{2\sqrt{T_n}},\quad t\geq 0.$$ More precisely, in the case where $\Delta_n = n^{-\beta}$, $0 < \beta < 1$, an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0})$ is $$\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2} - \frac{2}{3}\beta}\big)& \text{if } \frac{3}{4} < \beta < \frac{12}{13}\\ O\big(n^{-\frac{1}{6}+\frac{\beta}{18}} (\ln n)^{\frac{7}{6}}\big) &\text{if } \frac{12}{13}\leq \beta<1. \end{cases}$$ In the continuous setting, we have $$\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\big(n^{\frac{3\beta-3}{34}}(\ln n)^{\frac{7}{6}}\big) = O\big(T_n^{-\frac{3}{34}} (\ln T_n)^{\frac{7}{6}}\big).$$ See Section \[subsec:ch4ex3\] for a proof. Proofs of the main results {#sec:ch4proofs} ========================== In order to simplify notations, the proofs will be presented in the case $I\subseteq {\ensuremath {\mathbb{R}}}^+$. Nevertheless, this allows us to present all the main difficulties, since they can only appear near 0. To prove Theorems \[ch4teo1\] and \[ch4teo2\] we need to introduce several intermediate statistical models. In that regard, let us denote by $Q_j^f$ the law of a Poisson random variable with mean $T_n\nu(J_j)$ (see for the definition of $J_{j}$). 
We will denote by $\mathscr{L}_m$ the statistical model associated with the family of probabilities $\big\{\bigotimes_{j=2}^m Q_j^f:f\in{\ensuremath {\mathscr{F}}}\big\}$: $$\label{eq:ch4l} \mathscr{L}_m=\bigg(\bar{{\ensuremath {\mathbb{N}}}}^{m-1},\mathcal P(\bar{{\ensuremath {\mathbb{N}}}}^{m-1}), \bigg\{\bigotimes_{j=2}^m Q_j^f:f\in{\ensuremath {\mathscr{F}}}\bigg\}\bigg).$$ By $N_{j}^f$ we mean the law of a Gaussian random variable ${\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)$ and by $\mathscr{N}_m$ the statistical model associated with the family of probabilities $\big\{\bigotimes_{j=2}^m N_j^f:f\in{\ensuremath {\mathscr{F}}}\big\}$: $$\label{eq:ch4n} \mathscr{N}_m=\bigg({\ensuremath {\mathbb{R}}}^{m-1},\mathscr B({\ensuremath {\mathbb{R}}}^{m-1}), \bigg\{\bigotimes_{j=2}^m N_j^f:f\in{\ensuremath {\mathscr{F}}}\bigg\}\bigg).$$ For each $f\in{\ensuremath {\mathscr{F}}}$, let $\bar \nu_m$ be the measure having $\bar f_m$ as a density with respect to $\nu_0$ where, for every $f\in{\ensuremath {\mathscr{F}}}$, $\bar f_m$ is defined as follows. $$\label{eq:ch4barf} \bar f_m(x):= \begin{cases} \quad 1 & \textnormal{if } x\in J_1,\\ \frac{\nu(J_j)}{{\nu_0}(J_{j})} & \textnormal{if } x\in J_{j}, \quad j = 2,\dots,m. \end{cases}$$ Furthermore, define $$\label{eq:ch4modellobar} \bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}=\bigg(D,{\ensuremath {\mathscr{D}}}_{T_n},\Big\{P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar\nu_m)}:\frac{d\bar\nu_m}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big\}\bigg).$$ Proof of Theorem \[ch4teo1\] ---------------------------- We begin by a series of lemmas that will be needed in the proof. Before doing so, let us underline the scheme of the proof. 
We recall that the goal is to prove that estimating $f=\frac{d\nu}{d\nu_0}$ from the continuous observation of a Lévy process $(X_t)_{t\in[0,T_n]}$ without Gaussian part and having Lévy measure $\nu$ is asymptotically equivalent to estimating $f$ from the Gaussian white noise model: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n g(t)}}dW_t,\quad g=\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}},\quad t\in I.$$ Also, recall the definition of $\hat \nu_m$ given in and read ${\ensuremath {\mathscr{P}}}_1 \overset{\Delta} \Longleftrightarrow {\ensuremath {\mathscr{P}}}_2$ as ${\ensuremath {\mathscr{P}}}_1$ is asymptotically equivalent to ${\ensuremath {\mathscr{P}}}_2$. Then, we can outline the proof in the following way. - Step 1: $P_{T_n}^{(\gamma^{\nu-\nu_0},0,\nu)} \overset{\Delta} \Longleftrightarrow P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)}$; - Step 2: $P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)} \overset{\Delta} \Longleftrightarrow \bigotimes_{j=2}^m {\ensuremath {\mathscr{P}}}(T_n\nu(J_j))$ (Poisson approximation). Here $\bigotimes_{j=2}^m {\ensuremath {\mathscr{P}}}(T_n\nu(J_j))$ represents a statistical model associated with the observation of $m-1$ independent Poisson r.v. of parameters $T_n\nu(J_j)$; - Step 3: $\bigotimes_{j=2}^m {\ensuremath {\mathscr{P}}}(T_n \nu(J_j)) \overset{\Delta} \Longleftrightarrow \bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)$ (Gaussian approximation); - Step 4: $\bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)\overset{\Delta} \Longleftrightarrow (y_t)_{t\in I}$. Lemmas \[lemma:ch4poisson\]–\[lemma:ch4kernel\], below, are the key ingredients of Step 2. \[lemma:ch4poisson\] Let $\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$ and $\mathscr{L}_m$ be the statistical models defined in and , respectively. 
Under Assumption (H2) we have: $$\Delta(\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}, \mathscr{L}_m)=0, \textnormal{ for all } m.$$ Denote by $\bar {\ensuremath {\mathbb{N}}}={\ensuremath {\mathbb{N}}}\cup \{\infty\}$ and consider the statistic $S:(D,{\ensuremath {\mathscr{D}}}_{T_n})\to \big(\bar{\ensuremath {\mathbb{N}}}^{m-1},\mathcal{P}(\bar{\ensuremath {\mathbb{N}}}^{m-1})\big)$ defined by $$\label{eq:ch4S} S(x)=\bigg(N_{T_n}^{x;\,2},\dots,N_{T_n}^{x;\,m}\bigg)\quad \textnormal{with} \quad N_{T_n}^{x;\,j}=\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_{j}}(\Delta x_r).$$ An application of Theorem \[ch4teosato\] to $P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar \nu_m)}$ and $P_{T_n}^{(0,0,\nu_0)}$ yields $$\frac{d P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar \nu_m)}}{dP_{T_n}^{(0,0,\nu_0)}}(x)=\exp\bigg(\sum_{j=2}^m \bigg(\ln\Big(\frac{\nu(J_j)}{\nu_0(J_j)}\Big)\bigg) N_{T_n}^{x;j}-T_n\int_I(\bar f_m(y)-1)\nu_0(dy)\bigg).$$ Hence, by means of the Fisher factorization theorem, we conclude that $S$ is a sufficient statistic for $\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$. Furthermore, under $P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar \nu_m)}$, the random variables $N_{T_n}^{x;j}$ have Poisson distributions $Q_{j}^f$ with means $T_n\nu(J_j)$. Then, by means of Property \[ch4fatto3\], we get $\Delta(\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}, \mathscr{L}_m)=0, \textnormal{ for all } m.$ Let us denote by $\hat Q_j^f$ the law of a Poisson random variable with mean $T_n\int_{J_j}\hat f_m(y)\nu_0(dy)$ and let $\hat{\mathscr{L}}_m$ be the statistical model associated with the family of probabilities $\{\bigotimes_{j=2}^m \hat Q_j^f:f\in {\ensuremath {\mathscr{F}}}\}$.
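The bound that follows rests on the closed form for the squared Hellinger distance between Poisson laws, $H^2\big({\ensuremath {\mathscr{P}}}(\lambda), {\ensuremath {\mathscr{P}}}(\mu)\big) = 1 - e^{-(\sqrt{\lambda}-\sqrt{\mu})^2/2}$, which can be verified numerically via the Bhattacharyya coefficient $\sum_k \sqrt{p_k q_k}$ (the parameter values below are arbitrary):

```python
import math

# Squared Hellinger distance between Poisson(lam) and Poisson(mu):
# sqrt(p_k q_k) = e^{-(lam+mu)/2} (lam*mu)^{k/2} / k!, and summing the
# series gives 1 - H^2 = exp(-(sqrt(lam) - sqrt(mu))^2 / 2).
def hellinger_sq_poisson(lam, mu, kmax=100):
    bc = sum(math.exp(-(lam + mu) / 2) * (lam * mu) ** (k / 2) / math.factorial(k)
             for k in range(kmax))
    return 1.0 - bc

lam, mu = 3.0, 5.5
closed = 1.0 - math.exp(-(math.sqrt(lam) - math.sqrt(mu)) ** 2 / 2)
print(hellinger_sq_poisson(lam, mu), closed)  # the two values agree
```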
\[lemma:ch4poissonhatf\] $$\Delta(\mathscr L_m,\hat{\mathscr{L}}_m)\leq \sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\frac{T_n}{\kappa}\int_{I\setminus[0,\varepsilon_m]}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}.$$ By means of Facts \[ch4h\]–\[fact:ch4hellingerpoisson\], we get: $$\begin{aligned} \Delta(\mathscr L_m,\hat{\mathscr{L}}_m)&\leq \sup_{f\in{\ensuremath {\mathscr{F}}}}H\bigg(\bigotimes_{j=2}^m Q_j^f,\bigotimes_{j=2}^m \hat Q_j^f\bigg)\\ &\leq \sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt{\sum_{j=2}^m 2 H^2(Q_j^f,\hat Q_j^f)}\\ & =\sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt 2\sqrt{\sum_{j=2}^m\bigg(1-\exp\bigg(-\frac{T_n}{2}\bigg[\sqrt{\int_{J_j}\hat f(y)\nu_0(dy)}-\sqrt{\int_{J_j} f(y)\nu_0(dy)}\bigg]^2\bigg)\bigg)}.\end{aligned}$$ By making use of the fact that $1-e^{-x}\leq x$ for all $x\geq 0$ and the equality $\sqrt a-\sqrt b= \frac{a-b}{\sqrt a+\sqrt b}$ combined with the lower bound $f\geq \kappa$ (that also implies $\hat f_m\geq \kappa$) and finally the Cauchy-Schwarz inequality, we obtain: $$\begin{aligned} &1-\exp\bigg(-\frac{T_n}{2}\bigg[\sqrt{\int_{J_j}\hat f(y)\nu_0(dy)}-\sqrt{\int_{J_j} f(y)\nu_0(dy)}\bigg]^2\bigg)\\ &\leq \frac{T_n}{2}\bigg[\sqrt{\int_{J_j}\hat f(y)\nu_0(dy)}-\sqrt{\int_{J_j} f(y)\nu_0(dy)}\bigg]^2\\ & \leq \frac{T_n}{2} \frac{\bigg(\int_{J_j}(f(y)-\hat f_m(y))\nu_0(dy)\bigg)^2}{\kappa \nu_0(J_j)}\\ &\leq \frac{T_n}{2\kappa} \int_{J_j}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy). \end{aligned}$$ Hence, $$H\bigg(\bigotimes_{j=2}^m Q_j^f,\bigotimes_{j=2}^m \hat Q_j^f\bigg)\leq \sqrt{\frac{T_n}{\kappa}\int_{I\setminus[0,\varepsilon_m]}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}.$$ \[lemma:ch4kernel\] Let $\hat\nu_m$ and $\bar \nu_m$ the Lévy measures defined as in and , respectively. 
For every $f\in {\ensuremath {\mathscr{F}}}$, there exists a Markov kernel $K$ such that $$KP_{T_n}^{(\gamma^{\bar\nu_m-\nu_0},0,\bar\nu_m)}=P_{T_n}^{(\gamma^{\hat \nu_m-\nu_0},0,\hat \nu_m)}.$$ By construction, $\bar\nu_m$ and $\hat\nu_m$ coincide on $[0,\varepsilon_m]$. Let us denote by $\bar \nu_m^{\textnormal{res}}$ and $\hat\nu_m^{\textnormal{res}}$ the restriction on $I\setminus[0,\varepsilon_m]$ of $\bar\nu_m$ and $\hat\nu_m$ respectively, then it is enough to prove: $KP_{T_n}^{(\gamma^{\bar\nu_m^{\textnormal{res}}-\nu_0},0,\bar\nu_m^{\textnormal{res}})}=P_{T_n}^{(\gamma^{\hat \nu_m^{\textnormal{res}}-\nu_0},0,\hat \nu_m^{\textnormal{res}})}.$ First of all, let us observe that the kernel $M$: $$M(x,A)=\sum_{j=2}^m{\ensuremath {\mathbb{I}}}_{J_j}(x)\int_A V_j(y)\nu_0(dy),\quad x\in I\setminus[0,\varepsilon_m],\quad A\in{\ensuremath {\mathscr{B}}}(I\setminus[0,\varepsilon_m])$$ is defined in such a way that $M \bar\nu_m^{\textnormal{res}} = \hat \nu_m^{\textnormal{res}}$. Indeed, for all $A\in{\ensuremath {\mathscr{B}}}(I\setminus[0,\varepsilon_m])$, $$\begin{aligned} M\bar\nu_m^{\textnormal{res}}(A)&=\sum_{j=2}^m\int_{J_j}M(x,A)\bar\nu_m^{\textnormal{res}}(dx)=\sum_{j=2}^m \int_{J_j}\bigg(\int_A V_j(y)\nu_0(dy)\bigg)\bar\nu_m^{\textnormal{res}}(dx)\nonumber\\ &=\sum_{j=2}^m \bigg(\int_A V_j(y)\nu_0(dy)\bigg)\nu(J_j)=\int_A \hat f_m(y)\nu_0(dy)=\hat \nu_m^{\textnormal{res}}(A). \label{eqn:M} \end{aligned}$$ Observe that $(\gamma^{\bar\nu_m^{\textnormal{res}}-\nu_0},0,\bar\nu_m^{\textnormal{res}})$ and $(\gamma^{\hat \nu_m^{\textnormal{res}}-\nu_0},0,\hat \nu_m^{\textnormal{res}})$ are Lévy triplets associated with compound Poisson processes since $\bar\nu_m^{\textnormal{res}}$ and $\hat \nu_m^{\textnormal{res}}$ are finite Lévy measures. The Markov kernel $K$ interchanging the laws of the Lévy processes is constructed explicitly in the case of compound Poisson processes. 
Indeed, if $\bar X$ is the compound Poisson process having Lévy measure $\bar\nu_m^{\textnormal{res}}$, then $\bar X_{t} = \sum_{i=1}^{N_t} \bar Y_{i}$, where $N_t$ is a Poisson process of intensity $\iota_m:=\bar\nu_m^{\textnormal{res}}(I\setminus [0,\varepsilon_m])$ and the $\bar Y_{i}$ are i.i.d. random variables with probability law $\frac{1}{\iota_m}\bar\nu_m^{\textnormal{res}}$. Moreover, given a trajectory of $\bar X$, both the trajectory $(n_t)_{t\in[0,T_n]}$ of the Poisson process $(N_t)_{t\in[0,T_n]}$ and the realizations $\bar y_i$ of $\bar Y_i$, $i=1,\dots,n_{T_n}$ are uniquely determined. This allows us to construct $n_{T_n}$ i.i.d. random variables $\hat Y_i$ as follows: for every realization $\bar y_i$ of $\bar Y_i$, we define the realization $\hat y_i$ of $\hat Y_i$ by drawing it according to the probability law $M(\bar y_i,\cdot)$. Hence, thanks to , $(\hat Y_i)_i$ are i.i.d. random variables with probability law $\frac{1}{\iota_m} \hat \nu_m^{\text{res}}$. The desired Markov kernel $K$ (defined on the Skorokhod space) is then given by: $$K : (\bar X_{t})_{t\in[0,T_n]} \longmapsto \bigg(\hat X_{t} := \sum_{i=1}^{N_t} \hat Y_{i}\bigg)_{t\in[0,T_n]}.$$ Finally, observe that, since $$\begin{aligned} \iota_m=\int_{I\setminus[0,\varepsilon_m]}\bar f_m(y)\nu_0(dy)&=\int_{I\setminus[0,\varepsilon_m]} f(y)\nu_0(dy)=\int_{I\setminus[0,\varepsilon_m]}\hat f_m(y)\nu_0(dy), \end{aligned}$$ $(\hat X_t)_{t\in[0,T_n]}$ is a compound Poisson process with Lévy measure $\hat\nu_m^{\textnormal{res}}.$ Let us now state two lemmas needed to understand Step 4.
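The identity $M \bar\nu_m^{\textnormal{res}} = \hat \nu_m^{\textnormal{res}}$ at the heart of the kernel construction can be illustrated on a toy discrete analogue, where $M$ becomes a stochastic matrix and the identity a matrix product; every weight below is an arbitrary toy value, not one of the quantities of the proof.

```python
import numpy as np

# Discrete sketch of M(x, dy) = sum_j 1_{J_j}(x) V_j(y) nu_0(dy): on a
# finite grid, rows of M are probability vectors and M maps the
# bin-averaged measure nubar to the smoothed measure nuhat.
rng = np.random.default_rng(1)
n_pts, n_bins = 12, 3
nu0 = np.full(n_pts, 1.0 / n_pts)                     # toy nu_0 (uniform weights)
bins = np.repeat(np.arange(n_bins), n_pts // n_bins)  # bin J_j of each grid point
V = rng.random((n_bins, n_pts))
V /= (V * nu0).sum(axis=1, keepdims=True)             # normalise: sum_y V_j(y) nu0(y) = 1
f = 1.0 + rng.random(n_pts)                           # toy density f = dnu/dnu0
nu = f * nu0
nu_bin = np.array([nu[bins == j].sum() for j in range(n_bins)])    # nu(J_j)
nu0_bin = np.array([nu0[bins == j].sum() for j in range(n_bins)])  # nu0(J_j)
nubar = (nu_bin / nu0_bin)[bins] * nu0     # bar nu: density nu(J_j)/nu0(J_j) on J_j
nuhat = nu_bin @ (V * nu0)                 # hat nu: sum_j nu(J_j) V_j(y) nu0(y)
M = V[bins] * nu0                          # kernel matrix; each row is a probability
print(np.allclose(nubar @ M, nuhat))       # True, mirroring the display above
```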
\[lemma:ch4wn\] Denote by ${\ensuremath {\mathscr{W}}}_m^\#$ the statistical model associated with the continuous observation of a trajectory from the Gaussian white noise: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}\sqrt{g(t)}}dW_t,\quad t\in I\setminus [0,\varepsilon_m].$$ Then, according to the notation introduced in Section \[subsec:ch4parameter\] and at the beginning of Section \[sec:ch4proofs\], we have $$\Delta(\mathscr{N}_m,{\ensuremath {\mathscr{W}}}_m^\#)\leq 2\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}} \big(A_m(f)+B_m(f)\big).$$ As a preliminary remark, observe that ${\ensuremath {\mathscr{W}}}_m^\#$ is equivalent to the model that observes a trajectory from: $$d\bar y_t=\sqrt{f(t)}g(t)dt+\frac{\sqrt{g(t)}}{2\sqrt{T_n}}dW_t,\quad t\in I\setminus [0,\varepsilon_m].$$ Let us denote by $\bar Y_j$ the increments of the process $(\bar y_t)$ over the intervals $J_j$, $j=2,\dots,m$, i.e. $$\bar Y_j:=\bar y_{v_j}-\bar y_{v_{j-1}}\sim{\ensuremath {\mathscr{Nn}}}\bigg(\int_{J_j}\sqrt{f(y)}\nu_0(dy),\frac{\nu_0(J_j)}{4T_n}\bigg)$$ and denote by $\bar{\mathscr{N}}_m$ the statistical model associated with the distributions of these increments. As an intermediate result, we will prove that $$\label{eq:ch4normali} \Delta(\mathscr{N}_m,\bar{\mathscr{N}}_m)\leq 2\sqrt{T_n} \sup_{f\in {\ensuremath {\mathscr{F}}}} B_m(f), \ \textnormal{ for all } m.$$ To that aim, remark that the experiment $\bar{\mathscr{N}}_m$ is equivalent to observing $m-1$ independent Gaussian random variables of means $\frac{2\sqrt{T_n}}{\sqrt{\nu_0(J_j)}}\int_{J_j}\sqrt{f(y)}\nu_0(dy)$, $j=2,\dots,m$ and variances identically $1$; call this last experiment $\mathscr{N}^{\#}_m$.
Hence, using also Property \[ch4delta0\], Facts \[ch4h\] and \[fact:ch4gaussiane\] we get: $$\begin{aligned} \Delta(\mathscr{N}_m, \bar{\mathscr{N}}_m)\leq\Delta(\mathscr{N}_m, \mathscr{N}^{\#}_m)&\leq \sqrt{\sum_{j=2}^m\bigg(\frac{2\sqrt{T_n}}{\sqrt{\nu_0(J_j)}}\int_{J_j}\sqrt{f(y)}\nu_0(dy)-2\sqrt{T_n\nu(J_j)}\bigg)^2}.\end{aligned}$$ Since it is clear that $\delta({\ensuremath {\mathscr{W}}}_m^\#,\bar{\mathscr{N}}_m)=0$, in order to bound $\Delta(\mathscr{N}_m,{\ensuremath {\mathscr{W}}}_m^\#)$ it is enough to bound $\delta(\bar{\mathscr{N}}_m,{\ensuremath {\mathscr{W}}}_m^\#)$. Using similar ideas as in [@cmultinomial] Section 8.2, we define a new stochastic process as: $$Y_t^*=\sum_{j=2}^m\bar Y_j\int_{\varepsilon_m}^t V_j(y)\nu_0(dy)+\frac{1}{2\sqrt{T_n}}\sum_{j=2}^m\sqrt{\nu_0(J_j)}B_j(t),\quad t\in I\setminus [0,\varepsilon_m],$$ where the $(B_j(t))$ are independent centered Gaussian processes independent of $(W_t)$ and with variances $$\textnormal{Var}(B_j(t))=\int_{\varepsilon_m}^tV_j(y)\nu_0(dy)-\bigg(\int_{\varepsilon_m}^tV_j(y)\nu_0(dy)\bigg)^2.$$ These processes can be constructed from a standard Brownian bridge $\{B(s), s\in[0,1]\}$, independent of $(W_t)$, via $$B_i(t)=B\bigg(\int_{\varepsilon_m}^t V_i(y)\nu_0(dy)\bigg).$$ By construction, $(Y_t^*)$ is a Gaussian process with mean and variance given by, respectively: $$\begin{aligned} {\ensuremath {\mathbb{E}}}[Y_t^*]&=\sum_{j=2}^m{\ensuremath {\mathbb{E}}}[\bar Y_j]\int_{\varepsilon_m}^t V_j(y)\nu_0(dy)=\sum_{j=2}^m\bigg(\int_{J_j}\sqrt{f(y)}\nu_0(dy)\bigg)\int_{\varepsilon_m}^t V_j(y)\nu_0(dy),\\ \textnormal{Var}[Y_t^*]&=\sum_{j=2}^m\textnormal{Var}[\bar Y_j]\bigg(\int_{\varepsilon_m}^t V_j(y)\nu_0(dy)\bigg)^2+\frac{1}{4T_n}\sum_{j=2}^m \nu_0(J_j)\textnormal{Var}(B_j(t))\\ &= \frac{1}{4T_n}\int_{\varepsilon_m}^t \sum_{j=2}^m \nu_0(J_j) V_j(y)\nu_0(dy)= \frac{1}{4T_n}\int_{\varepsilon_m}^t \nu_0(dy)=\frac{\nu_0([\varepsilon_m,t])}{4T_n}.\end{aligned}$$ One can compute in the same way the covariance 
of $(Y_t^*)$ finding that $$\textnormal{Cov}(Y_s^*,Y_t^*)=\frac{\nu_0([\varepsilon_m,s])}{4 T_n}, \ \forall s\leq t.$$ We can then deduce that $$Y^*_t=\int_{\varepsilon_m}^t \widehat{\sqrt {f}}_m(y)\nu_0(dy)+\int_{\varepsilon_m}^t\frac{\sqrt{g(s)}}{2\sqrt{T_n}}dW^*_s,\quad t\in I\setminus [0,\varepsilon_m],$$ where $(W_t^*)$ is a standard Brownian motion and $$\widehat{\sqrt {f}}_m(x):=\sum_{j=2}^m\bigg(\int_{J_j}\sqrt{f(y)}\nu_0(dy)\bigg)V_j(x).$$ Applying Fact \[fact:ch4processigaussiani\], we get that the total variation distance between the process $(Y_t^*)_{t\in I\setminus [0,\varepsilon_m]}$ constructed from the random variables $\bar Y_j$, $j=2,\dots,m$ and the Gaussian process $(\bar y_t)_{t\in I\setminus [0,\varepsilon_m]}$ is bounded by $$\sqrt{4 T_n\int_{I\setminus [0,\varepsilon_m]}\big(\widehat{\sqrt {f}}_m-\sqrt{f(y)}\big)^2\nu_0(dy)},$$ which gives the term in $A_m(f)$. \[lemma:ch4limitewn\] In accordance with the notation of Lemma \[lemma:ch4wn\], we have: $$\label{eq:ch4wn} \Delta({\ensuremath {\mathscr{W}}}_m^\#,{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\bigg(\sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt{T_n\int_0^{\varepsilon_m}\big(\sqrt{f(t)}-1\big)^2\nu_0(dt)}\bigg).$$ Clearly $\delta({\ensuremath {\mathscr{W}}}_n^{\nu_0},{\ensuremath {\mathscr{W}}}_m^\#)=0$. To show that $\delta({\ensuremath {\mathscr{W}}}_m^\#,{\ensuremath {\mathscr{W}}}_n^{\nu_0})\to 0$, let us consider a Markov kernel $K^\#$ from $C(I\setminus [0,\varepsilon_m])$ to $C(I)$ defined as follows: Introduce a Gaussian process, $(B_t^m)_{t\in[0,\varepsilon_m]}$ with mean equal to $t$ and covariance $$\textnormal{Cov}(B_s^m,B_t^m)=\int_0^{\varepsilon_m}\frac{1}{4 T_n g(s)}{\ensuremath {\mathbb{I}}}_{[0,s]\cap [0,t]}(z)dz.$$ In particular, $$\textnormal{Var}(B_t^m)=\int_0^t\frac{1}{4 T_n g(s)}ds.$$ Consider it as a process on the whole of $I$ by defining $B_t^m=B_{\varepsilon_m}^m$ $\forall t>\varepsilon_m$. 
Let $\omega_t$ be a trajectory in $C(I\setminus [0,\varepsilon_m])$, which we again extend constantly to a trajectory on the whole of $I$. Then, we define $K^\#$ by sending the trajectory $\omega_t$ to the trajectory $\omega_t + B_t^m$. If we define $\mathbb{\tilde W}_n$ as the law induced on $C(I)$ by $$d\tilde{y}_t = h(t) dt + \frac{dW_t}{2\sqrt{T_n g(t)}}, \quad t \in I,\quad h(t) = \begin{cases} 1 & t \in [0, \varepsilon_m]\\ \sqrt{f(t)} & t \in I\setminus [0,\varepsilon_m], \end{cases}$$ then $K^\# \mathbb{W}_n^f|_{I\setminus [0,\varepsilon_m]} = \mathbb{\tilde W}_n$, where $\mathbb{W}_n^f$ is defined as in . By means of Fact \[fact:ch4processigaussiani\] we deduce . The proof of the theorem follows by combining the previous lemmas: - Step 1: Let us denote by $\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0}$ the statistical model associated with the family of probabilities $(P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)}:\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}})$. Thanks to Property \[ch4delta0\], Fact \[ch4h\] and Theorem \[teo:ch4bound\] we have that $$\Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0},\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0})\leq \sqrt{\frac{T_n}{2}}\sup_{f\in {\ensuremath {\mathscr{F}}}}H(f,\hat f_m).$$ - Step 2: On the one hand, thanks to Lemma \[lemma:ch4poisson\], one has that the statistical model associated with the family of probabilities $(P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar\nu_m)}:\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}})$ is equivalent to $\mathscr{L}_m$. By means of Lemma \[lemma:ch4poissonhatf\] we can bound $\Delta(\mathscr{L}_m,\hat{\mathscr{L}}_m)$. On the other hand, it is easy to see that $\delta(\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0}, \hat{\mathscr{L}}_m)=0$.
Indeed, it is enough to consider the statistic $$S: x \mapsto \bigg(\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_2}(\Delta x_r),\dots,\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_m}(\Delta x_r)\bigg)$$ since the law of the random variable $\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_j}(\Delta x_r)$ under $P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)}$ is Poisson of parameter $T_n\int_{J_j}\hat f_m(y)\nu_0(dy)$ for all $j=2,\dots,m$. Finally, Lemmas \[lemma:ch4poisson\] and \[lemma:ch4kernel\] allow us to conclude that $\delta(\mathscr{L}_m,\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0})=0$. Collecting all the pieces together, we get $$\Delta(\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0},\mathscr{L}_m)\leq \sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\frac{T_n}{\kappa}\int_{I\setminus[0,\varepsilon_m]}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}.$$ - Step 3: Applying Theorem \[ch4teomisto\] and Fact \[ch4hp\] we can pass from the Poisson approximation given by $\mathscr{L}_m$ to a Gaussian one, obtaining (using $\nu(J_j)\geq \kappa\nu_0(J_j)$) $$\Delta(\mathscr{L}_m,\mathscr{N}_m)\leq C\sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\sum_{j=2}^m\frac{2}{T_n\nu(J_j)}}\leq C\sqrt{\sum_{j=2}^m\frac{2}{\kappa T_n\nu_0(J_j)}}=C\sqrt{\frac{2(m-1)}{\kappa T_n\mu_m}}.$$ - Step 4: Finally, Lemmas \[lemma:ch4wn\] and \[lemma:ch4limitewn\] allow us to conclude that: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})&=O\bigg(\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\big(A_m(f)+B_m(f)+C_m\big)\bigg)\\ & \quad + O\bigg(\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\int_{I\setminus{[0,\varepsilon_m]}}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}+\sqrt{\frac{m}{T_n\mu_m}}\bigg).\end{aligned}$$ Proof of Theorem \[ch4teo2\] ---------------------------- Again, before stating some technical lemmas, let us highlight the main ideas of the proof.
We recall that the goal is to prove that estimating $f=\frac{d\nu}{d\nu_0}$ from the discrete observations $(X_{t_i})_{i=0}^n$ of a Lévy process without Gaussian component and having Lévy measure $\nu$ is asymptotically equivalent to estimating $f$ from the Gaussian white noise model $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n g(t)}}dW_t,\quad g=\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}},\quad t\in I.$$ Reading ${\ensuremath {\mathscr{P}}}_1 \overset{\Delta} \Longleftrightarrow {\ensuremath {\mathscr{P}}}_2$ as ${\ensuremath {\mathscr{P}}}_1$ is asymptotically equivalent to ${\ensuremath {\mathscr{P}}}_2$, we have: - Step 1. Clearly $(X_{t_i})_{i=0}^n \overset{\Delta} \Longleftrightarrow (X_{t_i}-X_{t_{i-1}})_{i=1}^n$. Moreover, $(X_{t_i}-X_{t_{i-1}})_i\overset{\Delta} \Longleftrightarrow (\epsilon_iY_i)$ where the $(\epsilon_i)$ are i.i.d. Bernoulli random variables with parameter $\alpha=\iota_m \Delta_n e^{-\iota_m\Delta_n}$, $\iota_m:=\int_{I\setminus [0,\varepsilon_m]} f(y)\nu_0(dy)$, and the $(Y_i)_i$ are i.i.d. random variables independent of $(\epsilon_i)_{i=1}^n$ and of density $\frac{ f}{\iota_m}$ with respect to ${\nu_0}_{|_{I\setminus [0,\varepsilon_m]}}$; - Step 2. $(\epsilon_iY_i)_i \overset{\Delta} \Longleftrightarrow \mathcal M(n;(\gamma_j)_{j=1}^m)$, where $\mathcal M(n;(\gamma_j)_{j=1}^m)$ is a multinomial distribution with $\gamma_1=1-\alpha$ and $\gamma_i:=\frac{\alpha}{\iota_m}\nu(J_i)$, $i=2,\dots,m$; - Step 3. Gaussian approximation: $\mathcal M(n;(\gamma_1,\dots,\gamma_m)) \overset{\Delta} \Longleftrightarrow \bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)$; - Step 4. $\bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)\overset{\Delta} \Longleftrightarrow (y_t)_{t\in I}$. \[lemma:ch4discreto\] Let $\nu_i$, $i=1,2$, be Lévy measures such that $\nu_1\ll\nu_2$ and $b_1-b_2=\int_{|y|\leq 1}y(\nu_1-\nu_2)(dy)<\infty$.
Then, for all $0<t<\infty$, we have: $$\Big\|Q_t^{(b_1,0,\nu_1)}-Q_t^{(b_2,0,\nu_2)}\Big\|_{TV}\leq \sqrt{\frac{t}{2}} H(\nu_1,\nu_2).$$ For all given $t$, let $K_t$ be the Markov kernel defined as $K_t(\omega,A):={\ensuremath {\mathbb{I}}}_A(\omega_t)$, $\forall \ A\in{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}})$, $\forall \ \omega\in D$. Then we have: $$\begin{aligned} \big\|Q_t^{(b_1,0,\nu_1)}-Q_t^{(b_2,0,\nu_2)}\big\|_{TV}&=\big\|K_tP_t^{(b_1,0,\nu_1)}-K_tP_t^{(b_2,0,\nu_2)}\big\|_{TV}\\ &\leq \big\|P_t^{(b_1,0,\nu_1)}-P_t^{(b_2,0,\nu_2)}\big\|_{TV}\\ &\leq \sqrt{\frac{t}{2}} H(\nu_1,\nu_2), \end{aligned}$$ where we have used that Markov kernels reduce the total variation distance and Theorem \[teo:ch4bound\]. \[lemma:ch4bernoulli\] Let $(P_i)_{i=1}^n$, $(Y_i)_{i=1}^n$ and $(\epsilon_i)_{i=1}^n$ be samples of, respectively, Poisson random variables ${\ensuremath {\mathscr{P}}}(\lambda_i)$, random variables with a common distribution and Bernoulli random variables of parameters $\lambda_i e^{-\lambda_i}$, which are all independent. Let us denote by $Q_{(Y_i,P_i)}$ (resp. $Q_{(Y_i,\epsilon_i)}$) the law of $\sum_{j=1}^{P_i} Y_j$ (resp., $\epsilon_i Y_i$). Then: $$\label{eq:ch4lambda} \Big\|\bigotimes_{i=1}^n Q_{(Y_i,P_i)}-\bigotimes_{i=1}^n Q_{(Y_i,\epsilon_i)}\Big\|_{TV}\leq 2\sqrt{\sum_{i=1}^n\lambda_i^2}.$$ The proof of this lemma can be found in [@esterESAIM], Section 2.1. \[lemma:ch4troncatura\] Let $f_m^{\textnormal{tr}}$ be the truncated function defined as follows: $$f_m^{\textnormal{tr}}(x)=\begin{cases} 1 &\mbox{ if } x\in[0,\varepsilon_m]\\ f(x) &\mbox{ otherwise} \end{cases}$$ and let $\nu_m^{\textnormal{tr}}$ (resp. $\nu_m^{\textnormal{res}}$) be the Lévy measure having $f_m^{\textnormal{tr}}$ (resp. ${f|_{I\setminus [0,\varepsilon_m]}}$) as a density with respect to $\nu_0$.
Denote by ${\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0}$ the statistical model associated with the family of probabilities $\Big(\bigotimes_{i=1}^nQ_{t_i-t_{i-1}}^{(\gamma^{\nu_m^{\textnormal{tr}}-\nu_0},0,\nu_m^{\textnormal{tr}})}:\frac{d\nu_m^{\textnormal{tr}}}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big)$ and by ${\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0}$ the model associated with the family of probabilities $\Big(\bigotimes_{i=1}^nQ_{t_i-t_{i-1}}^{(\gamma^{\nu_m^{\textnormal{res}}-\nu_0},0,\nu_m^{\textnormal{res}})}:\frac{d\nu_m^{\textnormal{tr}}}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big)$. Then: $$\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0})=0.$$ Let us start by proving that $\delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0})=0.$ For that, let us consider two independent Lévy processes, $X^{\textnormal{tr}}$ and $X^0$, of Lévy triplets given by $\big(\gamma^{\nu_m^{\textnormal{tr}}-\nu_0},0,\nu_m^{\textnormal{tr}}\big)$ and $\big(0,0,\nu_0|_{[0,\varepsilon_m]}\big)$, respectively. Then it is clear (using the *Lévy-Khintchine formula*) that the random variable $X_t^{\textnormal{tr}}- X_t^0$ is a randomization of $X_t^{\textnormal{tr}}$ (since the law of $X_t^0$ does not depend on $\nu$) having law $Q_t^{(\gamma^{\nu_m^{\textnormal{res}}-\nu_0},0,\nu_m^{\textnormal{res}})}$, for all $t\geq 0$.
Similarly, one can prove that $\delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0})=0.$ As a preliminary remark, observe that the model ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ is equivalent to the one that observes the increments of $\big((x_t),P_{T_n}^{(\gamma^{\nu-\nu_0},0,\nu)}\big)$, that is, the model $\tilde{\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ associated with the family of probabilities $\Big(\bigotimes_{i=1}^nQ_{t_i-t_{i-1}}^{(\gamma^{\nu-\nu_0},0,\nu)}:\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big)$. - Step 1: Facts \[ch4h\]–\[ch4hp\] and Lemma \[lemma:ch4discreto\] allow us to write $$\begin{aligned} &\Big\|\bigotimes_{i=1}^nQ_{\Delta_n}^{(\gamma^{\nu-\nu_0},0,\nu)}-\bigotimes_{i=1}^nQ_{\Delta_n}^{(\gamma^{\nu_m^{\textnormal{tr}}-\nu_0},0, \nu_m^{\textnormal{tr}})}\Big\|_{TV}\leq \sqrt{n\sqrt{\frac{\Delta_n}{2}}H(\nu,\nu_m^{\textnormal{tr}})}\\&=\sqrt{n\sqrt{\frac{\Delta_n}{2}}\sqrt{\int_0^{\varepsilon_m}\big(\sqrt{f(y)}-1\big)^2\nu_0(dy)}}.\end{aligned}$$ Using this bound together with Lemma \[lemma:ch4troncatura\] and the notation therein, we get $\Delta({\ensuremath {\mathscr{Q}}}_n^{\nu_0}, {\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0})\leq \sqrt{n\sqrt{\frac{\Delta_n}{2}}\sup_{f\in {\ensuremath {\mathscr{F}}}}H(f, f_m^{\textnormal{tr}})}$. Observe that $\nu_m^{\textnormal{res}}$ is a finite Lévy measure, hence $\Big((x_t),P_{T_n}^{(\gamma^{\nu_m^{\textnormal{res}}},0,\nu_m^{\textnormal{res}})}\Big)$ is a compound Poisson process with intensity equal to $\iota_m:=\int_{I\setminus [0,\varepsilon_m]} f(y)\nu_0(dy)$ and jump size density $\frac{ f(x)g(x)}{\iota_m}$, for all $x\in I\setminus [0,\varepsilon_m]$ (recall that we are assuming that $\nu_0$ has a density $g$ with respect to Lebesgue).
In particular, this means that $Q_{\Delta_n}^{(\gamma^{\nu_m^{\textnormal{res}}},0,\nu_m^{\textnormal{res}})}$ can be seen as the law of the random variable $\sum_{j=1}^{P_i}Y_j$ where $P_i$ is a Poisson variable of mean $\iota_m \Delta_n$, independent of $(Y_i)_{i\geq 0}$, a sequence of i.i.d. random variables with density $\frac{ fg}{\iota_m}{\ensuremath {\mathbb{I}}}_{I\setminus[0,\varepsilon_m]}$ with respect to Lebesgue. Remark also that $\iota_m$ lies between $\kappa \nu_0\big(I\setminus [0,\varepsilon_m]\big)$ and $M\nu_0\big(I\setminus [0,\varepsilon_m] \big)$. Let $(\epsilon_i)_{i\geq 0}$ be a sequence of i.i.d. Bernoulli variables, independent of $(Y_i)_{i\geq 0}$, with mean $\iota_m \Delta_n e^{-\iota_m\Delta_n}$. For $i=1,\dots,n$, denote by $Q_i^{\epsilon,f}$ the law of the variable $\epsilon_iY_i$ and by ${\ensuremath {\mathscr{Q}}}_n^{\epsilon}$ the statistical model associated with the observations of the vector $(\epsilon_1Y_1,\dots,\epsilon_nY_n)$, i.e. $${\ensuremath {\mathscr{Q}}}_n^{\epsilon}=\bigg(I^n,{\ensuremath {\mathscr{B}}}(I^n),\bigg\{\bigotimes_{i=1}^n Q_i^{\epsilon,f}:f\in{\ensuremath {\mathscr{F}}}\bigg\}\bigg).$$ Furthermore, denote by $\tilde Q_i^f$ the law of $\sum_{j=1}^{P_i}Y_j$. Then an application of Lemma \[lemma:ch4bernoulli\] yields: $$\begin{aligned} \Big\|\bigotimes_{i=1}^n\tilde Q_i^f&-\bigotimes_{i=1}^nQ_i^{\epsilon,f}\Big\|_{TV} \leq 2\iota_m\sqrt{n\Delta_n^2}\leq 2M\nu_0\big(I\setminus [0,\varepsilon_m]\big)\sqrt{n\Delta_n^2}.\end{aligned}$$ Hence, we get: $$\label{eq:ch4bernoulli} \Delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0},{\ensuremath {\mathscr{Q}}}_n^{\epsilon})=O\bigg(\nu_0\big(I\setminus [0,\varepsilon_m]\big)\sqrt{n\Delta_n^2}\bigg).$$ Here, the $O$ depends only on $M$.
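Lemma \[lemma:ch4bernoulli\] drives the bound above. In the degenerate case $Y_i \equiv 1$ and $n=1$ (an illustrative special case, not the general statement), $\sum_{j\leq P}Y_j$ is Poisson of mean $\lambda$ while $\epsilon Y$ is Bernoulli of mean $\lambda e^{-\lambda}$, so the total variation distance can be computed exactly and compared with the bound $2\lambda$:

```python
import math

# Exact TV distance between Poisson(lam) and Bernoulli(lam*exp(-lam)):
# the two laws agree at 1, and the discrepancy is 1 - exp(-lam)*(1+lam) = O(lam^2),
# well below the bound 2*lam of the lemma.
def tv_poisson_vs_bernoulli(lam, kmax=60):
    pois = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(kmax)]
    bern = [1.0 - lam * math.exp(-lam), lam * math.exp(-lam)] + [0.0] * (kmax - 2)
    return 0.5 * sum(abs(p - q) for p, q in zip(pois, bern))

for lam in (0.5, 0.1, 0.01):
    print(lam, tv_poisson_vs_bernoulli(lam), 2 * lam)
```

This is only a sanity check of the one-jump regime $\iota_m\Delta_n\to 0$ exploited in Step 1.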
- Step 2: Let us introduce the following random variables: $$Z_1=\sum_{j=1}^n{\ensuremath {\mathbb{I}}}_{\{0\}}(\epsilon_jY_j); \quad Z_i=\sum_{j=1}^n{\ensuremath {\mathbb{I}}}_{J_i}(\epsilon_jY_j),\ i=2,\dots,m.$$ Observe that the law of the vector $(Z_1,\dots,Z_m)$ is multinomial $\mathcal M(n;\gamma_1,\dots,\gamma_m)$ where $$\gamma_1=1-\iota_m \Delta_n e^{-\iota_m \Delta_n},\quad \gamma_i=\Delta_n e^{-\iota_m \Delta_n}\nu(J_i),\quad i=2,\dots,m.$$ Let us denote by $\mathcal M_n$ the statistical model associated with the observation of $(Z_1,\dots,Z_m)$. Clearly $\delta({\ensuremath {\mathscr{Q}}}_n^{\epsilon},\mathcal M_n)=0$. Indeed, $\mathcal M_n$ is the image experiment by the random variable $S:I^n\to\{0,1,\dots,n\}^{m}$ defined as $$S(x_1,\dots,x_n)=\Big(\#\{j: x_j=0\}; \#\big\{j: x_j\in J_2\big\};\dots;\#\big\{j: x_j\in J_m\big\}\Big),$$ where $\# A$ denotes the cardinality of the set $A$. We shall now prove that $\delta(\mathcal M_n,{\ensuremath {\mathscr{Q}}}_n^{\epsilon}) \leq \sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt{n\Delta_n H^2(f,\hat f_m)}$. We start by defining a discrete random variable $X^*$ concentrated at the points $0$, $x_i^*$, $i=2,\dots,m$: $${\ensuremath {\mathbb{P}}}(X^*=y)=\begin{cases} \gamma_i &\mbox{ if } y=x_i^*,\quad i=1,\dots,m,\\ 0 &\mbox{ otherwise}, \end{cases}$$ with the convention $x_1^*=0$. It is easy to see that $\mathcal M_n$ is equivalent to the statistical model associated with $n$ independent copies of $X^*$.
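The count statistic $S$ above can be sketched in a few lines. The bins $J_i=\big(\frac{i-1}{m},\frac{i}{m}\big]$ chosen here are purely illustrative (any partition of $I\setminus[0,\varepsilon_m]$ works the same way); the first coordinate counts the zeros of the sample.

```python
# Sketch of the count statistic S from Step 2, with illustrative bins
# J_i = ((i-1)/m, i/m]; the first coordinate is Z_1 = #{j : x_j = 0}.
def S(sample, m):
    counts = [sum(1 for x in sample if x == 0)]                # Z_1
    for i in range(2, m + 1):
        lo, hi = (i - 1) / m, i / m
        counts.append(sum(1 for x in sample if lo < x <= hi))  # Z_i, i = 2,...,m
    return tuple(counts)

print(S([0, 0.35, 0.9, 0, 0.4], m=4))  # (2, 2, 0, 1)
```

Applied to the observations $(\epsilon_1Y_1,\dots,\epsilon_nY_n)$, this is exactly the map realizing $\delta({\ensuremath {\mathscr{Q}}}_n^{\epsilon},\mathcal M_n)=0$.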
Let us introduce the Markov kernel $$K(x_i^*, A) = \begin{cases} {\ensuremath {\mathbb{I}}}_A(0) & \text{if } i = 1,\\ \int_A V_i(x) \nu_0(dx) & \text{otherwise.} \end{cases}$$ Denote by $P^*$ the law of the random variable $X^*$ and by $Q_i^{\epsilon,\hat f}$ the law of a random variable $\epsilon_i \hat Y_i$, where $\epsilon_i$ is a Bernoulli variable, independent of $\hat Y_i$, with mean $\iota_m\Delta_n e^{-\iota_m\Delta_n}$ and $\hat Y_i$ has a density $\frac{\hat f_m g}{\iota_m}{\ensuremath {\mathbb{I}}}_{I\setminus[0,\varepsilon_m]}$ with respect to Lebesgue. The same computations as in Lemma \[lemma:ch4kernel\] prove that $KP^*=Q_i^{\epsilon,\hat f}$. Hence, thanks to Remark \[ch4independentkernels\], we get the equivalence between $\mathcal M_n$ and the statistical model associated with the observations of $n$ independent copies of $\epsilon_i \hat Y_i$. In order to bound $\delta(\mathcal M_n,{\ensuremath {\mathscr{Q}}}_n^{\epsilon})$ it is enough to bound the total variation distance between the probabilities $\bigotimes_{i=1}^n Q_i^{\epsilon,f}$ and $\bigotimes_{i=1}^n Q_i^{\epsilon,\hat f}$. By Facts \[ch4h\] and \[ch4hp\], this reduces to bounding the Hellinger distance between each pair $Q_i^{\epsilon,f}$ and $Q_i^{\epsilon,\hat f}$: $$\begin{aligned} \bigg\|\bigotimes_{i=1}^nQ_i^{\epsilon,f} -\bigotimes_{i=1}^nQ_i^{\epsilon,\hat f}\bigg\|_{TV} &\leq \sqrt{\sum_{i=1}^n H^2\big(Q_i^{\epsilon,f}, Q_i^{\epsilon,\hat f}\big)}\\ &= \sqrt{\sum_{i=1}^n \frac{1-\gamma_1}{\iota_m} H^2(f, \hat f_m)} \leq \sqrt{n\Delta_n H^2(f, \hat f_m)}.\end{aligned}$$ It follows that $$\delta(\mathcal M_n,{\ensuremath {\mathscr{Q}}}_n^{\epsilon})\leq \sqrt{n\Delta_n} \sup_{f \in {\ensuremath {\mathscr{F}}}}H(f,\hat f_m).$$ - Step 3: Let us denote by $\mathcal N_m^*$ the statistical model associated with the observation of $m$ independent Gaussian variables ${\ensuremath {\mathscr{Nn}}}(n\gamma_i,n\gamma_i)$, $i=1,\dots,m$.
Very similar computations to those in [@cmultinomial] yield $$\Delta(\mathcal M_n,\mathcal N_m^*)=O\Big(\frac{m \ln m}{\sqrt{n}}\Big).$$ In order to prove the asymptotic equivalence between $\mathcal M_n$ and $\mathcal N_m$ defined as in we need to introduce some auxiliary statistical models. Let us denote by $\mathcal A_m$ the experiment obtained from $\mathcal{N}_m^*$ by disregarding the first component and by $\mathcal V_m$ the statistical model associated with the multivariate normal distribution with the same means and covariances as a multinomial distribution $\mathcal M(n,\gamma_1,\dots,\gamma_m)$. Furthermore, let us denote by $\mathcal N_m^{\#}$ the experiment associated with the observation of $m-1$ independent Gaussian variables ${\ensuremath {\mathscr{Nn}}}(\sqrt{n\gamma_i},\frac{1}{4})$, $i=2,\dots,m$. Clearly $\Delta(\mathcal V_m,\mathcal A_m)=0$ for all $m$: In one direction one only has to consider the projection disregarding the first component; in the other direction, it is enough to remark that $\mathcal V_m$ is the image experiment of $\mathcal A_m$ by the random variable $S:(x_2,\dots,x_m)\to (n(1-\frac{\sum_{i=2}^m x_i}{n}),x_2,\dots,x_m)$. Moreover, using two results contained in [@cmultinomial], see Sections 7.1 and 7.2, one has that $$\Delta(\mathcal A_m,\mathcal N_m^*)=O\bigg(\sqrt{\frac{m}{n}}\bigg),\quad \Delta(\mathcal A_m,\mathcal N_m^{\#})=O\bigg(\frac{m}{\sqrt n}\bigg).$$ Finally, using Facts \[ch4h\] and \[fact:ch4gaussiane\] we can write $$\begin{aligned} \Delta(\mathcal N_m^{\#},\mathcal N_m)&\leq \sqrt{2\sum_{i=2}^m \Big(\sqrt{T_n\nu(J_i)}-\sqrt{T_n\nu(J_i)\exp(-\iota_m\Delta_n)}\Big)^2}\\ &\leq\sqrt{2T_n\Delta_n^2\iota_m^3}\leq \sqrt{2n\Delta_n^3M^3\big(\nu_0\big(I\setminus [0,\varepsilon_m]\big)\big)^3}. \end{aligned}$$ To sum up, $\Delta(\mathcal M_n,\mathcal N_m)=O\Big(\frac{m \ln m}{\sqrt{n}}+\sqrt{n\Delta_n^3\big(\nu_0\big(I\setminus [0,\varepsilon_m]\big)\big)^3}\Big)$, with the $O$ depending only on $\kappa$ and $M$. 
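The Gaussian targets of Step 3 have means $2\sqrt{T_n\nu(J_j)}$ and constant variance because of the variance-stabilising property of the square-root transform. The following Monte Carlo sketch (with an arbitrary large intensity, purely for illustration) checks that if $Z\sim\mathscr P(\lambda)$ then $2\sqrt Z$ is approximately $\mathscr{Nn}(2\sqrt\lambda,1)$:

```python
import numpy as np

# Variance stabilisation: for Z ~ Poisson(lam) with lam large,
# 2*sqrt(Z) has mean close to 2*sqrt(lam) and variance close to 1.
rng = np.random.default_rng(1)
lam = 400.0
z = rng.poisson(lam, size=200_000)
t = 2.0 * np.sqrt(z)
print(t.mean(), 2 * np.sqrt(lam))  # both close to 40
print(t.var())                     # close to 1
```

The same mechanism underlies the passage from $\mathcal A_m$ to $\mathcal N_m^{\#}$, where the non-doubled root gives variance $\frac14$.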
- Step 4: An application of Lemmas \[lemma:ch4wn\] and \[lemma:ch4limitewn\] yields $$\Delta(\mathcal N_m,{\ensuremath {\mathscr{W}}}_n^{\nu_0}) \leq 2\sqrt{T_n} \sup_{f\in{\ensuremath {\mathscr{F}}}} \big(A_m(f)+B_m(f)+C_m(f)\big).$$ Proofs of the examples ====================== The purpose of this section is to give detailed proofs of Examples \[ex:ch4esempi\] and \[ex:ch4CPP\]–\[ex3\]. As in Section \[sec:ch4proofs\] we suppose $I\subseteq {\ensuremath {\mathbb{R}}}_+$. We start by giving some bounds for the quantities $A_m(f)$, $B_m(f)$ and $L_2(f, \hat f_m)$, the $L_2$-distance between the restrictions of $f$ and $\hat f_m$ to $I\setminus[0,\varepsilon_m]$. Bounds for $A_m(f)$, $B_m(f)$, $L_2(f, \hat{f}_m)$ when $\hat f_m$ is piecewise linear. --------------------------------------------------------------------------------------- In this section we suppose $f$ to be in ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^I$ defined as in . We are going to assume that the $V_j$ are given by triangular/trapezoidal functions as in . In particular, in this case $\hat f_m$ is piecewise linear. \[lemma:ch4hellinger\] Let $0<\kappa < M$ be two constants and let $f_i$, $i=1,2$, be functions defined on an interval $J$ such that $\kappa \leq f_i\leq M$, $i=1,2$. Then, for any measure $\nu_0$, we have: $$\begin{aligned} \frac{1}{4 M} \int_J \big(f_1(x)-f_2(x)\big)^2 \nu_0(dx)&\leq\int_J \big(\sqrt{f_1(x)} - \sqrt{f_2(x)}\big)^2\nu_0(dx)\\ &\leq \frac{1}{4 \kappa} \int_J \big(f_1(x)-f_2(x)\big)^2\nu_0(dx). \end{aligned}$$ This simply comes from the following inequalities: $$\begin{aligned} \frac{1}{2\sqrt M} (f_1(x)-f_2(x)) &\leq \frac{f_1(x)-f_2(x)}{\sqrt{f_1(x)}+\sqrt{f_2(x)}} = \sqrt{f_1(x)} - \sqrt{f_2(x)}\\ &\leq \frac{1}{2 \sqrt{\kappa}} (f_1(x)-f_2(x)). \end{aligned}$$ Recall that $x_i^*$ is chosen so that $\int_{J_i} (x-x_i^*) \nu_0(dx) = 0$.
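The two-sided bound of Lemma \[lemma:ch4hellinger\] is easy to check numerically. The sketch below uses illustrative choices: $\nu_0$ the Lebesgue measure on $J=[0,1]$ (approximated by an equispaced grid) and two functions pinched between $\kappa=0.5$ and $M=1.5$.

```python
import numpy as np

# Numerical check of (1/4M) * ||f1-f2||^2 <= ||sqrt f1 - sqrt f2||^2 <= (1/4k) * ||f1-f2||^2
# on a grid; since the inequality holds pointwise, it survives any discretization.
x = np.linspace(0.0, 1.0, 10_001)
f1 = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # 0.5 <= f1 <= 1.5
f2 = 1.0 + 0.3 * np.cos(3 * np.pi * x)   # 0.7 <= f2 <= 1.3
kappa, M = 0.5, 1.5
l2 = ((f1 - f2) ** 2).mean()                      # int (f1-f2)^2 dnu_0
hell = ((np.sqrt(f1) - np.sqrt(f2)) ** 2).mean()  # int (sqrt f1 - sqrt f2)^2 dnu_0
print(l2 / (4 * M) <= hell <= l2 / (4 * kappa))   # True
```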
Consider the following Taylor expansions for $x \in J_i$: $$f(x) = f(x_i^*) + f'(x_i^*) (x-x_i^*) + R_i(x); \quad \hat{f}_m(x) = \hat{f}_m(x_i^*) + \hat{f}_m'(x_i^*) (x-x_i^*),$$ where $\hat{f}_m(x_i^*) = \frac{\nu(J_i)}{\nu_0(J_i)}$ and $\hat{f}_m'(x_i^*)$ is the left or right derivative at $x_i^*$ depending on whether $x < x_i^*$ or $x > x_i^*$ (as $\hat f_m$ is piecewise linear, no remainder term appears in its Taylor expansion). \[lemma:ch4bounds\] The following estimates hold: $$\begin{aligned} |R_i(x)| &\leq K |\xi_i - x_i^*|^\gamma |x-x_i^*|; \\ \big|f(x_i^*) - \hat{f}_m(x_i^*)\big| &\leq \|R_i\|_{L_\infty(\nu_0)} \text{ for } i = 2, \dots, m-1; \label{eqn:bounds}\\ \big|f(x)-\hat{f}_m(x)\big| &\leq \begin{cases} 2 \|R_i\|_{L_\infty(\nu_0)} + K |x_i^*-\eta_i|^\gamma |x-x_i^*| & \text{ if } x \in J_i, \ i = 3, \dots, m-1;\\ C |x-\tau_i| & \text { if } x \in J_i, \ i \in \{2, m\}. \end{cases} \end{aligned}$$ for some constant $C$ and points $\xi_i \in J_i$, $\eta_i\in J_{i-1} \cup J_i\cup J_{i+1}$, $\tau_2 \in J_2 \cup J_3$ and $\tau_m \in J_{m-1} \cup J_m$. By definition of $R_i$, we have $$|R_i(x)| = \Big| \big(f'(\xi_i) - f'(x_i^*)\big)(x-x_i^*) \Big| \leq K |\xi_i - x_i^*|^\gamma |x-x_i^*|,$$ for some point $\xi_i \in J_i$. For the second inequality, $$\begin{aligned} |f(x_i^*)-\hat{f}_m(x_i^*)| &= \frac{1}{\nu_0(J_i)} \Big| \int_{J_i} (f(x_i^*)-f(x)) \nu_0(dx)\Big|\\ &= \frac{1}{\nu_0(J_i)} \bigg|\int_{J_i} R_i(x) \nu_0(dx)\bigg| \leq \|R_i\|_{L_\infty(\nu_0)}, \end{aligned}$$ where in the second equality we have used the defining property of $x_i^*$. For the third inequality, let us start by proving that for all $2 < i < m-1$, $\hat{f}_m'(x_i^*) = f'(\chi_i)$ for some $\chi_i \in J_i\cup J_{i+1}$ (here, we are considering right derivatives; for left ones, this would be $J_{i-1} \cup J_i$).
To see that, take $x\in J_i\cap [x_i^*,x_{i+1}^*]$ and introduce the function $h(x):=f(x)-l(x)$ where $$l(x)=\frac{x-x_i^*}{x_{i+1}^*-x_i^*}\big(\hat f_m(x_{i+1}^*)-\hat f_m(x_i^*)\big)+\hat f_m(x_i^*).$$ Then, using the fact that $\int_{J_i}(x-x_i^*)\nu_0(dx)=0$ together with $\int_{J_{i+1}}(x-x_i^*)\nu_0(dx)=(x_{i+1}^*-x_i^*)\mu_m$, we get $$\int_{J_i}h(x)\nu_0(dx)=0=\int_{J_{i+1}}h(x)\nu_0(dx).$$ In particular, by the mean value theorem for integrals, one can conclude that there exist two points $p_i\in J_i$ and $p_{i+1}\in J_{i+1}$ such that $$h(p_i)=\frac{\int_{J_i}h(x)\nu_0(dx)}{\nu_0(J_i)}=\frac{\int_{J_{i+1}}h(x)\nu_0(dx)}{\nu_0(J_{i+1})}=h(p_{i+1}).$$ As a consequence, by Rolle's theorem, we can deduce that there exists $\chi_i\in[p_i,p_{i+1}]\subseteq J_i\cup J_{i+1}$ such that $h'(\chi_i)=0$, hence $f'(\chi_i)=l'(\chi_i)=\hat f_m'(x_i^*)$. When $2 < i < m-1$, the two Taylor expansions together with the fact that $\hat{f}_m'(x_i^*) = f'(\chi_i)$ for some $\chi_i \in J_i\cup J_{i+1}$, give $$\begin{aligned} |f(x) - \hat{f}_m (x)| &\leq |f(x_i^*) - \hat{f}_m(x_i^*)| + |R_i(x)| + K |x_i^* - \chi_i|^\gamma |x-x_i^*|\\ & \leq 2 \|R_i\|_{L_\infty(\nu_0)} + K |x_i^* - \chi_i|^\gamma |x-x_i^*| \end{aligned}$$ whenever $x \in J_i$ and $x > x_i^*$ (the case $x < x_i^*$ is handled similarly using the left derivative of $\hat f_m$ and $\chi_i \in J_{i-1} \cup J_i$). For the remaining cases, consider for example $i = 2$. Then $\hat{f}_m(x)$ is bounded by the minimum and the maximum of $f$ on $J_2 \cup J_3$, hence, by the intermediate value theorem, $\hat{f}_m(x) = f(\tau)$ for some $\tau \in J_2 \cup J_3$. Since $f'$ is bounded by $C = 2M +K$, one has $|f(x) - \hat{f}_m(x)| \leq C|x-\tau|$.
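To get a concrete feel for how the estimates of Lemma \[lemma:ch4bounds\] translate into rates, the following sketch uses a hypothetical setup: $\nu_0$ the Lebesgue measure on $[0,1]$ (so that $x_i^*$ is the midpoint of $J_i$), a smooth $f$ bounded away from zero ($\gamma=1$), and $\hat f_m$ the piecewise-linear interpolation of the cell averages $\nu(J_i)/\nu_0(J_i)$, extended constantly outside $[x_1^*,x_m^*]$.

```python
import numpy as np

# hat f_m: piecewise-linear interpolation of cell averages over J_i = ((i-1)/m, i/m],
# with nodes at the midpoints x_i* (where int_{J_i}(x - x_i*) dx = 0).
def hat_f_m(f, m, x):
    centers = (np.arange(m) + 0.5) / m
    edges = np.linspace(0.0, 1.0, m + 1)
    avgs = np.array([f(np.linspace(a, b, 2001)).mean()   # ~ nu(J_i)/nu_0(J_i)
                     for a, b in zip(edges[:-1], edges[1:])])
    return np.interp(x, centers, avgs)  # constant outside [x_1*, x_m*]

f = lambda x: 2.0 + np.sin(2 * np.pi * x)  # smooth, bounded away from 0
x = np.linspace(0.0, 1.0, 40_001)
errs = {m: np.sqrt(((f(x) - hat_f_m(f, m, x)) ** 2).mean()) for m in (8, 16, 32)}
print(errs)  # shrinks by roughly 2^{3/2} each time m doubles (edge cells dominate)
```

The observed $m^{-3/2}$-type decay matches the finite-case rate derived in the examples below for $\gamma=1$.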
\[lemma:ch4abc\] With the same notations as in Lemma \[lemma:ch4bounds\], the estimates for $A_m^2(f)$, $B_m^2(f)$ and $L_2(f, \hat{f}_m)^2$ are as follows: $$\begin{aligned} L_2(f, \hat{f}_m)^2&\leq \frac{1}{4\kappa} \bigg( \sum_{i=3}^m \int_{J_i} \Big(2 \|R_i\|_{L_\infty(\nu_0)} + K |x_i^*-\eta_i|^\gamma|x-x_i^*|\Big)^2 \nu_0(dx) \\ &\phantom{=}\ + C^2 \Big(\int_{J_2}|x-\tau_2|^2\nu_0(dx) + \int_{J_m}|x-\tau_m|^2\nu_0(dx)\Big)\bigg).\\ A_m^2(f) &= L_2\big(\sqrt{f}, \widehat{\sqrt{f}}_m\big)^2 = O\Big(L_2(f, \hat{f}_m)^2\Big)\\ B_m^2(f) &= O\bigg( \sum_{i=2}^{m} \frac{1}{\sqrt{\kappa}} \nu_0(J_i) (2 \sqrt{M} + 1)^2 \|R_i\|_{L_\infty(\nu_0)}^2\bigg). \end{aligned}$$ The $L_2$-bound is now a straightforward application of Lemmas \[lemma:ch4hellinger\] and \[lemma:ch4bounds\]. The one on $A_m(f)$ follows, since if $f \in {\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^I$ then $\sqrt{f} \in {\ensuremath {\mathscr{F}}}_{(\gamma, \frac{K}{\sqrt{\kappa}}, \sqrt{\kappa}, \sqrt{M})}^I$. In order to bound $B_m^2(f)$, write it as: $$B_m^2(f)=\sum_{j=1}^m \nu_0(J_j)\bigg(\frac{\int_{J_j}\sqrt{f(y)}\nu_0(dy)}{\nu_0(J_j)}-\sqrt{\frac{\nu(J_j)}{\nu_0(J_j)}}\bigg)^2=:\sum_{j=1}^m \nu_0(J_j)E_j^2.$$ By the triangle inequality, let us bound $E_j$ by $F_j+G_j$ where: $$F_j=\bigg|\sqrt{\frac{\nu(J_j)}{\nu_0(J_j)}}-\sqrt{f(x_j^*)}\bigg| \quad \textnormal{ and }\quad G_j=\bigg|\sqrt{f(x_j^*)}-\frac{\int_{J_j}\sqrt{f(y)}\nu_0(dy)}{\nu_0(J_j)}\bigg|.$$ Using the same trick as in the proof of Lemma \[lemma:ch4hellinger\], we can bound: $$\begin{aligned} F_j \leq 2 \sqrt{M} \bigg|\frac{\int_{J_j} \big(f(x)-f(x_j^*)\big)\nu_0(dx)}{\nu_0(J_j)}\bigg| \leq 2 \sqrt{M} \|R_j\|_{L_\infty(\nu_0)}.
\end{aligned}$$ On the other hand, $$\begin{aligned} G_j&=\frac{1}{\nu_0(J_j)}\bigg|\int_{J_j}\big(\sqrt{f(x_j^*)}-\sqrt{f(y)}\big)\nu_0(dy)\bigg|\\ &=\frac{1}{\nu_0(J_j)}\bigg|\int_{J_j}\bigg(\frac{f'(x_j^*)}{2\sqrt{f(x_j^*)}}(y-x_j^*)+\tilde R_j(y)\bigg)\nu_0(dy)\bigg| \leq \|\tilde R_j\|_{L_\infty(\nu_0)}, \end{aligned}$$ which has the same magnitude as $\frac{1}{\kappa}\|R_j\|_{L_\infty(\nu_0)}$. Observe that when $\nu_0$ is finite, there is no need for a special definition of $\hat{f}_m$ near $0$, and all the estimates in Lemma \[lemma:ch4bounds\] hold true replacing every occurrence of $i = 2$ by $i = 1$. \[rmk:nonlinear\] The same computations as in Lemmas \[lemma:ch4bounds\] and \[lemma:ch4abc\] can be adapted to the general case where the $V_j$’s (and hence $\hat f_m$) are not piecewise linear. In the general case, the Taylor expansion of $\hat f_m$ at $x_i^*$ involves a remainder term as well, say $\hat R_i$, which needs to be bounded, too. Proofs of Examples \[ex:ch4esempi\] {#subsec:esempi} ----------------------------------- In the following, we collect the details of the proofs of Examples \[ex:ch4esempi\]. **1. The finite case:** $\nu_0\equiv {\ensuremath{\textnormal{Leb}}}([0,1])$.
Remark that in the case where $\nu_0$ is finite there are no convergence problems near zero, so we can consider the simpler approximation of $f$: $$\hat f_m(x):= \begin{cases} m\theta_1 & \textnormal{if } x\in \big[0,x_1^*\big],\\ m^2\big[\theta_{j+1}(x-x_j^*)+\theta_j(x_{j+1}^*-x)\big] & \textnormal{if } x\in (x_j^*,x_{j+1}^*] \quad j = 1,\dots,m-1,\\ m\theta_m & \textnormal{if } x\in (x_m^*,1] \end{cases}$$ where $$x_j^*=\frac{2j-1}{2m},\quad J_j=\Big(\frac{j-1}{m},\frac{j}{m}\Big],\quad \theta_j=\int_{J_j}f(x)dx, \quad j=1,\dots,m.$$ In this case we take $\varepsilon_m = 0$ and Conditions $(C2)$ and $(C2')$ coincide: $$\lim_{n\to\infty}n\Delta_n\sup_{f\in {\ensuremath {\mathscr{F}}}}\Big(A_m^2(f)+B_m^2(f)\Big) = 0.$$ Applying Lemma \[lemma:ch4abc\], we get $$\sup_{f\in {\ensuremath {\mathscr{F}}}} \Big(L_2(f,\hat f_m)+ A_m(f)+ B_m(f)\Big)= O\big(m^{-\frac{3}{2}}+m^{-1-\gamma}\big);$$ (actually, each of the three terms on the left-hand side has the same rate of convergence). **2. The finite variation case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-1}{\ensuremath {\mathbb{I}}}_{[0,1]}(x).$ To prove that the standard choice of $V_j$ described at the beginning of Examples \[ex:ch4esempi\] leads to $\displaystyle{\int_{\varepsilon_m}^1 V_j(x)\frac{dx}{x}=1}$, it is enough to prove that this integral is independent of $j$, since in general $\displaystyle{\int_{\varepsilon_m}^1 \sum_{j=2}^m V_j(x)\frac{dx}{x}=m-1}.$ To that aim, observe that, for $j=3,\dots,m-1$, $$\mu_m\int_{\varepsilon_m}^1 V_j(x)\nu_0(dx)=\int_{x_{j-1}^*}^{x_j^*}\frac{x-x_{j-1}^*}{x_j^*-x_{j-1}^*}\frac{dx}{x}+\int_{x_j^*}^{x_{j+1}^*}\frac{x_{j+1}^*-x}{x_{j+1}^*-x_j^*}\frac{dx}{x}.$$ Let us show that the first summand does not depend on $j$.
We have $$\int_{x_{j-1}^*}^{x_j^*}\frac{dx}{x_j^*-x_{j-1}^*}=1\quad \textnormal{and}\quad -\frac{x_{j-1}^*}{x_j^*-x_{j-1}^*}\int_{x_{j-1}^*}^{x_j^*}\frac{dx}{x}=\frac{x_{j-1}^*}{x_j^*-x_{j-1}^*}\ln\Big(\frac{x_{j-1}^*}{x_j^*}\Big).$$ Since $x_j^*=\frac{v_j-v_{j-1}}{\mu_m}$ and $v_j=\varepsilon_m^{\frac{m-j}{m-1}}$, the quantities $\frac{x_j^*}{x_{j-1}^*}$ and, hence, $\frac{x_{j-1}^*}{x_j^*-x_{j-1}^*}$ do not depend on $j$. The second summand and the trapezoidal functions $V_2$ and $V_m$ are handled similarly. Thus, $\hat f_m$ can be chosen of the form $$\hat f_m(x):= \begin{cases} \quad 1 & \textnormal{if } x\in \big[0,\varepsilon_m\big],\\ \frac{\nu(J_2)}{\mu_m} & \textnormal{if } x\in \big(\varepsilon_m, x_2^*\big],\\ \frac{1}{x_{j+1}^*-x_j^*}\bigg[\frac{\nu(J_{j+1})}{\mu_m}(x-x_j^*)+\frac{\nu(J_{j})}{\mu_m}(x_{j+1}^*-x)\bigg] & \textnormal{if } x\in (x_j^*,x_{j+1}^*] \quad j = 2,\dots,m-1,\\ \frac{\nu(J_m)}{\mu_m} & \textnormal{if } x\in (x_m^*,1]. \end{cases}$$ A straightforward application of Lemmas \[lemma:ch4bounds\] and \[lemma:ch4abc\] gives $$\sqrt{\int_{\varepsilon_m}^1\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx)} +A_m(f)+B_m(f)=O\bigg(\bigg(\frac{\ln m}{m}\bigg)^{\gamma+1} \sqrt{\ln (\varepsilon_m^{-1})}\bigg),$$ as announced. **3. The infinite variation, non-compactly supported case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-2}{\ensuremath {\mathbb{I}}}_{{\ensuremath {\mathbb{R}}}_+}(x)$. Recall that we want to prove that $$L_2(f,\hat f_m)^2+A_m^2(f)+B_m^2(f)=O\bigg(\frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2\gamma}}+\sup_{x\geq H(m)}\frac{f(x)^2}{H(m)}\bigg),$$ for any given sequence $H(m)$ going to infinity as $m\to\infty$. Let us start by addressing the problem that the triangular/trapezoidal choice for $V_j$ is not feasible.
Introduce the following notation: $V_j = {\ensuremath {\accentset{\triangle}{V}}}_j + A_j$, $j = 2, \dots, m$, where the ${\ensuremath {\accentset{\triangle}{V}}}_j$’s are triangular/trapezoidal functions similar to those in . The difference is that here, since $x_m^*$ is not defined, ${\ensuremath {\accentset{\triangle}{V}}}_{m-1}$ is a trapezoid, linear between $x_{m-2}^*$ and $x_{m-1}^*$ and constantly equal to $\frac{1}{\mu_m}$ on $[x_{m-1}^*,v_{m-1}]$, and ${\ensuremath {\accentset{\triangle}{V}}}_m$ is supported on $[v_{m-1},\infty)$, where it is constantly equal to $\frac{1}{\mu_m}$. Each $A_j$ is chosen so that: 1. It is supported on $[x_{j-1}^*, x_{j+1}^*]$ (unless $j = 2$, $j = m-1$ or $j = m$; in the first case the support is $[x_2^*, x_3^*]$, in the second one it is $[x_{m-2}^*, x_{m-1}^*]$, and $A_m \equiv 0$); 2. ${A_j}$ coincides with $-A_{j-1}$ on $[x_{j-1}^*, x_j^*]$, $j = 3, \dots, m-1$ (so that $\sum V_j \equiv \frac{1}{\mu_m}$) and its first derivative is bounded (in absolute value) by $\frac{1}{\mu_m(x_j^* - x_{j-1}^*)}$ (so that $V_j$ is non-negative and bounded by $\frac{1}{\mu_m}$); 3. $A_j$ vanishes, together with its first derivative, at $x_{j-1}^*$, $x_j^*$ and $x_{j+1}^*$. We claim that these conditions are sufficient to ensure that $\hat f_m$ converges to $f$ quickly enough. First of all, by Remark \[rmk:nonlinear\], we observe that, to have a good bound on $L_2(f, \hat f_m)$, the crucial property of $\hat f_m$ is that its first right (resp. left) derivative has to be equal to $\frac{1}{\mu_m(x_{j+1}^*-x_j^*)}$ (resp. $\frac{1}{\mu_m(x_{j}^*-x_{j-1}^*)}$) and its second derivative has to be small enough (for example, so that the remainder $\hat R_j$ is as small as the remainder $R_j$ of $f$ already appearing in Lemma \[lemma:ch4bounds\]).
The (say) left derivatives in $x_j^*$ of $\hat f_m$ are given by $$\hat f_m'(x_j^*) = \big({\ensuremath {\accentset{\triangle}{V}}}_j'(x_j^*) + A_j'(x_j^*)\big) \big(\nu(J_j)-\nu(J_{j-1})\big); \quad \hat f_m''(x_j^*) = A_j''(x_j^*)\big(\nu(J_j)-\nu(J_{j-1})\big).$$ Then, in order to bound $|\hat f_m''(x_j^*)|$ it is enough to bound $|A_j''(x_j^*)|$ because: $$\big|\hat f_m''(x_j^*)\big| \leq |A_j''(x_j^*)| \Big|\int_{J_j} f(x) \frac{dx}{x^2} - \int_{J_{j-1}} f(x) \frac{dx}{x^2}\Big| \leq |A_j''(x_j^*)| \displaystyle{\sup_{x\in I}}|f'(x)|(\ell_{j}+\ell_{j-1}) \mu_m,$$ where $\ell_{j}$ is the Lebesgue measure of $J_{j}$. We are thus left to show that we can choose the $A_j$’s satisfying points 1-3, with a small enough second derivative, and such that $\int_I V_j(x) \frac{dx}{x^2} = 1$. To make computations easier, we will make the following explicit choice: $$A_j(x) = b_j (x-x_j^*)^2 (x-x_{j-1}^*)^2 \quad \forall x \in [x_{j-1}^*, x_j^*),$$ for some $b_j$ depending only on $j$ and $m$ (the definitions on $[x_j^*, x_{j+1}^*)$ are uniquely determined by the condition $A_j + A_{j+1} \equiv 0$ there). Define $j_{\max}$ as the index such that $H(m) \in J_{j_{\max}}$; it is straightforward to check that $$j_{\max} \sim m- \frac{\varepsilon_m(m-1)}{H(m)}; \quad x_{m-k}^* = \varepsilon_m(m-1) \log \Big(1+\frac{1}{k}\Big), \quad k = 1, \dots, m-2.$$ One may compute the following Taylor expansions: $$\begin{aligned} \int_{x_{m-k-1}^*}^{x_{m-k}^*} {\ensuremath {\accentset{\triangle}{V}}}_{m-k}(x) \nu_0(dx) &= \frac{1}{2} - \frac{1}{6k} + \frac{5}{24k^2} + O\Big(\frac{1}{k^3}\Big);\\ \int_{x_{m-k}^*}^{x_{m-k+1}^*} {\ensuremath {\accentset{\triangle}{V}}}_{m-k}(x) \nu_0(dx) &= \frac{1}{2} + \frac{1}{6k} + \frac{1}{24k^2} + O\Big(\frac{1}{k^3}\Big). 
\end{aligned}$$ In particular, for $m \gg 0$ and $m-k \leq j_{\max}$, so that also $k \gg 0$, all the integrals $\int_{x_{j-1}^*}^{x_{j+1}^*} {\ensuremath {\accentset{\triangle}{V}}}_j(x) \nu_0(dx)$ are bigger than 1 (it is immediate to see that the same is true for ${\ensuremath {\accentset{\triangle}{V}}}_2$, as well). From now on we will fix a $k \geq \frac{\varepsilon_m m}{H(m)}$ and let $j = m-k$. Summing together the conditions $\int_I V_i(x)\nu_0(dx)=1$ $\forall i>j$ and noticing that the function $\sum_{i = j}^m V_i$ is constantly equal to $\frac{1}{\mu_m}$ on $[x_j^*,\infty)$, we have: $$\begin{aligned} \int_{x_{j-1}^*}^{x_j^*} A_j(x) \nu_0(dx) &= m-j+1 - \frac{1}{\mu_m} \nu_0([x_j^*, \infty)) - \int_{x_{j-1}^*}^{x_j^*} {\ensuremath {\accentset{\triangle}{V}}}_j(x) \nu_0(dx)\\ &= k+1- \frac{1}{\log(1+\frac{1}{k})} - \frac{1}{2} + \frac{1}{6k} + O\Big(\frac{1}{k^2}\Big) = \frac{1}{4k} + O\Big(\frac{1}{k^2}\Big). \end{aligned}$$ Our choice of $A_j$ allows us to compute this integral explicitly: $$\int_{x_{j-1}^*}^{x_j^*} b_j (x-x_{j-1}^*)^2(x-x_j^*)^2 \frac{dx}{x^2} = b_j \big(\varepsilon_m (m-1)\big)^3 \Big(\frac{2}{3} \frac{1}{k^4} + O\Big(\frac{1}{k^5}\Big)\Big).$$ In particular one gets that asymptotically $$b_j \sim \frac{1}{(\varepsilon_m(m-1))^3} \frac{3}{2} k^4 \frac{1}{4k} \sim \bigg(\frac{k}{\varepsilon_m m}\bigg)^3.$$ This immediately allows us to bound the first-order derivative of $A_j$ as required in point 2: indeed, it is bounded above by $2 b_j \ell_{j-1}^3$, where $\ell_{j-1}$ is again the length of $J_{j-1}$, namely $\ell_{j-1} = \frac{\varepsilon_m(m-1)}{k(k+1)} \sim \frac{\varepsilon_m m}{k^2}$. It follows that for $m$ big enough: $$\displaystyle{\sup_{x\in I}|A_j'(x)|} \leq \frac{1}{k^3} \ll \frac{1}{\mu_m(x_j^*-x_{j-1}^*)} \sim \bigg(\frac{k}{\varepsilon_m m}\bigg)^2.$$ The second-order derivative of $A_j(x)$ can be easily computed to be bounded by $4 b_j \ell_j^2$.
Note also that the conditions that $|f|$ is bounded by $M$ and that $f'$ is Hölder, say $|f'(x) - f'(y)| \leq K |x-y|^\gamma$, together give a uniform $L_\infty$ bound of $2M + K$ on $|f'|$. Summing up, we obtain: $$|\hat f_m''(x_j^*)| \lesssim b_j \ell_j^3 \mu_m \sim \frac{1}{k^3\varepsilon_m m}$$ (here and in the following we use the symbol $\lesssim$ to stress that we work up to constants and to higher order terms). The leading term of the remainder $\hat R_j$ of the Taylor expansion of $\hat f_m$ near $x_j^*$ is $$\hat f_m''(x_j^*) |x-x_j^*|^2 \sim |\hat f_m''(x_j^*)| \ell_j^2 \sim \frac{\varepsilon_m m}{k^7}.$$ Using Lemmas \[lemma:ch4bounds\] and \[lemma:ch4abc\] (taking into consideration Remark \[rmk:nonlinear\]) we obtain $$\begin{aligned} \int_{\varepsilon_m}^{\infty} |f(x) - \hat f_m(x)|^2 \nu_0(dx) &\lesssim \sum_{j=2}^{j_{\max}} \int_{J_j} |f(x) - \hat f_m(x)|^2 \nu_0(dx) + \int_{H(m)}^\infty |f(x)-\hat f_m(x)|^2 \nu_0(dx) \nonumber \\ &\lesssim \sum_{k=\frac{\varepsilon_m m}{H(m)}}^{m}\mu_m \bigg( \frac{(\varepsilon_m m)^{2+2\gamma}}{k^{4+4\gamma}} + \frac{(\varepsilon_m m)^2}{k^{14}}\bigg) + \frac{1}{H(m)}\sup_{x\geq H(m)}f(x)^2 \label{eq:xquadro} \\ &\lesssim \bigg(\frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2+2\gamma}} + \frac{H(m)^{13}}{(\varepsilon_m m)^{10}}\bigg) + \frac{1}{H(m)}. \nonumber \end{aligned}$$ It is easy to see that, since $0 < \gamma \leq 1$, as soon as the first term converges, it does so more slowly than the second one. Thus, an optimal choice for $H(m)$ is given by $\sqrt{\varepsilon_m m}$, which gives a rate of convergence: $$L_2(f,\hat f_m)^2 \lesssim \frac{1}{\sqrt{\varepsilon_m m}}.$$ This directly gives a bound on $H(f, \hat f_m)$. Also, the bound on the term $A_m(f)$, which is $L_2(\sqrt f,\widehat{\sqrt{f}}_m)^2$, follows as well, since $f \in {\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ implies $\sqrt{f} \in {\ensuremath {\mathscr{F}}}_{(\gamma, \frac{K}{\sqrt\kappa}, \sqrt \kappa, \sqrt M)}^I$.
Finally, the term $B_m^2(f)$ contributes with the same rates as those in : Using Lemma \[lemma:ch4abc\], $$\begin{aligned} B_m^2(f) &\lesssim \sum_{j=2}^{\lceil m-\frac{\varepsilon_m(m-1)}{H(m)} \rceil} \nu_0(J_j) \|R_j\|_{L_\infty}^2 + \nu_0([H(m), \infty))\\ &\lesssim \mu_m \sum_{k=\frac{\varepsilon_m (m-1)}{H(m)}}^m \Big(\frac{\varepsilon_m m}{k^2}\Big)^{2+2\gamma} + \frac{1}{H(m)}\\ &\lesssim \frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2+2\gamma}} + \frac{1}{H(m)}. \end{aligned}$$ Proof of Example \[ex:ch4CPP\] {#subsec:ch4ex1} ------------------------------ In this case, since $\varepsilon_m = 0$, the proofs of Theorems \[ch4teo1\] and \[ch4teo2\] simplify and give better estimates near zero, namely: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{\nu_0}) &\leq C_1 \bigg(\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\Big(A_m(f)+ B_m(f)+L_2(f,\hat f_m)\Big)+\sqrt{\frac{m^2}{T_n}}\bigg)\nonumber \\ \Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{\nu_0}) &\leq C_2\bigg(\sqrt{n\Delta_n^2}+\frac{m\ln m}{\sqrt{n}}+\sqrt{T_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}\Big( A_m(f)+ B_m(f)+H\big(f,\hat f_m\big)\Big) \bigg) \label{eq:CPP},\end{aligned}$$ where $C_1$, $C_2$ depend only on $\kappa,M$ and $$\begin{aligned} &A_m(f)=\sqrt{\int_0^1\Big(\widehat{\sqrt f}_m(y)-\sqrt{f(y)}\Big)^2dy},\quad B_m(f)=\sum_{j=1}^m\bigg(\sqrt m\int_{J_j}\sqrt{f(y)}dy-\sqrt{\theta_j}\bigg)^2.\end{aligned}$$ As a consequence we get: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{\nu_0})&\leq O\bigg(\sqrt{T_n}(m^{-\frac{3}{2}}+m^{-1-\gamma})+\sqrt{m^2T_n^{-1}}\bigg).\end{aligned}$$ To get the bounds in the statement of Example \[ex:ch4CPP\] the optimal choices are $m_n = T_n^{\frac{1}{2+\gamma}}$ when $\gamma \leq \frac{1}{2}$ and $m_n = T_n^{\frac{2}{5}}$ otherwise. 
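As a quick numerical sanity check of this balancing (not part of the proof), one can minimise the upper bound $\sqrt{T_n}\,(m^{-3/2}+m^{-1-\gamma})+m/\sqrt{T_n}$ over $m$ and compare the minimiser with the predicted scale $T_n^{1/(2+\gamma)}$. The values of $T$ and $\gamma$ below are arbitrary illustrative choices.

```python
import math

# Minimize the bound sqrt(T) * (m^(-3/2) + m^(-1-gamma)) + m / sqrt(T)
# over integer m and compare the minimizer with the predicted scale T^(1/(2+gamma)).
T, gamma = 1e6, 0.4  # illustrative values, with gamma <= 1/2

def bound(m):
    return math.sqrt(T) * (m ** -1.5 + m ** (-1 - gamma)) + m / math.sqrt(T)

m_star = min(range(1, 20000), key=bound)   # brute-force minimizer
predicted = T ** (1 / (2 + gamma))         # the claimed optimal scale, ~316 here
```

Up to a constant factor (the bound is only asymptotic), the brute-force minimiser agrees with the predicted exponent.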
Concerning the discrete model, we have: $$\begin{aligned} \Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{\nu_0})&\leq O\bigg(\sqrt{n\Delta_n^2}+\frac{m\ln m}{\sqrt{n}}+ \sqrt{n\Delta_n}\big(m^{-\frac{3}{2}}+m^{-1-\gamma}\big)\bigg).\end{aligned}$$ There are four possible scenarios: If $\gamma>\frac{1}{2}$ and $\Delta_n=n^{-\beta}$ with $\frac{1}{2}<\beta<\frac{3}{4}$ (resp. $\beta\geq \frac{3}{4}$) then the optimal choice is $m_n=n^{1-\beta}$ (resp. $m_n=n^{\frac{2-\beta}{5}}$). If $\gamma\leq\frac{1}{2}$ and $\Delta_n=n^{-\beta}$ with $\frac{1}{2}<\beta<\frac{2+2\gamma}{3+2\gamma}$ (resp. $\beta\geq \frac{2+2\gamma}{3+2\gamma}$) then the optimal choice is $m_n=n^{\frac{2-\beta}{4+2\gamma}}$ (resp. $m_n=n^{1-\beta}$). Proof of Example \[ch4ex2\] {#subsec:ch4ex2} --------------------------- As in Example \[ex:ch4esempi\], we let $\varepsilon_m=m^{-1-\alpha}$ and consider the standard triangular/trapezoidal $V_j$’s. In particular, $\hat f_m$ will be piecewise linear. Condition (C2’) is satisfied and we have $C_m(f)=O(\varepsilon_m)$. This bound, combined with the one obtained in , allows us to conclude that an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})$ is given by: $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})\leq C \bigg(\sqrt{\sqrt{n^2\Delta_n}\varepsilon_m}+\sqrt{n\Delta_n}\Big(\frac{\ln (\varepsilon_m^{-1})}{m}\Big)^{2}+\frac{m\ln m}{\sqrt n}+\sqrt{n\Delta_n^2}\ln (\varepsilon_m^{-1}) \bigg),$$ where $C$ is a constant depending only on the bound on $\lambda > 0$. The sequences $\varepsilon_m$ and $m$ can be chosen arbitrarily to optimize the rate of convergence.
It is clear from the expression above that, if we take $\varepsilon_m = m^{-1-\alpha}$ with $\alpha > 0$, bigger values of $\alpha$ reduce the first term $\sqrt{\sqrt{n^2\Delta_n}\varepsilon_m}$, while changing the other terms only by constants. It can be seen that taking $\alpha \geq 15$ is enough to make the first term negligible with respect to the others. In that case, and under the assumption $\Delta_n = n^{-\beta}$, the optimal choice for $m$ is $m = n^\delta$ with $\delta = \frac{5-4\beta}{14}$. With these choices, the global rate of convergence is $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2}-\beta} \ln n\big) & \text{if } \frac{1}{2} < \beta \leq \frac{9}{10}\\ O\big(n^{-\frac{1+2\beta}{7}} \ln n\big) & \text{if } \frac{9}{10} < \beta < 1. \end{cases}$$ In the same way one can find $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\bigg( \sqrt{n\Delta_n} \Big(\frac{\ln m}{m}\Big)^2 \sqrt{\ln(\varepsilon_m^{-1})} + \sqrt{\frac{m^2}{n\Delta_n \ln(\varepsilon_m^{-1})}} + \sqrt{n \Delta_n} \varepsilon_m \bigg).$$ As above, we can freely choose $\varepsilon_m$ and $m$ (in a possibly different way from above). Again, as soon as $\varepsilon_m = m^{-1-\alpha}$ with $\alpha \geq 1$ the third term plays no role, so that we can choose $\varepsilon_m = m^{-2}$.
Letting $\Delta_n = n^{-\beta}$, $0 < \beta < 1$, and $m = n^\delta$, an optimal choice is $\delta = \frac{1-\beta}{3}$, giving $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\Big(n^{\frac{\beta-1}{6}} \big(\ln n\big)^{\frac{5}{2}}\Big) = O\Big(T_n^{-\frac{1}{6}} \big(\ln T_n\big)^\frac{5}{2}\Big).$$ Proof of Example \[ex3\] {#subsec:ch4ex3} ------------------------ Using the computations in , combined with $\big(f(y)-\hat f_m(y)\big)^2\leq 4 \exp(-2\lambda_0 y^3) \leq 4 \exp(-2\lambda_0 H(m)^3)$ for all $y \geq H(m)$, we obtain: $$\begin{aligned} \int_{\varepsilon_m}^\infty \big|f(x) - \hat f_m(x)\big|^2 \nu_0(dx) &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \int_{H(m)}^\infty \big|f(x) - \hat f_m(x)\big|^2 \nu_0(dx)\\ &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \frac{e^{-2\lambda_0 H(m)^3}}{H(m)}. \end{aligned}$$ As in Example \[ex:ch4esempi\], this directly bounds $H^2(f, \hat f_m)$ and $A_m^2(f)$. Again, the first part of the integral appearing in $B_m^2(f)$ is asymptotically smaller than the one appearing above: $$\begin{aligned} B_m^2(f) &= \sum_{j=1}^m \bigg(\frac{1}{\sqrt{\mu_m}} \int_{J_j} \sqrt{f} \nu_0 - \sqrt{\int_{J_j} f(x) \nu_0(dx)}\bigg)^2\\ &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \sum_{k=1}^{\frac{\varepsilon_m m}{H(m)}} \bigg( \frac{1}{\sqrt{\mu_m}} \int_{J_{m-k}} \sqrt{f} \nu_0 - \sqrt{\int_{J_{m-k}} f(x) \nu_0(dx)}\bigg)^2\\ &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \frac{e^{-\lambda_0 H(m)^3}}{H(m)}. \end{aligned}$$ As above, for the last inequality we have bounded $f$ in each $J_{m-k}$, $k \leq \frac{\varepsilon_m m}{H(m)}$, by $\exp(-\lambda_0 H(m)^3)$. Thus the global rate of convergence of $L_2(f,\hat f_m)^2 + A_m^2(f) + B_m^2(f)$ is $\frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \frac{e^{-\lambda_0 H(m)^3}}{H(m)}$. Concerning $C_m(f)$, we have $C_m^2(f) = \int_0^{\varepsilon_m} \frac{(\sqrt{f(x)} - 1)^2}{x^2} dx \lesssim \varepsilon_m^5$.
To write the global rate of convergence of the Le Cam distance in the discrete setting we make the choice $H(m) = \sqrt[3]{\frac{\eta}{\lambda_0}\ln m}$, for some constant $\eta$, and obtain: $$\begin{aligned} \Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) &= O \bigg( \frac{\sqrt{n} \Delta_n}{\varepsilon_m} + \frac{m \ln m}{\sqrt{n}} + \sqrt{n \Delta_n} \Big( \frac{(\ln m)^{\frac{7}{6}}}{(\varepsilon_m m)^2} + \frac{m^{-\frac{\eta}{2}}}{\sqrt[3]{\ln m}} \Big) + \sqrt[4]{n^2 \Delta_n \varepsilon_m^5}\bigg). \end{aligned}$$ Letting $\Delta_n = n^{-\beta}$, $\varepsilon_m = n^{-\alpha}$ and $m = n^\delta$, optimal choices give $\alpha = \frac{\beta}{3}$ and $\delta = \frac{1}{3}+\frac{\beta}{18}$. We can also take $\eta = 2$ to get a final rate of convergence: $$\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2} - \frac{2}{3}\beta}\big)& \text{if } \frac{3}{4} < \beta < \frac{12}{13}\\ O\big(n^{-\frac{1}{6}+\frac{\beta}{18}} (\ln n)^{\frac{7}{6}}\big) &\text{if } \frac{12}{13} \leq \beta < 1. 
\end{cases}$$ In the continuous setting, we have $$\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\bigg(\sqrt{n\Delta_n} \Big( \frac{(\ln m)^\frac{7}{6}}{(\varepsilon_m m)^2} + \frac{m^{-\frac{\eta}{2}}}{\sqrt[3]{\ln m}} + \varepsilon_m^{\frac{5}{2}}\Big) + \sqrt{\frac{\varepsilon_m m^2}{n\Delta_n}} \bigg).$$ Using $T_n = n\Delta_n$, $\varepsilon_m = T_n^{-\alpha}$ and $m = T_n^\delta$, optimal choices are given by $\alpha = \frac{4}{17}$, $\delta = \frac{9}{17}$; choosing any $\eta \geq 3$ we get the rate of convergence $$\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\big(T_n^{-\frac{3}{34}} (\ln T_n)^{\frac{7}{6}}\big).$$ Background ========== Le Cam theory of statistical experiments {#sec:ch4lecam} ---------------------------------------- A *statistical model* or *experiment* is a triplet ${\ensuremath {\mathscr{P}}}_j=({\ensuremath {\mathscr{X}}}_j,{\ensuremath {\mathscr{A}}}_j,\{P_{j,\theta}; \theta\in\Theta\})$ where $\{P_{j,\theta}; \theta\in\Theta\}$ is a family of probability distributions all defined on the same $\sigma$-field ${\ensuremath {\mathscr{A}}}_j$ over the *sample space* ${\ensuremath {\mathscr{X}}}_j$ and $\Theta$ is the *parameter space*. The *deficiency* $\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$ of ${\ensuremath {\mathscr{P}}}_1$ with respect to ${\ensuremath {\mathscr{P}}}_2$ quantifies “how much information we lose” by using ${\ensuremath {\mathscr{P}}}_1$ instead of ${\ensuremath {\mathscr{P}}}_2$ and it is defined as $\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=\inf_K\sup_{\theta\in \Theta}||KP_{1,\theta}-P_{2,\theta}||_{TV},$ where TV stands for “total variation” and the infimum is taken over all “transitions” $K$ (see [@lecam], page 18). The general definition of transition is quite involved but, for our purposes, it is enough to know that Markov kernels are special cases of transitions. 
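On finite sample spaces the objects just introduced become concrete: a transition given by a Markov kernel is a row-stochastic matrix, the image measure of a law under it is a vector–matrix product, and the total variation distance is half the $L_1$ distance. The following sketch (with purely illustrative numbers) is a minimal instance of this.

```python
def image_measure(p, K):
    """Image of a finite law p under a Markov kernel K, where K[x][y] = K(x, {y})
    and every row of K sums to 1: (Kp)(y) = sum_x K(x, {y}) p(x)."""
    return [sum(p[x] * K[x][y] for x in range(len(p))) for y in range(len(K[0]))]

def tv(p, q):
    """Total variation distance between two finite laws (half the L1 distance)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# A deterministic kernel, i.e. the image experiment by the map S(0)=S(1)=0, S(2)=1
p1 = [0.2, 0.3, 0.5]
K = [[1.0, 0.0],
     [1.0, 0.0],
     [0.0, 1.0]]
kp1 = image_measure(p1, K)  # the randomized law on the two-point space
```

The deficiency itself is an infimum of such total variation distances over all transitions, which is in general much harder to compute than this single evaluation.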
By $KP_{1,\theta}$ we mean the image measure of $P_{1,\theta}$ via the Markov kernel $K$, that is $$KP_{1,\theta}(A)=\int_{{\ensuremath {\mathscr{X}}}_1}K(x,A)P_{1,\theta}(dx),\quad\forall A\in {\ensuremath {\mathscr{A}}}_2.$$ The experiment $K{\ensuremath {\mathscr{P}}}_1=({\ensuremath {\mathscr{X}}}_2,{\ensuremath {\mathscr{A}}}_2,\{KP_{1,\theta}; \theta\in\Theta\})$ is called a *randomization* of ${\ensuremath {\mathscr{P}}}_1$ by the Markov kernel $K$. When the kernel $K$ is deterministic, that is $K(x,A)={\ensuremath {\mathbb{I}}}_{A}(S(x))$ for some random variable $S:({\ensuremath {\mathscr{X}}}_1,{\ensuremath {\mathscr{A}}}_1)\to({\ensuremath {\mathscr{X}}}_2,{\ensuremath {\mathscr{A}}}_2)$, the experiment $K{\ensuremath {\mathscr{P}}}_1$ is called the *image experiment by the random variable* $S$. The Le Cam distance is defined as the symmetrization of $\delta$, namely $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=\max\big(\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2),\delta({\ensuremath {\mathscr{P}}}_2,{\ensuremath {\mathscr{P}}}_1)\big)$, and it defines a pseudometric. When $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=0$ the two statistical models are said to be *equivalent*. Two sequences of statistical models $({\ensuremath {\mathscr{P}}}_{1}^n)_{n\in{\ensuremath {\mathbb{N}}}}$ and $({\ensuremath {\mathscr{P}}}_{2}^n)_{n\in{\ensuremath {\mathbb{N}}}}$ are called *asymptotically equivalent* if $\Delta({\ensuremath {\mathscr{P}}}_{1}^n,{\ensuremath {\mathscr{P}}}_{2}^n)$ tends to zero as $n$ goes to infinity. A very interesting feature of the Le Cam distance is that it can also be translated into the language of statistical decision theory. Let ${\ensuremath {\mathscr{D}}}$ be any (measurable) decision space and let $L:\Theta\times {\ensuremath {\mathscr{D}}}\to[0,\infty)$ denote a loss function. Let $\|L\|=\sup_{(\theta,z)\in\Theta\times{\ensuremath {\mathscr{D}}}}L(\theta,z)$. Let $\pi_i$ denote a (randomized) decision procedure in the $i$-th experiment. Denote by $R_i(\pi_i,L,\theta)$ the risk from using procedure $\pi_i$ when $L$ is the loss function and $\theta$ is the true value of the parameter.
Then, an equivalent definition of the deficiency is: $$\begin{aligned} \delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=\inf_{\pi_1}\sup_{\pi_2}\sup_{\theta\in\Theta}\sup_{L:\|L\|=1}\big|R_1(\pi_1,L,\theta)-R_2(\pi_2,L,\theta)\big|.\end{aligned}$$ Thus $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)<\varepsilon$ means that for every procedure $\pi_i$ in problem $i$ there is a procedure $\pi_j$ in problem $j$, $\{i,j\}=\{1,2\}$, with risks differing by at most $\varepsilon$, uniformly over all bounded $L$ and $\theta\in\Theta$. In particular, when minimax rates of convergence in a nonparametric estimation problem are obtained in one experiment, the same rates automatically hold in any asymptotically equivalent experiment. There is more: When explicit transformations from one experiment to another are obtained, statistical procedures can be carried over from one experiment to the other one. There are various techniques to bound the Le Cam distance. We report below only the properties that are useful for our purposes. For the proofs see, e.g., [@lecam; @strasser]. \[ch4delta0\] Let ${\ensuremath {\mathscr{P}}}_j=({\ensuremath {\mathscr{X}}},{\ensuremath {\mathscr{A}}},\{P_{j,\theta}; \theta\in\Theta\})$, $j=1,2$, be two statistical models having the same sample space and define $\Delta_0({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2):=\sup_{\theta\in\Theta}\|P_{1,\theta}-P_{2,\theta}\|_{TV}.$ Then, $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)\leq \Delta_0({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$. In particular, Property \[ch4delta0\] allows us to bound the Le Cam distance between statistical models sharing the same sample space by means of classical bounds for the total variation distance. To that aim, we collect below some useful results. 
\[ch4h\] Let $P_1$ and $P_2$ be two probability measures on ${\ensuremath {\mathscr{X}}}$, dominated by a common measure $\xi$, with densities $g_{i}=\frac{dP_{i}}{d\xi}$, $i=1,2$. Define $$\begin{aligned} L_1(P_1,P_2)&=\int_{{\ensuremath {\mathscr{X}}}} |g_{1}(x)-g_{2}(x)|\xi(dx), \\ H(P_1,P_2)&=\bigg(\int_{{\ensuremath {\mathscr{X}}}} \Big(\sqrt{g_{1}(x)}-\sqrt{g_{2}(x)}\Big)^2\xi(dx)\bigg)^{1/2}. \end{aligned}$$ Then, $$\|P_1-P_2\|_{TV}=\frac{1}{2}L_1(P_1,P_2)\leq H(P_1,P_2).$$ \[ch4hp\] Let $P$ and $Q$ be two product measures defined on the same sample space: $P=\otimes_{i=1}^n P_i$, $Q=\otimes_{i=1}^n Q_i$. Then $$H ^2(P,Q)\leq \sum_{i=1}^nH^2(P_i,Q_i).$$ \[fact:ch4hellingerpoisson\] Let $P_i$, $i=1,2$, be the law of a Poisson random variable with mean $\lambda_i$. Then $$H^2(P_1,P_2)=1-\exp\bigg(-\frac{1}{2}\Big(\sqrt{\lambda_1}-\sqrt{\lambda_2}\Big)^2\bigg).$$ \[fact:ch4gaussiane\] Let $Q_1\sim{\ensuremath {\mathscr{Nn}}}(\mu_1,\sigma_1^2)$ and $Q_2\sim{\ensuremath {\mathscr{Nn}}}(\mu_2,\sigma_2^2)$. Then $$\|Q_1-Q_2\|_{TV}\leq \sqrt{2\bigg(1-\frac{\sigma_1^2}{\sigma_2^2}\bigg)^2+\frac{(\mu_1-\mu_2)^2}{2\sigma_2^2}}.$$ \[fact:ch4processigaussiani\] For $i=1,2$, let $Q_i$ be the law on $(C,{\ensuremath {\mathscr{C}}})$ of two Gaussian processes of the form $$X^i_t=\int_{0}^t h_i(s)ds+ \int_0^t \sigma(s)dW_s,\ t\in[0,T]$$ where $h_i\in L_2({\ensuremath {\mathbb{R}}})$ and $\sigma:[0,T]\to{\ensuremath {\mathbb{R}}}_{>0}$. Then: $$L_1\big(Q_1,Q_2\big)\leq \sqrt{\int_{0}^T\frac{\big(h_1(s)-h_2(s)\big)^2}{\sigma^2(s)}ds}.$$ \[ch4fatto3\] Let ${\ensuremath {\mathscr{P}}}_i=({\ensuremath {\mathscr{X}}}_i,{\ensuremath {\mathscr{A}}}_i,\{P_{i,\theta}, \theta\in\Theta\})$, $i=1,2$, be two statistical models. Let $S:{\ensuremath {\mathscr{X}}}_1\to{\ensuremath {\mathscr{X}}}_2$ be a sufficient statistic such that the distribution of $S$ under $P_{1,\theta}$ is equal to $P_{2,\theta}$. Then $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=0$.
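The closed form for the Hellinger distance between Poisson laws stated above can be checked directly against the defining sum over the probability mass functions; in the following sketch the truncation point of the sum and the test intensities are arbitrary choices.

```python
import math

def poisson_log_pmf(k, lam):
    """Log of the Poisson pmf, computed in log-space for numerical stability."""
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

def hellinger2_poisson_numeric(lam1, lam2, kmax=500):
    """H^2(P1, P2) = 1 - sum_k sqrt(p1(k) * p2(k)), with the sum truncated at kmax."""
    bc = sum(math.exp(0.5 * (poisson_log_pmf(k, lam1) + poisson_log_pmf(k, lam2)))
             for k in range(kmax))
    return 1.0 - bc

def hellinger2_poisson_closed(lam1, lam2):
    """The closed form 1 - exp(-(sqrt(lam1) - sqrt(lam2))^2 / 2)."""
    return 1.0 - math.exp(-0.5 * (math.sqrt(lam1) - math.sqrt(lam2)) ** 2)
```

The agreement follows from $\sum_k \sqrt{p_1(k)p_2(k)} = e^{-(\lambda_1+\lambda_2)/2} e^{\sqrt{\lambda_1\lambda_2}}$, which is exactly $\exp(-\frac12(\sqrt{\lambda_1}-\sqrt{\lambda_2})^2)$.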
\[ch4independentkernels\] Let $P_i$ be a probability measure on $(E_i,\mathcal{E}_i)$ and $K_i$ a Markov kernel from $(E_i,\mathcal{E}_i)$ to $(G_i,\mathcal G_i)$. One can then define a Markov kernel $K$ from $(\prod_{i=1}^n E_i,\otimes_{i=1}^n \mathcal{E}_i)$ to $(\prod_{i=1}^n G_i,\otimes_{i=1}^n \mathcal{G}_i)$ in the following way: $$K(x_1,\dots,x_n; A_1\times\dots\times A_n):=\prod_{i=1}^nK_i(x_i,A_i),\quad \forall x_i\in E_i,\ \forall A_i\in \mathcal{G}_i.$$ Clearly $K\otimes_{i=1}^nP_i=\otimes_{i=1}^nK_iP_i$. Finally, we recall the following result that allows us to bound the Le Cam distance between Poisson and Gaussian variables. \[ch4teomisto\](See [@BC04], Theorem 4) Let $\tilde P_{\lambda}$ be the law of a Poisson random variable $\tilde X_{\lambda}$ with mean $\lambda$. Furthermore, let $P_{\lambda}^*$ be the law of a random variable $Z^*_{\lambda}$ with Gaussian distribution ${\ensuremath {\mathscr{Nn}}}(2\sqrt{\lambda},1)$, and let $\tilde U$ be a uniform variable on $\big[-\frac{1}{2},\frac{1}{2}\big)$ independent of $\tilde X_{\lambda}$. Define $$\tilde Z_{\lambda}=2\textnormal{sgn}\big(\tilde X_{\lambda}+\tilde U\big)\sqrt{\big|\tilde X_{\lambda}+\tilde U\big|}.$$ Then, denoting by $P_{\lambda}$ the law of $\tilde Z_{\lambda}$, $$H ^2\big(P_{\lambda}, P_{\lambda}^*\big)=O(\lambda^{-1}).$$ Thanks to Theorem \[ch4teomisto\], denoting by $\Lambda$ a subset of ${\ensuremath {\mathbb{R}}}_{>0}$, by $\tilde {\ensuremath {\mathscr{P}}}$ (resp. ${\ensuremath {\mathscr{P}}}^*$) the statistical model associated with the family of probabilities $\{\tilde P_\lambda: \lambda \in \Lambda\}$ (resp. $\{P_\lambda^* : \lambda \in \Lambda\}$), we have $$\Delta\big(\tilde {\ensuremath {\mathscr{P}}}, {\ensuremath {\mathscr{P}}}^*\big) \leq \sup_{\lambda \in \Lambda} \frac{C}{\lambda},$$ for some constant $C$. Indeed, the correspondence associating $\tilde Z_\lambda$ to $\tilde X_\lambda$ defines a Markov kernel; conversely, associating to $\tilde Z_\lambda$ the integer closest to $\tilde Z_\lambda^2/4$ defines a Markov kernel going in the other direction.
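The variance-stabilising transformation in Theorem \[ch4teomisto\] is easy to see at work by simulation: for moderately large $\lambda$, $\tilde Z_\lambda$ has approximately the first two moments of ${\ensuremath {\mathscr{Nn}}}(2\sqrt\lambda,1)$. The following rough Monte Carlo sketch (standard library only; the intensity, sample size, and seed are arbitrary choices, and the check is purely heuristic, not a proof).

```python
import math
import random

random.seed(0)

def sample_poisson(lam):
    """Poisson(lam) sampled by counting unit-rate exponential arrivals in [0, lam]."""
    count, s = 0, random.expovariate(1.0)
    while s < lam:
        count += 1
        s += random.expovariate(1.0)
    return count

def gaussianized(lam):
    """Z = 2 sgn(X + U) sqrt(|X + U|), X ~ Poisson(lam), U ~ Unif[-1/2, 1/2)."""
    y = sample_poisson(lam) + random.uniform(-0.5, 0.5)
    return 2.0 * math.copysign(math.sqrt(abs(y)), y)

lam, n = 50.0, 50000
zs = [gaussianized(lam) for _ in range(n)]
mean = sum(zs) / n                          # should be close to 2*sqrt(lam) ~ 14.14
var = sum((z - mean) ** 2 for z in zs) / n  # should be close to 1
```

The small residual bias of the empirical mean (of order $\lambda^{-1/2}$) is consistent with the $O(\lambda^{-1})$ Hellinger rate in the theorem.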
Lévy processes {#sec:ch4levy} -------------- A stochastic process $\{X_t:t\geq 0\}$ on ${\ensuremath {\mathbb{R}}}$ defined on a probability space $(\Omega,{\ensuremath {\mathscr{A}}},{\ensuremath {\mathbb{P}}})$ is called a *Lévy process* if the following conditions are satisfied. 1. $X_0=0$ ${\ensuremath {\mathbb{P}}}$-a.s. 2. For any choice of $n\geq 1$ and $0\leq t_0<t_1<\ldots<t_n$, the random variables $X_{t_0}$, $X_{t_1}-X_{t_0},\dots ,X_{t_n}-X_{t_{n-1}}$ are independent. 3. The distribution of $X_{s+t}-X_s$ does not depend on $s$. 4. There is $\Omega_0\in {\ensuremath {\mathscr{A}}}$ with ${\ensuremath {\mathbb{P}}}(\Omega_0)=1$ such that, for every $\omega\in \Omega_0$, $X_t(\omega)$ is right-continuous in $t\geq 0$ and has left limits in $t>0$. 5. It is stochastically continuous. Thanks to the *Lévy-Khintchine formula*, the characteristic function of any Lévy process $\{X_t\}$ can be expressed, for all $u$ in ${\ensuremath {\mathbb{R}}}$, as: $$\label{caratteristica} {\ensuremath {\mathbb{E}}}\big[e^{iuX_t}\big]=\exp\bigg(t\Big(iub-\frac{u^2\sigma^2}{2}-\int_{{\ensuremath {\mathbb{R}}}}(1-e^{iuy}+iuy{\ensuremath {\mathbb{I}}}_{\vert y\vert \leq 1})\nu(dy)\Big)\bigg),$$ where $b,\sigma\in {\ensuremath {\mathbb{R}}}$ and $\nu$ is a measure on ${\ensuremath {\mathbb{R}}}$ satisfying $$\nu(\{0\})=0 \textnormal{ and } \int_{{\ensuremath {\mathbb{R}}}}(|y|^2\wedge 1)\nu(dy)<\infty.$$ In the sequel we shall refer to $(b,\sigma^2,\nu)$ as the characteristic triplet of the process $\{X_t\}$ and $\nu$ will be called the *Lévy measure*. This triplet uniquely characterizes the law of the process $\{X_t\}$. Let $D=D([0,\infty),{\ensuremath {\mathbb{R}}})$ be the space of mappings $\omega$ from $[0,\infty)$ into ${\ensuremath {\mathbb{R}}}$ that are right-continuous with left limits.
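Before moving on, the Lévy–Khintchine formula can be sanity-checked numerically in the compound Poisson case. We use the standard form $\exp\big(t\big(iub-\int(1-e^{iuy}+iuy{\ensuremath {\mathbb{I}}}_{|y|\leq 1})\nu(dy)\big)\big)$ with $\sigma=0$ and an arbitrary two-atom Lévy measure (all numeric values below are illustrative); the explicit drift is adjusted by the compensator of the small jump so that both computations describe the same process.

```python
import cmath
import math

# Two-atom Levy measure nu = a1 * delta_{y1} + a2 * delta_{y2} (illustrative values)
a1, y1 = 1.5, 0.4   # |y1| <= 1, so this jump is compensated in the formula
a2, y2 = 0.7, 2.0   # |y2| > 1, not compensated
b, t, u = 0.3, 1.2, 0.9

# Characteristic exponent psi(u) = iub - integral (1 - e^{iuy} + iuy*1_{|y|<=1}) dnu
psi = 1j * u * b
for a, y in ((a1, y1), (a2, y2)):
    psi -= a * (1 - cmath.exp(1j * u * y) + (1j * u * y if abs(y) <= 1 else 0))
phi_formula = cmath.exp(t * psi)

# The same process written explicitly: X_t = d*t + y1*N1 + y2*N2 with independent
# N_i ~ Poisson(t * a_i) and drift d = b - integral_{|y|<=1} y dnu
d = b - a1 * y1

def poisson_pmf(k, lam):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

phi_direct = sum(
    poisson_pmf(k1, t * a1) * poisson_pmf(k2, t * a2)
    * cmath.exp(1j * u * (d * t + y1 * k1 + y2 * k2))
    for k1 in range(60) for k2 in range(60)   # truncated double sum over jump counts
)
```

The two computations agree up to the (astronomically small) truncation error of the double sum.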
Define the *canonical process* $x:D\to D$ by $$\forall \omega\in D,\quad x_t(\omega)=\omega_t,\;\;\forall t\geq 0.$$ Let ${\ensuremath {\mathscr{D}}}_t$ and ${\ensuremath {\mathscr{D}}}$ be the $\sigma$-algebras generated by $\{x_s:0\leq s\leq t\}$ and $\{x_s:0\leq s<\infty\}$, respectively (here, we use the same notations as in [@sato]). By condition (4) above, any Lévy process on ${\ensuremath {\mathbb{R}}}$ induces a probability measure $P$ on $(D,{\ensuremath {\mathscr{D}}})$. Thus $\{X_t\}$ on the probability space $(D,{\ensuremath {\mathscr{D}}},P)$ is identical in law with the original Lévy process. By saying that $(\{x_t\},P)$ is a Lévy process, we mean that $\{x_t:t\geq 0\}$ is a Lévy process under the probability measure $P$ on $(D,{\ensuremath {\mathscr{D}}})$. For all $t>0$ we will write $P_t$ for the restriction of $P$ to ${\ensuremath {\mathscr{D}}}_t$. In the case where $\int_{|y|\leq 1}|y|\nu(dy)<\infty$, we set $\gamma^{\nu}:=\int_{|y|\leq 1}y\nu(dy)$. Note that, if $\nu$ is a finite Lévy measure, then the process having characteristic triplet $(\gamma^{\nu},0,\nu)$ is a compound Poisson process. Here and in the sequel we will denote by $\Delta x_r$ the jump of the process $\{x_t\}$ at time $r$: $$\Delta x_r = x_r - \lim_{s \uparrow r} x_s.$$ For the proof of Theorems \[ch4teo1\] and \[ch4teo2\] we also need some results on the equivalence of measures for Lévy processes. By the notation $\ll$ we will mean “is absolutely continuous with respect to”. \[ch4teosato\] Let $P^1$ (resp. $P^2$) be the law induced on $(D,{\ensuremath {\mathscr{D}}})$ by a Lévy process of characteristic triplet $(\eta,0,\nu_1)$ (resp. $(0,0,\nu_2)$), where $$\label{ch4gamma*} \eta=\int_{\vert y \vert \leq 1}y(\nu_1-\nu_2)(dy)$$ is supposed to be finite.
Then $P_t^1\ll P_t^2$ for all $t\geq 0$ if and only if $\nu_1\ll\nu_2$ and the density $\frac{d\nu_1}{d\nu_2}$ satisfies $$\label{ch4Sato} \int\bigg(\sqrt{\frac{d\nu_1}{d\nu_2}(y)}-1\bigg)^2\nu_2(dy)<\infty.$$ Remark that the finiteness in implies that in . When $P_t^1\ll P_t^2$, the density is $$\frac{dP_t^1}{dP_t^2}(x)=\exp(U_t(x)),$$ with $$\label{ch4U} U_t(x)=\lim_{\varepsilon\to 0} \bigg(\sum_{r\leq t}\ln \frac{d\nu_1}{d\nu_2}(\Delta x_r){\ensuremath {\mathbb{I}}}_{\vert\Delta x_r\vert>\varepsilon}- \int_{\vert y\vert > \varepsilon} t\bigg(\frac{d\nu_1}{d\nu_2}(y)-1\bigg)\nu_2(dy)\bigg),\\ P^{(0,0,\nu_2)}\textnormal{-a.s.}$$ The convergence in is uniform in $t$ on any bounded interval, $P^{(0,0,\nu_2)}$-a.s. Besides, $\{U_t(x)\}$ defined by is a Lévy process satisfying ${\ensuremath {\mathbb{E}}}_{P^{(0,0,\nu_2)}}[e^{U_t(x)}]=1$, $\forall t\geq 0$. Finally, let us consider the following result giving an explicit bound for the $L_1$ and the Hellinger distances between two Lévy processes of characteristic triplets of the form $(b_i,0,\nu_i)$, $i=1,2$ with $b_1-b_2=\int_{\vert y \vert \leq 1}y(\nu_1-\nu_2)(dy)$. \[teo:ch4bound\] For any $0<T<\infty$, let $P_T^i$ be the probability measure induced on $(D,{\ensuremath {\mathscr{D}}}_T)$ by a Lévy process of characteristic triplet $(b_i,0,\nu_i)$, $i=1,2$ and suppose that $\nu_1\ll\nu_2$. If $H^2(\nu_1,\nu_2):=\int\big(\sqrt{\frac{d\nu_1}{d\nu_2}(y)}-1\big)^2\nu_2(dy)<\infty,$ then $$H^2(P_T^1,P_T^2)\leq \frac{T}{2}H^2(\nu_1,\nu_2).$$ We conclude the Appendix with a technical statement about the Le Cam distance for finite variation models. 
\[ch4LC\] $$\Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0},{\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0})=0.$$ Consider the Markov kernels $\pi_1$, $\pi_2$ defined as follows: $$\pi_1(x,A)={\ensuremath {\mathbb{I}}}_{A}(x^d), \quad \pi_2(x,A)={\ensuremath {\mathbb{I}}}_{A}(x-\cdot \gamma^{\nu_0}), \quad \forall x\in D, A \in {\ensuremath {\mathscr{D}}},$$ where we have denoted by $x^d$ the discontinuous part of the trajectory $x$, i.e. $\Delta x_r = x_r - \lim_{s \uparrow r} x_s,\ x_t^d=\sum_{r \leq t}\Delta x_r$, and by $x-\cdot \gamma^{\nu_0}$ the trajectory $x_t-t\gamma^{\nu_0}$, $t\in[0,T_n]$. On the one hand we have: $$\begin{aligned} \pi_1 P^{(\gamma^{\nu-\nu_0},0,\nu)}(A)&=\int_D \pi_1(x,A)P^{(\gamma^{\nu-\nu_0},0,\nu)}(dx)=\int_D {\ensuremath {\mathbb{I}}}_A(x^d)P^{(\gamma^{\nu-\nu_0},0,\nu)}(dx)\\ &=P^{(\gamma^{\nu},0,\nu)}(A),\end{aligned}$$ where in the last equality we have used the fact that, under $P^{(\gamma^{\nu-\nu_0},0,\nu)}$, $\{x_t^d\}$ is a Lévy process with characteristic triplet $(\gamma^{\nu},0,\nu)$ (see [@sato], Theorem 19.3). On the other hand: $$\begin{aligned} \pi_2 P^{(\gamma^{\nu},0,\nu)}(A)&=\int_D \pi_2(x,A)P^{(\gamma^{\nu},0,\nu)}(dx)=\int_D {\ensuremath {\mathbb{I}}}_A(x-\cdot \gamma^{\nu_0})P^{(\gamma^{\nu},0,\nu)}(dx)\\ &=P^{(\gamma^{\nu-\nu_0},0,\nu)}(A),\end{aligned}$$ since, by definition, $\gamma^{\nu}-\gamma^{\nu_0}$ is equal to $\gamma^{\nu-\nu_0}$. The conclusion follows by the definition of the Le Cam distance. Acknowledgements {#acknowledgements .unnumbered} ---------------- I am very grateful to Markus Reiss for several interesting discussions and many insights; this paper would never have existed in the present form without his advice and encouragement. My deepest thanks go to the anonymous referee, whose insightful comments have greatly improved the exposition of the paper; some gaps in the proofs have been corrected thanks to his/her remarks.
Hugh Lucas-Tooth Sir Hugh Vere Huntly Duff Munro-Lucas-Tooth, 1st Baronet (13 January 1903 – 18 November 1985), born and baptised Hugh Vere Huntly Duff Warrand and known as Sir Hugh Vere Huntly Duff Lucas-Tooth, 1st Baronet, from 1920 to 1965, was a Scottish British Conservative politician. Elected to parliament in 1924 at the age of 21, he was the first British MP to have been born in the 20th century. Family Warrand's father was Hugh Munro Warrand (8 July 1870 – 11 June 1935, married 24 April 1901), Major in the 3rd Battalion of the Queen's Own Cameron Highlanders, and son of Alexander John Cruikshank Warrand of Bught, Inverness-shire. Warrand's mother, Beatrice Maude Lucas Lucas-Tooth (died 25 June 1944), was a daughter of Sir Robert Lucas-Tooth, 1st Baronet. Warrand's great-grandfather was Robert Tooth, a prominent Australian businessman. His brother, Selwyn John Power Warrand (6 February 1904 – 24 May 1941), married on 25 March 1933 Frena Lingen Crace, daughter of Everard Crace of Canberra, Australian Capital Territory, by whom he had two children. Selwyn John Power Warrand was a Commander in the service of the Royal Navy, fought in World War II and was killed in action on board HMS Hood (51); his widow remarried in 1947, to Henry Richard Charles Humphries. His sister, Beatrice Helen Fitzhardinge Warrand (born 1908), married on 27 September 1941 another World War II veteran, Lieutenant Colonel Lyndall Fownes Urwick, Military Cross, Officer of the Order of the British Empire, son of Sir Henry Urwick of Malvern, Worcestershire, Justice of the Peace. Biography Warrand was educated at Eton College, and graduated from Balliol College in 1924 with a Bachelor of Arts degree.
He adopted the legally changed name Hugh Vere Huntly Duff Lucas-Tooth of Teaninich by Royal Licence in 1920, when he gained the recreated baronetcy of his maternal grandfather, the first baronet, whose three sons had died in World War I: he was created 1st Baronet Lucas-Tooth, of Bught, County Inverness, in the Baronetage of the United Kingdom on 1 December 1920, with special remainder to the heirs male of the body of his mother. Lucas-Tooth was first elected to the House of Commons in the 1924 general election as Conservative Member of Parliament for the Isle of Ely, serving from October 1924 to May 1929. Aged 21, he became the youngest MP, known as the "Baby of the House". He served as Parliamentary Private Secretary to Arthur Samuel, Secretary for Overseas Trade. He was defeated in the 1929 general election by the Liberal candidate, James A. de Rothschild. Lucas-Tooth was called to the bar at Lincoln's Inn in 1933, entitling him to practise as a barrister. He also became a lieutenant colonel in the service of the Queen's Own Cameron Highlanders. During the 1930s Lucas-Tooth helped establish the Lucas-Tooth gymnasium at Tooley Street in south London for the benefit of unemployed men from the Northern coalfields and unemployed areas. A new style of physical exercises helped improve the fitness of these men. It was featured in a British Pathe newsreel in 1938 titled 'Fit – Fitter – Fittest'. Lucas-Tooth stood again for parliament in the 1945 general election for Hendon South, and was elected, taking his seat in July 1945. He retained the seat in subsequent general elections until 1970 and was Parliamentary Under-Secretary of State for the Home Department between February 1952 and December 1955. On 3 February 1965 Lucas-Tooth legally changed his name once again, by Deed Poll, to Hugh Vere Huntly Duff Munro-Lucas-Tooth of Teaninich, to reflect the Scottish lairdship Munro of Teaninich. He retired from Parliament at the 1970 general election.
Marriage and issue He married on 10 September 1925 Laetitia Florence Findlay (died 1978), daughter of Sir John Ritchie Findlay, 1st Baronet, of Aberlour; the couple had three children, Laetitia (born 1926), Jennifer (born 1929), and Hugh (born 1932). Hugh succeeded his father as Baronet. References External links Category:1903 births Category:1985 deaths Category:People educated at Eton College Category:Alumni of Balliol College, Oxford Category:Conservative Party (UK) MPs for English constituencies Category:Baronets in the Baronetage of the United Kingdom Category:UK MPs 1924–1929 Category:UK MPs 1945–1950 Category:UK MPs 1950–1951 Category:UK MPs 1951–1955 Category:UK MPs 1955–1959 Category:UK MPs 1959–1964 Category:UK MPs 1964–1966 Category:UK MPs 1966–1970
POV: Henry vs Martin + a poll I won’t make claims as to their gifts and charms, but H & M do resemble me in various ways :) I usually like to write stories from a single point of view. It’s obviously a limited perspective, but I enjoy the constraints. As far as I’m concerned, there’s no such thing as a reliable narrator. Characters misinterpret things, miss things, draw the wrong conclusions, and it can be tricky and fun to work the “truth” into a story alongside the character’s perceptions. For instance, I think it’s obvious to the reader that Martin is DTF from the get-go, but Henry, equipped with the same amount of information, simply doesn’t get it. When I started writing the Ganymede Quartet books, it seemed obvious to me that the story needed to be told from the master’s point of view. Whether or not he’s actually prepared to take responsibility, the fact remains that Henry’s the one in charge and he sets the tone. It’s Martin’s job to adapt and respond and accommodate and serve. Obviously, Martin is better-equipped to steer this particular ship, but, unfortunately for Henry, the roles in this relationship weren’t assigned based on fitness or merit. If you’ve read A Most Personal Property (GQ Book 1), you know that when the opportunity finally arises for Martin to take charge, he does so with great effect, but he does wait for Henry to create the opportunity. He’s very well-trained. I think it’s apparent that Martin is miserable for most of AMPP, and writing weeks of self-doubt and misery even greater than Henry’s, from the perspective of a character who has even less power to effect change…I don’t think anyone wants to read that book, actually. Henry also needed to be the POV character for the main books because Henry is the one who has the most growing to do. They’re both young, both immature, but Martin is less immature, his sense of self is more solid and, well, he’s a lot smarter. 
Henry learns a lot over the course of the series, which is not to say that Martin doesn’t, but as the one nominally in charge, Henry’s growth has a greater impact on both of them. It was possibly something of a risk, but I left out or delayed certain trains of thought because Henry isn’t necessarily considering all aspects and implications of the master/slave dynamic from early on in their relationship. He’s very loving, but he’s not the most insightful person, and it takes him awhile to consider things that a savvier fellow might have questioned from the beginning. It really does take Henry a long time to wonder how Martin’s position and training impact the way Martin responds to him. I anticipate going a little deeper into Martin’s background, in a way, for the story that will accompany Book 3. I also have a pretty good idea which aspect of Book 4 I’ll present from Martin’s perspective. So far, the Martin stories have been really fun to write, and I definitely look forward to doing them. I think they’re so easy and enjoyable to work on because they revisit territory that I’ve already covered from Henry’s perspective to some extent, and when I’m writing Henry, I’m always considering how Martin might view a given situation, as well. Offering Martin’s POV at all was actually a pretty late development. It occurred to me shortly before publishing A Most Personal Property that the stories I was busy telling myself about Martin’s past would probably be of interest to anyone who was interested in AMPP, and so I quickly wrote A Superior Slave. I hoped that people who enjoyed reading ASS (ugh, that acronym!) for free might be interested in paying for AMPP, and I think that did happen to some small extent. I’ve gotten the impression (whether it’s true or not) that Martin might be the reader favorite by a small margin, so it just seems like a nice idea to continue offering Martin POV stories alongside the main books. 
While I think a person can enjoy the main books and Henry’s POV without side stories, I like to think Martin’s perspective is a valuable addition. I plan on adding additional points of view from other characters in the universe. I’ve got stories written about a couple of Henry’s friends to show how slave ownership works in private for other people. I’ve got at least two stories I want to write about Henry’s cousin Jesse. I think Tom gets his own novella :D With A Proper Lover (GQ Book 2) and A Master’s Fidelity (GQ Book 2.5) released, I’m just going immediately into editing Book 3 and fleshing out the notes I have for the Martin story. I’d had vague ideas about taking a break, but I honestly don’t know what that would mean at this point. I don’t know what I’d be doing during a break! Right now, the idea of downtime just makes me cranky. Knowing that there are people eager for the next books makes me want to work on getting them out. Besides, working on Martin’s POV is a treat :)
--- abstract: 'Disk scale length $r_d$ and central surface brightness $\mu_0$ for a sample of 29955 bright disk galaxies from the Sloan Digital Sky Survey have been analysed. Cross correlation of the SDSS sample with the LEDA catalogue allowed us to investigate the variation of the scale lengths for different types of disk/spiral galaxies and present distributions and typical trends of scale lengths in all the SDSS bands, with linear relations that connect scale lengths in one passband to another. We use the volume corrected results in the $r$-band and revisit the relation between these parameters and the galaxy morphology, and find the average values $\langle r_d\rangle = 3.8\pm 2.1$ kpc and $\langle\mu_0\rangle=20.2\pm 0.7$ mag arcsec$^{-2}$. The derived scale lengths presented here are representative for a typical galaxy mass of $10^{10.8} \rm{~M}_\odot$, and the RMS dispersion is larger for more massive galaxies. We analyse the $r_d$–$\mu_0$ plane and further investigate the Freeman Law and confirm that it indeed defines an upper limit for $\mu_0$ in bright disks ($r_\mathrm{mag}<17.0$), and that disks in late type spirals ($T \ge 6$) have fainter central surface brightness. Our results are based on a sample of galaxies in the local universe ($z< 0.3$) that is two orders of magnitude larger than any sample previously studied, and deliver statistically significant results that provide a comprehensive test bed for future theoretical studies and numerical simulations of galaxy formation and evolution.' --- Overview ======== The mass distribution of a disk is set by the scale length $r_d$, and in the exponential case, 60% of the total mass is confined within two scale lengths and 90% within four scale lengths. Moreover, the angular momentum of a disk is set by $r_d$ and the mass distribution of its host halo, and the fact that the angular momentum vectors are aligned suggests that there is a physical relation between the two.
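As a check on these fractions (a short derivation of ours, not part of the original text), for an exponential surface density $\Sigma(r)=\Sigma_0\,e^{-r/r_d}$ the enclosed mass fraction is

$$\frac{M(<r)}{M_{\rm tot}}=\frac{\int_0^{r}\Sigma_0\,e^{-r'/r_d}\,2\pi r'\,{\rm d}r'}{\int_0^{\infty}\Sigma_0\,e^{-r'/r_d}\,2\pi r'\,{\rm d}r'}=1-(1+x)\,e^{-x},\qquad x\equiv r/r_d,$$

which gives $1-3e^{-2}\approx 0.59$ at two scale lengths and $1-5e^{-4}\approx 0.91$ at four, consistent with the quoted 60% and 90%.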
During the formation process, mergers and associated star formation and feedback processes play a crucial role in the resulting structure; however, the observed sizes of disks suggest that the combination of these physical processes yields that galactic disks have not lost much of the original angular momentum acquired from cosmological torques (White & Rees 1978). A large $r_d$ disk forms when the disk mass is smaller than the halo mass over the disk region, and vice versa, a small $r_d$ disk forms when the mass of the disk dominates the mass of the halo in any part of the disk. The self gravitating disk will also modify the shape of the rotation curve near the centre of a galaxy and the disk is then set to undergo secular evolution. The natural implication of this scenario is that the $r_d$ dictates the life of a disk, and consequently, is a prime factor which determines the position of a galaxy on the Hubble sequence. Here we analyse the $r_d$ and $\mu_0$ from an unprecedentedly large sample of bright disk galaxies in the nearby universe ($z< 0.3$) using the Sloan Digital Sky Survey (SDSS) Data Release 6 (York et al. 2000; Adelman-McCarthy et al. 2008). We have used the Virtual Observatory tools and services to retrieve data in all ($u$, $g$, $r$, $i$, and $z$) SDSS bands and used the LEDA catalogue (Paturel et al. 2003) to retrieve morphological classification information about our sample galaxies; those with types defined as Sa or later are hereafter referred to as disk galaxies. In the $g$, $i$, and $z$-bands, $\approx 27000$–30000 galaxies were analysed, and in the $u$-band, $r_d$ and $\mu_0$ were robustly derived for a few hundred objects. Throughout this presentation, we use disk parameters in the $r$-band to provide a comprehensive test bed for forthcoming cosmological simulations (or analytic/semi-analytic models) of galaxy formation and evolution. Further details have been presented in Fathi et al. (2010) and Fathi (2010).
One prominent indicator for a smooth transition from spirals toward S0s and disky ellipticals is provided by the $r_d$–$\mu_0$ diagram, where $\mu_0$ is the central surface brightness of the disk: spirals and S0s are mixed, and disky ellipticals populate the upper left corner of this diagram. Another instructive relation is the Freeman Law (Freeman 1970) which relates $\mu_0$ to the galaxy morphological type. Although some studies have found that the Freeman Law is an artefact due to selection effects (e.g., Disney 1976), recent work has shown that proper consideration of selection effects can be combined with kinematic studies to explore an evolutionary sequence. In the comparison between theory and observations, two issues complicate matters. On the theory side, mapping between initial halo angular momentum and $r_d$ is not trivial, partly due to the fact that commonly the initial specific angular momentum distribution of the visible and dark component favours disks which are more centrally concentrated than exponential. Observationally, comprehensive samples have not yet been studied, and the mixture of different species such as low and high surface brightness galaxies complicates the measurement of disk parameters. ![image](Fathi_ScaleLengths_Fig1.jpg){width=".69\textwidth"} ![image](Fathi_ScaleLengths_Fig2.jpg){width=".99\textwidth"} Freeman Law and $r_d$–$\mu_0$ Plane =================================== The Freeman Law defines an upper limit for $\mu_0$ and is hereby confirmed by our analysis of the largest sample ever studied in this context (see Fig. \[fig:mu0rd\]). However, disk galaxies with morphological type $T\ge6$ have fainter $\mu_0$. These results in the $r$-band are comparable with those in other SDSS bands (Fathi 2010). Combined with our previous results, i.e. that $r_d$ varies by two orders of magnitude independent of morphological type, this result implies that disks with large scale lengths do not necessarily have higher $\mu_0$.
The $\mu_0$ has a Gaussian distribution with $\langle\mu_0\rangle=20.2\pm0.7$ mag arcsec$^{-2}$, with a linear trend seen in Fig. \[fig:mu0rd\] (applying different internal extinction parameters changes this mean value by 0.2 mag arcsec$^{-2}$). The top right corner is enclosed by the constant disk luminosity line, void of objects. The top right corner is also the region where the disk luminosity exceeds $3L^\star$; thus the absence of galaxies in this region cannot be a selection effect, since big bright galaxies cannot be missed in our diameter selected sample. However, it is clear that selection effects play a role in populating the lower left corner of this diagram. Analogous to the $r_d$–$\mu_0$ plane, the Tully-Fisher relation implies lines of constant maximum speed a disk can reach. Our 282 well-classified galaxies (illustrated with coloured dots in Fig. \[fig:mu0rd\]) follow the results of, e.g., Graham and de Blok (2001), and confirm that disks of intermediate and early type spirals have higher $\mu_0$ while the late type spirals have lower $\mu_0$, and they populate the lower left corner of the diagram. Intermediate morphologies are mixed along a linear slope of 2.5 in the $r_d$–$\mu_0$ plane, coinciding with the region populated by S0s as shown by Kent (1985) and disky ellipticals as shown by Scorza & Bender (1995). The $r_d$, on the other hand, does not vary as a function of morphological type (Fathi et al. 2010). Investigating galaxy masses, we find that a fourth quantity is equally important in this analysis. The total galaxy mass separates the data along lines parallel to the dashed lines drawn in Fig. \[fig:mu0rd\]. This is indeed also confirmed by the Tully-Fisher relation. Moreover, we validate that the lower mass galaxies are those with type $\ge 6$. Investigation of the asymmetry and concentration in this context further confirms the expected trends, i.e.
that these parameters increase for later types, and central stellar velocity dispersion decreases for later type spiral galaxies; however, we note that these correlations are well below the one-sigma confidence level. Moreover, the higher asymmetry galaxies populate a region more extended toward the bottom right corner with respect to the low asymmetry galaxies. The middle panel shows an opposite trend, and the bottom panel shows that larger velocity dispersion has the same effect as asymmetry (see Fathi 2010 for further details). In the two relations analysed here, we find typically larger scatter than previous analyses, and although our sample represents bright disks, the sample size adds credibility to our findings. These results are fully consistent with the common understanding of the $r_d$–$\mu_0$ plane and the Freeman Law, and they contribute to past results since they are based on a sample which is two orders of magnitude greater than any previous study, with more than five times more late type spiral galaxies than any previous analysis.\ [*Acknowledgements:* ]{} I thank the IAU, LOC and my colleagues in the SOC for a stimulating symposium, and Mark Allen, Evanthia Hatziminaoglou, Thomas Boch, Reynier Peletier and Michael Gatchell for their invaluable input at various stages of this project. Adelman-McCarthy, J. K. et al. 2008, ApJS, 175, 297 Disney, M. 1976, Nature, 263, 573 Graham, A. W., de Blok, W. 2001, ApJ, 556, L177 Fathi, K. et al. 2010, MNRAS, 406, 1595 Fathi, K. 2010, ApJ, 722, L120 Freeman, K. C. 1970, ApJ, 160, 811 Kent, S. 1985, ApJS, 59, 115 Paturel, G. et al. 2003, A&A, 412, 45 Scorza, C., Bender, R. 1995, A&A, 293, 20 White, S. D. M., Rees, M. J. 1978, MNRAS, 183, 341 York, D. G. et al. 2000, AJ, 120, 1579
1953 Kent State Golden Flashes football team The 1953 Kent State Golden Flashes football team was an American football team that represented Kent State University in the Mid-American Conference (MAC) during the 1953 college football season. In their eighth season under head coach Trevor J. Rees, the Golden Flashes compiled a 7–2 record (3–1 against MAC opponents), finished in a tie for third place in the MAC, and outscored all opponents by a combined total of 250 to 103. The team's statistical leaders included Lou Mariano with 816 rushing yards, Don Burke with 577 passing yards, and Gino Gioia with 84 receiving yards. Fullback Jim Cullom and offensive tackle Al Kilgore were selected as first-team All-MAC players. References Kent State Category:Kent State Golden Flashes football seasons Kent State football
KTLO-FM KTLO-FM 97.9 FM is a radio station licensed to Mountain Home, Arkansas. The station broadcasts an Adult Standards format and is owned by Mountain Lakes Broadcasting Corp. History On January 7, 1969, Mountain Home Broadcasting Corporation, the owner of KTLO (1240 AM), filed with the Federal Communications Commission to build a new FM radio station in Mountain Home. The construction permit was granted on July 1, 1970, and KTLO-FM began broadcasting at 98.3 MHz on January 11, 1971. $30,000 in new equipment was installed at the KTLO studios on Highway 5 to prepare for the launch of the stereo outlet. KTLO-FM broadcast from a hilltop tower located west of the studios and AM transmitter site. Early FM programming was in a block format, with contemporary and country music interspersed with news features. KTLO-AM-FM was sold in 1975 to four new investors for $400,000. By the mid-1980s, KTLO had settled into a middle-of-the-road music format known as "Stardust 98". The 1990s saw ownership and technical changes for KTLO-FM. The former began with a $775,000 sale of KTLO-AM-FM to Charles and Scottie Earls in late 1994. The Earls oversaw a major technical overhaul for the FM outlet: in 1996, it increased its power to 50,000 watts and relocated to 97.9 MHz from a transmitter on Crystal Mountain, with the programming remaining the same. The Earls divested their remaining shares in KTLO-AM-FM and KCTT-FM 101.7 to the Ward and Knight families in 2010 in a transaction that gave the Earls full control of KOMC-FM and KRZK in Branson, Missouri; the two families had previously been minority owners in Mountain Lakes. Among KTLO-FM's regular programs is Talk of the Town, an interview show. Talk of the Town had previously been hosted by Brenda Nelson, who retired after 34 years on air in 2009 after airing some 8,000 interviews. 
References External links KTLO-FM website TLO-FM Category:Adult standards radio stations in the United States Category:1971 establishments in Arkansas Category:Radio stations established in 1971
Recently, I sat down to talk with a group of eight students from a large prominent church in Southern California. They were raised in the church. They were regulars at youth group. They claimed to be in relationship with Christ. Yet, they were dead. As I tried to engage them, most seemed unmoved and uninterested. And I was not surprised. As I work with churches around the country, I encounter countless Christian students who are apathetic toward spiritual things. Their relationship with Christ is passionless. Talk of God is ho-hum. But why? Shouldn’t our relationship with Christ be life’s most exciting adventure? I’m not suggesting the Christian life is one, big, emotional high, but why are students more willing to plug into their iPods than their Bibles? Why are they more excited about the latest celebrity gossip than the Gospel? Why aren’t their lives filled with the drama of God’s Kingdom? I think a big part of the problem is that Christian students rarely engage their world for the cause of Christ. Here’s what I’ve observed in my training over the years. The most exciting events I do, the events where students seem to come to life, are those where there is some component of engagement. Let me illustrate. For almost ten years now, I’ve been taking students on mission trips to Berkeley and Utah. Each trip requires hours of training, typically in the form of classroom instruction and the reading of required books. This training is important and necessary, but it’s not what generates the most buzz among the students. Students get fired up on the trip when we give them opportunities to engage non-believers. On these trips we invite Mormon leaders, Unitarians, gay activists, Hare Krishna priests, skeptics, and atheists to dialogue with students. We give our non-Christian guests time to share their views, followed by a time of questions from our students. It’s during Q&A when students really come to life. 
They ask question after question, graciously yet firmly forcing our skeptical guests to give a reason for their views. At the conclusion of each encounter, we thank our guests and then spend time debriefing. At this point, students are always abuzz, asking me question after question. Before I know it, an hour of discussing apologetics and theology with youth will have flown by. In addition, we send our groups onto college campuses, like BYU or Berkeley, to conduct surveys. The surveys are designed to get our students into conversation with non-Christian students about spiritual issues. At first, students are fearful and anxious. They’re skeptical about people’s willingness to engage with them. But after an hour or two of surveys, students return and they are always pumped. During our debrief time, students can’t wait to share about their encounters. They’re filled with excitement about their conversations on campus with non-Christians. When we create opportunities for students to engage, there is a vibrancy that infuses the events. But this shouldn’t come as a surprise. Christianity is not a spectator sport. Our teaching should not remain in a classroom or behind the four walls of the church. If we want to train students who can defend the faith not just intelligently but passionately, we need to get them in the game. Think about any sports team. It’s the starters who are the most passionate about the game, right? The benchwarmers, not so much. I think that’s one reason why our mission trips to Berkeley and Utah are exciting and successful. They get students in the game. They get students engaging a lost world with the truth of Jesus Christ. In 2014, students will get a taste of being in the game as I take them to Berkeley and Utah. I’ve already maxed out the number of mission trips I’m capable of taking through July. Indeed, we’ve had to turn groups away or ask them to start scheduling for 2015.
So this year, we’ll be getting students off the sidelines and igniting their fire for Christ. I can’t wait. As a parent of 5 kids, summer gets expensive. I have to pay for swim lessons, soccer camps, VBS, youth group trips, family vacations, and more. And these costs don't even include feeding my kids all … > Read full article
TODO: Implement depth-major-sources packing paths for NEON Platforms: ARM NEON Coding time: M Experimentation time: M Skill required: M Prerequisite reading: doc/kernels.txt doc/packing.txt Model to follow/adapt: internal/pack_neon.h At the moment we have NEON optimized packing paths for WidthMajor sources. We also need paths for DepthMajor sources. This is harder because for DepthMajor sources, the size of each slice that we have to load is the kernel's width, which is typically 12 (for the LHS) or 4 (for the RHS). That's not very friendly to NEON vector-load instructions which would allow us to load 8 or 16 entries, but not 4 or 12. So you will have to load 4 entries at a time only. For that, the vld1q_lane_u32 seems to be as good as you'll get. The other possible approach would be to load (with plain scalar C++) four uint32's into a temporary local buffer, and use vld1q_u8 on that. Some experimentation will be useful here. For that, you can generate assembly with -save-temps and make assembly easier to inspect by inserting inline assembly comments such as asm volatile("#hello");
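As a rough illustration of what such a path must produce, here is a hypothetical scalar reference (made-up names and parameters, not gemmlowp's actual API) that packs one cell from a depth-major source. It mirrors the constraint above: only cell_width (e.g. 4) entries are contiguous per depth step, so a NEON version would either issue one vld1q_lane_u32 per depth step (treating each 4-byte group as one 32-bit lane) or gather four such groups into a 16-byte temporary and load it with a single vld1q_u8.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical scalar reference for packing one cell from a
 * depth-major source into contiguous packed storage.
 * At each depth step d, exactly cell_width entries are contiguous
 * in the source (starting at src + d * src_stride), which is why
 * a NEON path cannot use full 8/16-entry vector loads here. */
void pack_cell_depth_major(const uint8_t *src, int src_stride,
                           uint8_t *dst, int cell_width, int depth)
{
    for (int d = 0; d < depth; d++) {
        /* one slice = cell_width contiguous entries at depth step d */
        memcpy(dst + d * cell_width, src + d * src_stride, cell_width);
    }
}
```

For cell_width == 4, the memcpy is exactly the 4-entry load that vld1q_lane_u32 would replace; four consecutive depth steps then fill one 128-bit register.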
var config = {
    type: Phaser.AUTO,
    parent: 'phaser-example',
    width: 800,
    height: 600,
    scene: {
        create: create
    }
};

var game = new Phaser.Game(config);

function create ()
{
    var graphics = this.add.graphics();

    drawStar(graphics, 100, 300, 4, 50, 50 / 2, 0xffff00, 0xff0000);
    drawStar(graphics, 400, 300, 5, 100, 100 / 2, 0xffff00, 0xff0000);
    drawStar(graphics, 700, 300, 6, 50, 50 / 2, 0xffff00, 0xff0000);
}

function drawStar (graphics, cx, cy, spikes, outerRadius, innerRadius, color, lineColor)
{
    var rot = Math.PI / 2 * 3;
    var x = cx;
    var y = cy;
    var step = Math.PI / spikes;

    graphics.lineStyle(4, lineColor, 1);
    graphics.fillStyle(color, 1);
    graphics.beginPath();
    graphics.moveTo(cx, cy - outerRadius);

    for (var i = 0; i < spikes; i++)
    {
        x = cx + Math.cos(rot) * outerRadius;
        y = cy + Math.sin(rot) * outerRadius;
        graphics.lineTo(x, y);
        rot += step;

        x = cx + Math.cos(rot) * innerRadius;
        y = cy + Math.sin(rot) * innerRadius;
        graphics.lineTo(x, y);
        rot += step;
    }

    graphics.lineTo(cx, cy - outerRadius);
    graphics.closePath();
    graphics.fillPath();
    graphics.strokePath();
}
Stone flaming Stone flaming or thermaling is the application of high temperature to the surface of stone to make it look like natural weathering. The sudden application of a torch to the surface of stone causes the surface layer to expand and flake off, exposing rough stone. Flaming works well on granite, because granite is made up of minerals with differing heat expansion rates. Process After removing a rock from a quarry, the rock is sliced into multiple flat slabs using a diamond gang saw. The saw leaves flat surfaces with circular marks. Flaming is done by wetting, and then running an oxygen-acetylene or oxygen-propane torch over the surface. As seen in both photos, the torch is usually kept at a 45 degree angle to the stone. Alternatives Alternative techniques for creating a rough surface on sawed stone include: bush hammering sandblasting hydrofinishing See also References External links Stone surfaces, photos of various surface treatments Palowy Stone, photos of stone flaming Understanding Flagstone: Sawcut, Thermaled, and Chiseled Edges Photos of hydrofinishing Category:Stonemasonry
Sprint International Sprint International may refer to: Sprint Corporation, telecommunications company The International (golf), golf tournament
--- abstract: 'This paper proves the asymptotic normality of a statistic for detecting the existence of heteroscedasticity for linear regression models without assuming randomness of covariates, when the sample size $n$ tends to infinity and the number of covariates $p$ is either fixed or tends to infinity. Moreover, our approach indicates that this asymptotic normality holds even without homoscedasticity.' address: - 'KLASMOE and School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R.C., 130024.' - 'School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, 637371' - 'KLASMOE and School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R.C., 130024.' author: - Zhidong Bai - Guangming Pan - Yanqing Yin title: 'Homoscedasticity tests for both low and high-dimensional fixed design regressions' --- Corresponding author: Yanqing Yin. Introduction ============ A brief review of homoscedasticity tests ---------------------------------------- Consider the classical multivariate linear regression model of $p$ covariates $$\begin{aligned} y_i={\mathbf}x_i{\mathbf}\beta+{\mathbf}\varepsilon_i,\ \ \ \ \ i=1,2,\cdots,n,\end{aligned}$$ where $y_i$ is the response variable, ${\mathbf}x_i=(x_{i,1},x_{i,2},\cdots,x_{i,p})$ is the $p$-dimensional vector of covariates, $\beta={\left(}\beta_1,\beta_2,\cdots,\beta_p{\right)}'$ is the $p$-dimensional regression coefficient vector, and the $\varepsilon_i$ are independent random errors with zero mean and variance $\sigma_i^2$. In most applications of linear regression models, homoscedasticity is a very important assumption.
Without it, the loss in efficiency in using ordinary least squares (OLS) may be substantial and, even worse, the biases in estimated standard errors may lead to invalid inferences. Thus, it is very important to examine homoscedasticity. Formally, we need to test the hypothesis $$\label{a1} H_0: \ \sigma_1^2=\sigma_2^2=\cdots=\sigma_n^2=\sigma^2,$$ where $\sigma^2$ is a positive constant. In the literature there is a great deal of work on this hypothesis test when the dimension $p$ is fixed, and many popular tests have been proposed. For example, Breusch and Pagan [@breusch1979simple] and White [@white1980heteroskedasticity] proposed statistics to investigate the relationship between the estimated errors and the covariates in economics, while in statistics, Dette and Munk [@dette1998estimating], Glejser [@glejser1969new], Harrison and McCabe [@harrison1979test], Cook and Weisberg [@cook1983diagnostics], and Azzalini and Bowman [@azzalini1993use] proposed nonparametric statistics to conduct the test. One may refer to Li and Yao [@li2015homoscedasticity] for more details in this regard. The development of computer science makes it possible to collect and deal with high-dimensional data. As a consequence, high-dimensional linear regression problems are becoming more and more common due to widely available covariates. Note that the above mentioned tests are all developed under the low-dimensional framework where the dimension $p$ is fixed and the sample size $n$ tends to infinity. Li and Yao [@li2015homoscedasticity] proposed two test statistics in the high-dimensional setting by using the regression residuals. The first statistic uses the idea of the likelihood ratio and the second one uses the idea that “the departure of a sequence of numbers from a constant can be efficiently assessed by its coefficient of variation", which is closely related to John’s idea [@john1971some].
By assuming that the distribution of the covariates is ${\mathbf}N({\mathbf}0, {\mathbf}I_p)$ and that the errors obey the normal distribution, the “coefficient of variation" statistic turns out to be a function of the residuals. However, its asymptotic distribution misses a term, as indicated by the proof of Lemma 1 in [@li2015homoscedasticity], even in the random design. The aim of this paper is to establish the central limit theorem for the “coefficient of variation" statistic without assuming randomness of the covariates, by using the information in the projection matrix (the hat matrix). This ensures that the test works when the design matrix is either fixed or random. More importantly, we prove that the asymptotic normality of this statistic holds even without homoscedasticity. That assures a high power of this test. The structure of this paper is as follows. Section 2 gives our main theorem and some simulation results, as well as two real data analyses. Some calculations and the proof of the asymptotic normality are presented in Section 3. Main Theorem, Simulation Results and Real Data Analysis ======================================================= The Main Theorem ---------------- Suppose that the parameter vector $\beta$ is estimated by the OLS estimator $$\hat{ {\beta}}={\left(}{\mathbf}X'{\mathbf}X{\right)}^{-1}{\mathbf}X'{\mathbf}Y.$$ Denote the residuals by $$\hat{{\mathbf}\varepsilon}={\left(}\hat{{\mathbf}\varepsilon_1},\hat{{\mathbf}\varepsilon_2},\cdots,\hat{{\mathbf}\varepsilon_n}{\right)}'={\mathbf}Y-{\mathbf}X\hat \beta={\mathbf}P\varepsilon,$$ with ${\mathbf}P=(p_{ij})_{n\times n}={\mathbf}I_n-{\mathbf}X({\mathbf}X'{\mathbf}X)^{-1}{\mathbf}X'$ and $\varepsilon={\left(}\varepsilon_1,\varepsilon_2,\cdots,\varepsilon_n{\right)}'$.
Let ${\mathbf}D$ be an $n\times n$ diagonal matrix with its $i$-th diagonal entry being $\sigma_i$, set ${\mathbf}A=(a_{ij})_{n\times n}={\mathbf}P{\mathbf}D$ and let $\xi={\left(}\xi_1,\xi_2,\cdots,\xi_n{\right)}'$ stand for a standard $n$-dimensional random vector whose entries obey the common standardized distribution of the errors. It follows that the distribution of $\hat \varepsilon$ is the same as that of ${\mathbf}A\xi$. In the following, we use ${{\rm Diag}}{\left(}{\mathbf}B{\right)}={\left(}b_{1,1},b_{2,2},\cdots,b_{n,n}{\right)}'$ to stand for the vector formed by the diagonal entries of ${\mathbf}B$ and ${{\rm Diag}}'{\left(}{\mathbf}B{\right)}$ for its transpose, use ${\mathbf}D_{{\mathbf}B}$ to stand for the diagonal matrix of ${\mathbf}B$, and use ${\mathbf}1$ to stand for the vector ${\left(}1,1,\cdots,1{\right)}'$. Consider the following statistic $$\label{a2} {\mathbf}T=\frac{\sum_{i=1}^n{\left(}\hat{\varepsilon_i}^2-\frac{1}{n}\sum_{i=1}^n\hat{\varepsilon_i}^2{\right)}^2}{\frac{1}{n}{\left(}\sum_{i=1}^n\hat{\varepsilon_i}^2{\right)}^2}.$$ Below we use ${\mathbf}A {\circ}{\mathbf}B$ to denote the Hadamard product of two matrices ${\mathbf}A$ and ${\mathbf}B$ and ${\mathbf}A ^{{\circ}k}$ to denote the Hadamard product of $k$ copies of ${\mathbf}A$. \[th1\] Under the condition that the distribution of $\varepsilon_1$ is symmetric, ${{\rm E}}|\varepsilon_1|^8< \infty$ and $p/n\to y\in [0,1)$ as $n\to \infty$, we have $$\frac{{\mathbf}T-a}{\sqrt b}\stackrel{d}{\longrightarrow}{\mathbf}N(0,1)$$ where $a$ and $b$ are determined by $n$, $p$ and ${\mathbf}A$.
Under $H_0$, we further have $$a={\left(}\frac{n{\left(}3{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^2{\right)}}{{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)}}-1{\right)},\ b=\Delta'\Theta\Delta,$$ where $$\Delta'=(\frac{n}{{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)}},-\frac{n^2{\left(}3{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^2{\right)}}{{{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)}}^2})$$ and $$\Theta=\left( \begin{array}{cc} \Theta_{11} & \Theta_{12} \\ \Theta_{21} & \Theta_{22} \\ \end{array} \right),$$ where $$\begin{aligned} \Theta_{11}=&72{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}{{\rm Diag}}({\mathbf}P)+24{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2\\\notag &+\nu_4{\left(}96{{\rm tr}}{\mathbf}P {\mathbf}{D_P} {\mathbf}P {\mathbf}P^{{\circ}3}+72{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^3+36{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2{{\rm Diag}}({\mathbf}P) {\right)}\\\notag &+\nu^2_4{\left(}18{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^4+16{{\rm tr}}({\mathbf}P^{{\circ}3}{\mathbf}P)^2){\right)}\\\notag &+\nu_6{\left(}12{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}P}{\mathbf}P {\right)}{\circ}{\left(}{\mathbf}P^{{\circ}2}{\mathbf}P^{{\circ}2}{\right)}{\right)}+16{{\rm tr}}{\mathbf}P {\mathbf}P^{{\circ}3}{\mathbf}P^{{\circ}3}{\right)}+\nu_8{\mathbf}1'({\mathbf}P^{{\circ}4}{\mathbf}P^{{\circ}4}){\mathbf}1,\end{aligned}$$ $$\begin{aligned} \Theta_{22}=\frac{8{\left(}n-p{\right)}^3+4\nu_4{\left(}n-p{\right)}^2{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)}{n^2},\end{aligned}$$ $$\begin{aligned} &\Theta_{12}=\Theta_{21}\\\notag =&\frac{{\left(}n-p{\right)}}{n}{\left(}24{{\rm 
tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+16\nu_4{{\rm tr}}({\mathbf}P {\mathbf}P^{{\circ}3})+12\nu_4{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}p}{\mathbf}P{\right)}{\circ}{\mathbf}P{\right)}+2\nu_6[{{\rm Diag}}({\mathbf}P)'({\mathbf}P^{{\circ}4}){\mathbf}1]{\right)},\end{aligned}$$ where $\nu_4=M_4-3$, $\nu_6=M_6-15 M_4+30$ and $\nu_8=M_8-28 M_6-35M_4^2+420M_4-630$ are the corresponding cumulants of the random variable $\varepsilon_1$. The existence of the 8-th moment is necessary because it determines the asymptotic variance of the statistic. The explicit expressions of $a$ and $b$ under $H_0$ are given in Theorem \[th1\]; under $H_1$ they are quite complicated, but one may obtain them from (\[e1\])-(\[e2\]) and (\[t10\])-(\[ct12\]) below. In Li and Yao’s paper, under the condition that the distribution of $\varepsilon$ is normal, they also ran simulations with non-Gaussian design matrices. Specifically, they investigated the test when the entries of the design matrices are drawn from the gamma distribution $G(2,2)$ and the uniform distribution $U(0,1)$ respectively. There is no significant difference in size and power between these two non-normal designs and the normal design, which suggests that the proposed test is robust against the form of the distribution of the design matrix. According to our main theorem, however, this is not always the case. Our main theorem shows that when the error $\varepsilon$ obeys the normal distribution, under $H_0$ and for given $p$ and $n$, the expectation of the statistic is determined only by ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$. We conduct some simulations to investigate the influence of the distribution of the design matrix on this term when $n=1000$ and $p=200$. The simulation results are presented in Table \[table1\].
$N(0,1)$ $G(2,2)$ $U(0,1)$ $F(1,2)$ $exp(N(5,3))/100$ ------------------------------------------- ---------- ---------- ---------- ---------- ------------------- ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$ 640.3 640.7 640.2 712.5 708.3 : The value of ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$ corresponding to different design distributions[]{data-label="table1"} Table \[table1\] suggests that even when the entries of the design matrix are drawn from a common distribution, the expectation of the statistic may deviate far from that of the normal case, which leads to an incorrect test decision. Moreover, even in the normal case, our result is more accurate since we do not use any approximation for the mean of the statistic $T$. Let’s take an example to explain why this test works. For convenience, suppose that $\varepsilon_1$ obeys the normal distribution. From the calculation in Section \[exp\] we know that the expectation of the statistic ${\mathbf}T$ defined in (\[a2\]) can be represented as $${{\rm E}}{\mathbf}T=\frac{3n\sum_{i=1}^np_{ii}^2\sigma_i^4}{(\sum_{i=1}^np_{ii}\sigma_i^2)^2}-1+o(1).$$ Now assume that $p_{ii}=\frac{n-p}{n}$ for all $i=1,\cdots,n$. Moreover, without loss of generality, suppose that $\sigma_1=\cdots=\sigma_n=1$ under $H_0$, so that we get ${{\rm E}}{\mathbf}T\to 2$ as $n \to \infty$. However, when $\sigma_1=\cdots=\sigma_{[n/2]}=1$ and $\sigma_{[n/2]+1}=\cdots=\sigma_{n}=2$, one may obtain ${{\rm E}}{\mathbf}T\to 3.08$ as $n\to \infty$. Since ${\rm Var}({\mathbf}T)=O(n^{-1})$, this ensures a high power as long as $n$ is large enough. Some simulation results ----------------------- We next conduct some simulations to investigate the performance of our test statistic. First, we consider the case where the random errors obey the normal distribution. Table \[table2\] shows the empirical size compared with Li and Yao’s result in [@li2015homoscedasticity] under four different design distributions.
We use $``{\rm CVT}"$ and $``{\rm FCVT}"$ to represent their test and our test respectively. The entries of the design matrices are i.i.d. random samples generated from $N(0,1)$, $t(1)$ ($t$ distribution with 1 degree of freedom), $F(3,2)$ ($F$ distribution with parameters 3 and 2) and the log-normal distribution respectively. The sample size $n$ is 512 and the dimension of the covariates varies from 4 to 384. We also follow [@dette1998testing] and consider the following two models: Model 1 : $y_i={\mathbf}x_i{\mathbf}\beta+{\mathbf}\varepsilon_i(1+{\mathbf}x_i {\mathbf}h),\ \ \ \ \ i=1,2,\cdots,n$,\ where ${\mathbf}h=(1,{\mathbf}0_{(p-1)})$, Model 2 : $y_i={\mathbf}x_i{\mathbf}\beta+{\mathbf}\varepsilon_i(1+{\mathbf}x_i {\mathbf}h),\ \ \ \ \ i=1,2,\cdots,n $\ where ${\mathbf}h=({\mathbf}1_{(p/2)},{\mathbf}0_{(p/2)})$. Tables \[table3\] and \[table4\] show the empirical power compared with Li and Yao’s results under the four different regressor distributions mentioned above. Next, we consider the case where the random errors obey a two-point distribution; specifically, we suppose $P(\varepsilon_1=-1)=P(\varepsilon_1=1)=1/2$. Since Li and Yao’s result is inapplicable in this situation, Table \[table5\] only shows the empirical size and the empirical power under Model 2 of our test under the four different regressor distributions mentioned above. The simulation results show that when $p/n\to y\in[0,1)$ as $n\to \infty$, our test maintains good size and power under all of the regressor distributions considered.
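As a minimal sketch of this simulation setup (assuming a standard normal design and errors; the constants are for illustration and are not those of the tables), the statistic ${\mathbf}T$ of (\[a2\]) and the quantity ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$ can be computed as follows:

```python
import numpy as np

def cv_statistic(resid):
    """The statistic T of (a2): squared deviations of the squared residuals,
    normalized by (1/n) times the squared sum of the squared residuals."""
    e2 = resid ** 2
    return np.sum((e2 - e2.mean()) ** 2) / (e2.sum() ** 2 / e2.size)

rng = np.random.default_rng(1)
n, p = 512, 64
X = rng.standard_normal((n, p))
P = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # I - X(X'X)^{-1}X'

eps = rng.standard_normal(n)
t_h0 = cv_statistic(P @ eps)                        # homoscedastic errors

h = np.concatenate([np.ones(p // 2), np.zeros(p - p // 2)])  # Model 2 direction
t_h1 = cv_statistic(P @ (eps * (1 + X @ h)))        # heteroscedastic errors

tr_pp = np.sum(np.diag(P) ** 2)                     # tr(P o P), drives E T under H0
print(t_h0, t_h1, tr_pp)
```

Under $H_0$ the value `t_h0` concentrates around the mean $a$ of Theorem \[th1\], while under Model 2 the value `t_h1` is typically far larger, which is what drives the power seen in Tables \[table3\]-\[table5\]. Note also that ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)=\sum_i p_{ii}^2$ always lies between $(n-p)^2/n$ and $n-p$, since $\sum_i p_{ii}=n-p$ and $0\leq p_{ii}\leq 1$.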
N(0,1) t(1) $F(3,2)$ $e^{(N(5,3))}$ ----- -------- -------- -------- -------- ---------- -------- ---------------- -------- -- -- p FCVT CVT FCVT CVT FCVT CVT FCVT CVT 4 0.0582 0.0531 0.0600 0.0603 0.0594 0.0597 0.0590 0.0594 16 0.0621 0.0567 0.0585 0.0805 0.0585 0.0824 0.0595 0.0803 64 0.0574 0.0515 0.0605 0.2245 0.0586 0.2312 0.0578 0.2348 128 0.0597 0.0551 0.0597 0.5586 0.0568 0.5779 0.0590 0.5934 256 0.0551 0.0515 0.0620 0.9868 0.0576 0.9908 0.0595 0.9933 384 0.0580 0.0556 0.0595 1.0000 0.0600 1.0000 0.0600 1.0000 : empirical size under different distributions[]{data-label="table2"} N(0,1) t(1) $F(3,2)$ $e^{(N(5,3))}$ ----- -------- -------- -------- -------- ---------- -------- ---------------- -------- -- -- p FCVT CVT FCVT CVT FCVT CVT FCVT CVT 4 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 16 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 64 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 128 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 256 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 384 0.8113 0.8072 0.9875 1.0000 0.9876 1.0000 0.9905 1.0000 : empirical power under model 1[]{data-label="table3"} N(0,1) t(1) $F(3,2)$ $e^{(N(5,3))}$ ----- -------- -------- -------- -------- ---------- -------- ---------------- -------- -- -- p FCVT CVT FCVT CVT FCVT CVT FCVT CVT 4 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 16 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 64 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 128 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 256 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 384 0.9066 0.9034 0.9799 1.0000 0.9445 1.0000 0.8883 1.0000 : empirical power under model 2[]{data-label="table4"} N(0,1) t(1) $F(3,2)$ $e^{(N(5,3))}$ ----- -------- -------- -------- -------- ---------- -------- ---------------- -------- -- -- p Size Power Size Power Size Power Size Power 4 0.0695 1.0000 0.0726 1.0000 0.0726 1.0000 0.0664 1.0000 16 0.0695 
1.0000 0.0638 1.0000 0.0706 1.0000 0.0556 1.0000 64 0.0646 1.0000 0.0606 1.0000 0.0649 1.0000 0.0622 1.0000 128 0.0617 1.0000 0.0705 1.0000 0.0597 1.0000 0.0630 1.0000 256 0.0684 1.0000 0.0685 1.0000 0.0608 1.0000 0.0649 1.0000 384 0.0610 0.8529 0.0748 1.0000 0.0758 1.0000 0.0742 1.0000 : empirical size and power under different distributions[]{data-label="table5"} Two Real Data Analyses ---------------------- ### The Death Rate Data Set In [@mcdonald1973instabilities], the authors fitted a multiple linear regression of the total age-adjusted mortality rate on 15 other variables (the average annual precipitation, the average January temperature, the average July temperature, the size of the population older than 65, the number of members per household, the number of years of schooling for persons over 22, the number of households with fully equipped kitchens, the population per square mile, the size of the nonwhite population, the number of office workers, the number of families with an income less than \$3000, the hydrocarbon pollution index, the nitric oxide pollution index, the sulfur dioxide pollution index and the degree of atmospheric moisture). The number of observations is 60. To investigate whether the homoscedasticity assumption in this model is justified, we applied our test and obtained a p-value of 0.4994, which strongly supports the assumption of constant variability in this model, since we use a one-sided test. The data set is available at <http://people.sc.fsu.edu/~jburkardt/datasets/regression/regression.html>. ### The 30-Year Conventional Mortgage Rate Data Set The 30-Year Conventional Mortgage Rate data [@Mortgage] contain weekly economic data for the USA from 01/04/1980 to 02/04/2000 (1049 samples). The goal is to predict the 30-Year Conventional Mortgage Rate from the other 15 features. We fitted a multiple linear regression to this data set and obtained a good fit.
The adjusted R-squared is 0.9986 and the p-value of the overall F-test is 0. Our homoscedasticity test reported a p-value of 0.4439. Proof Of The Main Theorem ========================= This section proves the main theorem. The first step is to establish the asymptotic normality of ${\mathbf}T_1$, ${\mathbf}T_2$ and $\alpha{\mathbf}T_1+\beta{\mathbf}T_2$ with $\alpha^2+\beta^2 \neq 0$ by the moment convergence theorem. Next we calculate the expectations, variances and covariance of the statistics ${\mathbf}T_1=\sum_{i=1}^n\hat{\varepsilon_i}^4$ and ${\mathbf}T_2=\frac{1}{n}{\left(}\sum_{i=1}^n\hat{\varepsilon_i}^2{\right)}^2$. The main theorem then follows by the delta method. Note that without loss of generality, under $H_0$, we can assume that $\sigma=1$. The asymptotic normality of the statistics. {#clt} ------------------------------------------- We start with a definition from graph theory. A graph ${\mathbf}G={\left(}{\mathbf}V,{\mathbf}E,{\mathbf}F{\right)}$ is called two-edge connected if, after removing any single edge from ${\mathbf}G$, the resulting subgraph is still connected. The next lemma is a fundamental theorem for Graph-Associated Multiple Matrices; we state it without proof. For the details of this theorem, one can refer to Section A.4.2 in [@bai2010spectral]. \[lm2\] Suppose that ${\mathbf}G={\left(}{\mathbf}V,{\mathbf}E, {\mathbf}F{\right)}$ is a two-edge connected graph with $t$ vertices and $k$ edges. Each vertex $i$ corresponds to an integer $m_i \geq 2$ and each edge $e_j$ corresponds to a matrix ${\mathbf}T^{(j)}={\left(}t_{\alpha,\beta}^{(j)}{\right)},\ j=1,\cdots,k$, with consistent dimensions, that is, if $F(e_j)=(f_i(e_j),f_e(e_j))=(g,h),$ then the matrix ${\mathbf}T^{{\left(}j{\right)}}$ has dimensions $m_g\times m_h$.
Define ${\mathbf}v=(v_1,v_2,\cdots,v_t)$ and $$\begin{aligned} T'=\sum_{{\mathbf}v}\prod_{j=1}^kt_{v_{f_i(e_j)},v_{f_e(e_j)}}^{(j)}, \end{aligned}$$ where the summation $\sum_{{\mathbf}v}$ is taken for $v_i=1,2,\cdots, m_i, \ i=1,2,\cdots,t.$ Then for any $i\leq t$, we have $$|T'|\leq m_i\prod_{j=1}^k\|{\mathbf}T^{(j)}\|.$$ Let $\mathcal{T}=({\mathbf}T^{(1)},\cdots,{\mathbf}T^{(k)})$ and define $G(\mathcal{T})=(G,\mathcal{T})$ as a Graph-Associated Multiple Matrices. Write $T'=sum(G(\mathcal{T}))$, which is referred to as the summation of the corresponding Graph-Associated Multiple Matrices. We also need the following truncation lemma. \[lm3\] Suppose that $\xi_n={\left(}\xi_1,\cdots,\xi_n{\right)}$ is an i.i.d. sequence with ${{\rm E}}|\xi_1|^r < \infty$. Then there exists a sequence of positive numbers $(\eta_n)$ satisfying $\eta_n \to 0$ as $n \to \infty$ and $$P(\xi_n\neq\widehat \xi_n,\ {\rm i.o.})=0,$$ where $\widehat \xi_n={\left(}\xi_1 I(|\xi_1|\leq \eta_n n^{1/r}),\cdots,\xi_n I(|\xi_n|\leq \eta_n n^{1/r}){\right)}.$ Moreover, the convergence rate of $\eta_n$ can be slower than any preassigned rate. ${{\rm E}}|\xi_1|^r < \infty$ implies that for any $\epsilon>0$, we have $$\sum_{m=1}^\infty 2^{2m}P(|\xi_1|\geq\epsilon2^{2m/r})< \infty.$$ Then there exists a sequence of positive numbers $\{\epsilon_m\}$ such that $$\sum_{m=1}^\infty 2^{2m}P(|\xi_1|\geq\epsilon_m2^{2m/r})< \infty,$$ and $\epsilon_m \to 0$ as $m \to \infty$. Again, the convergence rate of $\epsilon_m$ can be slower than any preassigned rate.
Now, define $\eta_n=2^{1/r}\epsilon_m$ for $2^{2m-1}\leq n\leq 2^{2m}$. Then we have $$\begin{aligned} P(\xi_n\neq\widehat \xi_n,\ {\rm i.o.})\leq &\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{2^{2m-1}\leq n\leq 2^{2m}}\bigcup_{i=1}^n{\left(}|\xi_i|\geq\eta_nn^{1/r}{\right)}\Big)\\\notag \leq&\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{2^{2m-1}\leq n\leq 2^{2m}}\bigcup_{i=1}^{2^{2m}}{\left(}|\xi_i|\geq\epsilon_m 2^{1/r}2^{\frac{{\left(}2m-1{\right)}}{r}}{\right)}\Big)\\\notag \leq&\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{2^{2m-1}\leq n\leq 2^{2m}}\bigcup_{i=1}^{2^{2m}}{\left(}|\xi_i|\geq\epsilon_m 2^{{2m}/{r}}{\right)}\Big)\\\notag =&\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{i=1}^{2^{2m}}{\left(}|\xi_i|\geq\epsilon_m 2^{{2m}/{r}}{\right)}\Big)\\\notag \leq&\lim_{k\to \infty}\sum_{m=k}^{\infty}2^{2m}P\Big(|\xi_1|\geq\epsilon_m 2^{{2m}/{r}}\Big)=0.\end{aligned}$$ We note that the truncation will neither change the symmetry of the distribution of $\xi_1$ nor change the order of the variance of ${\mathbf}T$. We now come to the proof of the asymptotic normality of the statistics. We give below the proof of the asymptotic normality of $\alpha{\mathbf}T_1+\beta{\mathbf}T_2$, where $\alpha^2+\beta^2\neq 0$. The asymptotic normality of either ${\mathbf}T_1$ or ${\mathbf}T_2$ follows by setting $\beta=0$ or $\alpha=0$ respectively. Denote $\mu_1={{\rm E}}{\mathbf}T_1={{\rm E}}\sum_{i=1}^n\hat \varepsilon_i^4$, $\mu_2={{\rm E}}{\mathbf}T_2={{\rm E}}n^{-1}{\left(}\sum_{i=1}^n\hat \varepsilon_i^2{\right)}^2$ and $S=\sqrt{{\rm {Var}}{\left(}\alpha {\mathbf}T_1+\beta{\mathbf}T_2{\right)}}$. The remainder of this subsection is devoted to calculating the moments of ${\mathbf}T_0=\frac{\alpha {\mathbf}T_1+\beta {\mathbf}T_2-{\left(}\alpha \mu_1+\beta \mu_2{\right)}}{S}=\frac{\alpha {\left(}{\mathbf}T_1-\mu_1{\right)}+\beta {\left(}{\mathbf}T_2-\mu_2{\right)}}{S}$. Note that by Lemma \[lm3\], we can assume that $\xi_1$ is truncated at $\eta_n n^{1/8}$.
Then we have for large enough $n$ and $l>4$, $$M_{2l}\leq \eta_n^{2l-8} M_8\, n^{(2l-8)/8}.$$ Consider the random variable $$\begin{aligned} &\alpha T_1+\beta T_2=\alpha \sum_{i=1}^n{\left(}\sum_{j=1}^n a_{ij}\xi_j{\right)}^4+(n^{-1})\beta {\left(}\sum_{i=1}^n{\left(}\sum_{j=1}^n a_{ij}\xi_j{\right)}^2{\right)}^2\\\notag =&\alpha \sum_{i,j_1,\cdots,j_4} a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}+(n^{-1})\beta\sum_{i_1,i_2,j_1,\cdots,j_4} a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}\\\notag =&\alpha \sum_{i,j_1,\cdots,j_4} a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}+(n^{-1})\beta\sum_{u_1,u_2,v_1,\cdots,v_4} a_{u_1,v_1}a_{u_1,v_2}a_{u_2,v_3}a_{u_2,v_4}\xi_{v_1}\xi_{v_2}\xi_{v_3}\xi_{v_4}.\end{aligned}$$ We next construct two types of graphs for the last two sums. For given integers $i,j_1,j_2,j_3,j_4\in [1,n]$, draw a graph as follows: draw two parallel lines, called the $I$-line and the $J$-line respectively; plot $i$ on the $I$-line and $j_1,j_2,j_3$ and $j_4$ on the $J$-line; finally, draw four edges from $i$ to $j_t$, $t=1,2,3,4$, each marked with $\textcircled{1}$. Each edge $(i,j_t)$ represents the random variable $a_{i,j_t}\xi_{j_t}$ and the graph $G_1(i,{\mathbf}j)$ represents $\prod_{\rho=1}^{4}a_{i,j_\rho}\xi_{j_\rho}$. For any given integer $k_1$, we draw $k_1$ such graphs between the $I$-line and the $J$-line, denoted by $G_1(\tau)=G_1(i_\tau,{\mathbf}j_\tau)$, and write $G_{(1,k_1)}=\cup_{\tau} G_1(\tau)$. For given integers $u_1,u_2,v_1,v_2,v_3,v_4\in [1,n]$, draw a graph as follows: plot $u_1$ and $u_2$ on the $I$-line and $v_1,v_2,v_3$ and $v_4$ on the $J$-line; then draw two edges from $u_1$ to $v_1$ and $v_2$ and two edges from $u_2$ to $v_3$ and $v_4$, all marked with $\textcircled{2}$.
Each edge $(u_l,v_t)$ represents the random variable $a_{u_l,v_t}\xi_{v_t}$ and the graph $G_2({\mathbf}u,{\mathbf}v)$ represents $a_{u_1,v_1}a_{u_1,v_2}a_{u_2,v_3}a_{u_2,v_4}\xi_{v_1}\xi_{v_2}\xi_{v_3}\xi_{v_4}$. For any given integer $k_2$, we draw $k_2$ such graphs between the $I$-line and the $J$-line, denoted by $G_2(\psi)=G_2({\mathbf}u_\psi,{\mathbf}v_\psi)$, and write $G_{(2,k_2)}=\cup_{\psi} G_2(\psi)$, $G_{k}=G_{(1,k_1)}\cup G_{(2,k_2)}$. Then the $k$-th order moment of ${\mathbf}T_0$ is $$\begin{aligned} M_k'=&S^{-k}\sum_{k_1+k_2=k}{k\choose k_1}\alpha^{k_1}\beta^{k_2}\sum_{\substack{\{i_1,{\mathbf}j_1,\cdots,i_{k_1},{\mathbf}j_{k_1}\} \\ \{{\mathbf}u_1,{\mathbf}v_1,\cdots,{\mathbf}u_{k_2},{\mathbf}v_{k_2}\}}}\\ &n^{-k_2}{{\rm E}}\Big[\prod_{\tau=1}^{k_1}[G_1(i_\tau,{\mathbf}j_\tau)-{{\rm E}}(G_1(i_\tau,{\mathbf}j_\tau))]\prod_{\psi=1}^{k_2}[G_2({\mathbf}u_\psi,{\mathbf}v_\psi)-{{\rm E}}(G_2({\mathbf}u_\psi,{\mathbf}v_\psi))]\Big].\end{aligned}$$ We first consider a graph $G_k$ for the given set of integers $k_1,k_2$, $i_1, {\mathbf}j_1,\cdots,i_{k_1},{\mathbf}j_{k_1}$ and ${\mathbf}u_1,{\mathbf}v_1,\cdots,{\mathbf}u_{k_2},{\mathbf}v_{k_2}$. We have the following simple observations. First, if $G_k$ contains a $j$ vertex of odd degree, then the term is zero because the odd-order moments of the random variable $\xi_j$ are 0. Second, if there is a subgraph $G_1(\tau)$ or $G_2(\psi)$ that does not have a $j$ vertex coinciding with any $j$ vertices of the other subgraphs, the term is also 0 because $G_1(\tau)$ or $G_2(\psi)$ is independent of the remaining subgraphs. Based on these two observations, we split the summation of non-zero terms in $M_k'$ into a sum of partial sums according to isomorphic classes (two graphs are called isomorphic if one can be obtained from the other by a permutation of $(1,2,\cdots,n)$, and all the graphs are classified into isomorphic classes; for convenience, we choose one graph from each isomorphic class as the canonical graph of that class).
That is, we may write $$\begin{aligned} M_k'=&S^{-k}\sum_{k_1+k_2=k}{k\choose k_1}\alpha^{k_1}\beta^{k_2}n^{-k_2}\sum_{G_k'}M_{G_k'},\end{aligned}$$ where $$M_{G_k'}=\sum_{G_k\in G_k'}{{\rm E}}G_k.$$ Here $G_k'$ is a canonical graph and $\sum_{G_k\in G_k'}$ denotes the summation over all graphs $G_k$ isomorphic to $G_k'$. In what follows, we need the fact that the variances of ${\mathbf}T_1$ and ${\mathbf}T_2$ and their covariance are all of order $n$. This will be proved in Section \[var\]. Since all of the vertices in the non-zero canonical graphs have even degrees, every connected component of such a graph is a cycle, and hence a two-edge connected graph. For a given isomorphic class with canonical graph $G_k'$, denote by $c_{G_k'}$ the number of connected components of the canonical graph $G_k'$. For every connected component $G_0$ that has $l$ non-coincident $J$-vertices with degrees $d_1,\cdots,d_l$, let $d'=\max\{d_1-8,\cdots,d_l-8,0\}$, denote $\mathcal{T}=(\underbrace{{\mathbf}A,\cdots,{\mathbf}A}_{\sum_{t=1}^l d_t})$ and define $G_0(\mathcal{T})=(G_0,\mathcal{T})$ as a Graph-Associated Multiple Matrices. By Lemma \[lm2\] we then conclude that the contribution of this canonical class is at most ${\left(}\prod_{t=1}^l M_{d_t}{\right)}sum(G(\mathcal{T}))=O(\eta_n^{d'}n\sqrt n^{d'/4})$. Noticing that $\eta_n \to 0$, if $c_{G_k'}$ is less than $k/2+k_2$, then the contribution of this canonical class is negligible because $S^k\asymp n^{k/2}$ and $M_{G_k'}$ in $M_k'$ carries a factor of $n^{-k_2}$. However, one can see that $c_{G_k'}$ is at most $[k/2]+k_2$ for every ${G_k'}$ by the argument above, noticing that every $G_2(\bullet)$ has two $i$ vertices. Therefore, $M_k'\to 0$ if $k$ is odd. Now we consider the limit of $M_k'$ when $k=2s$.
We shall say that [*the given set of integers $i_1, {\mathbf}j_1,\cdots,i_{k_1},{\mathbf}j_{k_1}$ and ${\mathbf}u_1,{\mathbf}v_1,\cdots,{\mathbf}u_{k_2},{\mathbf}v_{k_2}$ (or equivalently, the graph $G_k$) satisfies the condition $c(s_1,s_2,s_3)$ if in the graph $G_k$ plotted by this set of integers there are $2s_1$ $G_1{{\left(}\bullet{\right)}}$ connected pairwise, $2s_2$ $G_2{{\left(}\bullet{\right)}}$ connected pairwise and $s_3$ $G_1{{\left(}\bullet{\right)}}$ connected with $s_3$ $G_2{{\left(}\bullet{\right)}}$, where $2s_1+s_3=k_1$, $2s_2+s_3=k_2$ and $s_1+s_2+s_3=s$, say $G_1{{\left(}2\tau-1{\right)}}$ connects $G_1{{\left(}2\tau{\right)}}$, $\tau=1,2,\cdots,s_1$, $G_2{{\left(}2\psi-1{\right)}}$ connects $G_2{{\left(}2\psi{\right)}}$, $\psi=1,2,\cdots,s_2$ and $G_1{{\left(}2s_1+\varphi{\right)}}$ connects $G_2{{\left(}2s_2+\varphi{\right)}}$, $\varphi=1,2,\cdots,s_3$, and there are no other connections between subgraphs.*]{} Then, for any $G_k$ satisfying $c(s_1,s_2,s_3)$, we have $$\begin{aligned} {{\rm E}}G_k=&\prod_{\tau=1}^{s_1}{{\rm E}}[(G_1{{\left(}2\tau-1{\right)}}-{{\rm E}}(G_1{{\left(}2\tau-1{\right)}}))(G_1{{\left(}2\tau{\right)}}-{{\rm E}}(G_1{\left(}{2\tau}{\right)}))]\times\\\notag &\prod_{\psi=1}^{s_2}{{\rm E}}[(G_2{{\left(}2\psi-1{\right)}}-{{\rm E}}(G_2{{\left(}2\psi-1{\right)}}))(G_2{{\left(}2\psi{\right)}}-{{\rm E}}(G_2{\left(}{2\psi}{\right)}))]\times\\\notag &\prod_{\varphi=1}^{s_3}{{\rm E}}[(G_1{{\left(}2s_1+\varphi{\right)}}-{{\rm E}}(G_1{{\left(}2s_1+\varphi{\right)}}))(G_2{{\left(}2s_2+\varphi{\right)}}-{{\rm E}}(G_2{\left(}{2s_2+\varphi}{\right)}))].\end{aligned}$$ Now, we compare $$\begin{aligned} &n^{-k_2}\sum_{G_k\in c(s_1,s_2,s_3)} {{\rm E}}G_k\\\notag =&n^{-k_2}\sum_{G_k\in c(s_1,s_2,s_3)}\prod_{\tau=1}^{s_1}{{\rm E}}[(G_1{{\left(}2\tau-1{\right)}}-{{\rm E}}(G_1{{\left(}2\tau-1{\right)}}))(G_1{{\left(}2\tau{\right)}}-{{\rm E}}(G_1{\left(}{2\tau}{\right)}))]\times\\\notag &\prod_{\psi=1}^{s_2}{{\rm 
E}}[(G_2{{\left(}2\psi-1{\right)}}-{{\rm E}}(G_2{{\left(}2\psi-1{\right)}}))(G_2{{\left(}2\psi{\right)}}-{{\rm E}}(G_2{\left(}{2\psi}{\right)}))]\times \\\notag &\prod_{\varphi=1}^{s_3}{{\rm E}}[(G_1{{\left(}2s_1+\varphi{\right)}}-{{\rm E}}(G_1{{\left(}2s_1+\varphi{\right)}}))(G_2{{\left(}2s_2+\varphi{\right)}}-{{\rm E}}(G_2{\left(}{2s_2+\varphi}{\right)}))], \end{aligned}$$ with $$\begin{aligned} &{\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}^2{\right)}^{s_1}{\left(}{{\rm E}}{\left(}{\mathbf}T_2-\mu_2{\right)}^2{\right)}^{s_2}{\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}{\left(}{\mathbf}T_2-\mu_2{\right)}{\right)}^{s_3}\\\notag =&n^{-k_2}\sum_{G_k}\prod_{\tau=1}^{s_1}{{\rm E}}[(G_1{{\left(}2\tau-1{\right)}}-{{\rm E}}(G_1{{\left(}2\tau-1{\right)}}))(G_1{{\left(}2\tau{\right)}}-{{\rm E}}(G_1{\left(}{2\tau}{\right)}))]\times\\\notag &\prod_{\psi=1}^{s_2}{{\rm E}}[(G_2{{\left(}2\psi-1{\right)}}-{{\rm E}}(G_2{{\left(}2\psi-1{\right)}}))(G_2{{\left(}2\psi{\right)}}-{{\rm E}}(G_2{\left(}{2\psi}{\right)}))]\times \\\notag &\prod_{\varphi=1}^{s_3}{{\rm E}}[(G_1{{\left(}2s_1+\varphi{\right)}}-{{\rm E}}(G_1{{\left(}2s_1+\varphi{\right)}}))(G_2{{\left(}2s_2+\varphi{\right)}}-{{\rm E}}(G_2{\left(}{2s_2+\varphi}{\right)}))],\end{aligned}$$ where $\sum_{G_k\in c(s_1,s_2,s_3)}$ stands for the summation running over all graphs $G_k$ satisfying the condition $c(s_1,s_2,s_3)$. If $G_k$ falls under the two observations mentioned before, then ${{\rm E}}G_k=0$, and the corresponding term appears in neither expression; if $G_k$ satisfies the condition $c(s_1,s_2,s_3)$, then both expressions contain ${{\rm E}}G_k$. Therefore, the second expression contains only additional terms in which $G_k$ has more connections among the subgraphs than required by the condition $c(s_1,s_2,s_3)$.
Therefore, by Lemma \[lm2\], $${\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}^2{\right)}^{s_1}{\left(}{{\rm E}}{\left(}{\mathbf}T_2-\mu_2{\right)}^2{\right)}^{s_2}{\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}{\left(}{\mathbf}T_2-\mu_2{\right)}{\right)}^{s_3}=n^{-k_2}\sum_{G_k\in c(s_1,s_2,s_3)} {{\rm E}}G_k+o(S^k). \label{map1}$$ If $G_k\in G_k'$ with $c_{G_k'}=s+k_2$, for any nonnegative integers $s_1,s_2,s_3$ satisfying $k_1=2s_1+s_3$, $k_2=2s_2+s_3$ and $s_1+s_2+s_3=s$, we have ${k_1\choose s_3}{k_2 \choose s_3}(2s_1-1)!!(2s_2-1)!!s_3!$ ways to pair the subgraphs satisfying the condition $c(s_1,s_2,s_3)$. By (\[map1\]), we then have $$\begin{aligned} &&\sum_{c_{G_k'}=s+k_2}n^{-k_2}{{\rm E}}G_k+o(S^k)\\ &=&\sum_{s_1+s_2+s_3=s\atop 2s_1+s_3=k_1, 2s_2+s_3=k_2}{k_1\choose s_3}{k_2 \choose s_3}(2s_1-1)!!(2s_2-1)!!s_3! (Var({\mathbf}T_1))^{s_1}(Var({\mathbf}T_2))^{s_2}(Cov({\mathbf}T_1,{\mathbf}T_2))^{s_3}\end{aligned}$$ It follows that $$\begin{aligned} M_k'=&S^{-k}\sum_{k_1+k_2=k}{k\choose k_1}\alpha^{k_1}\beta^{k_2}n^{-k_2}\sum_{c_{G_k'}=s+k_2}{{\rm E}}G_k+o(1)\\ =&\Big(S^{-2s}\sum_{k_1=0}^{2s}\sum_{s_3=0}^{\min\{k_1,k_2\}}{2s\choose k_1}{k_1 \choose s_3}{k_2 \choose s_3}{\left(}2s_1-1{\right)}!!{\left(}2s_2-1{\right)}!!s_3!\\ &{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1)\\ =&\Big(S^{-2s}\sum_{s_1+s_2+s_3=s}{2s\choose 2s_1+s_3}{2s_1+s_3 \choose s_3}{2s_2+s_3 \choose s_3}{\left(}2s_1-1{\right)}!!{\left(}2s_2-1{\right)}!!s_3!\\ &{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1)\\ =&\Big(S^{-2s}\sum_{s_1+s_2+s_3=s}\frac{(2s)!(2s_1+s_3)!(2s_2+s_3)!}{{\left(}2s_1+s_3{\right)}!(2s_2+s_3)!s_3!(2s_1)!s_3!(2s_2)!}{\left(}2s_1-1{\right)}!!{\left(}2s_2-1{\right)}!!s_3!\\ 
&{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1)\\ =&\Big(S^{-2s}\sum_{s_1+s_2+s_3=s}(2s-1)!!\frac{s!}{s_1!s_2!s_3!}\\ &{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}2\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1),\end{aligned}$$ which implies that $$M'_{2s}\to (2s-1)!!.$$ Combining the arguments above with the moment convergence theorem, we conclude that $$\frac{{\mathbf}T_1-{{\rm E}}{\mathbf}T_1}{\sqrt{{\rm Var}{\mathbf}T_1}}\stackrel{d}{\rightarrow} {\rm N}{\left(}0,1{\right)},\ \frac{{\mathbf}T_2-{{\rm E}}{\mathbf}T_2}{\sqrt{{\rm Var}{\mathbf}T_2}}\stackrel{d}{\rightarrow} {\rm N}{\left(}0,1{\right)}, \ \frac{ {\left(}\alpha {\mathbf}T_1+\beta {\mathbf}T_2{\right)}-{{\rm E}}{\left(}\alpha {\mathbf}T_1+\beta {\mathbf}T_2{\right)}}{\sqrt{{\rm Var}{\left(}\alpha {\mathbf}T_1+\beta {\mathbf}T_2{\right)}}}\stackrel{d}{\rightarrow} {\rm N}{\left(}0,1{\right)},$$ where $\alpha^2+\beta^2\neq 0.$ Let $$\Sigma=\left( \begin{array}{cc} {\rm {Var}}({\mathbf}T_1) & \rm {Cov}({\mathbf}T_1,{\mathbf}T_2) \\ \rm {Cov}({\mathbf}T_1,{\mathbf}T_2) & \rm {Var}({\mathbf}T_2) \\ \end{array} \right) .$$ We conclude that $\Sigma^{-1/2}{\left(}{\mathbf}T_1-{{\rm E}}{\mathbf}T_1,{\mathbf}T_2-{{\rm E}}{\mathbf}T_2{\right)}'$ is asymptotically a two-dimensional Gaussian vector. The expectation {#exp} --------------- In the following, let ${\mathbf}B={\mathbf}A{\mathbf}A'$.
Recall that $${\mathbf}{T_1}=\sum_{i=1}^n\widehat{\varepsilon_i}^4=\sum_{i=1}^n{\left(}\sum_{j=1}^n a_{i,j}\xi_j{\right)}^4=\sum_{i=1}^n\sum_{j_1,j_2,j_3,j_4}a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4},$$ $${\mathbf}T_2=n^{-1}{\left(}\sum_{i=1}^n{\left(}\sum_{j=1}^n a_{i,j}\xi_j{\right)}^2{\right)}^2 =n^{-1}\sum_{i_1,i_2}\sum_{j_1,j_2,j_3,j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}.$$ Since all odd moments of $\xi_1,\cdots,\xi_n$ are 0, we know that ${{\rm E}}{\mathbf}T_1$ and ${{\rm E}}{\mathbf}T_2$ are only affected by terms whose multiplicities of distinct values in the sequence $(j_1,\cdots,j_4)$ are all even. We need to evaluate the mixed moment ${{\rm E}}{\left(}{\mathbf}T_1^{\gamma}{\mathbf}T_2^{\omega}{\right)}$. To simplify notation, particularly in Section \[var\], we introduce the following notation $$\begin{aligned} &\Omega_{\{\omega_1,\omega_2,\cdots,\omega_s\}}^{{\left(}\gamma_1,\gamma_2,\cdots,\gamma_t{\right)}}[\underbrace{{\left(}\phi_{1,1},\cdots,\phi_{1,s}{\right)},{\left(}\phi_{2,1},\cdots,\phi_{2,s}{\right)}, \cdots,{\left(}\phi_{t,1},\cdots,\phi_{t,s}{\right)}}_{t \ groups}]_0 \\ =&\sum_{i_1,\cdots,i_{t},j_1\neq\cdots\neq j_{s}}\prod_{\tau=1,\cdots, t}\prod_{\rho=1,\cdots, s} a_{i_{\tau},j_{\rho}}^{\phi_{\tau,\rho}},\end{aligned}$$ where $i_1,\cdots,i_{t}$ and $j_1,\cdots,j_{s}$ run over $1,\cdots,n$ and are subject to the restriction that $j_1,\cdots,j_s$ are distinct; $\sum_{l=1}^t \gamma_l=\sum_{l=1}^s \omega_l=\theta,$ and for any $k=1,\cdots, s$, $\sum_{l=1}^t \phi_{l,k}=\omega_k$. Intuitively, $t$ is the number of distinct $i$-indices and $s$ that of distinct $j$’s; $\gamma_\tau$ is the multiplicity of the index $i_\tau$ and $\omega_\rho=\sum_{l=1}^t\phi_{l,\rho}$ that of $j_\rho$; $\phi_{\tau,\rho}$ is the multiplicity of the factor $a_{i_\tau,j_\rho}$; and $\theta=4(\gamma+\omega)$.
Define $$\begin{aligned} &\Omega_{\{\omega_1,\omega_2,\cdots,\omega_s\}}^{{\left(}\gamma_1,\gamma_2,\cdots,\gamma_t{\right)}}[\underbrace{{\left(}\phi_{1,1},\cdots,\phi_{1,s}{\right)},{\left(}\phi_{2,1},\cdots,\phi_{2,s}{\right)}, \cdots,{\left(}\phi_{t,1},\cdots,\phi_{t,s}{\right)}}_{t \ groups}] \\ =&\sum_{i_1,\cdots,i_{t},j_1,\cdots, j_{s}}\prod_{\tau=1,\cdots, t}\prod_{\rho=1,\cdots, s} a_{i_{\tau},j_{\rho}}^{\phi_{\tau,\rho}}.\end{aligned}$$ The definition above is similar to that of $$\Omega_{\{\omega_1,\omega_2,\cdots,\omega_s\}}^{{\left(}\gamma_1,\gamma_2,\cdots,\gamma_t{\right)}}[\underbrace{{\left(}\phi_{1,1},\cdots,\phi_{1,s}{\right)},{\left(}\phi_{2,1},\cdots,\phi_{2,s}{\right)}, \cdots,{\left(}\phi_{t,1},\cdots,\phi_{t,s}{\right)}}_{t \ groups}]_0$$ without the restriction that the indices $j_1,\cdots,j_s$ are distinct from each other. To illustrate this notation, we give some examples. $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,2,0,0),(0,0,2,2)]=\sum_{i_1,i_2,j_1,\cdots,j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_3}^2a_{i_2,j_4}^2,\end{aligned}$$ $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]=\sum_{i_1,i_2,j_1,\cdots,j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}^2,\end{aligned}$$ $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,2,0,0),(0,0,2,2)]_0=\sum_{i_1,i_2,j_1\neq\cdots\neq j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_3}^2a_{i_2,j_4}^2,\end{aligned}$$ $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]_0=\sum_{i_1,i_2,j_1\neq\cdots\neq j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}^2.\end{aligned}$$ We further use $M_k$ to denote the $k$-th order moment of the error random variable. We also use ${\mathbf}C_{n}^k$ to denote the combinatorial number $n \choose k$.
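As an illustrative sketch (not from the paper), the first example can be checked numerically against its closed form $\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,2,0,0),(0,0,2,2)]={\left(}\sum_i{\left(}\sum_j a_{ij}^2{\right)}^2{\right)}^2$, which holds because the unrestricted sum factorizes over $(i_1,j_1,j_2)$ and $(i_2,j_3,j_4)$; the matrix below is an arbitrary small example:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))

# Brute-force evaluation of Omega_{2,2,2,2}^{(4,4)}[(2,2,0,0),(0,0,2,2)]
omega = 0.0
for i1, i2 in itertools.product(range(n), repeat=2):
    for j in itertools.product(range(n), repeat=4):
        omega += (A[i1, j[0]] ** 2 * A[i1, j[1]] ** 2
                  * A[i2, j[2]] ** 2 * A[i2, j[3]] ** 2)

# Closed form: the sum splits into two identical (i, j1, j2) factors
closed = np.sum(np.sum(A ** 2, axis=1) ** 2) ** 2

print(np.allclose(omega, closed))  # True
```

The restricted variant $[\cdot]_0$ is obtained from the same loop by skipping tuples in which the four $j$-indices are not all distinct.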
We then obtain $$\begin{aligned} \label{e1} &{{\rm E}}{\mathbf}T_1={{\rm E}}\sum_{i=1}^n\sum_{j_1,j_2,j_3,j_4}a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}\\\notag =&M_4\Omega_{\{4\}}^{(4)}+M_2^2{\Omega_{\{2,2\}}^{(4)}}_0=M_4\Omega_{\{4\}}^{(4)}+\frac{{\mathbf}C_4^2}{2!}{\Omega_{\{2,2\}}^{(4)}}[{\left(}2,0{\right)},{\left(}0,2{\right)}]_0\\\notag =&M_4\Omega_{\{4\}}^{(4)}+\frac{{\mathbf}C_4^2}{2!}{\left(}{\Omega_{\{2,2\}}^{(4)}}[{\left(}2,0{\right)},{\left(}0,2{\right)}]-\Omega_{\{4\}}^{(4)}{\right)}\\\notag =&\frac{{\mathbf}C_4^2}{2!}{\Omega_{\{2,2\}}^{(4)}}[{\left(}2,0{\right)},{\left(}0,2{\right)}]+\nu_4 \Omega_{\{4\}}^{(4)} =3\sum_i{\left(}\sum_{j}a_{i,j}^2{\right)}^2+\nu_4\sum_{ij}a_{ij}^4\\\notag =&3\sum_ib_{i,i}^2+\nu_4\sum_{ij}a_{ij}^4=3{{\rm tr}}{\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}+\nu_4{{\rm tr}}({\mathbf}A{\circ}{\mathbf}A)'({\mathbf}A{\circ}{\mathbf}A),\end{aligned}$$ where $\nu_4=M_4-3$ and $$\begin{aligned} \label{e2} &{{\rm E}}{\mathbf}T_2=n^{-1}{{\rm E}}\sum_{i_1,i_2}\sum_{j_1,j_2,j_3,j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}\\\notag =&n^{-1}{\left(}M_4\Omega_{\{4\}}^{(2,2)}+M_2^2{\Omega_{\{2,2\}}^{(2,2)}}_0{\right)}\\\notag =&n^{-1}{\left(}M_4\Omega_{\{4\}}^{(2,2)}+{\left(}{\Omega_{\{2,2\}}^{(2,2)}}[{\left(}2,0{\right)},{\left(}0,2{\right)}]_0+2\Omega_{\{2,2\}}^{(2,2)}[{\left(}1,1{\right)},{\left(}1,1{\right)}]_0{\right)}{\right)}\\\notag =&n^{-1}{\left(}M_4\Omega_{\{4\}}^{(2,2)}+{\left(}{\Omega_{\{2,2\}}^{(2,2)}}[{\left(}2,0{\right)},{\left(}0,2{\right)}]+2\Omega_{\{2,2\}}^{(2,2)}[{\left(}1,1{\right)},{\left(}1,1{\right)}]{\right)}-3\Omega_{\{4\}}^{(2,2)}{\right)}\\\notag =&n^{-1}{\left(}\sum_{i_1,i_2,j_1,j_2}a_{i_1,j_1}^2a_{i_2,j_2}^2+2\sum_{i_1,i_2,j_1,j_2}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_1}a_{i_2,j_2} +\nu_4\sum_{i_1,i_2,j}a_{i_1j}^2a_{i_2j}^2{\right)}\\\notag 
=&n^{-1}{\left(}{\left(}\sum_{i,j}a_{i,j}^2{\right)}^2+2\sum_{i_1,i_2}{\left(}\sum_{j}a_{i_1,j}a_{i_2,j}{\right)}^2+\nu_4\sum_{i_1,i_2,j}a_{i_1j}^2a_{i_2j}^2{\right)}\\\notag =&n^{-1}{\left(}{\left(}\sum_{i,j}a_{i,j}^2{\right)}^2+2\sum_{i_1,i_2}b_{i_1,i_2}^2+\nu_4\sum_{j=1}^nb_{jj}^2{\right)}\\ \notag =&n^{-1}{\left(}{\left(}{{\rm tr}}{\mathbf}B{\right)}^2+2{{\rm tr}}{\mathbf}B^2+\nu_4{{\rm tr}}({\mathbf}B{\circ}{\mathbf}B){\right)}.\end{aligned}$$ The variances and covariance {#var} ---------------------------- We are now in a position to calculate the variances of ${\mathbf}T_1$, ${\mathbf}T_2$ and their covariance. First, we have $$\begin{aligned} \label{t10} &{\rm Var}( {\mathbf}T_1)={{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^4-{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^4{\right)}{\right)}^2\\\notag =&\sum_{i_1,i_2,j_1,\cdots,j_8}[{{\rm E}}G(i_1,{\mathbf}j_1)G(i_2,{\mathbf}j_2)-{{\rm E}}G(i_1,{\mathbf}j_1){{\rm E}}G(i_2,{\mathbf}j_2)]\\\notag =&\Bigg(\Omega_{\{8\}}^{(4,4)}+\Omega_{\{2,6\}_0}^{(4,4)}+\Omega_{\{4,4\}_0}^{(4,4)}+\Omega_{\{2,2,4\}_0}^{(4,4)}+\Omega_{\{2,2,2,2\}_0}^{(4,4)}\Bigg),\\\notag\end{aligned}$$ where the first term comes from the graphs in which all 8 $J$-vertices coincide; the second term comes from the graphs in which 6 $J$-vertices coincide and the other two coincide, and so on. 
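The trace identities used in the evaluations of ${{\rm E}}{\mathbf}T_1$ and ${{\rm E}}{\mathbf}T_2$ above, namely $\sum_i{\left(}\sum_j a_{i,j}^2{\right)}^2=\sum_i b_{i,i}^2={{\rm tr}}{\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}$ and $\sum_{i_1,i_2}{\left(}\sum_j a_{i_1,j}a_{i_2,j}{\right)}^2={{\rm tr}}{\mathbf}B^2$ with ${\mathbf}B={\mathbf}A{\mathbf}A'$, can be spot-checked numerically. The sketch below is our own Python illustration with a made-up $2\times 2$ matrix.

```python
def gram(A):
    """B = A A' for a square matrix given as a list of rows."""
    n = len(A)
    return [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = gram(A)
n = len(A)

# sum_i (sum_j a_ij^2)^2  ==  sum_i b_ii^2  ( = tr(B o B) )
lhs1 = sum(sum(a * a for a in row) ** 2 for row in A)
rhs1 = sum(B[i][i] ** 2 for i in range(n))

# sum_{i1,i2} (sum_j a_{i1,j} a_{i2,j})^2  ==  tr B^2
lhs2 = sum(sum(A[i1][k] * A[i2][k] for k in range(n)) ** 2
           for i1 in range(n) for i2 in range(n))
rhs2 = sum(B[i][j] * B[j][i] for i in range(n) for j in range(n))

assert abs(lhs1 - rhs1) < 1e-9 and abs(lhs2 - rhs2) < 1e-9
```

Both identities are exact (no expectation involved), so they hold for any real matrix, not just the $2\times 2$ example.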
Because $G(i_1,{\mathbf}j_1)$ and $G(i_2,{\mathbf}j_2)$ have to be connected to each other, we have $$\begin{aligned} \label{t11} &\Omega_{\{2,2,2,2\}_0}^{(4,4)}\\\notag=&\frac{{\mathbf}C_4^2{\mathbf}C_4^2{\mathbf}C_2^1{\mathbf}C_2^1}{2!}\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]_0+{{\mathbf}C_4^1{\mathbf}C_3^1{\mathbf}C_2^1}\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]_0\\\notag =&72\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]-4\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]_0-\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]_0\\\notag &-\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0-2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0-\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &-2\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0-2\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0-\Omega_{\{8\}}^{(4,4)}\Big)\\\notag &+24\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]-6\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0-3\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &-4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0-\Omega_{\{8\}}^{(4,4)}\Big)\\\notag =&72\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]-4\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]-\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\notag &-\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0+\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &+4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0+4\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0+5\Omega_{\{8\}}^{(4,4)}\Big)\\\notag &+24\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]-6\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+3\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &+8\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0+5\Omega_{\{8\}}^{(4,4)}\Big)\\\notag =&72\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]-4\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]-\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\notag &-\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]+\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag 
&+4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]+4\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]-6\Omega_{\{8\}}^{(4,4)}\Big)\\\notag &+24\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]-6\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+3\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag &+8\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]-6\Omega_{\{8\}}^{(4,4)}\Big).\end{aligned}$$ Likewise we have $$\begin{aligned} \label{t12} &{\Omega_{\{2,2,4\}}^{(4,4)}}_0\\\notag =&{{\mathbf}C_2^1{\mathbf}C_4^3{\mathbf}C_4^1{\mathbf}C_3^1}M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]_0\\\notag &+\frac{{\mathbf}C_4^2{\mathbf}C_4^2{\mathbf}C_2^1{\mathbf}C_2^1}{2!}M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0+{{\mathbf}C_4^2{\mathbf}C_4^2}(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]_0\\\notag =&96M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]_0\\\notag &+72M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0+36(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]_0,\\\ \notag =&96M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]\\\notag &+72M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+36(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\ \notag &-96M_4\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0-(108 M_4-36)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &-240M_4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0-(168M_4-72)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0-(204M_4-36)\Omega_{\{8\}}^{(4,4)}\\\notag =&96M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]\\\notag &+72M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+36(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\ \notag &-96M_4\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]-(108 M_4-36)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag &-240M_4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]-(168M_4-72)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]+(408M_4-72)\Omega_{\{8\}}^{(4,4)}\end{aligned}$$ $$\begin{aligned} \label{t13} \Omega_{\{4,4\}_0}^{(4,4)}=&{{\mathbf}C_2^1{\mathbf}C_4^1{\mathbf}C_4^3}M_4^2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0 
+\frac{{\mathbf}C_4^2{\mathbf}C_4^2}{2!}(M_4^2-1)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag =&16M_4^2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]+18(M_4^2-1)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag &-(34M_4^2-18)\Omega_{\{8\}}^{(4,4)},\end{aligned}$$ $$\begin{aligned} \label{t14} \Omega_{\{2,6\}_0}^{(4,4)}=&{\mathbf}C_2^1{\mathbf}C_4^2(M_6-M_4)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0+{\mathbf}C_4^1{\mathbf}C_4^1M_6\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0\\\notag =&12(M_6-M_4)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]+16M_6\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]\\\notag & -(28 M_6-12 M_4)\Omega_{\{8\}}^{(4,4)}.\end{aligned}$$ and $$\begin{aligned} \label{t15} \Omega_{\{8\}_0}^{(4,4)}=&(M_8-M_4^2)\Omega_{\{8\}}^{(4,4)}[(4),(4)].\end{aligned}$$ Combining (\[t10\]), (\[t11\]), (\[t12\]), (\[t13\]), (\[t14\]) and (\[t15\]), we obtain $$\begin{aligned} \label{vt1} &{\rm Var}( {\mathbf}T_1)=72\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]+24\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]\\\notag &+96(M_4-3)\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)] +36(M_4-3)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\notag &+72(M_4-3)\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+16(M_4^2-6M_4+9)\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]\\\notag &+18(M_4^2-6M_4+9)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]+16(M_6-15M_4+30)\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]\\\notag &+12(M_6-15M_4+30)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)] +(M_8-28M_6-35M_4^2+420M_4-630)\Omega_{\{8\}}^{(4,4)}[(4),(4)],\end{aligned}$$ where $$\begin{aligned} &\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}^2\\\notag &={{\rm Diag}}'({\mathbf}B) {\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}{{\rm Diag}}({\mathbf}B),\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_1,j_3}a_{i_1,j_4}a_{i_2,j_1}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}\\\notag 
&={{\rm tr}}{\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}^2,\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}^3={{\rm tr}}{\mathbf}B {\mathbf}{D_B} {\mathbf}A {\mathbf}A'^{{\circ}3},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_3}^2a_{i_2,j_2}^2a_{i_2,j_3}^2\\\notag &={{\rm Diag}}'({\mathbf}B) {\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}{\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}'{{\rm Diag}}({\mathbf}B) ,\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_1,j_3}^2a_{i_2,j_1}a_{i_2,j_2}a_{i_2,j_3}^2\\\notag &={{\rm tr}}{\left(}{\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}{\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}{\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}'{\right)},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^3a_{i_1,j_2}a_{i_2,j_1}a_{i_2,j_2}^3={{\rm tr}}{\left(}{\left(}{\mathbf}A^{{\circ}3}{\mathbf}A'{\right)}{\left(}{\mathbf}A^{{\circ}3}{\mathbf}A'{\right)}'{\right)},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_1}^2a_{i_2,j_2}^2={{\rm tr}}{\left(}{\left(}{\mathbf}A {\circ}{\mathbf}A{\right)}{\left(}{\mathbf}A {\circ}{\mathbf}A{\right)}'{\right)}^2 ,\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}a_{i_1,j_2}^3a_{i_2,j_1}a_{i_2,j_2}^3={{\rm tr}}{\left(}{\mathbf}B {\mathbf}A^{{\circ}3} {\mathbf}A'^{{\circ}3}{\right)},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_2}^4={{\rm tr}}{\left(}{\left(}{\mathbf}A' 
{\mathbf}D_{{\mathbf}B}{\mathbf}A {\right)}{\circ}{\left(}{\mathbf}A'^{{\circ}2}{\mathbf}A^{{\circ}2}{\right)}{\right)},\end{aligned}$$ and $$\begin{aligned} &\Omega_{\{8\}}^{(4,4)}[(4),(4)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^4a_{i_2,j_1}^4={\mathbf}1'{\mathbf}A^{{\circ}4}{\mathbf}A'^{{\circ}4}{\mathbf}1.\end{aligned}$$ Using the same procedure, we have $$\begin{aligned} &{\rm Var}({\mathbf}T_2)=n^{-2}{\left(}{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^4-{{\rm E}}^2{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^2{\right)}\\\notag =&n^{-2}\sum_{i_1,\cdots,i_4,j_1,\cdots,j_8}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}a_{i_3,j_5}a_{i_3,j_6}a_{i_4,j_7}a_{i_4,j_8} {\left(}{{\rm E}}\prod_{t=1}^8\xi_{j_t}-{{\rm E}}\prod_{t=1}^4\xi_{j_t}{{\rm E}}\prod_{t=5}^8\xi_{j_t}{\right)}\\\notag =&n^{-2}(P_{2,1}+P_{2,2})+O(1),\end{aligned}$$ where $$\begin{aligned} P_{2,1}= {\mathbf}C_2^1{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_4,j_1,\cdots, j_4}a^2_{i_1,j_1}a_{i_2,j_2}a_{i_3,j_2}a_{i_2,j_3}a_{i_3,j_3}a_{i_4,j_4}^2 =8{\left(}{{\rm tr}}{\mathbf}B{\right)}^2{{\rm tr}}{\mathbf}B^2,\end{aligned}$$ $$\begin{aligned} P_{2,2}=\nu_4{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_4,j_1,j_2, j_3}a^2_{i_1,j_1}a^2_{i_2,j_2}a^2_{i_3,j_2}a_{i_4,j_3}^2 =4\nu_4{{\rm tr}}({\mathbf}B'{\circ}{\mathbf}B'){\left(}{{\rm tr}}{\mathbf}B{\right)}^2.\end{aligned}$$ Similarly, we have $$\begin{aligned} &{\rm Cov}({\mathbf}T_1,{\mathbf}T_2)=n^{-1}{\left(}{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^2\sum_{i}\widehat{\varepsilon_i}^4-{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^2{{\rm E}}\sum_{i}\widehat{\varepsilon_i}^4{\right)}\\\notag =&n^{-1}\sum_{i_1,\cdots,i_3,j_1,\cdots,j_8}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}a_{i_3,j_5}a_{i_3,j_6}a_{i_3,j_7}a_{i_3,j_8} {\left(}{{\rm E}}\prod_{t=1}^8\xi_{j_t}-{{\rm E}}\prod_{t=1}^4\xi_{j_t}{{\rm E}}\prod_{t=5}^8\xi_{j_t}{\right)}\\ =&n^{-1}(P_{3,1}+P_{3,2}+P_{3,3}+P_{3,4})+O(1),\end{aligned}$$ where 
$$\begin{aligned} P_{3,1}={\mathbf}C_4^2{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_3,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_2,j_2}a_{i_2,j_3}a_{i_3,j_2}a_{i_3,j_3}a_{i_3,j_4}^2 =24{{\rm tr}}{\left(}{\mathbf}B^2{\circ}{\mathbf}B{\right)}{{\rm tr}}{\mathbf}B,\end{aligned}$$ $$\begin{aligned} P_{3,2}=\nu_4{\mathbf}C_4^1{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_3,j_1,\cdots, j_3}a_{i_1,j_1}^2a_{i_2,j_2}a_{i_3,j_2}a_{i_2,j_3}a_{i_3,j_3}^3 =16\nu_4{{\rm tr}}({\mathbf}B{\mathbf}A {\mathbf}A'^{{\circ}3}){{\rm tr}}{\mathbf}B,\end{aligned}$$ $$\begin{aligned} P_{3,3}=\nu_4{\mathbf}C_4^2{\mathbf}C_2^1\sum_{i_1,\cdots,i_3,j_1,\cdots, j_3}a_{i_1,j_1}^2a^2_{i_2,j_2}a^2_{i_3,j_2}a_{i_3,j_3}^2 =12\nu_4{{\rm tr}}{\left(}{\left(}{\mathbf}A'{\mathbf}D_{{\mathbf}B}{\mathbf}A{\right)}{\circ}{\left(}{\mathbf}A'{\mathbf}A{\right)}{\right)}{{\rm tr}}{\mathbf}B,\end{aligned}$$ $$\begin{aligned} \label{ct12} P_{3,4}=\nu_6{\mathbf}C_2^1\sum_{i_1,i_2,i_3,j_1, j_2}a_{i_1,j_1}^2a^2_{i_2,j_2}a^4_{i_3,j_2} =2\nu_6[{{\rm Diag}}({\mathbf}A'{\mathbf}A)'({\mathbf}A'^{{\circ}4}){\mathbf}1]{{\rm tr}}{\mathbf}B.\end{aligned}$$ We point out that the assumption that $H_0$ holds has not been needed up to this point. From now on, in order to simplify the above formulas, we assume that $H_0$ holds. 
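Under $H_0$ the matrix ${\mathbf}B$ reduces to a symmetric idempotent projection ${\mathbf}P$ of rank $n-p$, so ${{\rm tr}}\,{\mathbf}P={{\rm tr}}\,{\mathbf}P^2=n-p$; this is what collapses the traces in the summary that follows. A quick numerical sanity check (our own Python sketch; the intercept-only design with $n=4$, $p=1$ is a hypothetical example):

```python
# Projection P = I - X (X'X)^{-1} X' for the simplest design: a single
# intercept column, n = 4 observations (so p = 1 and X'X = n).
n, p = 4, 1
P = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]

tr_P  = sum(P[i][i] for i in range(n))
tr_P2 = sum(P[i][j] * P[j][i] for i in range(n) for j in range(n))

# A symmetric idempotent matrix of rank n - p satisfies tr P = tr P^2 = n - p,
# which is why (tr B)^2 + 2 tr B^2 collapses to (n-p)^2 + 2(n-p) below.
assert abs(tr_P - (n - p)) < 1e-12
assert abs(tr_P2 - (n - p)) < 1e-12
```

The same idempotency argument applies for any design matrix of full column rank, not just the intercept-only case used here.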
Summarizing the calculations above, we obtain under $H_0$ $$\begin{aligned} {{\rm E}}{\mathbf}T_1=3\sum_ib_{i,i}^2+\nu_4\sum_{ij}a_{ij}^4=3{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^2,\end{aligned}$$ $$\begin{aligned} \label{e2} {{\rm E}}{\mathbf}T_2=n^{-1}{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)},\end{aligned}$$ $$\begin{aligned} {\rm Var}{\mathbf}T_1=&72{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}{{\rm Diag}}({\mathbf}P)+24{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2\\\notag &+\nu_4{\left(}96{{\rm tr}}{\mathbf}P {\mathbf}{D_P} {\mathbf}P {\mathbf}P^{{\circ}3}+72{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^3+36{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2{{\rm Diag}}({\mathbf}P) {\right)}\\\notag &+\nu^2_4{\left(}18{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^4+16{{\rm tr}}({\mathbf}P^{{\circ}3}{\mathbf}P)^2){\right)}\\\notag &+\nu_6{\left(}12{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}P}{\mathbf}P {\right)}{\circ}{\left(}{\mathbf}P^{{\circ}2}{\mathbf}P^{{\circ}2}{\right)}{\right)}+16{{\rm tr}}{\mathbf}P {\mathbf}P^{{\circ}3}{\mathbf}P^{{\circ}3}{\right)}+\nu_8{\mathbf}1'({\mathbf}P^{{\circ}4}{\mathbf}P^{{\circ}4}){\mathbf}1,\end{aligned}$$ $$\begin{aligned} {\rm Var}({\mathbf}T_2)=\frac{8{\left(}n-p{\right)}^3+4\nu_4{\left(}n-p{\right)}^2{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)}{n^2}+O(1),\end{aligned}$$ and $$\begin{aligned} &{\rm Cov}({\mathbf}T_1,{\mathbf}T_2)\\\notag =&\frac{{\left(}n-p{\right)}}{n}{\left(}24{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+16\nu_4{{\rm tr}}({\mathbf}P {\mathbf}P^{{\circ}3})+12\nu_4{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}p}{\mathbf}P{\right)}{\circ}{\mathbf}P{\right)}+2\nu_6[{{\rm Diag}}({\mathbf}P)'({\mathbf}P^{{\circ}4}){\mathbf}1]{\right)}.\end{aligned}$$ The proof of the main theorem ----------------------------- 
Define a function $f(x,y)=\frac{x}{y}-1$. One may verify that $f_x(x,y)=\frac{1}{y}$ and $f_y(x,y)=-\frac{x}{y^2}$, where $f_x(x,y)$ and $f_y(x,y)$ are the first-order partial derivatives. Since ${\mathbf}T=\frac{{\mathbf}T_1}{{\mathbf}T_2}-1,$ using the delta method, we have under $H_0$, $${{\rm E}}{\mathbf}T=f({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2)={\left(}\frac{3n{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}}{(n-p)^2+2{\left(}n-p{\right)}}-1{\right)},$$ $$\begin{aligned} {\rm {Var}} {\mathbf}T=(f_x({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2),f_y({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2))\Sigma(f_x({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2),f_y({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2))'.\end{aligned}$$ The proof of the main theorem is complete. [^1]: Zhidong Bai is partially supported by a grant NSF China 11571067 [^2]: G. M. Pan was partially supported by a MOE Tier 2 grant 2014-T2-2-060 and by a MOE Tier 1 Grant RG25/14 at the Nanyang Technological University, Singapore. [^3]: Yanqing Yin was partially supported by a project of China Scholarship Council
/* eslint-disable eslint-comments/disable-enable-pair */ /* eslint-disable import/no-mutable-exports */ let CURRENT = 'NULL'; /** * Resolve the current authority, which may be supplied directly or via a getter. * @param {string | string[] | (() => string | string[])} currentAuthority */ const renderAuthorize = Authorized => currentAuthority => { if (currentAuthority) { // A getter function is invoked to obtain the authority value. if (typeof currentAuthority === 'function') { CURRENT = currentAuthority(); } // A plain string or an array of authority strings is used as-is. if ( Object.prototype.toString.call(currentAuthority) === '[object String]' || Array.isArray(currentAuthority) ) { CURRENT = currentAuthority; } } else { CURRENT = 'NULL'; } return Authorized; }; export { CURRENT }; export default Authorized => renderAuthorize(Authorized);
Isla Damas Isla Damas, or Damas Island, is a small (6 km²) island in Costa Rica in the vicinity of Quepos. It is particularly noted for its estuaries lined with mangroves. Fauna on the island includes white-faced monkeys, sloths, green iguanas, crocodiles, spectacled caimans, boas, crab-eating raccoons and silky anteaters, as well as crabs and numerous bird species such as herons and pelicans. Boat and kayak tours through the island's estuaries are popular excursions with tourists staying in Quepos, Manuel Antonio National Park, or Jacó. Gallery Damas
Yves Niaré Yves Niaré (20 July 1977 – 5 December 2012) was a shot putter from France. Career Niaré was born in Saint-Maurice, Val-de-Marne. His father was the Malian shot putter Namakoro Niaré. His main honor was the silver medal at the 2009 European Indoor Championships with a throw of 20.42 metres. He also finished eleventh at the 1996 World Junior Championships, and fourth at the 2009 Mediterranean Games. Niaré competed at the 2001 World Championships, the 2006 European Championships, the 2007 World Championships, the 2008 Olympic Games and the 2009 World Championships without reaching the final. His personal best throw in the shot put was 20.72 metres, a French national record, achieved in May 2008 in Versailles. He also had 63.44 metres in the discus throw, achieved in May 2007 in Chelles. He was the brother of French high jumper Gaëlle Niaré. Death Niaré was killed on the morning of 5 December 2012 in an automobile accident. A statement regarding his death was issued by the French Athletics Federation. He was 35. Competition record References External links Category:1977 births Category:2012 deaths Category:Sportspeople from Val-de-Marne Category:French male shot putters Category:French male discus throwers Category:Athletes (track and field) at the 2008 Summer Olympics Category:Olympic athletes of France Category:Road incident deaths in France Category:French people of Malian descent
goog.module('nested.exported.enums'); /** @const */ exports = { /** @const @enum {string} */ A: { A1: 'a1', }, // The structure of the AST changes if this extra property is present. B: 0, };
Annie Lapin Annie Lapin (born 1978) is an American artist who lives and works in Los Angeles, California. Her abstract paintings are grounded in representation. Early life and education Although born in Washington D.C., Lapin spent most of her early years in Kentucky. She received her BA from Yale University in 2001, and completed an MFA at the University of California, Los Angeles, in 2007. Exhibitions Lapin has had solo exhibitions at Grand Arts in Kansas City, Missouri (2008), at the Pasadena Museum of California Art (2009), at the Museum of Contemporary Art Santa Barbara (2012), and at the Weatherspoon Art Museum of the University of North Carolina at Greensboro, where she was the Falk Visiting Artist in 2013–2014. Notes Further reading Los Angeles Times Review: Annie Lapin's Various Peep Shows Priscilla Frank (January 25, 2014). Annie Lapin's Newest Painting Exhibition Combines Instant Attraction and a Slow Burn. Huffington Post. External links Video: New American Paintings x Future Shipwreck: Annie Lapin Category:1978 births Category:American artists Category:Living people
Vasa, Minnesota Vasa is an unincorporated community in Vasa Township, Goodhue County, Minnesota, United States. The community is nine miles east of Cannon Falls at the junction of State Highway 19 (MN 19) and County 7 Boulevard. It is within ZIP code 55089 based in Welch. Nearby places include Cannon Falls, Red Wing, Welch, and White Rock. Vasa is 12 miles west-southwest of Red Wing. References Category:Unincorporated communities in Minnesota Category:Unincorporated communities in Goodhue County, Minnesota
Loagan Bunut National Park The Loagan Bunut National Park () is a national park located in Miri Division, Sarawak, Malaysia, on the Borneo island. The park was named after the Loagan Bunut lake nearby, which is connected to Sungai Bunut (sungai is Malay for river), Sungai Baram and Sungai Tinjar. This park occupies a space of and is well known for its rich biodiversity and unique aquatic ecosystem. The national park was gazetted on January 1, 1990 and it was opened to public on August 29, 1991. See also List of national parks of Malaysia References Category:National parks of Malaysia Category:Protected areas of Sarawak Category:Miri, Malaysia Category:1990 establishments in Malaysia
Peter Cooley Peter Cooley (born November 19, 1940) is an American poet and Professor of English in the Department of English at Tulane University. He also directs Tulane's Creative Writing Program. Born in Detroit, Michigan, he holds degrees from Shimer College, the University of Chicago and the University of Iowa. He is the father of poet Nicole Cooley. Career Prior to joining Tulane, Cooley taught at the University of Wisconsin, Green Bay. He was the Robert Frost Fellow at the Bread Loaf Writers’ Conference in 1981. Poetry and awards Cooley has published several books of poetry with the Carnegie Mellon University Press. He received the Inspirational Professor Award in 2001 and the Newcomb Professor of the Year Award in 2003. On August 14, 2015 he was named Louisiana's poet laureate. Bibliography Poetry Collections The Room Where Summer Ends (Pittsburgh: Carnegie Mellon University Press, 1979) Nightseasons (Pittsburgh: Carnegie Mellon University Press, 1983) The Van Gogh Notebook (Pittsburgh: Carnegie Mellon University Press, 1987) The Astonished Hours (Pittsburgh: Carnegie Mellon University Press, 1992) Sacred Conversations (Pittsburgh: Carnegie Mellon University Press, 1998) A Place Made of Starlight (Pittsburgh: Carnegie Mellon University Press, 2003) Divine Margins (Pittsburgh: Carnegie Mellon University Press, 2009) Night Bus to the Afterlife (Pittsburgh: Carnegie Mellon University Press, 2014) World Without Finishing (Pittsburgh: Carnegie Mellon University Press, 2018) List of poems References External links Peter Cooley listing in The Literary Encyclopedia Peter Cooley’s faculty page, Tulane University Peter Cooley author page at Virginia Quarterly Review, with links to poems Category:1940 births Category:Living people Category:American male poets Category:Poets Laureate of Louisiana Category:Shimer College alumni Category:The New Yorker people Category:Tulane University faculty
Gerda Gilboe Gerda Gilboe (5 July 1914 – 11 April 2009) was a Danish actress and singer. She appeared in 18 films between 1943 and 2003. Life Gilboe was born in 1914, the daughter of a blacksmith. She started her career in musical theatre and opera in Aarhus before moving to Copenhagen to work at different theatres. Her national breakthrough came when she accepted the role of Eliza in My Fair Lady at Falkoner Teatret at short notice in 1960. Although she was then in her mid-40s and had only five days to learn the part, the production was a huge success. In the following years she took on more and more non-singing roles, and besides her theatre career she took a degree in rhetoric. Later in her life she started teaching rhetoric and drama. She appeared in several films, receiving particular acclaim for her appearance as Esther in Carlo & Esther, a 1994 film. She plays a woman in her 70s who catches the attention of Carlo, whose wife has Alzheimer's disease. Rides on his motorbike lead to an affair. Death Gilboe died on 11 April 2009 at an actors' home in Copenhagen, aged 94. Filmography A Time for Anna (2003) Kærlighed ved første hik (1999) Dybt vand (1999) Besat (1999) Antenneforeningen (1999) Kun en pige (1995) Elsker elsker ikke... (1995) Carlo & Ester (1994) Lad isbjørnene danse (1990) Isolde (1989) Sidste akt (1987) Walter og Carlo – yes, det er far (1986) Pas på ryggen, professor (1977) Kun sandheden (1975) Den kyske levemand (1974) Lise kommer til Byen (1947) En ny dag gryer (1945) Moster fra Mols (1943) References External links Category:1914 births Category:2009 deaths Category:Danish female singers Category:Danish film actresses Category:Danish musical theatre actresses Category:People from Aarhus Category:Place of birth missing Category:Place of death missing Category:20th-century Danish actresses Category:20th-century singers Category:20th-century women singers
f := function() local l; l := 0 * [1..6]; l[[1..3]] := 1; end; f(); Where(); WhereWithVars(); quit; f:=function() if true = 1/0 then return 1; fi; return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function() local x; if x then return 1; fi; return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function() if 1 then return 1; fi; return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function() if 1 < 0 then return 1; elif 1 then return 2; fi; return 3; end;; f(); Where(); WhereWithVars(); quit; f:=function() while 1 do return 1; od; return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function() local i; for i in 1 do return 1; od; return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function() local i; for i in true do return 1; od; return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function(x) local i,j; for i in true do return 1; od; return 2; end;; f([1,2,3]); Where(); WhereWithVars(); quit; f:=function(x) local i,j; Unbind(x); for i in true do return 1; od; return 2; end;; f([1,2,3]); Where(); WhereWithVars(); quit; f:=function(x) local i,j; Unbind(x); j := 4; for i in true do return 1; od; return 2; end;; f([1,2,3]); Where(); WhereWithVars(); quit; f:=function() local x; repeat x:=1; until 1; return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function() local x; Assert(0, 1); return 2; end;; f(); Where(); WhereWithVars(); quit; f:=function() local x; Assert(0, 1, "hello"); return 2; end;; f(); Where(); WhereWithVars(); quit; # Verify issue #2656 is fixed InstallMethod( \[\,\], [ IsMatrixObj, IsPosInt, IsPosInt ], { m, row, col } -> ELM_LIST( m, row, col ) ); l := [[1]];; f := {} -> l[2,1];; f(); Where(); WhereWithVars(); quit; # verify issue #1373 is fixed InstallMethod( Matrix, [IsFilter, IsSemiring, IsMatrixObj], {a,b,c} -> fail );
<!DOCTYPE html> <html lang="en" data-navbar="/account/navbar-profile.html"> <head> <meta charset="utf-8" /> <title translate="yes">Establecer o perfil predeterminado</title> <link href="/public/pure-min.css" rel="stylesheet"> <link href="/public/content.css" rel="stylesheet"> <link href="/public/content-additional.css" rel="stylesheet"> <base target="_top" href="/"> </head> <body> <h1 translate="yes">Establecer o perfil predeterminado</h1> <p translate="yes">O teu perfil predeterminado serve como principal punto de contacto da túa conta.</p> <div id="message-container"></div> <form id="submit-form" method="post" class="pure-form" action="/account/set-default-profile" name="submit-form"> <fieldset> <div class="pure-control-group"> <select id="profileid" name="profileid"> <option value="" translate="yes"> Selecciona perfil </option> </select> </div> <button id="submit-button" type="submit" class="pure-button pure-button-primary" translate="yes">Establecer o perfil predeterminado</button> </fieldset> </form> <template id="success"> <div class="success message" translate="yes"> Éxito! O perfil é o teu estándar </div> </template> <template id="unknown-error"> <div class="error message" translate="yes"> Erro! Produciuse un erro descoñecido </div> </template> <template id="default-profile"> <div class="error message" translate="yes"> Erro! Este é xa o teu perfil predeterminado </div> </template> <template id="profile-option"> <option value="${profile.profileid}"> ${profile.contactEmail}, ${profile.firstName} ${profile.lastName} </option> </template> </body> </html>
1982–83 Georgia Tech Yellow Jackets men's basketball team The 1982-83 Georgia Tech Yellow Jackets men's basketball team represented the Georgia Institute of Technology. Led by head coach Bobby Cremins, the team finished the season with an overall record of 13-15 (4-10 ACC). Roster Schedule and results References Category:Georgia Tech Yellow Jackets men's basketball seasons Georgia Tech Category:1982 in sports in Georgia (U.S. state) Category:1983 in sports in Georgia (U.S. state)
Purchase either a combined Buildings & Contents Home Insurance policy, or separate Buildings or Contents Home Insurance Policy online at Littlewoods.com between 1st and 31st August 2017 to qualify for a free Amazon Echo Dot. New Littlewoods Home Insurance customers only. Provided your policy is still active and your premiums are up to date, we'll email you 4 weeks post-purchase to explain how you claim your free Amazon Echo Dot. If you return your item due to a fault, where possible, a replacement item will be provided.

Own it! this summer with £20 back!

1 - Spend £50 or more in one order before 30.06.17
2 - Enter code LAMJA at checkout
3 - £20 will be credited to your original method of payment - simple!

Offer excludes sale items, Apple products, Financial Services products and delivery/installation charges. Valid for one use only, this code cannot be used in conjunction with any other offer code. If you return items from your order, the credit will be reversed if the order value falls below the minimum required.

Sat Navs at Littlewoods

Make finding your way around easy with a sat nav from our fab range at Littlewoods. We’ve got a great selection of top brands like Garmin, TomTom and Kenwood, so there’ll be no chance of getting lost. Take a look at essential features to make journeys that little bit easier, like local area guides highlighting points of interest and useful info like the nearest petrol station or hotel. Choose from state-of-the-art designs with 3D map formats, or pick a bird’s-eye view. And we have accessories too, including travel cases for safe and stylish storage.

In-car Entertainment Range

If you like listening to music while you’re driving, have a look at our in-car entertainment range. Choose from a wide selection of multi-functional products with high-quality sound and easy-to-use controls. We've got state-of-the-art touch screen options with high-res graphics, and you can stream music with AppRadio Mode, CarPlay or via Bluetooth technology.
If you prefer to play songs from your phone, opt for a USB connection, and sing along to those classic road-trip tunes. And if you’re all about that bass then check out our subwoofers for an immersive experience.

Picking the Right Sat Nav

If international travel is on your agenda, pick one of our sat navs with road maps for up to 152 different countries and get ready to explore. Keep up to date on recent road changes with a sat nav that comes with a lifetime supply of maps, meaning you'll always have access to the quickest routes available. For all the latest info and live traffic updates, choose a model with a data plan and SIM – they're particularly useful if you take a busy commuter route. We’ve got styles with handy reverse cameras too, for those who want a little extra help parking.

Buy Now Pay Later (BNPL) allows you to delay payment for 12 months. The payment free period starts when you place your order (including items which are purchased on pre-order and/or are not ready for immediate dispatch). Select BNPL at checkout and the repayment period of either 104 or 156 weeks. This is the repayment period you will pay over, once the payment free period (12 months) has ended. The interest rate typically used to calculate BNPL interest is 44.9% per annum. Your interest rate will be detailed in checkout. The interest is calculated on the payment free period and the repayment period. You can avoid interest by paying the cash price in full within the payment free period. Delivery charges and other Financial Services products are not available on Buy Now Pay Later and will appear on your next statement. Please note, if you have non BNPL purchases on your account you will still need to make at least your minimum payment as detailed on your statement.
<?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@color/appBackground" android:foreground="?android:attr/selectableItemBackground" android:gravity="center_vertical" android:orientation="horizontal" android:paddingBottom="15dp" android:paddingLeft="10dp" android:paddingRight="10dp" android:paddingTop="15dp"> <ImageView android:id="@+id/song_item_img" android:layout_width="50dp" android:layout_height="50dp" android:layout_weight="0" /> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginStart="15dp" android:layout_weight="1" android:orientation="vertical"> <TextView android:id="@+id/song_item_name" android:layout_width="wrap_content" android:layout_height="wrap_content" android:singleLine="true" android:textColor="#000" android:textSize="16sp" /> <TextView android:id="@+id/song_item_artist" android:layout_width="wrap_content" android:layout_height="wrap_content" android:singleLine="true" android:textColor="#989898" android:textSize="14sp" /> </LinearLayout> <ImageView android:id="@+id/song_item_menu" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginRight="5dp" android:layout_weight="0" android:background="@drawable/unbounded_ripple" android:foregroundTint="#434343" android:padding="5dp" android:src="@drawable/abc_ic_menu_moreoverflow_mtrl_alpha" android:theme="@style/Theme.AppCompat.Light" /> </LinearLayout>
package io.gitlab.arturbosch.detekt.generator.collection import io.gitlab.arturbosch.detekt.api.DetektVisitor import io.gitlab.arturbosch.detekt.generator.collection.exception.InvalidDocumentationException import io.gitlab.arturbosch.detekt.rules.isOverride import org.jetbrains.kotlin.psi.KtCallExpression import org.jetbrains.kotlin.psi.KtClassOrObject import org.jetbrains.kotlin.psi.KtFile import org.jetbrains.kotlin.psi.KtProperty import org.jetbrains.kotlin.psi.KtReferenceExpression import org.jetbrains.kotlin.psi.KtSuperTypeList import org.jetbrains.kotlin.psi.KtValueArgumentList import org.jetbrains.kotlin.psi.psiUtil.containingClass import org.jetbrains.kotlin.psi.psiUtil.referenceExpression data class MultiRule( val name: String, val rules: List<String> = listOf() ) { operator fun contains(ruleName: String) = ruleName in this.rules } private val multiRule = io.gitlab.arturbosch.detekt.api.MultiRule::class.simpleName ?: "" class MultiRuleCollector : Collector<MultiRule> { override val items = mutableListOf<MultiRule>() override fun visit(file: KtFile) { val visitor = MultiRuleVisitor() file.accept(visitor) if (visitor.containsMultiRule) { items.add(visitor.getMultiRule()) } } } class MultiRuleVisitor : DetektVisitor() { val containsMultiRule get() = classesMap.any { it.value } private var classesMap = mutableMapOf<String, Boolean>() private var name = "" private val rulesVisitor = RuleListVisitor() private val properties: MutableMap<String, String> = mutableMapOf() fun getMultiRule(): MultiRule { val rules = mutableListOf<String>() val ruleProperties = rulesVisitor.ruleProperties .mapNotNull { properties[it] } rules.addAll(ruleProperties) rules.addAll(rulesVisitor.ruleNames) if (name.isEmpty()) { throw InvalidDocumentationException("MultiRule without name found.") } if (rules.isEmpty()) { throw InvalidDocumentationException("MultiRule $name contains no rules.") } return MultiRule(name, rules) } override fun visitSuperTypeList(list: KtSuperTypeList) { val 
isMultiRule = list.entries ?.mapNotNull { it.typeAsUserType?.referencedName } ?.any { it == multiRule } ?: false val containingClass = list.containingClass() val className = containingClass?.name if (containingClass != null && className != null && !classesMap.containsKey(className)) { classesMap[className] = isMultiRule } super.visitSuperTypeList(list) } override fun visitClassOrObject(classOrObject: KtClassOrObject) { super.visitClassOrObject(classOrObject) if (classesMap[classOrObject.name] != true) { return } name = classOrObject.name?.trim() ?: "" } override fun visitProperty(property: KtProperty) { super.visitProperty(property) if (classesMap[property.containingClass()?.name] != true) { return } if (property.isOverride() && property.name != null && property.name == "rules") { property.accept(rulesVisitor) } else { val name = property.name val initializer = property.initializer?.referenceExpression()?.text if (name != null && initializer != null) { properties[name] = initializer } } } } class RuleListVisitor : DetektVisitor() { var ruleNames: MutableSet<String> = mutableSetOf() private set var ruleProperties: MutableSet<String> = mutableSetOf() private set override fun visitValueArgumentList(list: KtValueArgumentList) { super.visitValueArgumentList(list) val argumentExpressions = list.arguments.map { it.getArgumentExpression() } // Call Expression = Constructor of rule ruleNames.addAll(argumentExpressions .filterIsInstance<KtCallExpression>() .map { it.calleeExpression?.text ?: "" }) // Reference Expression = variable we need to search for ruleProperties.addAll(argumentExpressions .filterIsInstance<KtReferenceExpression>() .map { it.text ?: "" }) } }
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.stanbol.entityhub.web.reader; import java.io.IOException; import java.io.InputStream; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.util.Arrays; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; import javax.servlet.ServletContext; import javax.ws.rs.Consumes; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.Context; import javax.ws.rs.core.HttpHeaders; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import javax.ws.rs.ext.MessageBodyReader; import javax.ws.rs.ext.Provider; import org.apache.clerezza.commons.rdf.Graph; import org.apache.clerezza.commons.rdf.BlankNodeOrIRI; import org.apache.clerezza.commons.rdf.Triple; import org.apache.clerezza.commons.rdf.IRI; import org.apache.clerezza.rdf.core.serializedform.Parser; import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; import org.apache.clerezza.rdf.core.serializedform.UnsupportedParsingFormatException; import 
org.apache.felix.scr.annotations.Component; import org.apache.felix.scr.annotations.Property; import org.apache.felix.scr.annotations.Reference; import org.apache.felix.scr.annotations.Service; import org.apache.stanbol.commons.indexedgraph.IndexedGraph; import org.apache.stanbol.entityhub.jersey.utils.JerseyUtils; import org.apache.stanbol.entityhub.jersey.utils.MessageBodyReaderUtils; import org.apache.stanbol.entityhub.jersey.utils.MessageBodyReaderUtils.RequestData; import org.apache.stanbol.entityhub.model.clerezza.RdfValueFactory; import org.apache.stanbol.entityhub.servicesapi.model.Representation; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Provides support for reading Representations from Requests. This implementation * supports all RDF serialization formats as well as {@link MediaType#APPLICATION_FORM_URLENCODED} * - in case the data are sent from an HTML form - and * {@link MediaType#MULTIPART_FORM_DATA} - mime encoded data. * In case of an HTML form the encoding needs to be specified by the parameter * "encoding"; for the entity data the parameters "entity" or "content" can be * used.
* @author Rupert Westenthaler * */ @Component @Service(Object.class) @Property(name="javax.ws.rs", boolValue=true) @Provider @Consumes({ //First the data types directly supported for parsing representations MediaType.APPLICATION_JSON, SupportedFormat.N3, SupportedFormat.N_TRIPLE, SupportedFormat.RDF_XML, SupportedFormat.TURTLE, SupportedFormat.X_TURTLE, SupportedFormat.RDF_JSON, //finally this also supports sending the data as form and mime multipart MediaType.APPLICATION_FORM_URLENCODED, MediaType.MULTIPART_FORM_DATA}) public class RepresentationReader implements MessageBodyReader<Map<String,Representation>> { private static final Logger log = LoggerFactory.getLogger(RepresentationReader.class); public static final Set<String> supportedMediaTypes; private static final MediaType DEFAULT_ACCEPTED_MEDIA_TYPE = MediaType.TEXT_PLAIN_TYPE; static { Set<String> types = new HashSet<String>(); //ensure everything is lower case types.add(MediaType.APPLICATION_JSON.toLowerCase()); types.add(SupportedFormat.N3.toLowerCase()); types.add(SupportedFormat.N_TRIPLE.toLowerCase()); types.add(SupportedFormat.RDF_JSON.toLowerCase()); types.add(SupportedFormat.RDF_XML.toLowerCase()); types.add(SupportedFormat.TURTLE.toLowerCase()); types.add(SupportedFormat.X_TURTLE.toLowerCase()); supportedMediaTypes = Collections.unmodifiableSet(types); } @Reference private Parser parser; @Override public boolean isReadable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) { String mediaTypeWithoutParameter = mediaType.getType().toLowerCase()+'/'+ mediaType.getSubtype().toLowerCase(); log.debug("isreadable: [genericType: {}| mediaType {}]", genericType,mediaTypeWithoutParameter); //second the media type boolean mediaTypeOK = (//the MimeTypes of Representations supportedMediaTypes.contains(mediaTypeWithoutParameter) || //as well as URL encoded MediaType.APPLICATION_FORM_URLENCODED.equals(mediaTypeWithoutParameter) || //and mime multipart 
MediaType.MULTIPART_FORM_DATA.equals(mediaTypeWithoutParameter)); boolean typeOk = JerseyUtils.testParameterizedType(Map.class, new Class[]{String.class,Representation.class}, genericType); log.debug("type is {} for {} against Map<String,Representation>", typeOk ? "compatible" : "incompatible" ,genericType); return typeOk && mediaTypeOK; } @Override public Map<String,Representation> readFrom(Class<Map<String,Representation>> type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap<String,String> httpHeaders, InputStream entityStream) throws IOException, WebApplicationException { log.info("Read Representations from Request Data"); long start = System.currentTimeMillis(); //(1) get the charset and the acceptedMediaType String charset = "UTF-8"; if(mediaType.getParameters().containsKey("charset")){ charset = mediaType.getParameters().get("charset"); } MediaType acceptedMediaType = getAcceptedMediaType(httpHeaders); log.info("readFrom: mediaType {} | accepted {} | charset {}", new Object[]{mediaType,acceptedMediaType,charset}); // (2) read the Content from the request (this needs to deal with // MediaType.APPLICATION_FORM_URLENCODED_TYPE and // MediaType.MULTIPART_FORM_DATA_TYPE requests! RequestData content; if(mediaType.isCompatible(MediaType.APPLICATION_FORM_URLENCODED_TYPE)) { try { content = MessageBodyReaderUtils.formForm(entityStream, charset, "encoding",Arrays.asList("entity","content")); } catch (IllegalArgumentException e) { log.info("Bad Request: {}",e); throw new WebApplicationException( Response.status(Status.BAD_REQUEST).entity(e.toString()). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } if(content.getMediaType() == null){ String message = String.format( "Missing parameter %s used to specify the media type" + "(supported values: %s", "encoding",supportedMediaTypes); log.info("Bad Request: {}",message); throw new WebApplicationException( Response.status(Status.BAD_REQUEST).entity(message). 
header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } if(!isSupported(content.getMediaType())){ String message = String.format( "Unsupported Content-Type specified by parameter " + "encoding=%s (supported: %s)", content.getMediaType().toString(),supportedMediaTypes); log.info("Bad Request: {}",message); throw new WebApplicationException( Response.status(Status.BAD_REQUEST). entity(message). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } } else if(mediaType.isCompatible(MediaType.MULTIPART_FORM_DATA_TYPE)){ log.info("read from MimeMultipart"); List<RequestData> contents; try { contents = MessageBodyReaderUtils.fromMultipart(entityStream, mediaType); } catch (IllegalArgumentException e) { log.info("Bad Request: {}",e.toString()); throw new WebApplicationException( Response.status(Status.BAD_REQUEST).entity(e.toString()). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } if(contents.isEmpty()){ String message = "Request does not contain any Mime BodyParts."; log.info("Bad Request: {}",message); throw new WebApplicationException( Response.status(Status.BAD_REQUEST).entity(message). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } else if(contents.size()>1){ //print warnings about ignored parts log.warn("{} Request contains more than one Parts: others than " + "the first will be ignored", MediaType.MULTIPART_FORM_DATA_TYPE); for(int i=1;i<contents.size();i++){ RequestData ignored = contents.get(i); log.warn(" ignore Content {}: Name {}| MediaType {}", new Object[] {i+1,ignored.getName(),ignored.getMediaType()}); } } content = contents.get(0); if(content.getMediaType() == null){ String message = String.format( "MediaType not specified for mime body part for file %s. " + "The media type must be one of the supported values: %s", content.getName(), supportedMediaTypes); log.info("Bad Request: {}",message); throw new WebApplicationException( Response.status(Status.BAD_REQUEST).entity(message). 
header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } if(!isSupported(content.getMediaType())){ String message = String.format( "Unsupported Content-Type %s specified for mime body part " + "for file %s (supported: %s)", content.getMediaType(),content.getName(),supportedMediaTypes); log.info("Bad Request: {}",message); throw new WebApplicationException( Response.status(Status.BAD_REQUEST). entity(message). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } } else { content = new RequestData(mediaType, null, entityStream); } long readingCompleted = System.currentTimeMillis(); log.info(" ... reading request data {}ms",readingCompleted-start); Map<String,Representation> parsed = parseFromContent(content,acceptedMediaType); long parsingCompleted = System.currentTimeMillis(); log.info(" ... parsing data {}ms",parsingCompleted-readingCompleted); return parsed; } public Map<String,Representation> parseFromContent(RequestData content, MediaType acceptedMediaType){ // (3) Parse the Representation(s) from the entity stream if(content.getMediaType().isCompatible(MediaType.APPLICATION_JSON_TYPE)){ //parse from json throw new UnsupportedOperationException("Parsing of JSON not yet implemented :("); } else if(isSupported(content.getMediaType())){ //from RDF serialisation RdfValueFactory valueFactory = RdfValueFactory.getInstance(); Map<String,Representation> representations = new HashMap<String,Representation>(); Set<BlankNodeOrIRI> processed = new HashSet<BlankNodeOrIRI>(); Graph graph = new IndexedGraph(); try { parser.parse(graph,content.getEntityStream(), content.getMediaType().toString()); } catch (UnsupportedParsingFormatException e) { //String acceptedMediaType = httpHeaders.getFirst("Accept"); //throw an internal server Error, because we check in //isReadable(..) for supported types and still we get here an //unsupported format -> therefore it looks like a configuration //error on the server (e.g.
a missing Bundle with the required bundle) String message = "Unable to create the Parser for the supported format" +content.getMediaType()+" ("+e+")"; log.error(message,e); throw new WebApplicationException( Response.status(Status.INTERNAL_SERVER_ERROR). entity(message). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } catch (RuntimeException e){ //NOTE: Clerezza seems not to provide specific exceptions on // parsing errors. Hence the catch for all RuntimeException String message = "Unable to parse the provided RDF data (format: " +content.getMediaType()+", message: "+e.getMessage()+")"; log.error(message,e); throw new WebApplicationException( Response.status(Status.BAD_REQUEST). entity(message). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } for(Iterator<Triple> st = graph.iterator();st.hasNext();){ BlankNodeOrIRI resource = st.next().getSubject(); if(resource instanceof IRI && processed.add(resource)){ //build a new representation representations.put(((IRI)resource).getUnicodeString(), valueFactory.createRdfRepresentation((IRI)resource, graph)); } } return representations; } else { //unsupported media type String message = String.format( "Parsed Content-Type '%s' is not one of the supported %s", content.getMediaType(),supportedMediaTypes); log.info("Bad Request: {}",message); throw new WebApplicationException( Response.status(Status.BAD_REQUEST). entity(message). header(HttpHeaders.ACCEPT, acceptedMediaType).build()); } } /** * Internally used to get the accepted media type used when returning * {@link WebApplicationException}s.
* @param httpHeaders * @return */ private static MediaType getAcceptedMediaType(MultivaluedMap<String,String> httpHeaders) { MediaType acceptedMediaType; String acceptedMediaTypeString = httpHeaders.getFirst("Accept"); if(acceptedMediaTypeString != null){ try { acceptedMediaType = MediaType.valueOf(acceptedMediaTypeString); if(acceptedMediaType.isWildcardType()){ acceptedMediaType = DEFAULT_ACCEPTED_MEDIA_TYPE; } } catch (IllegalArgumentException e) { acceptedMediaType = DEFAULT_ACCEPTED_MEDIA_TYPE; } } else { acceptedMediaType = DEFAULT_ACCEPTED_MEDIA_TYPE; } return acceptedMediaType; } /** * Converts the type and the subtype of the parsed media type to the * string representation as stored in {@link #supportedMediaTypes} and then * checks if the parsed media type is contained in this list. * @param mediaType the MediaType instance to check * @return <code>true</code> if the parsed media type is not * <code>null</code> and supported. */ private boolean isSupported(MediaType mediaType){ return mediaType == null ? false : supportedMediaTypes.contains( mediaType.getType().toLowerCase()+'/'+ mediaType.getSubtype().toLowerCase()); } }
--- abstract: | FPGAs have found increasing adoption in data center applications since a new generation of high-level tools have become available which noticeably reduce development time for FPGA accelerators and still provide high-quality results. There is, however, no high-level benchmark suite available, which specifically enables a comparison of FPGA architectures, programming tools, and libraries for HPC applications. To fill this gap, we have developed an OpenCL-based open-source implementation of the HPCC benchmark suite for Xilinx and Intel FPGAs. This benchmark can serve to analyze the current capabilities of FPGA devices, cards, and development tool flows, track progress over time, and point out specific difficulties for FPGA acceleration in the HPC domain. Additionally, the benchmark documents proven performance optimization patterns. We will continue optimizing and porting the benchmark for new generations of FPGAs and design tools and encourage active participation to create a valuable tool for the community. author: - bibliography: - 'bibliography/meyer20\_sc.bib' - 'bibliography/IEEEabrv.bib' title: Evaluating FPGA Accelerator Performance with a Parameterized OpenCL Adaptation of the HPCChallenge Benchmark Suite --- FPGA, OpenCL, High Level Synthesis, HPC benchmarking Introduction ============ In HPC, benchmarks are an important tool for performance comparison across systems. They are designed to stress important system properties or generate workloads that are similar to relevant applications for the user. Especially in acquisition planning they can be used to define the desired performance of the acquired system before it is built. Since it is a challenging task to select a set of benchmarks to cover all relevant device properties, benchmark suites can help by providing a pre-defined mix of applications and inputs, for example SPEC CPU [@SPEC-CPU] and HPCC [@HPCCIntroduction].
There is an ongoing trend towards heterogeneity in HPC, complementing CPUs by accelerators, as indicated by the Top 500 list [@top500]. From the top 10 systems in the list, seven are equipped with different types of accelerators. Nevertheless, to get the best matching accelerator for a new system, a tool is needed to measure and compare the performance across accelerators. For well-established accelerator architectures like GPUs, there are already standardized benchmarks like SPEC ACCEL [@SPECACCEL]. For FPGAs, which are just emerging as an accelerator architecture for data centers and HPC, existing benchmarks do not focus on FPGAs and fail to measure highly relevant device properties. Similar to the compiler for CPU applications, the HLS framework takes a very important role to achieve performance on an FPGA. The framework translates the accelerator code (denoted as *kernel*), most commonly from OpenCL, to intermediate languages, organizes the communication with the underlying hardware, performs optimizations and synthesizes the code to create executable configurations (bitstreams). Hence, the framework has a big impact on the used resources and the maximum kernel frequency, which might vary depending on the kernel design. An HPC benchmark suite for FPGAs should capture this impact and, for comparisons, must not be limited to a single framework. One of the core aspects of HPC is communication. Some FPGA cards offer a new approach for scaling with their support for direct communication to other cards without involving the host CPU. Such technology is already used in first applications [@MLNetwork; @Sano-Multi-FPGA-Stencil] and research has started to explore the best abstractions and programming models for inter-FPGA communication [@SMI; @FPGAEthernet]. Thus, communication between FPGAs out of an OpenCL framework is another essential characteristic that a benchmark suite targeting HPC should consider. In this paper, we propose *HPCC FPGA*, an OpenCL benchmark suite for FPGAs using the applications of the HPCC benchmark suite.
The motivation for choosing HPCC is that it is well-established for CPUs and covers a small set of applications that evaluate important memory access and computing patterns that are frequently used in HPC applications. Further, the benchmark also characterizes the HPC system’s network bandwidth, allowing one to extrapolate to the performance of parallel applications. Specifically, we make the following contributions in this paper: 1. We provide FPGA-adapted kernel implementations along with corresponding host code for setup and measurements for all benchmark applications. 2. We provide configuration options for the kernels that allow adjustments to resources and architecture of the target FPGA and board without the need to change the code manually. 3. We evaluate the execution of these benchmarks on different FPGA families and boards with Intel and Xilinx FPGAs and show that the benchmarks can capture relevant device properties. 4. We make all benchmarks and the build system available as open-source on GitHub to encourage community contributions. The remainder of this paper is organized as follows: In Section \[sec:related-work\], we give an overview of existing benchmark suites. In Section \[sec:hpcc-benchmark\], we introduce the benchmarks in HPCC FPGA in more detail and briefly discuss the contained benchmarks and the configuration options provided for the base runs. In Section \[sec:evaluation\] we build the benchmarks for different architectures and evaluate the results to show the potential of the proposed configurable base runs. In Section \[sec:discussion\], we evaluate the global memory system of the boards in more detail and give insights into experienced problems and the potential of the benchmarks to describe the performance of FPGA boards and the associated frameworks. Finally, in Section \[sec:conclusion\], we draw conclusions and outline future work. Related Work {#sec:related-work} ============ There already exist several benchmark suites for FPGAs and their programming frameworks.
Most benchmark suites like Rodinia [@Rodinia], OpenDwarfs [@OpenDwarfs-first] or SHOC [@SHOC] are originally designed with GPUs in mind. Although both GPUs and FPGAs can be programmed using OpenCL, the design of the compute kernels has to be changed and optimized specifically for FPGAs to achieve good performance. In the case of Rodinia this was done [@RodiniaFPGA] for a subset of the benchmark suite with a focus on different optimization patterns for the Intel FPGA (then Altera) SDK for OpenCL. In contrast, to port OpenDwarfs to FPGAs, Feng et al. [@OpenDwarfs-first] employed a research OpenCL synthesis tool that instantiates GPU-like architectures on FPGAs. With Rosetta [@Rosetta], there also exists a benchmark suite that was designed targeting FPGAs using the Xilinx HLS tools from the start. It focuses on typical FPGA streaming applications from the video processing and machine learning domains. The CHO [@CHO] benchmark targets more fundamental FPGA functionality and includes kernels from media processing and cryptography and the low-level generation of floating-point arithmetic through OpenCL, using the Altera SDK for OpenCL. The mentioned benchmarks often lack possibilities to adjust the benchmarks to the target architecture easily. Modifications have to be done manually in the kernel code, sometimes many different kernel variants are proposed or the kernels are not optimized at all, making it difficult to compare results for different FPGAs. A benchmark suite that takes a different approach is Spector [@Spector]. It makes use of several optimization parameters for every benchmark, which allows modification and optimization of the kernels for a given FPGA architecture. The kernel code does not have to be manually changed, and optimization options are restricted by the defined parameters. Nevertheless, the focus is more on the research of the design space than on performance characterization.
To the best of our knowledge, there exists no benchmark suite for FPGAs with a focus on HPC characteristics at the point of writing. All of the mentioned benchmark suites lack a way to measure the inter-FPGA communication capability of recent high-end FPGAs. In some of the benchmarks, the investigated input sizes are small enough to fit into the local memory resources of a single FPGA. Since actual HPC applications are highly parallel and require effective communication, an HPC-focused benchmark must also evaluate the characteristics of the communication network.

HPC Challenge Benchmarks for FPGA {#sec:hpcc-benchmark}
=================================

Benchmark Execution and Evaluation {#sec:evaluation}
==================================

Further Findings and Investigations {#sec:discussion}
===================================

Conclusion {#sec:conclusion}
==========

In this paper, we proposed *HPCC FPGA*, a novel benchmark suite for FPGAs. To this end, we provide configurable base implementations and host codes for all benchmarks of the well-established HPCC benchmark suite. We showed that the configuration options allow the generation of efficient benchmark kernels for Xilinx and Intel FPGAs using the same source code without manual modification. We executed the benchmarks on up to three FPGAs with four different memory setups and compared the results with simple performance models. Most benchmarks showed high performance efficiency when compared to the models. Nevertheless, the evaluation showed that the base implementations are often unable to fully utilize the available resources on an FPGA board. Hence, it is important to discuss the base implementations and configuration options with the community to create a valuable and widely accepted performance characterization tool for FPGAs. We made the code open-source and publicly available to simplify and encourage contributions to future versions of the benchmark suite.
Acknowledgements {#acknowledgements .unnumbered} ================ The authors gratefully acknowledge the support of this project by computing time provided by the Paderborn Center for Parallel Computing (PC2). We also thank Xilinx for the donation of an Alveo U280 card, Intel for providing a PAC D5005 loaner board and access to the reference design BSP with SVM support, and the Systems Group at ETH Zurich as well as the Xilinx Adaptive Compute Clusters (XACC) program for access to their Xilinx FPGA evaluation system.
---
abstract: 'A general method is proposed for predicting the asymptotic percolation threshold of networks with bottlenecks, in the limit that the sub-net mesh size goes to zero. The validity of this method is tested for bond percolation on filled checkerboard and “stack-of-triangle" lattices. Thresholds for the checkerboard lattices of different mesh sizes are estimated using the gradient percolation method, while for the triangular system they are found exactly using the triangle-triangle transformation. The values of the thresholds approach the asymptotic values of $0.64222$ and $0.53993$ respectively as the mesh is made finer, consistent with a direct determination based upon the predicted critical corner-connection probability.'
author:
- 'Amir Haji-Akbari'
- 'Robert M. Ziff'
bibliography:
- 'HajiAkbariZiffv3.bib'
title: 'Percolation in Networks with Voids and Bottlenecks'
---

\[sec:Introduction\]Introduction
=================================

Percolation concerns the formation of long-range connectivity in random systems [@Stauffer]. It has a wide range of applications in physics and engineering, including such topics as conductivity and magnetism in random systems, fluid flow in porous media [@Sukop2002], epidemics and clusters in complex networks [@GoltsevDorogovtsevMendes08], analysis of water structure [@BernabeiEtAl08], and gelation in polymer systems [@YilmazGelirAlverogluUysal08]. To study this phenomenon, one typically models the network by a regular lattice made random by independently making sites or bonds occupied with probability $p$. At a critical threshold $p_c$, for a given lattice and percolation type (site, bond), percolation takes place. Finding that threshold exactly or numerically to high precision is essential to studying the percolation problem on a particular lattice, and has been the subject of numerous works over the years (recent works include Refs.
[@Lee08; @RiordanWalters07; @Scullard06; @ScullardZiff06; @ZiffScullard06; @ScullardZiff08; @Parviainen07; @QuintanillaZiff07; @NeherMeckeWagner08; @WiermanNaorCheng05; @JohnerGrimaldiBalbergRyser08; @KhamforoushShamsThovertAdler08; @Ambrozic08; @Kownacki08; @FengDengBlote08; @Wu06; @MajewskiMalarz07; @WagnerBalbergKlein06; @TarasevichCherkasova07; @HakobyanPapouliaGrigoriu07; @BerhanSastry07]). In this paper we investigate the percolation characteristics of networks with bottlenecks. That is, we consider models in which we increase the number of internal bonds within a sub-net while keeping the number of contact points between sub-nets constant. We want to find how $p_c$ depends upon the mesh size in the sub-nets and in particular how it behaves as the mesh size goes to zero. Studying such systems should give insight on the behavior of real systems with bottlenecks, like traffic networks, electric power transmission networks, and ecological systems. It is also interesting from a theoretical point of view because it interrelates the percolation characteristics of the sub-net and the entire network. ![image](squareFig1.eps) ![image](triFig2.eps) An interesting class of such systems includes lattices with an ordered series of vacated areas within them. Examples include the filled checkerboard lattices (Fig. \[fig:Checkerboard\_finite\]) and the “stack-of-triangles" (Fig. \[fig:strg\_finite\]). The latter can be built by partitioning the triangular lattice into triangular blocks of dimension $L$, and alternately vacating those blocks. These internal blocks of length $L$ correspond to the sub-nets, which contact other sub-nets through the three contact points at their corners. The checkerboard lattice is the square-lattice analog of the stack-of-triangles lattice, where sub-nets are $L \times L$ square lattices which contact the other sub-nets via four contact points. 
Note, for the stack-of-triangles sub-nets, we also use the $L \times L$ designation, here to indicate $L$ bonds on the base and the sides. The problem of finding the bond percolation threshold can be solved exactly for the stack-of-triangles lattice because it fits into a class of self-dual arrangements of triangles, and the triangle-triangle transformation (a generalization of the star-triangle transformation) can be used to write down equations for its percolation threshold [@Ziff_CellDualCell; @ChayesLei06]. This approach leads to an algebraic equation which can be solved using numerical root-finding methods. However due to lack of self-duality in the filled checkerboard lattices, no exact solution can be obtained for their thresholds. It is of interest and of practical importance to investigate the limiting behavior of systems with sub-nets of an infinite number of bonds, i.e., systems where the size of sub-nets is orders of magnitude larger than the size of a single bond in the system, or equivalently, where the mesh size of the lattice compared to the sub-net size becomes small. Due to reduced connectivity, these systems will percolate at a *higher* occupation probability than a similar regular lattice. The limiting percolation threshold for infinite sub-nets is counter-intuitively non-unity, and is argued to be governed by the connectedness of contact points to the infinite percolating clusters within sub-nets. This argument leads to a simple criterion linking the threshold to the probability that the corners connect to the giant cluster in the center of the sub-net. In this work, the limiting threshold value is computed for bond percolation on the stack-of-triangles and filled checkerboard lattices using this new criterion. Percolation thresholds are also found for a series of lattices of finite sub-net sizes. 
For the stack-of-triangles lattices, most percolation thresholds are evaluated analytically using the triangle-triangle transformation method, while for filled checkerboard lattices, the gradient percolation method [@Ziff_Sapoval] is used. The limiting values of $0.53993$ and $0.64222$ are found for percolation thresholds of stack-of-triangles and checkerboard lattices respectively, which are both in good agreement with the values extrapolated for the corresponding lattices of finite sub-net sizes. We note that there are some similarities between this work and studies done on the fractal Sierpiński gaskets (triangular) [@YuYao1988] and carpets (square), but in the case of the Sierpiński models, the sub-nets are repeated in a hierarchical fashion while here they are not. For the Sierpiński gasket, which is effectively all corners, the percolation threshold is known to be 1 [@GefenAharonyShapirMandelbrot84]. For Sierpiński gaskets of a finite number of generations, the formulae for the corner connectivities can be found exactly through recursion [@TaitelbaumHavlinGrassbergerMoenig90], while here they cannot. Recently another hierarchical model with bottlenecks, the so-called Apollonian networks, which are related to duals of Sierpinski networks, has also been introduced [@AutoMoreiraHerrmannAndrade08]. In this model, the percolation threshold goes to zero as the system size goes to infinity. \[sec:theory\]Theory\ ===================== Let $p$ be the probability that a bond in the system is occupied. Consider a network with sub-nets of infinitely fine mesh, each individually percolating (in the sense of forming “infinite" clusters but not necessarily connecting the corners) at $p_{c,s}$, and denote the overall bond percolation threshold of the entire network to be $p_{c,n}$. It is obvious that $p_{c,s}<p_{c,n}$, due to reduced connectivity in the entire network compared to connectivity in individual sub-nets. 
For $p_{c,s} < p < p_{c,n}$, an infinite cluster will form within each sub-net with probability $1$. However, the entire network will not percolate, because a sufficient number of connections has not yet been established between the contact points at the corners and central infinite clusters. Now we construct an auxiliary lattice by connecting the contact points to the center of each subnet, which represents the central infinite cluster contracted into a single site. The occupation probability of a bond on this auxiliary lattice is the probability that the contact point is connected to the central infinite cluster of the sub-net. Percolation of this auxiliary lattice is equivalent to the percolation of the entire network. That is, if this auxiliary lattice percolates at a threshold $p_{c,a}$, the percolation threshold of the entire network will be determined by: $$\begin{aligned} \label{eq:bottleneck_general} P_{\infty,{\rm corner}}(p_{c,n})=p_{c,a}\end{aligned}$$ where $P_{\infty,{\rm corner}}(p)$ gives the probability that the corner of the sub-net is connected to the central infinite cluster given that the single occupation probability is $p$. In general no analytical expression exists for $P_{\infty,{\rm corner}}(p)$, even for simple lattices such as the triangular and square lattices, and $P_{\infty,{\rm corner}}(p)$ must be evaluated by simulation. ![\[fig:stack\_of\_triangles\] (Color online.) Stack-of-triangles lattice and its auxiliary lattice. The filled blue (dark) triangles represent the sub-net, and the yellow honeycomb lattice represents the effective auxiliary lattice.](Str_LatticeFig3.eps) \[sec:theory:stack\_of\_triangles\]Stack-of-Triangles Lattice ------------------------------------------------------------- Fig. \[fig:stack\_of\_triangles\] shows a limiting stack-of-triangles lattice where each shaded triangle represents a sub-net of infinitely many bonds. The contact points are the corners of the triangular sub-nets. As shown in Fig. 
\[fig:stack\_of\_triangles\], the auxiliary lattice of the stack-of-triangles lattice is the honeycomb lattice, which percolates at ${p_{c,a}=1-2\sin\left(\pi/18\right)\approx0.652704}$ [@SykesEssam1964]. Thus the asymptotic percolation threshold $p_{c,n}$ of the stack-of-triangles will be determined by: $$\begin{aligned} \label{eq:bottleneck_str} P_{\infty,{\rm corner}}(p_{c,n})=1-2\sin{\frac{\pi}{18}} \ .\end{aligned}$$ Because the stack-of-triangles lattice is made up of triangular cells in a self-dual arrangement, its percolation threshold can be found exactly using the triangle-triangle transformation [@Ziff_CellDualCell; @ChayesLei06]. Denoting the corners of a single triangular sub-net with $A$, $B$ and $C$, the percolation threshold of the entire lattice is determined by the solution of the following equation: $$\begin{aligned} \label{eq:dual} P(ABC)=P(\overline{ABC})\end{aligned}$$ where ${P(ABC)}$ is the probability that $A$, $B$ and $C$ are all connected, and ${P(\overline{ABC})}$ is the probability that none of them are connected. Eq. (\[eq:dual\]) gives rise to an algebraic equation which can be solved for the exact percolation threshold of the lattices of different sub-net sizes. \[sec:theory:checkerboard\]Filled Checkerboard Lattice ------------------------------------------------------ Unlike the stack-of-triangles lattice, there is no exact solution for percolation threshold of the checkerboard lattice for finite sub-nets because no duality argument can be made for such lattices. However once again an auxiliary lattice approach can be used to find a criterion for the asymptotic value of percolation threshold. Fig. \[fig:checkerboad\_aux\] depicts the corresponding auxiliary lattice for a checkerboard lattice, which is simply the square lattice with double bonds in series. This lattice percolates at ${p_{c,a}= 1/\sqrt{2} \approx{0.707107}}$. 
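The two auxiliary-lattice thresholds quoted above follow from standard exact results, and are quick to check numerically. A minimal sketch in plain Python (the variable names are ours): the honeycomb bond threshold is $1-2\sin(\pi/18)$, and two bonds in series, each occupied with probability $p$, conduct with probability $p^2$, so the double-bond square lattice requires $p^2 = 1/2$.

```python
import math

# Honeycomb bond threshold (exact, Sykes-Essam): p_c = 1 - 2 sin(pi/18)
p_honeycomb = 1 - 2 * math.sin(math.pi / 18)

# Two bonds in series with occupation probability p each act as one bond
# with probability p^2; the square lattice percolates at bond probability
# 1/2, so the double-bond square (auxiliary) lattice needs p^2 = 1/2.
p_double_square = math.sqrt(1 / 2)

print(f"honeycomb auxiliary lattice:     {p_honeycomb:.6f}")     # ~0.652704
print(f"double-bond square auxiliary:    {p_double_square:.6f}") # ~0.707107
```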
Thus for the infinite sub-net ${p_{c,n}}$ will be determined by: $$\begin{aligned} \label{eq:bottleneck_checkerboard} P_{\infty,{\rm corner}}(p_{c,n})= \frac{1}{\sqrt{2}}\end{aligned}$$ It is interesting to note that there exists another regular lattice — the “martini" lattice — for which the bond threshold is also exactly $1/\sqrt{2}$ [@ZiffScullard06]. However, that lattice does not appear to relate to a network construction as the double-square lattice does. ![\[fig:checkerboad\_aux\](Color online.) Auxiliary lattice of the checkerboard lattice. The blue (dark) colored areas represent the subnets, and the double-bond square lattice (diagonals) represents the auxiliary lattice.](Checkerboard_auxFig4.eps) \[sec:methods\]Methods ====================== \[sec:methods:Pc\_finite\]Percolation Threshold of Systems of finite-sized sub-nets ----------------------------------------------------------------------------------- For the checkerboard lattice, we estimate the bond percolation thresholds using the gradient percolation method  [@Ziff_Sapoval]. In this method, a gradient of occupation probability is applied to the lattice, such that bonds are occupied according to the local probability determined by this gradient. A self-avoiding hull-generating walk is then made on the lattice according to the rule that an occupied bond will reflect the walk while a vacant bond will be traversed by the walk. For a finite gradient, this walk can be continued infinitely by replicating the original lattice in the direction perpendicular to the gradient using periodic boundary conditions. Such a walk will map out the boundary between the percolating and non-percolating regions, and the average value of occupation probability during the walk will be a measure of the percolation threshold. 
Because all bonds are occupied or vacated independently of each other, this average probability can be estimated as [@RossoGouyetSapoval1986]: $$\begin{aligned} \label{eq:Pc_gradient} p_c=\frac{N_{occ}}{N_{occ}+N_{vac}}\end{aligned}$$ It is particularly straightforward to implement this algorithm for bond percolation on a square lattice, and the checkerboard lattice can be simulated by making some of the square-lattice bonds permanently vacant. Walks are carried out in a horizontal-vertical direction and the original lattice is rotated $45^{\circ}$. We applied this approach to checkerboard lattices of different block sizes. Fig. \[fig:chkrbrd\_2b2\] and Fig. \[fig:chkrbrd\_4b4\] show the corresponding setups for lattices with $2\times2$ and $4\times4$ vacancies, where the lattice bonds are represented as dashed diagonal lines and solid horizontal and vertical lines show where the walk goes. Circles indicate the centers of permanently vacant bonds. It should be emphasized that permanently vacated bonds are not counted in Eq. (\[eq:Pc\_gradient\]) even if they are visited by the walk. The percolation thresholds of stack-of-triangles lattices of finite sub-net size were calculated using Eq. (\[eq:dual\]). If the occupation probability is $p$ and $q = 1 - p$, one can express $P(ABC)$ and $P(\overline{ABC})$ as: $$\begin{aligned} P(ABC)&=&\sum_{i=0}^{3n(n+1)/2} \phi(n,i)p^iq^{3n(n+1)/2-i} \label{eq:phi_n_i}\\ P(\overline{ABC})&=&\sum_{i=0}^{3n(n+1)/2}\psi(n,i)p^i q^{3n(n+1)/2-i} \label{eq:psi_n_i}\end{aligned}$$ where $n$ denotes the number of bonds per side of the sub-net, $\phi(n,i)$ denotes the number of configurations of an $n\times n$ triangular block with precisely $i$ occupied bonds where $A$, $B$ and $C$ are connected to each other, and $\psi(n,i)$ denotes the number of configurations where none of these points are connected.
There appears to be no closed-form combinatorial expression for $\phi(n,i)$ and $\psi(n,i)$, and we determined them by exhaustive search of all possible configurations.

![\[fig:chkrbrd\_2b2\]Representation of checkerboard lattices for simulation with the gradient method. The original bond lattice is represented by dashed diagonal lines, while the lattice on which the walk goes is vertical and horizontal. Open circles mark bonds that are permanently vacant.](checkerboard2x2Fig5.eps){width="2.5"}

![\[fig:chkrbrd\_4b4\]Checkerboard lattice with $4\times4$ vacancies, with description the same as in Fig. \[fig:chkrbrd\_2b2\].](checkerboard4x4Fig6.eps){width="2.5"}

\[sec:methods:estimate\_Pinf\]Estimation of ${P_{\infty,{\rm corner}}}$
-----------------------------------------------------------------------

As mentioned in Section \[sec:theory\], the asymptotic value of the percolation threshold ${p_{c,n}}$ can be calculated using Eq. (\[eq:bottleneck\_checkerboard\]). However, there is no analytical expression for ${P_{\infty,{\rm corner}}(p)}$, hence it must be characterized by simulation. In order to do that, the size distribution of clusters connected to the corner must be found for different values of $p>p_{c,s}$. Cluster sizes are defined in terms of the number of sites in the cluster. In order to isolate the cluster connected to the corner, a first-in-first-out (FIFO) Leath or growth algorithm is used starting from the corner. In the FIFO algorithm, the neighbors of every unvisited site are investigated before going to neighbors of the neighbors, so that clusters grow in a circular front. Compared to the last-in-first-out algorithm used in recursive programming, this algorithm performs better for ${p{\ge}p_{c,s}}$ because it explores the space in a more compact way. At each run, the size of the cluster connected to the corner is evaluated using the FIFO growth algorithm.
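The FIFO corner-cluster growth just described can be sketched in a few lines of plain Python. This is only an illustrative stand-in for the authors' code: the lattice here is an $L\times L$ square bond lattice, all names are ours, and the size cutoff `cap` crudely plays the role of the "infinite" cluster bin discussed below, rather than the binning procedure the authors actually use.

```python
import random
from collections import deque

def corner_cluster_size(L, p, rng, cap):
    """Grow the cluster attached to corner (0, 0) by a FIFO (breadth-first)
    Leath walk on an L x L square bond lattice.  Bonds are occupied with
    probability p, sampled lazily and cached so each bond is decided once.
    Growth stops once `cap` sites are reached (stand-in for 'infinite')."""
    bond = {}                       # frozenset of endpoints -> occupied?
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue and len(seen) < cap:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < L and 0 <= ny < L and (nx, ny) not in seen:
                key = frozenset(((x, y), (nx, ny)))
                if key not in bond:
                    bond[key] = rng.random() < p
                if bond[key]:
                    seen.add((nx, ny))
                    queue.append((nx, ny))
    return len(seen)

def p_inf_corner(L, p, trials, seed=0):
    """Fraction of runs whose corner cluster reaches the size cutoff --
    a rough finite-size estimate of P_inf,corner(p)."""
    rng = random.Random(seed)
    cap = L * L // 4                # crude stand-in for the 'infinite' bin
    hits = sum(corner_cluster_size(L, p, rng, cap) >= cap
               for _ in range(trials))
    return hits / trials
```

Even on a small lattice, this estimate already exhibits the supercritical growth of $P_{\infty,{\rm corner}}$ with $p$ that the figures below display.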
In order to get better statistics, clusters with sizes between ${2^i}$ and ${2^{i+1}-1}$ are counted in the $i$-th bin. Because simulations are always run on a finite system, there is an ambiguity in how to define the infinite cluster. However, when ${p{\ge}p_{c,s}}$, the infinite cluster occupies almost the entire lattice, and the finite-size clusters are quite small on average. This effect becomes more and more pronounced as $p$ increases, and the expected number of clusters in a specific bin becomes smaller and smaller. Consequently, larger bins will effectively contain no clusters, except the bin corresponding to cluster sizes comparable to the size of the entire system. Thus there is no need to set a cutoff value for defining an infinite cluster. Fig. \[fig:cluster\_size\] depicts the size distribution of clusters connected to the corner obtained for a $1024 \times 1024$ triangular lattice at (a): $p=0.40$ and (b): $p=0.55$ after ${10^4}$ independent runs. As can be seen, there is a clear gap between bins corresponding to small clusters and the bin corresponding to the spanning infinite cluster, even for small values of $p$, which clearly demonstrates that the largest nonempty bin corresponds to infinite percolating clusters connected to the corner. The fraction of such clusters connected to the corner is an estimate of $P_{\infty,{\rm corner}}(p)$. In the simulations, we used the four-offset shift-register random-number generator R(471,1586,6988,9689) described in Ref. [@Ziff98].

![image](Cluster_size_0_40Fig7a.eps) ![image](Cluster_size_0_55Fig7b.eps)

\[sec:results\]Results and Discussion
=====================================

\[sec:results:gradient\]Gradient Percolation Data
-------------------------------------------------

The gradient percolation method was used to estimate the bond percolation threshold of checkerboard lattices of five different sub-net sizes, i.e., $2\times2$, $4\times4$, $8\times8$, $16\times16$ and $32\times32$.
For each lattice, six values of the gradient were used, and simulations were run for $10^{10}$ to $10^{12}$ steps for each gradient value in order to assure that the estimated percolation thresholds are accurate to at least five significant digits. The gradient was applied at an angle of $45^{\circ}$ relative to the original lattice. Figures \[fig:pc\_2b2\]–\[fig:pc\_8b8\] depict typical simulation results. Measured percolation thresholds for finite gradients were extrapolated to estimate the percolation threshold as $L\rightarrow\infty$. Our simulations show that $p_c$ fits fairly linearly when plotted against $1/L$. Table \[table:pc\_checkerboard\] gives these estimated percolation thresholds.

![image](pc_2b2Fig8a.eps) ![image](pc_4b4Fig8b.eps) ![image](pc_8b8Fig8c.eps)

  Sub-net size   Estimated $p_{c,n}$
  -------------- ---------------------------
  $1\times1$     $0.5^{a}$
  $2\times2$     $0.596303\pm0.000001^{b}$
  $4\times4$     $0.633685\pm0.000009^{b}$
  $8\times8$     $0.642318\pm0.000005^{b}$
  $16\times16$   $0.64237\pm0.00001^{b}$
  $32\times32$   $0.64219\pm0.00002^{b}$
  $\vdots$       $\vdots$
  $\infty$       $0.642216\pm0.00001^{c}$

  : \[table:pc\_checkerboard\] Percolation threshold for checkerboard lattices of different sub-net sizes: $^a$Exact result, $^{b}$from gradient percolation simulations, $^c$from corner simulations using Eq. (\[eq:bottleneck\_checkerboard\]).

\[section:results:dual\]Percolation Threshold of The Stack-of-triangles Lattice
-------------------------------------------------------------------------------

As mentioned in Sections \[sec:theory:stack\_of\_triangles\] and \[sec:methods:Pc\_finite\], the percolation threshold of the stack-of-triangles lattice can be determined by Eq. (\[eq:dual\]). Table \[table:polynomial\] summarizes the corresponding polynomial expressions and their relevant roots for lattices having $1$, $2$, $3$ and $4$ triangles per edge. These polynomials give the $\phi(n,i)$ and $\psi(n,i)$ in Eqs.
(\[eq:phi\_n\_i\]) and (\[eq:psi\_n\_i\]) for $n=1,2,3,4$ and $i=0,1,\ldots,3n(n+1)/2$. We show $p_0 = P(\overline{ABC})$, $p_2 = P(AB\overline{C})$ (the probability that a given pair of vertices are connected together and not connected to the third vertex), and $p_3 = P(ABC)$. These quantities satisfy $p_0 + 3 p_2 + p_3 = 1$. Then we use Eq. (\[eq:dual\]) to solve for $p_{c,n}$ numerically. We also show in Table \[table:polynomial\] the values of $p_0$, $p_2$ and $p_3$ evaluated at the $p_{c,n}$. Interestingly, as $n$ increases, $p_0$ at first increases somewhat but then tends back to its original value at $n= 1$, reflecting the fact that the connectivity of the infinitely fine mesh triangle is identical to that of the critical honeycomb lattice, which is identical to the connectivity of the simple triangular lattice according to the usual star-triangle arguments. It is not possible to perform this exact enumeration for larger sub-nets, so we used gradient percolation method to evaluate $p_c$ for $5\times5$. (To create the triangular bond system on a square bond lattice, alternating horizontal bonds are made permanently occupied.) The final threshold results are summarized in Table \[table:pc\_stack\_of\_triangles\]. 
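Eq. (\[eq:dual\]) reduces to one-dimensional root finding once the polynomials are known. The sketch below (plain Python, bisection; function names are ours) uses the exact $P(ABC)$ and $P(\overline{ABC})$ from Table \[table:polynomial\] for the $1\times1$ and $2\times2$ sub-nets; for $n=1$ the condition collapses to the classical triangular-lattice equation $p^3 - 3p + 1 = 0$.

```python
def f_1x1(p):
    # P(ABC) - P(not ABC) for the simple triangular lattice (n = 1):
    # p3 = p^3 + 3 p^2 q,  p0 = q^3; equivalent to p^3 - 3p + 1 = 0.
    q = 1 - p
    return (p**3 + 3 * p**2 * q) - q**3

def f_2x2(p):
    # Polynomials from Table [table:polynomial], n = 2 (9 bonds per sub-net).
    q = 1 - p
    p3 = (9*p**4*q**5 + 57*p**5*q**4 + 63*p**6*q**3
          + 33*p**7*q**2 + 9*p**8*q + p**9)
    p2 = (p**2*q**7 + 10*p**3*q**6 + 32*p**4*q**5
          + 22*p**5*q**4 + 7*p**6*q**3 + p**7*q**2)
    p0 = 1 - p3 - 3 * p2          # (p+q)^9 - p3 - 3 p2
    return p3 - p0

def bisect(f, lo=0.0, hi=1.0, tol=1e-12):
    """f is negative at lo and positive at hi; shrink the bracket."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2
```

`bisect(f_1x1)` recovers $2\sin(\pi/18)\approx0.34729635533$ and `bisect(f_2x2)` recovers $0.47162878827$, matching Table \[table:pc\_stack\_of\_triangles\].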
--------- ---------------------------------------- -------------------------------------------------------------------------------------------------------- Sub-net $1\times1$ (simple triangular lattice) $2\times2$ (3 “up" triangles or 9 bonds per sub-net)\ ${p_3=P(ABC)}$ & ${p^3+3p^2q}$ & ${9p^4q^5+57p^5q^4+63p^6q^3+33p^7q^2+9p^8q+p^9}$\ $p_2 =P\left(AB\overline{C}\right)$ & ${pq^2}$ & ${p^2q^7+10p^3q^6+32p^4q^5+22p^5q^4+7p^6q^3+p^7q^2}$\ ${p_0=P(\overline{ABC})}$ & ${(p+q)^3-p_3-3p_2}$ & ${(p+q)^9-p_3-3p_2}$\ $p_c$ &$0.34729635533$ & $0.47162878827$\ ${p_0(p_c)=p_3(p_c)}$ & $0.27806614328$ & $0.28488908000$\ ${p_2(p_c)}$ & $0.14795590448$ & $0.14340728000$\ --------- ---------------------------------------- -------------------------------------------------------------------------------------------------------- --------- ------------------------------------------------------------------------------------------------------------------------------------ Sub-net $3 \times 3$ (6 “up" triangles or $18$ bonds per sub-net)\ ${p_3=P(ABC)}$ & ${29p^6q^{12}+468p^7q^{11}+3015p^8q^{10}+9648p^9q^9+16119p^{10}q^8+17076p^{11}q^7+12638p^{12}q^6}$\ & ${+6810p^{13}q^5+2694p^{14}q^4+768p^{15}q^3+150p^{16}q^2+18p^{17}q+p^{18}}$\ $p_2 =P\left(AB\overline{C}\right)$ & ${p^3q^{15}+21p^4q^{14}+202p^5q^{13}+1125p^6q^{12}+3840p^7q^{11}+7956p^8q^{10}+9697p^9q^9}$\ & ${+7821p^{10}q^8+4484p^{11}q^7+1879p^{12}q^6+572p^{13}q^5+121p^{14}q^4+16p^{15}q^3+p^{16}q^2}$\ ${p_0=P(\overline{ABC})}$ & ${(p+q)^{18}-p_3-3p_2}$\ $p_c$ &$0.50907779266$\ ${p_0(p_c)=p_3(p_c)}$ & $0.28322276251$\ ${p_2(p_c)}$ & $0.14451815833$\ --------- ------------------------------------------------------------------------------------------------------------------------------------ --------- ---------------------------------------------------------------------------------------------------------------------------- Sub-net $4 \times 4$ ($10$ “up" triangles or $30$ bonds per sub-net)\ ${p_3=P(ABC)}$ & 
${99p^8q^{22}+2900p^9q^{21}+38535p^{10}q^{20}+305436p^{11}q^{19}+1598501p^{12}q^{18}}$\ & ${+5790150p^{13}q^{17}+14901222p^{14}q^{16}+27985060p^{15}q^{15}+39969432p^{16}q^{14}}$\ & ${+45060150p^{17}q^{13}+41218818p^{18}q^{12}+31162896p^{19}q^{11}+19685874p^{20}q^{10}}$\ & ${+10440740p^{21}q^9+4647369p^{22}q^8+1727208p^{23}q^7+530552p^{24}q^6+132528p^{25}q^5}$\ & ${+26265p^{26}q^4+3976p^{27}q^3+432p^{28}q^2+30p^{29}q+p^{30}}$\ $p_2 =P\left(AB\overline{C}\right)$ & ${p^4q^{26}+36p^5q^{25}+613p^6q^{24}+6533p^7q^{23}+48643p^8q^{22}+267261p^9q^{21}}$\ & ${+1114020p^{10}q^{20}+3563824p^{11}q^{19}+8766414p^{12}q^{18}+16564475p^{13}q^{17}}$\ & ${+24187447p^{14}q^{16}+27879685p^{15}q^{15}+25987202p^{16}q^{14}+19980934p^{17}q^{13}}$\ & ${+12843832p^{18}q^{12}+6950714p^{19}q^{11}+3170022p^{20}q^{10}+1212944p^{21}q^9}$\ & ${+385509p^{22}q^8+100140p^{23}q^7+20744p^{24}q^6+3300p^{25}q^5+379p^{26}q^4+28p^{27}q^3+p^{28}q^2}$\ ${p_0=P(\overline{ABC})}$ & ${(p+q)^{30}-p_3-3p_2}$\ $p_c$ &$0.52436482243 $\ ${p_0(p_c)=p_3(p_c)}$ & $0.28153957013$\ ${p_2(p_c)}$ & $0.14564028658$\ --------- ---------------------------------------------------------------------------------------------------------------------------- Sub-net size Estimated $p_c$ -------------- -- -- -- -- -- ---------------------------- $1\times1$ $0.347296355^{a}$ $2\times2$ $0.471628788^{a}$ $3\times3$ $0.509077793^{a}$ $4\times4$ $0.524364822^{a}$ $5\times5$ $0.5315976\pm0.000001^{b}$ $\vdots$ $\vdots$ $\infty$ $0.53993\pm0.00001^c$ : \[table:pc\_stack\_of\_triangles\] Percolation threshold for stack-of-triangles lattices of different sub-net sizes: $^a$From Eq. (\[eq:dual\]) using exact expressions for $p_0$ and $p_3$ from Table \[table:polynomial\], $^b$from gradient simulation, $^c$Corner simulation using Eq. (\[eq:bottleneck\_str\]). 
\[sec:results:estimate\_Pinf\]Estimation of ${P_{\infty,{\rm corner}}(p)}$
--------------------------------------------------------------------------

### \[sec:results:estimate\_Pinf:checkerb\]Square Lattice

The cluster growth algorithm was used to estimate ${P_{\infty,{\rm corner}}(p)}$ for different values of $p$. Simulations were run on a $2048\times2048$ square lattice. For each value of $p>1/2$, $10^5$ independent runs were performed and $P_{\infty,{\rm corner}}$ was estimated as the fraction of clusters falling into the largest nonempty bin, as described in Section \[sec:methods:estimate\_Pinf\]. Fig. \[fig:p\_inf\_chkrbrd\] shows the resulting curve for the square lattice. In order to solve Eq. (\[eq:bottleneck\_checkerboard\]), a cubic spline with natural boundary conditions was used for interpolation, and an initial estimate of ${p_{c,n}}$ was obtained to be $0.6432$. The standard deviation of $P_{\infty,{\rm corner}}(p)$ scales as $O(1/\sqrt{N})$, where $N$ is the number of independent simulations used for its estimation, so that $N=10^5$ gives an accuracy in $P_{\infty,{\rm corner}}(p)$ of about two significant figures. In order to increase the accuracy of our estimate, further simulations were performed in the vicinity of $p=0.6432$ for $N=10^{10}$ trials with a lower cut-off size, and $p_{c,n}$ was found to be $0.642216\pm0.00001$. This number is in good agreement with the percolation thresholds given in Table \[table:pc\_checkerboard\]. Note that $p_{c,n}$ for the $16\times16$ sub-net checkerboard lattice actually overshoots the value 0.642216 for the infinite sub-net and then drops to the final value. This non-monotonic behavior is surprising at first and presumably is due to some interplay between the various corner connection probabilities that occurs for finite systems.
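The interpolation step above can be sketched as follows. This is only an illustration: the sampled $(p, P_{\infty,{\rm corner}})$ values below are made-up monotone stand-ins shaped like Fig. \[fig:p\_inf\_chkrbrd\], not the paper's data, and piecewise-linear interpolation stands in for the natural cubic spline used by the authors to keep the sketch dependency-free.

```python
def solve_threshold(ps, Ps, target):
    """Find p with P_inf,corner(p) = target by bisection on a piecewise-
    linear interpolant through the sampled (p, P) pairs.  (The paper uses
    a natural cubic spline; linear interpolation is used here instead.)"""
    def interp(x):
        for (x0, y0), (x1, y1) in zip(zip(ps, Ps), zip(ps[1:], Ps[1:])):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("x outside sampled range")
    lo, hi = ps[0], ps[-1]
    for _ in range(100):           # bisection on the increasing interpolant
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if interp(mid) < target else (lo, mid)
    return (lo + hi) / 2

# Hypothetical monotone samples (illustrative only, not the paper's data):
ps = [0.55, 0.60, 0.65, 0.70, 0.75]
Ps = [0.40, 0.55, 0.66, 0.74, 0.81]
p_est = solve_threshold(ps, Ps, 2 ** -0.5)   # target = 1/sqrt(2)
```

With real simulation data in place of the stand-in samples, `p_est` would be the initial estimate of $p_{c,n}$ that the refined runs then sharpen.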
At the threshold $p_{c,n} = 0.642216$, we found that the number of corner clusters containing $s$ sites for large $s$ behaves in the expected way for supercritical clusters [@Stauffer] $n_s \sim a \exp(-b s^{1/2})$ with $\ln a = -7.0429$ and $b = -0.8177$. ![\[fig:p\_inf\_chkrbrd\]${P_{\infty,{\rm corner}}(p)}$ for the square lattice.](P_inf_checkerboardFig9.eps) ### \[sec:results:estimate\_Pinf:stotr\]Triangular lattice The cluster growth algorithm was applied to find the size distribution of clusters connected to the corner of a $1024\times1024$ triangular lattice. For each value of $p$, $10^4$ independent runs were performed and $P_{\infty,{\rm corner}}(p)$ was evaluated. Fig. \[fig:p\_inf\_strg\] depicts the results. The root of Eq. (\[eq:bottleneck\_str\]) was determined by cubic spline to be around $0.539$. Further simulations were performed around this value with $N=10^{10}$ runs for each $p$, yielding $p_{c,n}=0.539933\pm0.00001$. This value is also in good agreement with values given in Table  \[table:polynomial\] and shows fast convergence as sub-net size increases. ![\[fig:p\_inf\_strg\]${P_{\infty,{\rm corner}}(p)}$ for the triangular lattice.](P_inf_strgFig10.eps) Discussion {#sec:conclusion} ========== We have shown that the percolation threshold of checkerboard and stack-of-triangle systems approach values less than 1 as the mesh spacing in the sub-nets goes to zero. In that limit, the threshold can be found by finding the value of $p$ such that the probability a corner vertex is connected to the infinite cluster $P_{\infty,{\rm corner}}$ equals $1/\sqrt{2}$ and $1 - 2 \sin (\pi/18)$, respectively, based upon the equivalence with the double-bond square and bond honeycomb lattices. The main results of our analysis and simulations are summarized in Tables \[table:pc\_checkerboard\] and \[table:pc\_stack\_of\_triangles\]. 
For the case of the checkerboard, we notice a rather interesting and unexpected situation in which the threshold $p_{c,n}$ slightly overshoots the infinite-sub-net value and then decreases as the mesh size increases. The threshold here is governed by a complicated interplay of connection probabilities for each square, and evidently for intermediate sized systems it is somewhat harder to connect the corners than for larger ones, and this leads to a larger threshold. In the case of the triangular lattice, where there are fewer connection configurations between the three vertices of one triangle (namely, just $p_0$, $p_2$ and $p_3$), the value of $p_{c,n}$ appears to grow monotonically. To illustrate the general behavior of the systems, we show a typical critical cluster for the $8\times8$ checkerboard system in Fig. \[Pict8x8squaresBW\]. It can be seen that the checkerboard squares the cluster touches are mostly filled, since the threshold $p_{c,n} = 0.642318$ is so much larger than the square lattice’s threshold $p_{c,s} = 0.5$. In Fig. \[hajisquaredensity\] we show the average density of “infinite" (large) clusters in a single $64\times64$ square at the checkerboard criticality of $p_{c,n} = 0.642216$, in which case the density drops to $1/\sqrt{2}$ at the corners. In Fig. \[haji4density\] we show the corresponding densities conditional on the requirement that the cluster simultaneously touches all four corners, so that the density now goes to 1 at the corners and drops to a somewhat lower value in the center because not every site in the system belongs to the spanning cluster. Similar plots can be made of clusters touching 1, 2, or 3 corners. At the sub-net critical point $p_{c,s}$, the first two cases can be solved exactly and satisfy a factorization condition [@SimmonsKlebanZiff07; @SimmonsZiffKleban08], but this result does not apply at the higher $p_{c,n}$. The ideas discussed in this paper apply to any system with regular bottlenecks. 
Another example is the kagomé lattice with the triangles filled with a finer-mesh triangular lattice; this system is studied in Ref. [@ZiffGu]. Acknowledgments =============== This work was supported in part by the National Science Foundation Grant No. DMS-0553487 (RMZ). The authors also acknowledge the contribution of UROP (Undergraduate Research Opportunity Program) student Hang Gu for his numerical determination of $p_{c,n}$ for the triangular lattice of sub-net size $5\times5$, and thank Christian R. Scullard for helpful discussions concerning this work.
Richard Broun

Richard Broun may refer to:

- Richard Broun (politician), see Members of the Western Australian Legislative Council, 1832–1870
- Sir Richard Broun, 6th Baronet (13 Dec 1781), of the Broun Baronets
- Sir Richard Broun, 8th Baronet (1801–1858), of the Broun Baronets

See also:

- Richard Brown (disambiguation)
- Broun (surname)
--- abstract: 'The purpose of this article is to study the problem of finding sharp lower bounds for the norm of the product of polynomials in the ultraproducts of Banach spaces $(X_i)_{\mathfrak U}$. We show that, under certain hypotheses, there is a strong relation between this problem and the same problem for the spaces $X_i$.' address: 'IMAS-CONICET' author: - Jorge Tomás Rodríguez title: On the norm of products of polynomials on ultraproducts of Banach spaces --- Introduction ============ In this article we study the factor problem in the context of ultraproducts of Banach spaces. This problem can be stated as follows: for a Banach space $X$ over a field ${\mathbb K}$ (with ${\mathbb K}={\mathbb R}$ or ${\mathbb K}={\mathbb C}$) and natural numbers $k_1,\cdots, k_n$, find the optimal constant $M$ such that, given any set of continuous scalar polynomials $P_1,\cdots,P_n:X\rightarrow {\mathbb K}$ of degrees $k_1,\cdots,k_n$, the inequality $$\label{problema} M \Vert P_1 \cdots P_n\Vert \ge \, \Vert P_1 \Vert \cdots \Vert P_n \Vert$$ holds, where $\Vert P \Vert = \sup_{\Vert x \Vert_X=1} \vert P(x)\vert$. We also study a variant of the problem in which we require the polynomials to be homogeneous. Recall that a function $P:X\rightarrow {\mathbb K}$ is a continuous $k-$homogeneous polynomial if there is a continuous $k-$linear function $T:X^k\rightarrow {\mathbb K}$ for which $P(x)=T(x,\cdots,x)$. A function $Q:X\rightarrow {\mathbb K}$ is a continuous polynomial of degree $k$ if $Q=\sum_{l=0}^k Q_l$, with $Q_0$ a constant, $Q_l$ ($1\leq l \leq k$) an $l-$homogeneous polynomial and $Q_k \neq 0$. The factor problem has been studied by several authors. In [@BST], C. Benítez, Y. Sarantopoulos and A. Tonge proved that, for continuous polynomials, inequality (\[problema\]) holds with constant $$M=\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}$$ for any complex Banach space.
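As a worked instance of this constant (an illustration added here, not part of the original argument): for two linear factors, and for a linear times a quadratic factor, the formula gives

```latex
% n = 2, k_1 = k_2 = 1:
M = \frac{(1+1)^{(1+1)}}{1^{1}\,1^{1}} = 4,
\qquad
% n = 2, k_1 = 1, k_2 = 2:
M = \frac{(1+2)^{(1+2)}}{1^{1}\,2^{2}} = \frac{27}{4}.
```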
The authors also showed that this is the best universal constant, since there are polynomials on $\ell_1$ for which equality prevails. For complex Hilbert spaces and homogeneous polynomials, D. Pinasco proved in [@P] that the optimal constant is $$\nonumber M=\sqrt{\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}}.$$ This is a generalization of the result for linear functions obtained by Arias-de-Reyna in [@A]. In [@CPR], also for homogeneous polynomials, D. Carando, D. Pinasco and the author proved that for any complex $L_p(\mu)$ space, with $dim(L_p(\mu))\geq n$ and $1<p<2$, the optimal constant is $$\nonumber M=\sqrt[p]{\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}}.$$ This article is partially motivated by the work of M. Lindström and R. A. Ryan in [@LR]. In that article they studied, among other things, a problem similar to (\[problema\]): finding the so-called polarization constant of a Banach space. They found a relation between the polarization constant of the ultraproduct $(X_i)_{\mathfrak U}$ and the polarization constant of each of the spaces $X_i$. Our objective is to do an analogous analysis for our problem (\[problema\]). That is, to find a relation between the factor problem for the space $(X_i)_{\mathfrak U}$ and the factor problem for the spaces $X_i$. In Section 2 we give some basic definitions and results on ultraproducts needed for our discussion. In Section 3 we state and prove the main result of this paper, involving ultraproducts, and a similar result on biduals. Ultraproducts ============= We begin with some definitions, notations and basic results on filters, ultrafilters and ultraproducts. Most of the content presented in this section, as well as an exhaustive exposition on ultraproducts, can be found in Heinrich’s article [@H]. A filter ${\mathfrak U}$ on a family $I$ is a collection of nonempty subsets of $I$ closed under finite intersections and supersets.
An ultrafilter is a maximal filter. In order to define the ultraproduct of Banach spaces, we are going to need some topological results first. Let ${\mathfrak U}$ be an ultrafilter on $I$ and $X$ a topological space. We say that the limit of $(x_i)_{i\in I} \subseteq X$ with respect to ${\mathfrak U}$ is $x$ if for every open neighborhood $U$ of $x$ the set $\{i\in I: x_i \in U\}$ is an element of ${\mathfrak U}$. We denote $$\displaystyle\lim_{i,{\mathfrak U}} x_i = x.$$ The following is Proposition 1.5 from [@H]. \[buenadef\] Let ${\mathfrak U}$ be an ultrafilter on $I$, $X$ a compact Hausdorff space and $(x_i)_{i\in I} \subseteq X$. Then, the limit of $(x_i)_{i\in I}$ with respect to ${\mathfrak U}$ exists and is unique. Later on, we are going to need the following basic lemma about ultrafilter limits, whose proof is an easy exercise in basic topology and ultrafilters. \[lemlimit\] Let ${\mathfrak U}$ be an ultrafilter on $I$ and $\{x_i\}_{i\in I}$ a family of real numbers. Assume that the limit of $(x_i)_{i\in I} \subseteq {\mathbb R}$ with respect to ${\mathfrak U}$ exists and let $r$ be a real number such that there is a subset $U$ of $\{i: r<x_i\}$ with $U\in {\mathfrak U}$. Then $$r \leq \displaystyle\lim_{i,{\mathfrak U}} x_i.$$ We are now able to define the ultraproduct of Banach spaces. Given an ultrafilter ${\mathfrak U}$ on $I$ and a family of Banach spaces $(X_i)_{i\in I}$, take the Banach space $\ell_\infty(I,X_i)$ of norm bounded families $(x_i)_{i\in I}$ with $x_i \in X_i$ and norm $$\Vert (x_i)_{i\in I} \Vert = \sup_{i\in I} \Vert x_i \Vert.$$ The ultraproduct $(X_i)_{\mathfrak U}$ is defined as the quotient space $\ell_\infty(I,X_i)/ \sim $ where $$(x_i)_{i\in I}\sim (y_i)_{i\in I} \Leftrightarrow \displaystyle\lim_{i,{\mathfrak U}} \Vert x_i - y_i \Vert = 0.$$ Observe that Proposition \[buenadef\] assures us that this limit exists for every pair $(x_i)_{i\in I}, (y_i)_{i\in I}\in \ell_\infty(I,X_i)$.
We denote the class of $(x_i)_{i\in I}$ in $(X_i)_{\mathfrak U}$ by $(x_i)_{\mathfrak U}$. The following result is the polynomial version of Definition 2.2 from [@H] (see also Proposition 2.3 from [@LR]). The reasoning behind it is almost the same. \[pollim\] Given two ultraproducts $(X_i)_{\mathfrak U}$, $(Y_i)_{\mathfrak U}$ and a family of continuous homogeneous polynomials $\{P_i\}_{i\in I}$ of degree $k$ with $$\displaystyle\sup_{i\in I} \Vert P_i \Vert < \infty,$$ the map $P:(X_i)_{\mathfrak U}\longrightarrow (Y_i)_{\mathfrak U}$ defined by $P((x_i)_{\mathfrak U})=(P_i(x_i))_{\mathfrak U}$ is a continuous homogeneous polynomial of degree $k$. Moreover $\Vert P \Vert = \displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. If ${\mathbb K}={\mathbb C}$, the hypothesis of homogeneity can be omitted, but in this case the degree of $P$ can be lower than $k$. Let us start with the homogeneous case. Write $P_i(x)=T_i(x,\cdots,x)$ with $T_i$ a $k-$linear continuous function. Define $T:(X_i)_{\mathfrak U}^k \longrightarrow (Y_i)_{\mathfrak U}$ by $$T((x^1_i)_{\mathfrak U},\cdots,(x^k_i)_{\mathfrak U})=(T_i(x^1_i,\cdots ,x^k_i))_{\mathfrak U}.$$ $T$ is well defined since, by the polarization formula, $ \displaystyle\sup_{i\in I} \Vert T_i \Vert \leq \displaystyle\sup_{i\in I} \frac{k^k}{k!}\Vert P_i \Vert< \infty$. Since the maps $T_i$ are linear in each coordinate, the map $T$ is linear in each coordinate, and thus it is a $k-$linear function. Given that $$P((x_i)_{\mathfrak U})=(P_i(x_i))_{\mathfrak U}=(T_i(x_i,\cdots,x_i))_{\mathfrak U}=T((x_i)_{\mathfrak U},\cdots,(x_i)_{\mathfrak U})$$ we conclude that $P$ is a $k-$homogeneous polynomial. To see the equality of the norms, for every $i$ choose a norm-$1$ element $x_i\in X_i$ at which $P_i$ almost attains its norm; from there it is easy to deduce that $\Vert P \Vert \geq \displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$.
For the other inequality we use that $$|P((x_i)_{\mathfrak U})|= \displaystyle\lim_{i,{\mathfrak U}}|P_i(x_i)| \leq \displaystyle\lim_{i,{\mathfrak U}}\Vert P_i \Vert \Vert x_i \Vert^k = \left(\displaystyle\lim_{i,{\mathfrak U}}\Vert P_i \Vert \right)\Vert (x_i)_{\mathfrak U}\Vert^k .$$ Now we treat the non-homogeneous case. For each $i\in I$ we write $P_i=\sum_{l=0}^kP_{i,l}$, with $P_{i,0}$ a constant and $P_{i,l}$ ($1\leq l \leq k$) an $l-$homogeneous polynomial. Take the direct sum $X_i \oplus_\infty {\mathbb C}$ of $X_i$ and ${\mathbb C}$, endowed with the norm $\Vert (x,\lambda) \Vert =\max \{ \Vert x \Vert, | \lambda| \}$. Consider the polynomial $\tilde{P_i}:X_i \oplus_\infty {\mathbb C}\rightarrow Y_i$ defined by $\tilde{P}_i(x,\lambda)=\sum_{l=0}^k P_{i,l}(x)\lambda^{k-l}$. The polynomial $\tilde{P}_i$ is a homogeneous polynomial of degree $k$ and, using the maximum modulus principle, it is easy to see that $\Vert P_i \Vert = \Vert \tilde{P_i} \Vert $. Then, by the homogeneous case, we have that the polynomial $\tilde{P}:(X_i \oplus_\infty {\mathbb C})_{\mathfrak U}\rightarrow (Y_i)_{\mathfrak U}$ defined as $\tilde{P}((x_i,\lambda_i)_{\mathfrak U})=(\tilde{P}_i(x_i,\lambda_i))_{\mathfrak U}$ is a continuous homogeneous polynomial of degree $k$ and $\Vert \tilde{P} \Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert \tilde{P}_i \Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. Via the identification $(X_i \oplus_\infty {\mathbb C})_{\mathfrak U}=(X_i)_{\mathfrak U}\oplus_\infty {\mathbb C}$ given by $(x_i,\lambda_i)_{\mathfrak U}=((x_i)_{\mathfrak U},\displaystyle\lim_{i,{\mathfrak U}} \lambda_i)$ we have that the polynomial $Q:(X_i)_{\mathfrak U}\oplus_\infty {\mathbb C}\rightarrow {\mathbb C}$ defined as $Q((x_i)_{\mathfrak U},\lambda)=\tilde{P}((x_i,\lambda)_{\mathfrak U})$ is a continuous homogeneous polynomial of degree $k$ and $\Vert Q\Vert =\Vert \tilde{P}\Vert$.
Then, the polynomial $P((x_i)_{\mathfrak U})=Q((x_i)_{\mathfrak U},1)$ is a continuous polynomial of degree at most $k$ and $\Vert P\Vert =\Vert Q\Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. If $\displaystyle\lim_{i,{\mathfrak U}} \Vert P_{i,k} \Vert =0 $ then the degree of $P$ is lower than $k$. Note that, in the last proof, we can take the same approach used for non-homogeneous polynomials in the real case, but we would not have the same control over the norms. Main result ============= This section contains our main result. As mentioned above, this result is partially motivated by Theorem 3.2 from [@LR]. We follow similar ideas for the proof. First, let us fix some notation that will be used throughout this section. In this section, all polynomials considered are continuous scalar polynomials. Given a Banach space $X$, $B_X$ and $S_X$ denote the unit ball and the unit sphere of $X$ respectively, and $X^*$ is the dual of $X$. Given a polynomial $P$ on $X$, $deg(P)$ stands for the degree of $P$. For a Banach space $X$ let $D(X,k_1,\cdots,k_n)$ denote the smallest constant that satisfies (\[problema\]) for polynomials of degree $k_1,\cdots,k_n$. We also define $C(X,k_1,\cdots,k_n)$ as the smallest constant that satisfies (\[problema\]) for homogeneous polynomials of degree $k_1,\cdots,k_n$. Throughout this section most of the results will have two parts: the first involving the constant $C(X,k_1,\cdots,k_n)$ for homogeneous polynomials and the second involving the constant $D(X,k_1,\cdots,k_n)$ for arbitrary polynomials. Given that the proofs of both parts are almost identical, we will limit ourselves to proving only the second part of each result. Recall that a space $X$ has the $1 +$ uniform approximation property if for all $n\in {\mathbb N}$ there exists $m=m(n)$ such that for every subspace $M\subset X$ with $dim(M)=n$ and every $\varepsilon > 0$ there is an operator $T\in \mathcal{L}(X,X)$ with $T|_M=id$, ${\rm rank}(T)\leq m$ and $\Vert T\Vert \leq 1 + \varepsilon$ (i.e.
for every $\varepsilon > 0$ $X$ has the $1+\varepsilon$ uniform approximation property). \[main thm\] If ${\mathfrak U}$ is an ultrafilter on a family $I$ and $(X_i)_{\mathfrak U}$ is an ultraproduct of complex Banach spaces then 1. $C((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(C(X_i,k_1,\cdots,k_n)).$ 2. $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)).$ Moreover, if each $X_i$ has the $1+$ uniform approximation property, equality holds in both cases. In order to prove this Theorem, some auxiliary lemmas are needed. The first one is due to Heinrich [@H]. \[aprox\] Given an ultraproduct of Banach spaces $(X_i)_{\mathfrak U}$, if each $X_i$ has the $1+$ uniform approximation property then $(X_i)_{\mathfrak U}$ has the metric approximation property. When working with the constants $C(X,k_1,\cdots,k_n)$ and $D(X,k_1,\cdots,k_n)$, the following characterization may come in handy. \[alternat\] a) The constant $C(X,k_1,\cdots,k_n)$ is the biggest constant $M$ such that, given any $\varepsilon >0$, there exists a set of homogeneous continuous polynomials $\{P_j\}_{j=1}^n$ with $deg(P_j)\leq k_j$ such that $$\label{condition} M\left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \Vert P_j \Vert.$$ b\) The constant $D(X,k_1,\cdots,k_n)$ is the biggest constant satisfying the same for arbitrary polynomials. To prove this Lemma it is enough to see that $D(X,k_1,\cdots,k_n)$ is decreasing as a function of the degrees $k_1,\cdots, k_n$ and use that the infimum is the greatest lower bound. \[rmkalternat\] It is clear that in Lemma \[alternat\] we can take the polynomials $\{P_j\}_{j=1}^n$ with $deg(P_j)= k_j$ instead of $deg(P_j)\leq k_j$. Later on we will use both versions of the Lemma. One last lemma is needed for the proof of the Main Theorem. \[normas\] Let $P$ be a (not necessarily homogeneous) polynomial on a complex Banach space $X$ with $deg(P)=k$.
For any point $x\in X$ $$|P(x)|\leq \max\{\Vert x \Vert, 1\}^k \Vert P\Vert . \nonumber$$ If $P$ is homogeneous the result is rather obvious since we have the inequality $$|P(x)|\leq \Vert x \Vert^k \Vert P\Vert . \nonumber$$ Suppose that $P=\sum_{l=0}^k P_l$ with $P_l$ an $l-$homogeneous polynomial. Consider the space $X \oplus_\infty {\mathbb C}$ and the polynomial $\tilde{P}:X \oplus_\infty {\mathbb C}\rightarrow {\mathbb C}$ defined by $\tilde{P}(x,\lambda)=\sum_{l=0}^k P_l(x)\lambda^{k-l}$. The polynomial $\tilde{P}$ is homogeneous of degree $k$ and $\Vert P \Vert = \Vert \tilde{P} \Vert $. Then, using that $\tilde{P}$ is homogeneous we have $$|P(x)|=|\tilde{P} (x,1)| \leq \Vert (x,1) \Vert^k \Vert \tilde{P} \Vert = \max\{\Vert x \Vert, 1\}^k \Vert P\Vert . \nonumber$$ We are now able to prove our main result. Throughout this proof we regard the space $({\mathbb C})_{\mathfrak U}$ as ${\mathbb C}$ via the identification $(\lambda_i)_{\mathfrak U}=\displaystyle\lim_{i,{\mathfrak U}} \lambda_i$. First, we are going to see that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$. To do this we only need to prove that $\displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$ satisfies (\[condition\]). Given $\varepsilon >0$ we need to find a set of polynomials $\{P_{j}\}_{j=1}^n$ on $(X_i)_{\mathfrak U}$ with $deg(P_{j})\leq k_j$ such that $$\displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_j \right \Vert .$$ By Remark \[rmkalternat\] we know that for each $i\in I$ there is a set of polynomials $\{P_{i,j}\}_{j=1}^n$ on $X_i$ with $deg(P_{i,j})=k_j$ such that $$D(X_i,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_{i,j} \right \Vert \leq (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert P_{i,j} \right \Vert.$$ Replacing $P_{i,j}$ with $P_{i,j}/\Vert P_{i,j} \Vert$ we may assume that $\Vert P_{i,j} \Vert =1$. 
Define the polynomials $\{P_j\}_{j=1}^n$ on $(X_i)_{\mathfrak U}$ by $P_j((x_i)_{\mathfrak U})=(P_{i,j}(x_i))_{\mathfrak U}$. Then, by Proposition \[pollim\], $deg(P_j)\leq k_j$ and $$\begin{aligned} \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)) \left \Vert \prod_{j=1}^{n} P_{j} \right \Vert &=& \displaystyle\lim_{i,{\mathfrak U}} \left(D(X_i,k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{i,j} \right \Vert \right) \nonumber \\ &\leq& \displaystyle\lim_{i,{\mathfrak U}}\left((1+\varepsilon)\prod_{j=1}^{n}\Vert P_{i,j} \Vert \right)\nonumber \\ &=& (1+\varepsilon)\prod_{j=1}^{n} \Vert P_{j} \Vert \nonumber \end{aligned}$$ as desired. To prove that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \leq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$ if each $X_i$ has the $1+$ uniform approximation property is not as straightforward. Given $\varepsilon >0$, let $\{P_j\}_{j=1}^n$ be a set of polynomials on $(X_i)_{\mathfrak U}$ with $deg(P_j)=k_j$ such that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon)\prod_{j=1}^{n} \Vert P_j \Vert .$$ Let $K\subseteq B_{(X_i)_{\mathfrak U}}$ be the finite set $K=\{x_1,\cdots, x_n\}$ where $ x_j$ is such that $$|P_j(x_j)| > \Vert P_j\Vert (1- \varepsilon) \mbox{ for }j=1,\cdots, n.$$ Since each $X_i$ has the $1+$ uniform approximation property, by Lemma \[aprox\], $(X_i)_{\mathfrak U}$ has the metric approximation property. Therefore, there exists a finite rank operator $S:(X_i)_{\mathfrak U}\rightarrow (X_i)_{\mathfrak U}$ such that $\Vert S\Vert \leq 1 $ and $$\Vert P_j - P_j \circ S \Vert_K< |P_j(x_j)|\varepsilon \mbox{ for }j=1,\cdots, n.$$ Now, define the polynomials $Q_1,\cdots, Q_n$ on $(X_i)_{\mathfrak U}$ as $Q_j=P_j\circ S$.
Then $$\left\Vert \prod_{j=1}^n Q_j \right\Vert \leq \left\Vert \prod_{j=1}^n P_j \right\Vert$$ and $$\Vert Q_j\Vert_K > | P_j(x_j)|-\varepsilon | P_j(x_j)| =| P_j(x_j)| (1-\varepsilon) \geq \Vert P_j \Vert(1-\varepsilon)^2.$$ The construction of these polynomials is a slight variation of Lemma 3.1 from [@LR]. We have the following inequality for the product of the polynomials $\{Q_j\}_{j=1}^n$ $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert &\leq& D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{j} \right \Vert \nonumber \\ &\leq& (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_{j} \right \Vert . \label{desq}\end{aligned}$$ Since $S$ is a finite rank operator, the polynomials $\{ Q_j\}_{j=1}^n$ have the advantage of being finite-type polynomials. This will allow us to construct polynomials on $(X_i)_{\mathfrak U}$ which are limits of polynomials on the spaces $X_i$. For each $j$ write $Q_j=\sum_{t=1}^{m_j}(\psi_{j,t})^{r_{j,t}}$ with $\psi_{j,t}\in (X_i)_{\mathfrak U}^*$, and consider the spaces $N=\rm{span} \{x_1,\cdots,x_n\}\subset (X_i)_{\mathfrak U}$ and $M=\rm{span} \{\psi_{j,t} \}\subset (X_i)_{\mathfrak U}^*$. By the local duality of ultraproducts (see Theorem 7.3 from [@H]) there exists a $(1+\varepsilon)-$isomorphism $T:M\rightarrow (X_i^*)_{\mathfrak U}$ such that $$JT(\psi)(x)=\psi(x) \mbox{ } \forall x\in N, \mbox{ } \forall \psi\in M$$ where $J:(X_i^*)_{\mathfrak U}\rightarrow (X_i)_{\mathfrak U}^*$ is the canonical embedding. Let $\phi_{j,t}=JT(\psi_{j,t})$ and consider the polynomials $\bar{Q}_1,\cdots, \bar{Q}_n$ on $(X_i)_{\mathfrak U}$ with $\bar{Q}_j=\sum_{t=1}^{m_j}(\phi_{j,t})^{r_{j,t}}$.
Clearly $\bar{Q}_j$ is equal to $Q_j$ in $N$ and $K\subseteq N$, therefore we have the following lower bound for the norm of each polynomial $$\Vert \bar{Q}_j \Vert \geq \Vert \bar{Q}_j \Vert_K = \Vert Q_j \Vert_K >\Vert P_j \Vert(1-\varepsilon)^2 \label{desbarq}$$ Now, let us find an upper bound for the norm of the product $\Vert \prod_{j=1}^n \bar{Q}_j \Vert$. Let $x=(x_i)_{\mathfrak U}$ be any point in $B_{(X_i)_{\mathfrak U}}$. Then, we have $$\begin{aligned} \left|\prod_{j=1}^n \bar{Q}_j(x)\right| &=& \left|\prod_{j=1}^n \sum_{t=1}^{m_j}(\phi_{j,t} (x))^{r_{j,t}}\right|=\left|\prod_{j=1}^n \sum_{t=1}^{m_j} (JT\psi_{j,t}(x))^{r_{j,t}} \right| \nonumber \\ &=& \left|\prod_{j=1}^n \sum_{t=1}^{m_j}((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}\right|\nonumber\end{aligned}$$ Since $(JT)^*\hat{x}\in M^*$, $\Vert (JT)^*\hat{x}\Vert \leq \Vert JT \Vert \Vert x \Vert \leq \Vert J \Vert \Vert T \Vert \Vert x \Vert< 1 + \varepsilon$ and $M^*=\frac{(X_i)_{\mathfrak U}^{**}}{M^{\bot}}$, we can choose $z^{**}\in (X_i)_{\mathfrak U}^{**}$ with $\Vert z^{**} \Vert < \Vert (JT)^*\hat{x}\Vert+\varepsilon < 1+2\varepsilon$, such that $\prod_{j=1}^n \sum_{t=1}^{m_j} ((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}= \prod_{j=1}^n \sum_{t=1}^{m_j} (z^{**}(\psi_{j,t}))^{r_{j,t}}$. By Goldstine’s Theorem there exists a net $\{z_\alpha\} \subseteq (X_i)_{\mathfrak U}$ $w^*-$convergent to $z^{**}$ in $(X_i)_{\mathfrak U}^{**}$ with $\Vert z_\alpha \Vert = \Vert z^{**}\Vert$. In particular, $ \psi_{j,t}(z_\alpha)$ converges to $z^{**}(\psi_{j,t})$. If we call ${\mathbf k}= \sum k_j$, since $\Vert z_\alpha \Vert< (1+2\varepsilon)$, by Lemma \[normas\], we have $$\left \Vert \prod_{j=1}^{n} Q_j \right \Vert (1+2\varepsilon)^{\mathbf k}\geq \left|\prod_{j=1}^n Q_j(z_\alpha)\right| = \left|\prod_{j=1}^n \sum_{t=1}^{m_j} ((\psi_{j,t})(z_\alpha))^{r_{j,t}}\right| .
\label{usecomplex}$$ Combining this with the fact that $$\begin{aligned} \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} ((\psi_{j,t})(z_\alpha))^{r_{j,t}}\right| &\longrightarrow& \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} (z^{**}(\psi_{j,t}))^{r_{j,t}}\right|\nonumber\\ &=& \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} ((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}\right| = \left|\prod_{j=1}^{n} \bar{Q}_j(x)\right|\nonumber\end{aligned}$$ we conclude that $\left \Vert \prod_{j=1}^{n} Q_j \right \Vert (1+2\varepsilon)^{\mathbf k}\geq |\prod_{j=1}^{n} \bar{Q}_j(x)|$. Since the choice of $x$ was arbitrary we arrive at the following inequality $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \bar{Q}_j \right \Vert &\leq& (1+2\varepsilon)^{\mathbf k}D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_j \right \Vert \nonumber \\ &\leq& (1+2\varepsilon)^{\mathbf k}(1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_{j} \right \Vert \label{desbarq2} \\ &<& (1+2\varepsilon)^{\mathbf k}(1+\varepsilon) \frac{\prod_{j=1}^{n} \Vert \bar{Q}_j \Vert }{(1-\varepsilon)^{2n}} . \label{desbarq3}\end{aligned}$$ In (\[desbarq2\]) and (\[desbarq3\]) we use (\[desq\]) and (\[desbarq\]) respectively. The polynomials $\bar{Q}_j$ are not only of finite type; they are also generated by elements of $(X_i^*)_{\mathfrak U}$. This will allow us to write them as limits of polynomials on the spaces $X_i$. For any $i$, consider the polynomials $\bar{Q}_{i,1},\cdots,\bar{Q}_{i,n}$ on $X_i$ defined by $\bar{Q}_{i,j}= \displaystyle\sum_{t=1}^{m_j} (\phi_{i,j,t})^{r_{j,t}}$, where the functionals $\phi_{i,j,t}\in X_i^*$ are such that $(\phi_{i,j,t})_{\mathfrak U}=\phi_{j,t}$. Then $\bar{Q}_j(x)=\displaystyle\lim_{i,{\mathfrak U}} \bar{Q}_{i,j}(x)$ $\forall x \in (X_i)_{\mathfrak U}$ and, by Proposition \[pollim\], $\Vert \bar{Q}_j \Vert = \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert$.
Therefore $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert &=& D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{j} \right \Vert \nonumber \\ &<& \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \bar{Q}_{j} \Vert \nonumber \\ &=& \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert . \nonumber \end{aligned}$$ To simplify the notation let us call $\lambda = \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} $. Take $L>0$ such that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert < L < \lambda \prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert . \nonumber$$ Since $(-\infty, \frac{L}{D((X_i)_{\mathfrak U},k_1,\cdots,k_n)})$ and $(\frac{L}{\lambda},+\infty)$ are neighborhoods of $\displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert$ and $\prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert$ respectively, and $\prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert= \displaystyle\lim_{i,{\mathfrak U}} \prod_{j=1}^{n} \Vert \bar{Q}_{i,j} \Vert$, by definition of $\displaystyle\lim_{i,{\mathfrak U}}$, the sets $$A=\{i_0: D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{i_0,j} \right \Vert <L\} \mbox{ and }B=\{i_0: \lambda \prod_{j=1}^{n} \Vert \bar{Q}_{i_0,j} \Vert > L \}$$ are elements of ${\mathfrak U}$. Since ${\mathfrak U}$ is closed under finite intersections, $A\cap B\in {\mathfrak U}$.
If we take any element $i_0 \in A\cap B$ then, for any $\delta >0$, we have that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{i_0,j} \right \Vert \frac{1}{\lambda}\leq \frac{L}{\lambda} \leq \prod_{j=1}^{n} \Vert \bar{Q}_{i_0,j} \Vert < (1+ \delta)\prod_{j= 1}^{n} \Vert \bar{Q}_{i_0,j} \Vert \nonumber$$ Then, since $\delta$ is arbitrary, the constant $D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\frac{1}{\lambda}$ satisfies (\[condition\]) for the space $X_{i_0}$ and therefore, by Lemma \[alternat\], $$\frac{1}{\lambda}D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \leq D(X_{i_0},k_1,\cdots,k_n). \nonumber$$ This holds true for any $i_0$ in $A\cap B$. Since $A\cap B \in {\mathfrak U}$, by Lemma \[lemlimit\], $\frac{1}{\lambda}D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\leq \displaystyle\lim_{i,{\mathfrak U}} D(X_i,k_1,\cdots,k_n) $. Using that $\lambda \rightarrow 1$ when $\varepsilon \rightarrow 0$ we conclude that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\leq \displaystyle\lim_{i,{\mathfrak U}} D(X_i,k_1,\cdots,k_n).$ Similar to Corollary 3.3 from [@LR], a straightforward corollary of our main result is that for any complex Banach space $X$ with the $1+$ uniform approximation property $C(X,k_1,\cdots,k_n)=C(X^{**},k_1,\cdots,k_n)$ and $D(X,k_1,\cdots,k_n)=D(X^{**},k_1,\cdots,k_n)$. Using that $X^{**}$ is $1-$complemented in a suitable ultrapower $(X)_{{\mathfrak U}}$, the result is rather obvious. For a construction of the adequate ultrafilter see [@LR]. But following the previous proof, and using the principle of local reflexivity applied to $X^*$ instead of the local duality of ultraproducts, we can prove the following stronger result. Let $X$ be a complex Banach space. Then 1. $C(X^{**},k_1,\cdots,k_n)\geq C(X,k_1,\cdots,k_n).$ 2. $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n).$ Moreover, if $X^{**}$ has the metric approximation property, equality holds in both cases.
The inequality $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n)$ is a corollary of Theorem \[main thm\] (using the adequate ultrafilter mentioned above). Let us prove that if $X^{**}$ has the metric approximation property then $D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n)$. Given $\varepsilon >0$, let $\{P_j\}_{j=1}^n$ be a set of polynomials on $X^{**}$ with $deg(P_j)=k_j$ such that $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{j} \right \Vert \leq (1+\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert .\nonumber$$ Analogous to the proof of Theorem \[main thm\], since $X^{**}$ has the metric approximation property, we can construct finite-type polynomials $Q_1,\cdots,Q_n$ on $X^{**}$ with $deg(Q_j)=k_j$, $\Vert Q_j \Vert_K \geq \Vert P_j \Vert (1-\varepsilon)^2$ for some finite set $K\subseteq B_{X^{**}}$, and such that $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert < (1+\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert . \nonumber$$ Suppose that $Q_j=\sum_{t=1}^{m_j}(\psi_{j,t})^{r_{j,t}}$ and consider the spaces $N=\rm{span} \{K\}$ and $M=\rm{span} \{\psi_{j,t} \}$. By the principle of local reflexivity (see [@D]), applied to $X^*$ (thinking of $N$ as a subspace of $(X^*)^*$ and of $M$ as a subspace of $(X^*)^{**}$), there is a $(1+\varepsilon)-$isomorphism $T:M\rightarrow X^*$ such that $$JT(\psi)(x)=\psi(x) \mbox{ } \forall x\in N, \mbox{ } \forall \psi\in M\cap X^*=M,$$ where $J:X^*\rightarrow X^{***}$ is the canonical embedding. Let $\phi_{j,t}=JT(\psi_{j,t})$ and consider the polynomials $\bar{Q}_1,\cdots, \bar{Q}_n$ on $X^{**}$ defined by $\bar{Q}_j=\sum_{t=1}^{m_j}(\phi_{j,t})^{r_{j,t}}$. Following the proof of the Main Theorem, one arrives at the inequality $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \bar{Q_j} \right \Vert < (1+ \delta) \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \bar{Q_j} \Vert \nonumber$$ for every $\delta >0$.
Since each $\bar{Q}_j$ is generated by elements of $J(X^*)$, by Goldstine’s Theorem, the restriction of $\bar{Q}_j$ to $X$ has the same norm, and the same is true for $\prod_{j=1}^{n} \bar{Q_j}$. Then $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \left.\bar{Q_j}\right|_X \right \Vert < (1+ \delta) \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \left.\bar{Q_j}\right|_X \Vert \nonumber$$ By Lemma \[alternat\] we conclude that $$\frac{(1-\varepsilon)^{2n}}{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n).$$ Given that the choice of $\varepsilon$ is arbitrary and that $\frac{(1-\varepsilon)^{2n}}{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}} $ tends to $1$ when $\varepsilon$ tends to $0$, we conclude that $D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n)$. Note that in the proof of the Main Theorem the only parts where we need the spaces to be complex Banach spaces are at the beginning, where we use Proposition \[pollim\], and in the inequality (\[usecomplex\]), where we use Lemma \[normas\]. But both results hold true for homogeneous polynomials on a real Banach space. Then, copying the proof of the Main Theorem, we obtain the following result for real spaces. If ${\mathfrak U}$ is an ultrafilter on a family $I$ and $(X_i)_{\mathfrak U}$ is an ultraproduct of real Banach spaces then $$C((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(C(X_i,k_1,\cdots,k_n)).$$ If in addition each $X_i$ has the $1+$ uniform approximation property, the equality holds. We can also get a similar result for the bidual of a real space. Let $X$ be a real Banach space. Then 1. $C(X^{**},k_1,\cdots,k_n)\geq C(X,k_1,\cdots,k_n).$ 2. $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n).$ If $X^{**}$ has the metric approximation property, equality holds in $(a)$.
The proof of item $(a)$ is the same as in the complex case, so we limit ourselves to proving $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n)$. To do this we will show that, given an arbitrary $\varepsilon >0$, there is a set of polynomials $\{P_{j}\}_{j=1}^n$ on $X^{**}$ with $deg(P_{j})\leq k_j$ such that $$D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_j \right \Vert .$$ Take a set of polynomials $\{Q_{j}\}_{j=1}^n$ on $X$ with $deg(Q_j)=k_j$ such that $$D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert \leq (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert Q_{j} \right \Vert.$$ Consider now the polynomials $P_j=AB(Q_j)$, where $AB(Q_j)$ is the Aron-Berner extension of $Q_j$ (for details on this extension see [@AB] or [@Z]). Since $AB\left( \prod_{j=1}^n Q_j \right)=\prod_{j=1}^n AB(Q_j)$, using that the Aron-Berner extension preserves norms (see [@DG]) we have $$\begin{aligned} D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_{j} \right \Vert &=& D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert\nonumber \\ &\leq& (1 +\varepsilon)\prod_{j=1}^{n} \left\Vert Q_{j} \right\Vert \nonumber \\ &=& (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert \nonumber \end{aligned}$$ as desired. As a final remark, we mention two types of spaces to which the results of this section can be applied. Corollary 9.2 from [@H] states that any Orlicz space $L_\Phi(\mu)$, with $\mu$ a finite measure and $\Phi$ an Orlicz function with regular variation at $\infty$, has the $1+$ uniform projection property, which is stronger than the $1+$ uniform approximation property. In Section two of [@PeR], A. Pełczyński and H.
Rosenthal proved that any ${\mathcal L}_{p,\lambda}-$space ($1\leq \lambda < \infty$) has the $1+\varepsilon-$uniform projection property for every $\varepsilon>0$ (which is stronger than the $1+\varepsilon-$uniform approximation property); therefore, any ${\mathcal L}_{p,\lambda}-$space has the $1+$ uniform approximation property. Acknowledgment {#acknowledgment .unnumbered} ============== I would like to thank Professor Daniel Carando for both encouraging me to write this article, and for his comments and remarks which improved its presentation and content. [HD]{} J. Arias-de-Reyna. *Gaussian variables, polynomials and permanents*. Linear Algebra Appl. 285 (1998), 107–114. R. M. Aron and P. D. Berner. *A Hahn-Banach extension theorem for analytic mappings*. Bull. Soc. Math. France 106 (1978), 3–24. C. Benítez, Y. Sarantopoulos and A. Tonge. *Lower bounds for norms of products of polynomials*. Math. Proc. Cambridge Philos. Soc. 124 (1998), 395–408. D. Carando, D. Pinasco and J. T. Rodríguez. *Lower bounds for norms of products of polynomials on $L_p$ spaces*. Studia Math. 214 (2013), 157–166. A. M. Davie and T. W. Gamelin. *A theorem on polynomial-star approximation*. Proc. Amer. Math. Soc. 106 (1989), 351–356. D. W. Dean. *The equation $L(E,X^{**})=L(E,X)^{**}$ and the principle of local reflexivity*. Proc. Amer. Math. Soc. 40 (1973), 146–148. S. Heinrich. *Ultraproducts in Banach space theory*. J. Reine Angew. Math. 313 (1980), 72–104. M. Lindström and R. A. Ryan. *Applications of ultraproducts to infinite dimensional holomorphy*. Math. Scand. 71 (1992), 229–242. A. Pełczyński and H. Rosenthal. *Localization techniques in $L_p$ spaces*. Studia Math. 52 (1975), 265–289. D. Pinasco. *Lower bounds for norms of products of polynomials via Bombieri inequality*. Trans. Amer. Math. Soc. 364 (2012), 3993–4010. I. Zalduendo. *Extending polynomials on Banach Spaces - A survey*. Rev. Un. Mat. Argentina 46 (2005), 45–72.
Brown Man of the Muirs In the folklore of the Anglo-Scottish border, the Brown Man of the Muirs is a dwarf who serves as a guardian spirit of wild animals. Folklore William Henderson provides an account of the Brown Man and a pair of hunters in Folklore of the Northern Counties (1879), taken from a letter sent by the historian Robert Surtees to Sir Walter Scott: In the year before the Great Rebellion two young men from Newcastle were sporting on the high moors above Elsdon, and at last sat down to refresh themselves in a green glen near a mountain stream. The younger lad went to drink at the brook, and raising his head again saw the "Brown man of the Muirs", a dwarf very strong and stoutly built, his dress brown like withered bracken, his head covered with frizzled red hair, his countenance ferocious, and his eyes glowing like those of a bull. After some parley, in which the stranger reproved the hunter for trespassing on his demesnes and slaying the creatures who were his subjects, and informed him how he himself lived only on whortleberries, nuts, and apples, he invited him home. The youth was on the point of accepting the invitation and springing across the brook, when he was arrested by the voice of his companion, who thought he had tarried long, and looking round again "the wee brown man was fled." It was thought that had the young man crossed the water the dwarf would have torn him to pieces. As it was he died within the year, in consequence, it was supposed, of his slighting the dwarf's admonition, and continuing his sport on the way home. (Taylor, George and Raine, James (1852). A Memoir of Robert Surtees. Durham: George Andrews. pp. 81–2.) Walter Scott, in a return letter to Surtees, suggested that the Brown Man may be related to the duergar (dwarfs) of Northumberland. Fairy tales In folklore the Brown Man appears as a solitary fairy, but in fairy tale literature he is a member of a tribe of similar beings.
They once lived all over England and Scotland, but in the wake of human progress they dwindled in number and now live in a cave in Cumberland. Known as the Brown Men of the Moors and Mountains, they have great strength that allows them to hurl small boulders. By day they mine the mountains for gold and diamonds, and by night they feast in their underground hall or dance on the moors. They kidnap human children and kill any man they catch alone in the wilderness. However, they can be made subservient by repeating the incantation, "Munko tiggle snobart tolwol dixy crambo". See also Brownie (folklore) Redcap
Association of Chief Police Officers The Association of Chief Police Officers (ACPO), officially The Association of Chief Police Officers of England, Wales and Northern Ireland, was a not-for-profit private limited company that for many years led the development of policing practices in England, Wales, and Northern Ireland. Established in 1948, ACPO provided a forum for chief police officers to share ideas and coordinate their strategic operational responses, and advised government in matters such as terrorist attacks and civil emergencies. ACPO coordinated national police operations, major investigations, cross-border policing, and joint law enforcement. ACPO designated Senior Investigative Officers for major investigations and appointed officers to head ACPO units specialising in various areas of policing and crime reduction. ACPO was led by Chief Constable Sir Hugh Orde, QPM, who was, until 2009, the Chief Constable of the Police Service of Northern Ireland. He was elected as president by fellow members of ACPO in April 2009. ACPO was funded by Home Office grants, profits from commercial activities and contributions from the 44 police authorities in England, Wales, and Northern Ireland. Following the Parker Review into ACPO, it was replaced in 2015 by a new body, the National Police Chiefs' Council, set up under a police collaboration agreement under Section 22A of the Police Act 1996. Background UK policing sprang from local communities in the 1800s. Since the origins of policing, chief officers have regularly associated to discuss and share policing issues. Although ACPO as now recognised was formed in 1948, records of prior bodies go back to the early 1900s. The UK retains a decentralised model of policing based around the settlement which emerged from the Royal Commission on the work of the Police in 1962. 
ACPO continued to provide a forum for chief officers across 44 local police forces and 13 national areas across England, Wales and Northern Ireland, and provided local forces with agreed national policies and guidelines. ACPO failed to convince its sponsors to contribute to its survival, and in May 2011 the BBC reported that ACPO would run out of money in February 2012 without extra funding. ACPO was half-funded by the Home Office and half by 44 police authorities. A third of police authorities refused to pay in 2010 and another third were undecided. The Association of Police Authorities said the withdrawal of funding by police authorities was "partly due to a squeeze on their income". ACPO was due to wind up formally in April 2015. Constitutional status Over time, demands for coordination across the police service increased as society changed, for example to take account of new developments in international terrorism and organised crime, or roles such as monitoring offenders on release from prison or working with young people to divert them from crime. In 1997 ACPO was incorporated as a private company limited by guarantee. As a private company, ACPO was not subject to freedom of information legislation. It was not a staff association; the staff association for senior police officers was a separate body, the Chief Police Officers Staff Association (CPOSA). The change in structure from a "band of volunteers" to a limited company allowed the organisation to employ staff, enter into contracts for accommodation and publish accounts. A number of options were considered for the status of ACPO, including charitable status, but all were discounted. Chief Constables and Commissioners are responsible for the direction and control of policing in their force areas. Although a national body and recognised by the government for consultation, ACPO had no powers of its own, nor any mandate to instruct chief officers.
However, the organisation allowed chief officers to form a national policy rather than replicate the work in each of their forces. For example, after the 1980–81 riots in 27 British cities, including St. Pauls and Brixton, ACPO began to prepare the Public Order Manual of Tactical Operations and Related Matters. Police forces began training in its tactics late in 1983. Membership ACPO was not a staff association. It acted for the police service, not its members. The separate Chief Police Officers Staff Association acts for chief officers. ACPO was composed of the chief police officers of the 44 police forces in England & Wales and Northern Ireland, the Deputy Chief Constable and Assistant Chief Constable of 42 of those forces and the Deputy Commissioner, Assistant Commissioner, Deputy Assistant Commissioner and Commanders of the remaining two: the Metropolitan Police and the City of London Police. Certain senior non-police staff and senior members of national police agencies and certain other specialised and non-geographical forces in the UK, the Isle of Man and the Channel Islands were also members. As of March 2010 there were 349 members of ACPO. The membership elected a full-time President, who held the office of Chief Constable under the Police Reform Act 2002. ACPO bodies ACPO was responsible for several ancillary bodies, which it either funded or which received Home Office funding but which reported to ACPO: ACPO Criminal Records Office The ACPO Criminal Records Office (ACRO) was set up in 2006 in response to a perceived gap in the police service's ability to manage criminal records and in particular to improve links to biometric data. The initial aim of ACRO was to provide operational support relating to criminal records and associated biometric data, including DNA and fingerprint recognition.
It also issues police certificates, for a fee, needed to obtain immigration visas for countries including Australia, Belgium, Canada, Cayman Islands, New Zealand, South Africa and the United States. The organisation continues under the style "ACRO Criminal Records Office" under the control of Hampshire Constabulary. ACPO Vehicle Crime Intelligence Service The Association of Chief Police Officers Vehicle Crime Intelligence Service (AVCIS), later the National Vehicle Crime Intelligence Service (NAVCIS), was managed by ACPO, and was responsible for combating organised vehicle crime and the use of vehicles in crime. National Community Tension Team The National Community Tension Team (NCTT) was an ACPO body which monitored religious, racial, or other tensions within communities, and provided liaison between police forces and community organisations. National Counter Terrorism Security Office The National Counter Terrorism Security Office was funded by, and reported to, ACPO and advised the British government on its counter terrorism strategy. Police National Information and Co-ordination Centre ACPO was responsible for coordinating the national mobilisation of police resources at times of national need through the Police National Information and Co-ordination Centre (PNICC), which it set up in 2003. This included ensuring policing resilience during major events such as emergency response to serious flooding or the investigation of a terrorist attack. PNICC sat alongside the government in COBR (Cabinet Office Briefing Room) to advise on national issues. PNICC also handled support to overseas crises involving UK nationals. It employed three full-time staff, with other staff seconded to it as needed, and was funded by contributions from each of the police forces.
Counter Terrorism Internet Referral Unit The Counter Terrorism Internet Referral Unit (CTIRU) was set up in 2010 by ACPO (and run by the Metropolitan Police) to remove unlawful terrorist content from the Internet, with a focus on UK-based material. The December 2013 report of the Prime Minister's Extremism task force said that it would "work with internet companies to restrict access to terrorist material online which is hosted overseas but illegal under UK law" and "work with the internet industry to help them in their continuing efforts to identify extremist content to include in family-friendly filters" which would likely involve lobbying ISPs to add the CTIRU list to their filters without the need for additional legislation. National Wildlife Crime Unit The National Wildlife Crime Unit is a national police unit that gathers intelligence on wildlife crime and provides analytical and investigative support to law enforcement agencies. Controversies Freedom of information ACPO had been criticised as being unaccountable to Parliament or the public by virtue of its limited company status. In October 2009 Sir Hugh Orde stated that ACPO would be "more than happy" to be subject to the Freedom of Information Act. On 30 March 2010, the Ministry of Justice announced that ACPO would be included under the FOI Act from October 2011. In its response, the organisation stated that "Although organisations cannot voluntarily comply with the Act, a large proportion of ACPO's work is public already or available under FOI through any police force". In January 2011 its website still said that what it "is unable to do is to respond to requests for information under the Act. The organisation is too small and there are too few members of staff to be able to conduct the necessary research and to compile the responses". From November 2011, however, FOI requests could be made to ACPO.
Confidential Intelligence Unit In February 2009, the Mail on Sunday highlighted the involvement of ACPO in setting up the "Confidential Intelligence Unit" as a specialised unit to monitor left-wing and right-wing political groups throughout the UK. Commercial activities The February 2009 Mail on Sunday investigation also highlighted other activities of the ACPO including selling information from the Police National Computer for £70 despite it costing them only 60p to access it, marketing "police approval" logos to firms selling anti-theft devices and operating a separate private firm offering training to speed camera operators. Apartments The organisation was criticised in February 2010 for allegedly spending £1.6 million per year from government anti-terrorist funding grants on renting up to 80 apartments in the centre of London which were reported as being empty most of the time. The organisation responded that it had reviewed this policy and would reduce the number of apartments. Undercover activities As a result of The Guardian articles with regards to the activities and accusations of PC Mark Kennedy of the National Public Order Intelligence Unit within the National Extremism Tactical Co-ordination Unit, and the collapse of the subsequent trial of six activists, a number of initiatives and changes were announced: Acknowledging that "something had gone very wrong" in the Kennedy case to the Home Affairs Select Committee, Home Office minister Nick Herbert stated that ACPO would lose control of three teams involved in tackling domestic extremism. Herbert announced that the units would be transferred to the Metropolitan Police, with acting commissioner Tim Godwin confirming that this would occur at the earliest possible timescale. Her Majesty's Inspectorate of Constabulary announced that Bernard Hogan-Howe would lead an investigation into ACPO, to assess whether undercover operations had been "authorised in accordance with law" and "proportionate". 
The Association of Police Authorities said it was ending its annual £850,000 grant to ACPO. DNA database ACPO has supervised the creation of one of the world's largest per-capita DNA databases, containing the DNA profiles of more than one million innocent people. ACPO's guidelines that these profiles should only be deleted in "exceptional circumstances" were found to be unlawful by the UK Supreme Court in May 2011. They were found to be incompatible with the European Convention on Human Rights, following the ruling by the European Court of Human Rights in S and Marper v United Kingdom. On 1 May 2012, the Protection of Freedoms Act 2012 completed its passage through Parliament and received Royal Assent. To date, ACPO has not reissued revised guidelines to replace its unlawful DNA exceptional procedure. Big Brother Watch, in a report of June 2012, concludes that despite the Protection of Freedoms Act, the retention of DNA in England and Wales remains an uncertain and illiberal regime. Fake uniforms During the summer of 2011, Hugh Orde, then president of the ACPO, was seen wearing a dark blue police-style uniform with ACPO insignia, and was accused of wearing a fake uniform. Senior police officers claimed that the uniform was not that of any police force in the country but "closely resembled" the uniform worn by former Metropolitan Police Commissioner, Paul Stephenson. Sam Leith, an author, journalist and literary editor of The Spectator, mocked Orde's decision "to wear this Gadaffi-style pretend uniform on television", and suggested it was "a subliminal pitch for the Met Commissioner's job." Brian Paddick, at the time the Police Commander for the London Borough of Lambeth, said: "It's unusual for the president of ACPO to appear in all these interviews in uniform. He is sending a clear signal: how would I look in the commissioner's uniform?" 
One officer noted: "If anything, Hugh should be wearing the uniform of the Police Service of Northern Ireland because that's where he served. But their uniform is green, not the dark blue he currently wears." An ACPO spokesperson stated that the "Police Reform Act 2002 states that the President of the Association of Chief Police Officers holds the rank of chief constable. Not being a member of a particular force, the President wears a generic police uniform". Parker Review In 2013, an independent review of ACPO by General Sir Nick Parker was published. It recommended that ACPO be replaced by a new body, in the interests of greater transparency and cost effectiveness. On the basis of these recommendations, a new organization, the National Police Chiefs' Council, was set up to replace ACPO, which it did on 1 April 2015. Notable members Commander Christine Jones (Metropolitan Police), lead on mental health issues References External links Association of Chief Police Officers website (archived link from March 2015)
--- abstract: '[Cr$_{2}$Ge$_{2}$Te$_{6}$]{} has been of interest for decades, as it is one of only a few naturally forming ferromagnetic semiconductors. Recently, this material has been revisited due to its potential as a two-dimensional semiconducting ferromagnet and as a substrate to induce anomalous quantum Hall states in topological insulators. However, many relevant properties of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} still remain poorly understood, especially the spin-phonon coupling crucial to spintronics, multiferroics, thermal conductivity, magnetic proximity effects and the establishment of long-range order on the nanoscale. We explore the interplay between the lattice and magnetism through high-resolution micro-Raman scattering measurements over the temperature range from 10 K to 325 K. Strong spin-phonon coupling effects are confirmed from multiple aspects: two low-energy modes split in the ferromagnetic phase, magnetic quasielastic scattering appears in the paramagnetic phase, the phonon energies of three modes show a clear upturn below [T$_{C}$]{}, and the phonon linewidths change dramatically below [T$_{C}$]{} as well. Our results provide the first demonstration of spin-phonon coupling in a potential two-dimensional atomic crystal.' address: - 'Department of Physics, University of Toronto, ON M5S 1A7 Canada' - 'Department of Physics, Boston College 140 Commonwealth Ave Chestnut Hill MA 02467-3804 USA' - 'Department of Chemistry, Princeton University, Princeton, NJ 08540 USA' - 'Department of Chemistry, Princeton University, Princeton, NJ 08540 USA' - 'Department of Physics, Boston College 140 Commonwealth Ave Chestnut Hill MA 02467-3804 USA' author: - Yao Tian - 'Mason J. Gray' - Huiwen Ji - 'R. J. Cava' - 'Kenneth S.
Burch' title: 'Magneto-Elastic Coupling in a potential ferromagnetic 2D Atomic Crystal' --- \[sec:intro\]Introduction ========================= [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} is a particularly interesting material since it is in the very rare class of ferromagnetic semiconductors and possesses a layered, nearly two-dimensional structure due to van der Waals bonds[@CGT_original; @li2014crxte]. Recently this material has been revisited as a substrate for the growth of the topological insulator [Bi$_{2}$Te$_{3}$]{} to study the anomalous quantum Hall effect[@BT_CGT_quantum_hall]. Furthermore, the van der Waals bonds make it a candidate two-dimensional atomic crystal, which is predicted to be a platform for studying 2D semiconducting ferromagnets and for single-layer spintronics devices[@sivadas2015magnetic]. In such devices, spin-phonon coupling can be a key factor in the magnetic and thermal relaxation processes[@golovach2004phonon; @ganzhorn2013strong; @jaworski2011spin], while generating other novel effects such as multiferroicity[@wesselinowa2012origin; @issing2010spin]. Combined with the fact that understanding heat dissipation in nanodevices is crucial, it is important to explore the phonon dynamics and their interplay with the magnetism in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. Indeed, recent studies have shown that the thermal conductivity of its cousin compound [Cr$_{2}$Si$_{2}$Te$_{6}$]{} increases linearly with temperature in the paramagnetic phase, suggesting that strong spin-phonon coupling is crucial in these materials[@casto2015strong]. However, there are currently no direct probes of the phonon dynamics of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, let alone the spin-phonon coupling. Such studies are crucial for understanding the potential role of magneto-elastic effects that could be central to the magnetic behavior of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} as a 2D atomic crystal and in potential nanoscale magneto-elastic devices.
Polarized temperature-dependent Raman scattering is perfectly suited for such studies as it is well established for measuring the phonon dynamics and the spin-phonon coupling in bulk and 2D atomic crystals[@compatible_Heterostructure_raman; @Raman_Characterization_Graphene; @Raman_graphene; @Pandey2013; @sandilands2010stability; @zhao2011fabrication; @calizo2007temperature; @sahoo2013temperature; @polarized_raman_study_of_BFO; @dresselhaus2010characterizing]. Compared to other techniques, a high-resolution Raman microscope can track sub-[cm$^{-1}$]{} changes to uncover subtle underlying physics. A demonstration of Raman studies of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} can be extremely meaningful for future studies of exfoliated 2D ferromagnets. ![Crystal structure of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. The unit cell is indicated by the black frame. Cr atoms and Ge dimers sit inside the octahedra formed by Te atoms. One third of the octahedra are filled by Ge-Ge dimers, while the rest are filled by Cr ions forming a distorted honeycomb lattice.[]{data-label="fig:CGT_structure"}](CGT_lattice_structure.eps){width="0.5\columnwidth"} ![(a): Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} taken under different conditions. All the spectra are taken at 300 K. The Raman spectrum of the air-exposed sample shows broader and fewer Raman modes, indicating the formation of oxides. (b): Normalized Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} in XX and XY geometry at 270 K, showing the different symmetry of the phonon modes.[]{data-label="633_532_oldsample_raman"}](CGT_532nm_airexposed_cleaved_300k_XX_XY.eps "fig:"){width="\figuresize"}\ In this paper, we demonstrate the ease of exfoliating [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, as well as the dangers of doing so in air. Namely, we find the Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are strongly deteriorated by exposure to air, but [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} exfoliated in a glovebox reveals bulk-like spectra.
In addition, we find strong evidence for spin-phonon coupling in bulk [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} via polarized, temperature-dependent Raman spectroscopy. The spin-phonon coupling has been confirmed in multiple ways: below [T$_{C}$]{} we observe a splitting of two phonon modes due to the breaking of time-reversal symmetry; a drastic quenching of magnetic quasielastic scattering; an anomalous hardening of an additional three modes; and a dramatic decrease of the phonon lifetimes upon warming into the paramagnetic phase. Our results also suggest the possibility of probing the magneto-elastic coupling using Raman spectroscopy, opening the door for further studies of exfoliated 2D [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. \[sec:exp\][Method Section]{} ============================= Single-crystal [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} was grown with high-purity elements mixed in a molar ratio of 2:6:36; the extra Ge and Te were used as a flux. The materials were heated to 700$^{o}$C for 20 days and then slow cooled to 500$^{o}$C over a period of 1.5 days. Detailed growth procedures can be found elsewhere[@Huiwen_doc]. The Raman spectra on the crystal were taken in a backscattering configuration with a home-built Raman microscope[@RSI_unpublished]. The spectra were recorded with a polarizer in front of the spectrometer. Two Ondax Ultra-narrow-band Notch Filters were used to reject Rayleigh scattering. This also allows us to observe both Stokes and anti-Stokes Raman shifts and provides a way to confirm the absence of local laser heating. A solid-state 532 nm laser was used for the excitation. Temperature variation was achieved using an automatically controlled closed-cycle cryostation designed and manufactured by Montana Instruments, Inc. The temperature stability was within 5 mK. To maximize the collection efficiency, a 100x N.A. 0.9 Zeiss objective was installed inside the cryostation.
A heater and a thermometer were installed on the objective to prevent it from being damaged by the cold and to keep the optical response uniform at all sample temperatures. The laser spot size was 1 micron in diameter and the power was kept fairly low (80 $\mu$W) to avoid laser-induced heating. This was checked at 10 K by monitoring the anti-Stokes signal as the laser power was reduced. Once the anti-Stokes signal disappeared, the power was cut an additional $\approx 50\%$. ![(a): The reflection optical image of the exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. The labels indicate the different positions of the prepared samples. (b): AFM topography image of the rectangle region in a. (c): The height distribution of the rectangle region in b. The height difference between the peaks reveals the thickness of the [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} flakes, which is 30 nm for region 1 and 4 nm for region 2. (d): Raman spectra of the two exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} flakes. []{data-label="fig:exfoliated_CGT"}](CGT_exfoliation.eps){width="\textwidth"} \[sec:Results\_and\_discussion\]Results ======================================= Raman studies at room temperature --------------------------------- We first delve into the lattice structure of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} (shown in Fig. \[fig:CGT\_structure\]). This material contains van der Waals bonded layers, with the magnetic ions (Cr, [T$_{C}$]{}=61 K) forming a distorted honeycomb lattice[@Huiwen_doc]. The Cr atoms are locally surrounded by Te octahedra, and thus the exchange between Cr occurs via the Te atoms. Based on a group theory analysis, the Raman-active modes in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are of A$_{g}$, E$_{1g}$ and E$_{2g}$ symmetry, and E$_{1g}$ and E$_{2g}$ are protected by time-reversal symmetry. In the paramagnetic state we expect to see 10 Raman-active modes, because the E$_{1g}$ and E$_{2g}$ modes are not distinguishable by energy (see details in the supplemental materials).
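The polarization selection rules that follow from these mode symmetries can be illustrated numerically. The sketch below is a minimal check, assuming the textbook Raman tensor forms for an A$_{g}$ mode (diagonal) and a doubly degenerate E$_{g}$ pair in a trigonal setting; the actual tensors for this crystal are given in the supplemental materials, and the element values $a, b, c, d$ here are arbitrary placeholders, not fitted quantities.

```python
import numpy as np

# Assumed (textbook) Raman tensors for a trigonal crystal; element values
# are placeholders for illustration only.
a, b, c, d = 1.0, 0.8, 0.6, 0.4
R_Ag = [np.diag([a, a, b])]                 # A_g: diagonal tensor
R_Eg = [np.array([[ c,  0,  0],             # E_g partner 1
                  [ 0, -c,  d],
                  [ 0,  d,  0]]),
        np.array([[ 0, -c, -d],             # E_g partner 2
                  [-c,  0,  0],
                  [-d,  0,  0]])]

def intensity(tensors, e_in, e_out):
    """Raman intensity ~ sum over degenerate partners of |e_out . R . e_in|^2."""
    return sum(abs(e_out @ R @ e_in) ** 2 for R in tensors)

x, y = np.eye(3)[0], np.eye(3)[1]           # in-plane polarization vectors

I_Ag_XX = intensity(R_Ag, x, x)             # A_g visible in collinear (XX)
I_Ag_XY = intensity(R_Ag, x, y)             # A_g vanishes in crossed (XY)
I_Eg_XX = intensity(R_Eg, x, x)             # E_g visible in XX
I_Eg_XY = intensity(R_Eg, x, y)             # E_g also visible in XY
print(I_Ag_XX, I_Ag_XY, I_Eg_XX, I_Eg_XY)
```

This reproduces the criterion used below for the mode assignment: in backscattering along the layer normal, a mode that survives in XX but disappears in XY is A$_{g}$, while one that appears in both geometries is E$_{g}$.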
Keeping the theoretical analysis in mind, we now turn to the mode symmetry assignment of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. This analysis was complicated by the oxidation of the [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} surface. Indeed, many chalcogenide materials suffer from easy oxidation of the surface, which is particularly problematic for [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} as TeO$_{x}$ produces a strong Raman signal[@Raman_aging_effect]. The roles of oxidation and degradation are becoming increasingly important in many potential 2D atomic crystals[@osvath2007graphene]; thus a method to rapidly characterize their presence is crucial for future studies. For this purpose, we measured the Raman response at room temperature in freshly cleaved, as well as air-exposed, [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} (shown in Fig. \[633\_532\_oldsample\_raman\]a). The air-exposed sample reveals fewer phonon modes, which are also quite broad, suggesting the formation of an oxide. A similar phenomenon was also observed in related materials and assigned to the formation of TeO$_{x}$[@Raman_amorphous_crystalline_transition_CGTfamily]. ![Temperature dependent collinear (XX) Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} measured in the temperature range of 10 K $ - $ 325 K. T$_{c}$ is indicated by the black dash line.[]{data-label="XX_temp"}](Raman_colorplot_newscolor.eps "fig:"){width="\textwidth"}\ ![(a): Temperature dependent collinear (XX) Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} for the E$_{g}^{1}$ and E$_{g}^{2}$ modes. T$_{c}$ is indicated by the black dash line. (b): Raw spectra of E$_{g}^{1}$ and E$_{g}^{2}$ modes. Four Lorentzians (shown in dash line) were used to account for the splitting.[]{data-label="fig:low_energy_colorplot"}](CGT_lowenergy.eps){width="\columnwidth"}
They center at 78.6 [cm$^{-1}$]{}, 85.3 [cm$^{-1}$]{}, 110.8 [cm$^{-1}$]{}, 136.3 [cm$^{-1}$]{}, 212.9 [cm$^{-1}$]{}, 233.9 [cm$^{-1}$]{} and 293.8 [cm$^{-1}$]{} at 270 K. The other three modes might be too weak or out of our spectral range. To identify the symmetry of these modes, we turn to their polarization dependence (see Fig. \[633\_532\_oldsample\_raman\]b). From the Raman tensor (see the supplemental materials), we know that all modes should be visible in the co-linear (XX) geometry and A$_{g}$ modes should vanish in the crossed polarized (XY) geometry. To test these predictions we compare the spectra taken at 270 K in the XX and XY configurations. As can be seen from Fig. \[633\_532\_oldsample\_raman\]b, only the two modes located at 136.3 [cm$^{-1}$]{} and 293.8 [cm$^{-1}$]{} vanish in the XY configuration. Therefore, these two modes are of A$_{g}$ symmetry, and the other five modes are of E$_{g}$ symmetry. Before proceeding to the temperature dependent Raman studies, it is useful to confirm the quasi-two-dimensional nature of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. To achieve this, we exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} on mica inside an argon filled glovebox to avoid oxidation. The results are shown in Fig. \[fig:exfoliated\_CGT\]. We can see from the optical image (Fig. \[fig:exfoliated\_CGT\]a) that many thin-flake [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} samples can be produced through the mechanical exfoliation method. To verify the thickness, we also performed atomic force microscope (AFM) measurements on two flakes (regions 1 and 2). The results are shown in Fig. \[fig:exfoliated\_CGT\]b and \[fig:exfoliated\_CGT\]c. Both flakes are in the nanometer regime and region 2 is much thinner than region 1, showing the great promise of preparing 2D [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} samples through this method. To be sure that no dramatic structural changes occur during exfoliation, we also took Raman spectra on the flakes, the results of which are shown in Fig. \[fig:exfoliated\_CGT\]d.
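The polarization selection rule used above follows from contracting the Raman tensor with the incident and scattered polarization vectors, $I\propto|\mathbf{e}_{s}^{T}R\,\mathbf{e}_{i}|^{2}$. A minimal sketch for an A$_{g}$-type mode, whose tensor is diagonal of the generic form diag($a,a,b$); the numerical values of $a$ and $b$ here are arbitrary placeholders, not fitted quantities:

```python
import numpy as np

def raman_intensity(R, e_in, e_out):
    """Raman intensity I ∝ |e_out^T · R · e_in|^2 for tensor R and polarization unit vectors."""
    return abs(e_out @ R @ e_in) ** 2

# A_g Raman tensor, diagonal form diag(a, a, b); a and b are illustrative values.
a, b = 1.0, 0.5
R_Ag = np.diag([a, a, b])

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])

I_XX = raman_intensity(R_Ag, x, x)  # |a|^2: visible in the collinear geometry
I_XY = raman_intensity(R_Ag, x, y)  # exactly zero: A_g modes vanish in crossed polarization
```

The E$_{g}$ tensors carry off-diagonal elements, so the analogous contraction does not vanish in XY, which is why the five remaining modes survive in the crossed configuration.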
As can be seen from the plot, the Raman spectra of exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are very similar to the bulk, confirming the absence of structural changes. In addition, the Raman intensity of the [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} nanoflakes increases dramatically as the thickness decreases. This is due to the onset of interference effects and has been observed in many other 2D materials[@sandilands2010stability; @yoon2009interference; @zhao2011fabrication]. Temperature Dependence ---------------------- To search for the effects of magnetism (i.e. magneto-elastic effects and degeneracy lifting), we measured the Raman spectra well above and below the ferromagnetic transition temperature. The temperature resolution was chosen to be 10 K below 100 K and 20 K above. The full temperature dependence is shown in Fig. \[XX\_temp\], while we first focus on the temperature dependence of the lowest energy E$_{g}$ modes in Fig. \[fig:low\_energy\_colorplot\] to search for the lifting of degeneracy due to time-reversal symmetry breaking. Indeed, as the temperature is lowered, additional modes appear in the spectra [near [T$_{C}$]{}]{} in Fig. \[fig:low\_energy\_colorplot\]a. As can be seen more clearly from the raw spectra in Fig. \[fig:low\_energy\_colorplot\]b, the extra feature near the E$_{g}^{1}$ mode and the extremely broad and flat region of the E$_{g}^{2}$ mode appear below [T$_{C}$]{}. We note that the exact temperature at which this splitting occurs is difficult to determine precisely due to our spectral resolution and the low signal levels of these modes. Nonetheless, the splitting clearly grows as the temperature is lowered and magnetic order sets in. At the lowest temperatures we find a 2.9 [cm$^{-1}$]{} splitting for the E$_{g}^{1}$ mode and a 4.5 [cm$^{-1}$]{} splitting for the E$_{g}^{2}$ mode. This confirms our prediction that the breaking of time-reversal symmetry lifts the degeneracy and splits the phonon modes, and suggests significant spin-phonon coupling.
Indeed, a similar effect has been observed in numerous three dimensional materials such as MnO[@PhysRevB.77.024421], ZnCr$_{2}$X$_{4}$ (X = O, S, Se)[@yamashita2000spin; @rudolf2007spin], and CeCl$_{3}$. In CeCl$_{3}$, the modes of E$_{g}$ symmetry in its point group C$_{4v}$ are also degenerate due to time reversal symmetry. For CeCl$_{3}$ it was found that increasing the magnetic field led to a splitting of two E$_{g}$ modes and a hardening of a second set of E$_{g}$ modes[@schaack1977magnetic]. The phonon splitting and energy shifts (discussed in a later section) match well with our observations in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. ![Temperature dependence of phonon frequency shifts. The phonon frequency shifts are shown in blue. The red curves indicate the fit results using the anharmonic model mentioned in the text. T$_{c}$ is indicated by the dashed vertical lines.[]{data-label="CGT_phonon_fits"}](phononpos_stack.eps){width="0.5\columnwidth"} Further evidence of spin-phonon coupling as the origin of the splitting comes from the energy of the modes. The ferromagnetism of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} originates from the Cr-Te-Cr super-exchange interaction, where the Cr octahedra are edge-sharing and the Cr-Te-Cr angle is $91.6^{o}$[@CGT_original]. The energies of these two modes are very close to the Te-displacement mode in [Cr$_{2}$Si$_{2}$Te$_{6}$]{}. Thus, it is very likely the E$_{g}^{1}$ and E$_{g}^{2}$ modes involve atomic motions of the Te atoms, whose bond strength can be very susceptible to the spin ordering, since the Te atoms mediate the super-exchange between the two Cr atoms. Before continuing, let us consider some alternative explanations for the splitting. For example, structural transitions can also result from magnetic order; however, previous X-ray diffraction studies in both the paramagnetic (270 K) and ferromagnetic phases (5 K) found no significant differences in the structure[@CGT_original].
Alternatively, the dynamic Jahn-Teller effect can cause phonon splitting[@klupp2012dynamic], but the Cr$^{3+}$ ion is Jahn-Teller inactive in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, thus eliminating this possibility as well. One-magnon scattering is also highly unlikely, since the Raman tensor of one-magnon scattering is antisymmetric, which means the scattering only appears in crossed polarized geometries (XY, XZ and YZ); however, we observed this splitting in the XX configuration[@fleury1968scattering]. ![Temperature dependence of phonon linewidths (green). The red curves indicate the fit results using equation \[eqn:Klemens\_model\_width\] above [T$_{C}$]{}. The mode located at 110.8 (136.3) [cm$^{-1}$]{} is shown on the left (right). T$_{c}$ is indicated by the vertical dashed lines.[]{data-label="fig:phononlinewidth"}](phononwidth_stack.eps){width="0.5\columnwidth"} Other than the phonon splitting, we also note a dramatic change in the background Raman scattering at [T$_{C}$]{} in Fig. \[fig:low\_energy\_colorplot\]a. We believe this is due to magnetic quasielastic scattering. In a low dimensional magnetic material with spin-phonon coupling, the coupling will induce magnetic energy fluctuations and allow the fluctuations to become observable as a peak centered at 0 [cm$^{-1}$]{} in the Raman spectra[@reiter1976light; @kaplan2006physics]. Typically the peak is difficult to observe, not just due to weak spin-phonon coupling, but because the area under the peak is determined by the magnetic specific heat $C_{m}$ and the width by the spin diffusion coefficient $D_{t}=\kappa/C_{m}$, where $\kappa$ is the thermal conductivity. However, in low dimensional materials the fluctuations are typically enhanced, increasing the specific heat and lowering the thermal conductivity, making these fluctuations easier to observe in the Raman spectra.
This effect has also been observed in many other low dimensional magnetic materials, evidenced by the quenching of the scattering amplitude as the temperature drops below [T$_{C}$]{}[@choi2004coexistence; @lemmens2003magnetic]. To further investigate the spin-phonon coupling, we turn our attention to the temperature dependence of the mode frequencies and linewidths. Our focus is on the higher energy modes (E$_{g}^{3}$-A$_{g}^{2}$) as they are easily resolved. To gain more quantitative insights into these modes, we fit the Raman spectra with the Voigt function: $$\label{voigt_function} V(x,\sigma,\Omega,\Gamma)=\int^{+\infty}_{-\infty}G(x',\sigma)L(x-x',\Omega,\Gamma)dx'$$ which is a convolution of a Gaussian and a Lorentzian[@olivero1977empirical]. Here the Gaussian is employed to account for the instrumental resolution and its width $\sigma$ (1.8 [cm$^{-1}$]{}) is determined by the central Rayleigh peak. The Lorentzian represents a phonon mode. In Fig. \[CGT\_phonon\_fits\], we show the temperature dependence of the extracted phonon energies. All phonon modes soften as the material is heated. This result is not surprising, since the anharmonic phonon-phonon interaction is enhanced at high temperatures and typically leads to a softening of the mode[@PhysRevB.29.2051]. However, for the E$_{g}^{3}$, E$_{g}^{4}$ and A$_{g}^{1}$ modes, the phonon energies change dramatically as the temperature approaches [T$_{C}$]{}. In fact the temperature dependence is much stronger than we would expect from standard anharmonic interactions. In particular, for the E$_{g}^{4}$ mode a 2 [cm$^{-1}$]{} downturn occurs from 10 K to 60 K. This sudden drop of phonon energy upon warming to [T$_{C}$]{} is further evidence of the spin-phonon coupling in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. Other mechanisms that could induce such shifts of the phonon energies are highly unlikely in this case.
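The Voigt lineshape of equation \[voigt\_function\] can be sketched numerically as a direct discrete convolution, with the instrumental $\sigma=1.8$ [cm$^{-1}$]{} quoted above fixed by the Rayleigh line; the mode position and Lorentzian half-width used below are illustrative values only, not fitted parameters:

```python
import numpy as np

def gaussian(x, sigma):
    """Unit-area Gaussian of standard deviation sigma."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def lorentzian(x, x0, gamma):
    """Unit-area Lorentzian of HWHM gamma centred at x0."""
    return (gamma / np.pi) / ((x - x0)**2 + gamma**2)

def voigt(x, sigma, x0, gamma):
    """Numerical convolution of the instrument Gaussian with the phonon Lorentzian."""
    dx = x[1] - x[0]
    g = gaussian(x - x.mean(), sigma)       # Gaussian centred on the grid
    l = lorentzian(x, x0, gamma)
    return np.convolve(l, g, mode="same") * dx

# Instrumental width from the Rayleigh peak; mode parameters are placeholders.
x = np.linspace(100.0, 180.0, 2001)
profile = voigt(x, sigma=1.8, x0=139.3, gamma=2.0)
peak_position = x[np.argmax(profile)]       # recovers ~139.3 cm^-1
area = profile.sum() * (x[1] - x[0])        # ~1 up to truncated Lorentzian tails
```

In the actual fits the Lorentzian center and width are the free parameters per mode, while $\sigma$ is held fixed, so the extracted linewidths are deconvolved from the instrumental resolution.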
For example, an electronic mechanism for the strong phonon energy renormalization is unlikely due to the large electronic gap in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} (0.202 eV)[@Huiwen_doc]. The lattice expansion that explains the anomalous phonon shifts in some magnetic materials[@kim1996frequency] is also an unlikely cause. Specifically, the in-plane lattice constant of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} grows due to the onset of magnetic order[@CGT_original], which should lead to a softening of the modes. However, we observe a strong additional hardening of the modes below [T$_{C}$]{}. The spin-phonon coupling is also confirmed by the temperature dependence of the phonon linewidths, which are not directly affected by the lattice constants[@PhysRevB.29.2051]. In Fig. \[fig:phononlinewidth\], we show the temperature dependent phonon linewidths of the E$_{g}^{3}$ and A$_{g}^{1}$ modes due to their larger signal levels. We can see the phonon lifetimes are enhanced as the temperature drops below [T$_{C}$]{}, as the phase space for phonons to scatter into magnetic excitations is dramatically reduced[@ulrich2015spin]. This further confirms the spin-phonon coupling. To further uncover the spin-phonon interaction, we first remove the effect of the standard anharmonic contributions to the phonon temperature dependence. In a standard anharmonic picture, the temperature dependence of a phonon energy and linewidth is described by: $$\begin{aligned} \omega(T)=&\omega_{0}+C(1+2n_{B}(\omega_{0}/2))+ D(1+3n_{B}(\omega_{0}/3)+3n_{B}(\omega_{0}/3)^{2})\\ \Gamma(T)=&\Gamma_{0}+A(1+2n_{B}(\omega_{0}/2))+ B(1+3n_{B}(\omega_{0}/3)+3n_{B}(\omega_{0}/3)^{2})\label{eqn:Klemens_model_width}\end{aligned}$$ where $\omega_{0}$ is the harmonic phonon energy, $\Gamma_{0}$ is the disorder induced phonon broadening, $n_{B}$ is the Bose factor, and $C$ ($A$) and $D$ ($B$) are constants determined by the cubic and quartic anharmonicity, respectively.
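As a numerical sketch of this anharmonic baseline, the frequency branch of the model can be evaluated directly, using the fitted E$_{g}^{3}$ parameters from table \[table:Anharmonic\_fit\_data\] ($\omega_{0}=113.0$, $C=-0.09$, $D=-0.013$ [cm$^{-1}$]{}); the only added ingredient is the [cm$^{-1}$]{}-to-kelvin conversion inside the Bose factor:

```python
import math

K_PER_CM = 1.4388  # 1 cm^-1 expressed in kelvin

def n_B(omega_cm, T):
    """Bose factor for a phonon of energy omega_cm (cm^-1) at temperature T (K)."""
    return 1.0 / math.expm1(omega_cm * K_PER_CM / T)

def anharmonic_omega(T, omega0, C, D):
    """omega(T) with cubic (two-phonon) and quartic (three-phonon) decay channels."""
    n2 = n_B(omega0 / 2.0, T)
    n3 = n_B(omega0 / 3.0, T)
    return omega0 + C * (1 + 2 * n2) + D * (1 + 3 * n3 + 3 * n3**2)

# Fitted anharmonic parameters for the E_g^3 mode:
omega0, C, D = 113.0, -0.09, -0.013
w_10 = anharmonic_omega(10.0, omega0, C, D)    # ~112.9 cm^-1, the T→0 limit omega0 + C + D
w_300 = anharmonic_omega(300.0, omega0, C, D)  # softer, as the Bose factors grow with T
```

Any measured frequency lying above this baseline below [T$_{C}$]{} is the magnetic contribution that the spin-phonon analysis isolates.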
The second term in both equations results from an optical phonon decaying into two phonons with opposite momenta and half the energy of the original mode. The third term describes the optical phonon decaying into three phonons with a third of the energy of the optical phonon[@PhysRevB.28.1928]. The results of fitting the phonon energies and linewidths are shown in red in Fig. \[CGT\_phonon\_fits\] and \[fig:phononlinewidth\], with the resulting parameters listed in table \[table:Anharmonic\_fit\_data\]. We can see that the temperature dependent frequencies of the two highest energy modes E$_{g}^{5}$, A$_{g}^{2}$ follow the anharmonic prediction very well throughout the entire range. However, for the other three modes, there is a clear deviation from the anharmonic prediction below [T$_{C}$]{}, confirming the existence of spin-phonon coupling. Moreover, we notice that for the E$_{g}^{3}$, A$_{g}^{1}$ and E$_{g}^{4}$ modes, the phonon energies start to deviate from the anharmonic prediction even above [T$_{C}$]{} (circled in Fig. \[CGT\_phonon\_fits\]). This is probably due to the short-range two-dimensional magnetic correlations that persist to temperatures above [T$_{C}$]{}. Indeed, finite magnetic moments[@Huiwen_doc] and magneto-striction[@CGT_original] were observed in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} above [T$_{C}$]{}. Mode $\omega_{0}$ Error C Error D Error A Error B Error ------------- -------------- ------- ------- ------- -------- ------- ------ ------- ------- ------- E$_{g}^{3}$ 113.0 0.1 -0.09 0.03 -0.013 0.002 0.10 0.03 0.003 0.002 A$_{g}^{1}$ 138.7 0.1 -0.12 0.04 -0.024 0.003 0.13 0.03 0.003 0.003 E$_{g}^{4}$ 220.9 0.4 -1.9 0.1 -0.02 0.01 – – – – E$_{g}^{5}$ 236.7 0.1 -0.01 0.07 -0.12 0.01 – – – – A$_{g}^{2}$ 298.1 0.3 -0.4 0.3 -0.20 0.04 – – – – : Anharmonic interaction parameters.
The unit is in [cm$^{-1}$]{}.[]{data-label="table:Anharmonic_fit_data"} Discussion ========== The spin-phonon coupling in 3d-electron systems usually results from the modulation of the electron hopping amplitude by ionic motion, leading to a change in the exchange integral $J$. In the unit cell of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} there are two inequivalent magnetic ions (Cr atoms), therefore the spin-phonon coupling Hamiltonian to the lowest order can be written as[@woods2001magnon; @PhysRev.127.432], $${H_{int} = \sum\limits_{i,\delta}\frac{\partial J}{\partial u}(\mathbf{S}_{i}^{a}{}\cdot\mathbf{S}_{i+\delta}^{b})u\label{equ:hamitionian_sp} }$$ where $\mathbf{S}$ is a spin operator, $u$ stands for the ionic displacement of atoms on the exchange path, the index ($i$) runs through the lattice, $\delta$ is the index of its adjacent sites, and the superscripts $a$ and $b$ indicate the inequivalent Cr atoms in the unit cell. The strength of the coupling to a specific mode depends on how the atomic motion associated with that mode modulates the exchange coupling. This in turn results from the detailed hybridization and/or overlap of orbitals on different lattice sites. Thus, some phonon modes do not show the coupling effect regardless of their symmetry. To extract the spin-phonon coupling coefficients, we use a simplified version of equation \[equ:hamitionian\_sp\][@fennie2006magnetically; @lockwood1988spin], $$\omega\approx\omega_{0}^{ph}+\lambda{}<\mathbf{S}_{i}^{a}{}\cdot\mathbf{S}_{i+\delta}^{b}>\label{spin_phonon_coupling_equation}$$ where $\omega$ is the frequency of the phonon mode, $\omega_{0}^{ph}$ is the phonon energy free of the spin-phonon interaction, $<\mathbf{S}_{i}^{a}\cdot\mathbf{S}_{i+\delta}^{b}>$ denotes a statistical average for adjacent spins, and $\lambda$ represents the strength of the spin-phonon interaction, which is proportional to $\frac{\partial{}J}{\partial{}u}u$.
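Given a measured frequency $\omega$, its anharmonic baseline $\omega_{0}^{ph}$, and the spin correlator, the coupling constant follows by simple inversion of equation \[spin\_phonon\_coupling\_equation\]. A sketch, assuming the saturated high-spin value $<\mathbf{S}\cdot\mathbf{S}>\,\approx S^{2}=9/4$ appropriate for $S=3/2$ Cr$^{3+}$ at low temperature:

```python
def spin_phonon_lambda(omega, omega0_ph, S=1.5):
    """lambda ≈ (omega - omega0_ph) / <S_i · S_j>, with <S·S> ≈ S^2 at saturation."""
    spin_corr = S * S  # 9/4 for the high-spin S = 3/2 state of Cr3+
    return (omega - omega0_ph) / spin_corr

# E_g^4-like values at 10 K: measured omega and anharmonic extrapolation omega0_ph (cm^-1).
lam = spin_phonon_lambda(221.7, 219.0)  # ≈ 1.2 cm^-1
```

The sign of $\lambda$ directly encodes whether the mode hardens or softens upon magnetic ordering, i.e. whether the atomic displacement strengthens or weakens the effective exchange.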
The saturated magnetization value of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} reaches 3$\mu$B per Cr atom at 10 K, consistent with the expectation for a high-spin configuration of Cr$^{3+}$[@Huiwen_doc]. Therefore, $<\mathbf{S}_{i}^{a}{}\cdot\mathbf{S}_{i+\delta}^{b}> \approx 9/4$ for Cr$^{3+}$ at 10 K and the spin-phonon coupling constants can be estimated using equation \[spin\_phonon\_coupling\_equation\]. The calculated results are given in table \[spin\_phonon\_table\]. Compared to the geometrically frustrated ([CdCr$_{2}$O$_{4}$]{}, [ZnCr$_{2}$O$_{4}$]{}) or bond frustrated ([ZnCr$_{2}$S$_{4}$]{}) chromium spinels, the coupling constants are smaller in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}[@rudolf2007spin]. This is probably not surprising, because in the spin frustrated materials the spin-phonon couplings are typically very strong[@rudolf2007spin]. On the other hand, in comparison with the cousin compound [Cr$_{2}$Si$_{2}$Te$_{6}$]{}, where the coupling constants were obtained for the phonon modes at 90.5 [cm$^{-1}$]{} ($\lambda$=0.1) and 369.3 [cm$^{-1}$]{} ($\lambda$=-0.2)[@casto2015strong], the coupling constants in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are larger. ----------------------- ---------- ------------------- ----------- -- -- Mode $\omega$ $\omega_{0}^{ph}$ $\lambda$ \[0.5ex\] E$_{g}^{3}$ 113.4 112.9 0.24 A$_{g}^{1}$ 139.3 138.5 0.32 E$_{g}^{4}$ 221.7 219.0 1.2 ----------------------- ---------- ------------------- ----------- -- -- : Spin-phonon interaction parameters at 10 K. The unit is in [cm$^{-1}$]{}. \[spin\_phonon\_table\] \[sec:exp\]Conclusion ===================== In summary, we have demonstrated spin-phonon coupling in a potential 2D atomic crystal for the first time. In particular, we studied the polarized temperature dependent Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. The two lowest energy modes of E$_{g}$ symmetry split below [T$_{C}$]{}, which is ascribed to the time reversal symmetry breaking by the spin ordering.
The temperature dependence of the five modes at higher energies was studied in detail, revealing additional evidence for spin-phonon coupling. Among the five modes, three show significant renormalization of the phonon lifetime and frequency due to the onset of magnetic order. Interestingly, this effect appears to emerge above [T$_{C}$]{}, consistent with other evidence for the onset of magnetic correlations at higher temperatures. In addition, magnetic quasielastic scattering was observed in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, which is consistent with the spin-phonon coupling effect. Our results also show the possibility of studying magnetism in exfoliated 2D ferromagnetic [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} from the perspective of the phonon modes and magnetic quasielastic scattering using micro-Raman scattering. Acknowledgements ================ We are grateful for numerous discussions with Y. J. Kim and H. Y. Kee at the University of Toronto. Work at the University of Toronto was supported by NSERC, CFI, and ORF, and K.S.B. acknowledges support from the National Science Foundation (Grant No. DMR-1410846). The crystal growth at Princeton University was supported by the NSF MRSEC Program, grant number NSF-DMR-1005438.
References ========== [10]{} url \#1[[\#1]{}]{} urlprefix \[2\]\[\][[\#2](#2)]{} Carteaux V, Brunet D, Ouvrard G and Andre G 1995 [*Journal of Physics: Condensed Matter*]{} [**7**]{} 69 <http://stacks.iop.org/0953-8984/7/i=1/a=008> Li X and Yang J 2014 [*Journal of Materials Chemistry C*]{} [**2**]{} 7071–7076 Alegria L D, Ji H, Yao N, Clarke J J, Cava R J and Petta J R 2014 [*Applied Physics Letters*]{} [**105**]{} 053512 <http://scitation.aip.org/content/aip/journal/apl/105/5/10.1063/1.4892353> Sivadas N, Daniels M W, Swendsen R H, Oakamoto S and Xiao D 2015 [*arXiv preprint arXiv:1503.00412*]{} Golovach V N, Khaetskii A and Loss D 2004 [*Physical Review Letters*]{} [ **93**]{} 016601 Ganzhorn M, Klyatskaya S, Ruben M and Wernsdorfer W 2013 [*Nature nanotechnology*]{} [**8**]{} 165–169 Jaworski C, Yang J, Mack S, Awschalom D, Myers R and Heremans J 2011 [ *Physical review letters*]{} [**106**]{} 186601 Wesselinowa J 2012 [*physica status solidi (b)*]{} [**249**]{} 615–619 Issing S, Pimenov A, Ivanov Y V, Mukhin A and Geurts J 2010 [*The European Physical Journal B*]{} [**78**]{} 367–372 Casto L, Clune A, Yokosuk M, Musfeldt J, Williams T, Zhuang H, Lin M W, Xiao K, Hennig R, Sales B [*et al.*]{} 2015 [*APL Materials*]{} [**3**]{} 041515 Hushur A, Manghnani M H and Narayan J 2009 [*Journal of Applied Physics*]{} [**106**]{} 054317 <http://scitation.aip.org/content/aip/journal/jap/106/5/10.1063/1.3213370> Oznuluer T, Pince E, Polat E O, Balci O, Salihoglu O and Kocabas C 2011 [ *Applied Physics Letters*]{} [**98**]{} 183101 <http://scitation.aip.org/content/aip/journal/apl/98/18/10.1063/1.3584006> Lin J, Guo L, Huang Q, Jia Y, Li K, Lai X and Chen X 2011 [*Physical Review B*]{} [**83**]{}(12) 125430 Pandey P K, Choudhary R J, Mishra D K, Sathe V G and Phase D M 2013 [ *Applied Physics Letters*]{} [**102**]{} 142401 ISSN 00036951 <http://link.aip.org/link/APPLAB/v102/i14/p142401/s1&Agg=doi> Sandilands L, Shen J, Chugunov G, Zhao S, Ono S, Ando Y and Burch K 2010 [ 
*Physical Review B*]{} [**82**]{} 064503 Zhao S, Beekman C, Sandilands L, Bashucky J, Kwok D, Lee N, LaForge A, Cheong S and Burch K 2011 [*Applied Physics Letters*]{} [**98**]{} 141911 Calizo I, Balandin A, Bao W, Miao F and Lau C 2007 [*Nano Letters*]{} [**7**]{} 2645–2649 Sahoo S, Gaur A P, Ahmadi M, Guinel M J F and Katiyar R S 2013 [*The Journal of Physical Chemistry C*]{} [**117**]{} 9042–9047 Singh M K, Jang H M, Ryu S and Jo M H 2006 [*Applied Physics Letters*]{} [ **88**]{} 042907 <http://scitation.aip.org/content/aip/journal/apl/88/4/10.1063/1.2168038> Dresselhaus M, Jorio A and Saito R 2010 [*Annual Review of Condensed Matter Physics*]{} [**1**]{} 89–108 Ji H, Stokes R A, Alegria L D, Blomberg E C, Tanatar M A, Reijnders A, Schoop L M, Liang T, Prozorov R, Burch K S, Ong N P, Petta J R and Cava R J 2013 [*Journal of Applied Physics*]{} [**114**]{} 114907 <http://scitation.aip.org/content/aip/journal/jap/114/11/10.1063/1.4822092> Tian Y, Reijnders A A, Osterhoudt G B, Valmianski I, Ramirez J G, Urban C, Zhong R, Schneeloch J, Gu G, Henslee I and Burch K S 2016 [*Review of Scientific Instruments*]{} [**87**]{} 043105 <http://scitation.aip.org/content/aip/journal/rsi/87/4/10.1063/1.4944559> Xia T L, Hou D, Zhao S C, Zhang A M, Chen G F, Luo J L, Wang N L, Wei J H, Lu Z Y and Zhang Q M 2009 [*Physical Review B*]{} [**79**]{}(14) 140510 <http://link.aps.org/doi/10.1103/PhysRevB.79.140510> Osv[á]{}th Z, Darabont A, Nemes-Incze P, Horv[á]{}th E, Horv[á]{}th Z and Bir[ó]{} L 2007 [*Carbon*]{} [**45**]{} 3022–3026 Avachev A, Vikhrov S, Vishnyakov N, Kozyukhin S, Mitrofanov K and Terukov E 2012 [*Semiconductors*]{} [**46**]{} 591–594 ISSN 1063-7826 <http://dx.doi.org/10.1134/S1063782612050041> Yoon D, Moon H, Son Y W, Choi J S, Park B H, Cha Y H, Kim Y D and Cheong H 2009 [*Physical Review B*]{} [**80**]{} 125422 Rudolf T, Kant C, Mayr F and Loidl A 2008 [*Phys. Rev. 
B*]{} [**77**]{}(2) 024421 <http://link.aps.org/doi/10.1103/PhysRevB.77.024421> Yamashita Y and Ueda K 2000 [*Physical Review Letters*]{} [**85**]{} 4960 Rudolf T, Kant C, Mayr F, Hemberger J, Tsurkan V and Loidl A 2007 [*New Journal of Physics*]{} [**9**]{} 76 Schaack G 1977 [*Zeitschrift f[ü]{}r Physik B Condensed Matter*]{} [**26**]{} 49–58 Klupp G, Matus P, Kamar[á]{}s K, Ganin A Y, McLennan A, Rosseinsky M J, Takabayashi Y, McDonald M T and Prassides K 2012 [*Nature Communications*]{} [**3**]{} 912 Fleury P and Loudon R 1968 [*Physical Review*]{} [**166**]{} 514 Reiter G 1976 [*Physical Review B*]{} [**13**]{} 169 Kaplan T and Mahanti S 2006 [*Physics of manganites*]{} (Springer Science & Business Media) Choi K Y, Zvyagin S, Cao G and Lemmens P 2004 [*Physical Review B*]{} [ **69**]{} 104421 Lemmens P, G[ü]{}ntherodt G and Gros C 2003 [*Physics Reports*]{} [**375**]{} 1–103 Olivero J and Longbothum R 1977 [*Journal of Quantitative Spectroscopy and Radiative Transfer*]{} [**17**]{} 233–236 Men[é]{}ndez J and Cardona M 1984 [*Physical Review B*]{} [**29**]{} 2051 Kim K, Gu J, Choi H, Park G and Noh T 1996 [*Physical Review Letters*]{} [ **77**]{} 1877 Ulrich C, Khaliullin G, Guennou M, Roth H, Lorenz T and Keimer B 2015 [ *Physical review letters*]{} [**115**]{} 156403 Balkanski M, Wallis R F and Haro E 1983 [*Physical Review B*]{} [**28**]{}(4) 1928–1934 <http://link.aps.org/doi/10.1103/PhysRevB.28.1928> Woods L 2001 [*Physical Review B*]{} [**65**]{} 014409 Sinha K P and Upadhyaya U N 1962 [*Physical Review*]{} [**127**]{}(2) 432–439 <http://link.aps.org/doi/10.1103/PhysRev.127.432> Fennie C J and Rabe K M 2006 [*Physical Review Letters*]{} [**96**]{} 205505 Lockwood D and Cottam M 1988 [*Journal of Applied Physics*]{} [**64**]{} 5876–5878
Supermen (anthology) Supermen is an anthology of science fiction short stories edited by Isaac Asimov, Martin H. Greenberg and Charles G. Waugh as the third volume in their Isaac Asimov's Wonderful Worlds of Science Fiction series. It was first published in paperback by Signet/New American Library in October 1984. The first British edition was issued in paperback by Robinson in 1988. The book collects twelve novellas, novelettes and short stories by various science fiction authors, together with an introduction by Asimov. Contents "Introduction: Super" (Isaac Asimov) "Angel, Dark Angel" (Roger Zelazny) "Worlds to Kill" (Harlan Ellison) "In the Bone" (Gordon R. Dickson) "What Rough Beast?" (Damon Knight) "Death by Ecstasy" (Larry Niven) "Un-Man" (Poul Anderson) "Muse" (Dean R. Koontz) "Resurrection" (A. E. van Vogt) "Pseudopath" (Philip E. High) "After the Myths Went Home" (Robert Silverberg) "Before the Talent Dies" (Henry Slesar) "Brood World Barbarian" (Perry A. Chapdelaine) Notes Category:1984 short story collections Category:Science fiction anthologies Category:Martin H. Greenberg anthologies
Frithy and Chadacre Woods Frithy and Chadacre Woods is a 28.7 hectare biological Site of Special Scientific Interest (SSSI) in the parishes of Lawshall and Shimpling in Suffolk, England. Description Three ancient and semi-natural woods form the SSSI, namely Frithy Wood in Lawshall parish and Ashen Wood and Bavins Wood on the Chadacre Estate in Shimpling parish. All three woods are of the wet ash (Fraxinus excelsior) / maple (Acer campestre) type, with hazel (Corylus avellana) also present in considerable quantity. There are pedunculate oak (Quercus robur) trees and other tree and shrub species include aspen (Populus tremula), wild cherry (Prunus avium), midland hawthorn (Crataegus laevigata), hornbeam (Carpinus betulus), crab apple (Malus sylvestris), holly (Ilex aquifolium), spindle (Euonymus europaeus) and common dogwood (Cornus sanguinea). The structure of the woods has been greatly influenced by management of the coppice. The three woods have a diverse woodland floor vegetation, which is dominated by either dog's mercury (Mercurialis perennis) or brambles (Rubus spp.). They contain a number of plants characteristic of woodlands of this type including herb paris (Paris quadrifolia) in Ashen Wood and wood spurge (Euphorbia amygdaloides), woodruff (Galium odoratum), sanicle (Sanicula europaea) and stinking iris (Iris foetidissima) in Frithy Wood. The SSSI lies within the distribution of oxlip (Primula elatior) and all three woods contain this species. There are many other woodland floor plants including early purple orchid (Orchis mascula), twayblade (Neottia ovata), gromwell (Lithospermum officinale) and bluebell (Hyacinthoides non-scriptus). There are several well-vegetated rides in the group of woods that support a mixture of woodland and meadow plant species and which attract considerable numbers of common butterflies. Frithy Wood also contains an area of pasture which projects into the wood which is partly shaded by a number of standard trees. 
The birdlife of Frithy Wood has been recorded in detail, with species including the nightingale, European green woodpecker, great spotted woodpecker and lesser spotted woodpecker breeding regularly. Roe deer, fallow deer and muntjac can also be seen in the woods, but they have caused considerable damage to the ground vegetation. Forest school Forest school sessions are held in Frithy Wood by permission of the landowners. The 'school' represents an initiative of All Saints Primary School, Lawshall and the Green Light Trust, an environmental and educational charity. History Oliver Rackham has stated that "a wood now called The Frith is almost certain to be pre-conquest, from Old English Fyrhp." In a later book he stated that "an Anglo-Saxon (parallel) is fyrth, a wood, which has given rise to many Frith or Frithy Woods." There is documentary evidence for the existence of Frithy (formerly Frith) Wood back to 1545, and its Saxon name would imply that the wood is much older than that. All three woods are part of ancient woodland and contain broad boundary banks and ditches typical of coppice woods dating from the medieval period or before. In more recent times, in the twentieth century, pigs were kept in Frithy Wood, and at one time the wood extended as far as The Street. Newspaper records On 31 August 1921 it was reported in the Suffolk Free Press that the remains of George Nunn, aged 55, of Lawshall, were discovered hanging in Frithy Wood. He had been missing for around 4 months since 22 April and was found a short distance from where he lived. Access The woods are private, with no easy access. References Category:Forests and woodlands of Suffolk Category:Lawshall Category:Sites of Special Scientific Interest in Suffolk Category:Sites of Special Scientific Interest notified in 1987
--- abstract: 'We investigate the driven quantum phase transition between oscillating motion and the classical nearly free rotations of the Josephson pendulum coupled to a harmonic oscillator in the presence of dissipation. This model describes the standard setup of circuit quantum electrodynamics, where typically a transmon device is embedded in a superconducting cavity. We find that by treating the system quantum mechanically this transition occurs at higher drive powers than expected from an all-classical treatment, which is a consequence of the quasiperiodicity originating in the discrete energy spectrum of the bound states. We calculate the photon number in the resonator and show that its dependence on the drive power is nonlinear. In addition, the resulting multi-photon blockade phenomenon is sensitive to the truncation of the number of states in the transmon, which limits the applicability of the standard Jaynes–Cummings model as an approximation for the pendulum-oscillator system. We compare two different approaches to dissipation, namely the Floquet–Born–Markov and the Lindblad formalisms.' author: - 'I. Pietikäinen$^1$' - 'J. Tuorila$^{1,2}$' - 'D. S. Golubev$^2$' - 'G. S. Paraoanu$^2$' bibliography: - 'nonlinear\_bib.bib' title: 'Photon blockade and the quantum-to-classical transition in the driven-dissipative Josephson pendulum coupled to a resonator' --- Introduction ============ The pendulum, which can be seen as a rigid rotor in a gravitational potential [@baker2005], is a quintessential nonlinear system. It has two extreme dynamical regimes: the low-energy regime, where it can be approximated as a weakly anharmonic oscillator, and the high-energy regime, where it behaves as a free rotor.
Most notably, the pendulum physics appears in systems governed by the Josephson effect, where the Josephson energy is analogous to the gravitational energy, and the role of the momentum is taken by the imbalance in the number of particles due to tunneling across the weak link. Such a system is typically referred to as a Josephson pendulum. In ultracold degenerate atomic gases, several realizations of the Josephson pendulum have been studied [@Leggett2001; @paraoanu2001; @Smerzi1997; @Marino1999]. While the superfluid-fermion case [@Paraoanu2002; @Heikkinen2010] still awaits experimental realization, the bosonic-gas version has already been demonstrated [@Albiez2005; @Levy2007]. Also in this case two regimes have been identified: small Josephson oscillations, corresponding to the low-energy limit case described here, and the macroscopic self-trapping regime [@Smerzi1997; @Marino1999], corresponding to the free-rotor situation. Another example is an oscillating $LC$ electrical circuit with a tunnel barrier between two superconducting leads. This is the case of the transmon circuit [@koch2007], which is currently one of the most promising approaches to quantum processing of information, with high-fidelity operations and good prospects for scalability. Its two lowest eigenstates are close to those of a harmonic oscillator, with only weak perturbations caused by the anharmonicity of the potential. The weak anharmonicity also guarantees that the lowest states of the transmon are immune to charge noise, which is a major source of decoherence in superconducting quantum circuits. In this paper we consider a paradigmatic model which arises when the Josephson pendulum is interacting with a resonator. Circuit quantum electrodynamics offers a rigorous embodiment of the above model as a transmon device coupled to a superconducting resonator - fabricated either as a three-dimensional cavity or as a coplanar waveguide segment.
In this realization, the system is driven by an external field of variable frequency, and dissipation affects both the transmon and the resonator. We study in detail the onset of nonlinearity in the driven-dissipative phase transition between the quantum and the classical regimes. We further compare the photon number with the corresponding transmon occupation, and demonstrate that the onset of nonlinearities is accompanied by the excitation of all bound states of the transmon and, thus, it is sensitive to the transmon truncation. We also find that the onset of the nonlinearities is sensitive to the energy level structure of the transmon, [*e.g.*]{} to the gate charge which affects the eigenenergies near and outside the edge of the transmon potential. The results also show that the full classical treatment is justified only in the high-amplitude regime, yielding significant discrepancies in the low-amplitude regime - where the phenomenology is governed by photon blockade. This means that the system undergoes a genuine quantum-to-classical phase transition. Our numerical simulations also demonstrate that the multi-photon blockade phenomenon is qualitatively different for a realistic multilevel anharmonic system compared to the Jaynes–Cummings case studied extensively in the literature. Finally, we compare two approaches to dissipation, namely the conventional Lindblad master equation and the Floquet–Born–Markov master equation, which is developed especially to capture the effects of the drive on the dissipation. We show that both yield relatively close results. However, we emphasize that the Floquet–Born–Markov approach should be preferred because its numerical implementation is considerably more efficient than that of the corresponding Lindblad equation. While our motivation is to elucidate fundamental physics, in the burgeoning field of quantum technologies several applications of our results can be envisioned.
For example, the single-photon blockade can be employed to realize single-photon sources, and the two-photon blockade can be utilized to produce transistors controlled by a single photon and filters that yield correlated two-photon outputs from an input of random photons [@Kubanek2008]. In the field of quantum simulations, the Jaynes–Cummings model can be mapped onto that of a Dirac electron in an electromagnetic field, with the coupling and the drive amplitude corresponding respectively to the magnetic and electric field: the photon blockade regime is associated with a discrete spectrum of the Dirac equation, while the breakdown of the blockade corresponds to a continuous spectrum [@Gutierrez-Jauregui2018]. Finally, the switching behavior of the pendulum in the transition region can be used for designing bifurcation amplifiers for the single-shot nondissipative readout of qubits [@Vijay2009]. The paper is organized as follows. In Section \[sec:II\] we introduce the electrical circuit which realizes the pendulum-oscillator system, calculate its eigenenergies, and identify the two dynamical regimes of the small oscillations and the free rotor. In Section \[sec:III\], we introduce the drive and dissipation. We discuss two formalisms for dissipation, namely the Lindblad equation and the Floquet–Born–Markov approach. Section \[sec:IV\] presents the main results for the quantum-to-classical transition and the photon blockade, focusing on the resonant case. Here, we also discuss the gate dependence and the ultra-strong coupling regime. Section \[sec:V\] is dedicated to conclusions. Circuit-QED implementation of a Josephson pendulum coupled to a resonator {#sec:II} ========================================================================= We discuss here the physical realization of the Josephson pendulum-resonator system as an electrical circuit consisting of a transmon device coupled capacitively to an $LC$ oscillator, as depicted in Fig. \[fig:oscpendevals\](a).
The coupled system is modeled by the Hamiltonian $$\label{eq:H0} \hat H_{0} = \hat H_{\rm r} + \hat H_{\rm t} + \hat H_{\rm c},$$ where $$\begin{aligned} \hat H_{\rm r} &=& \hbar\omega_{\rm r}\hat a^{\dag}\hat a, \label{eq:Hr}\\ \hat H_{\rm t} &=& 4E_{\rm C}(\hat n-n_{\rm g})^2 - E_{\rm J}\cos \hat \varphi, \label{eq:transmonHam}\\ \hat H_{\rm c} &=& \hbar g\hat n(\hat a^{\dag}+\hat a)\end{aligned}$$ describe the resonator, the transmon, and their coupling, respectively. We have defined $\hat a$ as the annihilation operator of the harmonic oscillator, and used $\hat n = -i\partial /\partial\varphi$ as the conjugate momentum operator of the superconducting phase difference $\hat \varphi$. These operators obey the canonical commutation relation $[\hat \varphi,\hat n]=i$. The angular frequency of the resonator is given by $\omega_{\rm r}$. We have also denoted the Josephson energy with $E_{\rm J}$, and the charging energy with $E_{\rm C } = e^2/(2C_\Sigma)$ where the capacitance on the superconducting island of the transmon is given as $C_{\Sigma}=C_{\rm B} + C_{\rm J} + C_{\rm g}$. Using the circuit diagram in Fig. \[fig:oscpendevals\](a), we obtain the coupling constant $g = 2 e C_{\rm g}q_{\rm zp}/(\hbar C_{\Sigma} C_{\rm r})$, where the zero-point fluctuation amplitude of the oscillator charge is denoted with $q_{\rm zp} = \sqrt{C_{\rm r}\hbar \omega_{\rm r}/2}$ [@koch2007]. Let us briefly discuss the two components of this system: the Josephson pendulum and the resonator. The pendulum physics is realized by the superconducting transmon circuit [@koch2007] in Fig. \[fig:oscpendevals\](a) and described by the Hamiltonian $\hat H_{\rm t}$ in Eq. (\[eq:transmonHam\]). As discussed in Ref. [@koch2007], the Hamiltonian of the transmon is analogous to that of an electrically charged particle whose motion is restricted to a circular orbit and subjected to homogeneous and perpendicular gravitational and magnetic fields. 
By fixing the $x$ and $z$ directions as those of the gravity and the magnetic field, respectively, the position of the particle is completely determined by the motion along the polar angle in the $xy$ plane. The polar angle can be identified as the $\varphi$ coordinate of the pendulum. Thus, the kinetic energy part of the Hamiltonian (\[eq:transmonHam\]) describes a free rotor in a homogeneous magnetic field. In the symmetric gauge, the vector potential of the field imposes an effective constant shift for the $\varphi$ component of the momentum, which is analogous to the offset charge $n_{\textrm g}$ on the superconducting island induced either by the environment or by a gate voltage. In the following, the 'plasma' frequency for the transmon is given by $\omega_{\rm p} = \sqrt{8E_{\rm C}E_{\rm J}}/\hbar$ and describes the classical oscillations of the linearized circuit. The parameter $\eta=E_{\rm J}/E_{\rm C}$ is the ratio between the potential and kinetic energy of the pendulum, and determines, by the condition $\eta \gg 1$, whether the device is in the charge-insensitive transmon regime, in which the gravitational potential dominates over the kinetic energy of the rotor. The eigenvalues $\{\hbar \omega_k\}$ and the corresponding eigenvectors $\{|k\rangle\}$, with $k=0,1,\ldots$, of the Hamiltonian in Eq. (\[eq:transmonHam\]) can be obtained by solving the Mathieu equation, see Appendix \[app:eigenvalue\]. In general, the eigenvalues of the coupled system Hamiltonian $\hat H_0$ in Eq. (\[eq:H0\]) have to be solved numerically. With a sufficient truncation in the Hilbert spaces of the uncoupled systems in Eq. (\[eq:Hr\]) and Eq. (\[eq:transmonHam\]), one can represent the Hamiltonian $\hat H_0$ in a matrix form. The resulting eigenvalues of the truncated Hamiltonian $\hat H_0$ are shown in Fig. \[fig:oscpendevals\]. We see that the coupling creates avoided crossings at the locations where the pendulum transition frequencies are equal to positive integer multiples of the resonator quantum.
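As an illustration of the numerical diagonalization described above, the transmon spectrum can be sketched in the charge basis, where $4E_{\rm C}(\hat n-n_{\rm g})^2$ is diagonal and $\cos\hat\varphi$ couples neighboring charge states with amplitude $1/2$. This is a minimal sketch, not the Mathieu-function solution of Appendix \[app:eigenvalue\]; the cutoff `ncut` is our own choice, taken large enough for $\eta=30$:

```python
import numpy as np

def transmon_levels(ec, ej, ng=0.0, ncut=20):
    """Eigenvalues of H_t = 4*E_C*(n - n_g)^2 - E_J*cos(phi) in the
    charge basis |n>, n = -ncut..ncut; cos(phi) couples neighboring
    charge states with amplitude -E_J/2."""
    n = np.arange(-ncut, ncut + 1)
    h = np.diag(4.0 * ec * (n - ng) ** 2)
    h += np.diag(np.full(2 * ncut, -ej / 2.0), 1)
    h += np.diag(np.full(2 * ncut, -ej / 2.0), -1)
    return np.linalg.eigvalsh(h)

# parameters of Table [tab:params1], in units of hbar*omega_r
ec, eta = 0.07, 30.0
w = transmon_levels(ec, eta * ec)
wq = w[1] - w[0]                          # lowest transition frequency
wq_est = ec * (np.sqrt(8 * eta) - 1.0)    # weak-anharmonicity estimate
```

With these parameters the lowest transition comes out close to the quoted $\omega_{\rm q}=1.0$ and to the estimate $E_{\rm C}(\sqrt{8\eta}-1)$, while the negative anharmonicity $\omega_{12}<\omega_{01}$ reflects the softening of the cosine potential.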
Also, the density of states increases drastically with the energy. The nonlinearity in the system is characterized by the non-equidistant spacings between the energy levels. Their origin is the sinusoidal Josephson potential of the transmon. Here, we are especially interested in the regime where the resonator frequency is (nearly) resonant with the frequency of the lowest transition of the pendulum, i.e. when $\omega_{\rm r}\approx\omega_{\rm q} = \omega_{01}=\omega_1-\omega_0$. ![Electrical circuit and the corresponding eigenenergy spectrum. (a) Lumped-element schematic of a transmon-resonator superconducting circuit. The resonator and the transmon are marked with blue and magenta rectangles. (b) Numerically obtained eigenenergies of the resonator-pendulum Hamiltonian in Eq. (\[eq:H0\]) are shown in blue as a function of the resonator frequency. The bare pendulum eigenenergies $\hbar \omega_k$ are denoted with dashed horizontal lines and indicated with the label $k$. The eigenenergies of the uncoupled system, defined in Eq. (\[eq:Hr\]) and Eq. (\[eq:transmonHam\]), are given by the dashed lines whose slope increases in integer steps with the number of quanta in the oscillator as $n\hbar \omega_{\rm r}$. We only show the eigenenergies of the uncoupled system for the case of the pendulum in its ground state, but we note that one obtains a similar infinite fan of energies for each pendulum eigenstate. Note that in general $\omega_{\rm q}\neq \omega_{\rm p}$. We have used the parameters in Table \[tab:params1\] and fixed $n_{\rm g}=0$. []{data-label="fig:oscpendevals"}](fig1){width="1.0\linewidth"} The Hamiltonian $\hat H_0$ in Eq.
(\[eq:H0\]) can be represented in the eigenbasis $\{|k\rangle\}$ of the Josephson pendulum as $$\hat{H}_{0} = \hbar\omega_{\rm r} \hat{a}^{\dag}\hat{a} + \sum_{k=0}^{K-1} \hbar\omega_{k} \vert k\rangle\langle k\vert + \hbar g(\hat{a}^{\dag}+\hat{a})\sum_{k,\ell=0}^{K-1}\hat{\Pi}_{k\ell}.\label{eq:ManyStatesHam}$$ Here, $K$ is the number of transmon states included in the truncation. We have also defined $\hat\Pi_{k\ell}\equiv \langle k|\hat{n}|\ell\rangle |k\rangle\langle \ell|$ which is the representation of the Cooper-pair-number operator in the eigenbasis of the transmon. A useful classification of the eigenstates can be obtained by using the fact that the transmon can be approximated as a weakly anharmonic oscillator [@koch2007], thus $\langle k|\hat{n}|\ell\rangle$ is negligible if $k$ and $\ell$ differ by more than 1. Together with the rotating-wave approximation, this results in $$\begin{split} \hat{H}_{0} \approx &\hbar\omega_{\rm r} \hat{a}^{\dag}\hat{a} + \sum_{k=0}^{K-1} \hbar\omega_{k}\vert k\rangle\langle k\vert \\ &+ \hbar g\sum_{k=0}^{K-2} \left(\hat{a}\hat{\Pi}_{k,k+1}^{\dag} + \hat{a}^{\dag} \hat{\Pi}_{k,k+1} \right).\label{eq:ManyStatesHam_simple} \end{split}$$ Here, we introduce the total excitation-number operator as $$\hat N = \hat a^{\rm \dag}\hat a + \sum_{k=0}^{K-1} k\vert k\rangle\langle k\vert,\label{eq:exitationN_K}$$ which commutes with the Hamiltonian in Eq. (\[eq:ManyStatesHam\_simple\]). Thus, the eigenstates of this Hamiltonian can be labeled by the eigenvalues of $\hat N$, a representation that we will find useful when describing transitions between these states.
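The conservation law $[\hat N,\hat H_0]=0$ for the rotating-wave Hamiltonian can be verified directly with small matrices. In the sketch below the transmon energies and the matrix elements $\langle k|\hat n|k+1\rangle\propto\sqrt{k+1}$ are illustrative assumptions (the true values follow from the Mathieu solution); the commutation depends only on the ladder structure of the coupling, not on these numbers:

```python
import numpy as np

M, K = 6, 3                      # Fock cutoff and number of transmon levels
g = 0.04
wk = np.array([0.0, 1.0, 1.93])  # illustrative anharmonic transmon energies

a = np.diag(np.sqrt(np.arange(1, M)), 1)   # resonator annihilation operator
b = np.diag(np.sqrt(np.arange(1, K)), 1)   # assumed ladder form of Pi_{k,k+1}

# H0 of Eq. (ManyStatesHam_simple) with hbar = omega_r = 1
H_rwa = (np.kron(a.T @ a, np.eye(K))
         + np.kron(np.eye(M), np.diag(wk))
         + g * (np.kron(a, b.T) + np.kron(a.T, b)))

# total excitation-number operator of Eq. (exitationN_K)
N = np.kron(a.T @ a, np.eye(K)) + np.kron(np.eye(M), np.diag(np.arange(K)))

comm = H_rwa @ N - N @ H_rwa     # vanishes: each coupling term conserves N
```

Each term $\hat a\hat\Pi^{\dag}_{k,k+1}$ removes one photon while raising the transmon by one level, so every term commutes with $\hat N$ separately; the counter-rotating terms $\hat a\hat\Pi_{k,k+1}$ would break this.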
The terms neglected in the rotating-wave approximation can be treated as small perturbations except for transitions where the coupling frequency $g_{\ell k} = g\langle k|\hat n|\ell\rangle$ becomes a considerable fraction of the corresponding transition frequency $\omega_{\ell k}=\omega_{k}-\omega_\ell$ and, thus, enters the ultrastrong coupling regime with $g_{k\ell} \geq 0.1\times \omega_{\ell k}$. In the ultrastrong coupling regime and beyond, the eigenstates are superpositions of states with different excitation numbers and thus can no longer be labeled with $N$. Another important approximation for the Hamiltonian in Eq. (\[eq:ManyStatesHam\]) is the two-state truncation ($K=2$), which reduces it to the Rabi Hamiltonian $$\label{eq:HRabi} \hat H_{\rm R} = \hbar \omega_{\rm r}\hat a^{\dag}\hat a+\hbar \omega_{\rm q} \hat \sigma_+\hat \sigma_- + \hbar g_{01}(\hat a^{\dag}+\hat a)\hat \sigma_{\rm x}.$$ Here $g_{01}=g\langle 1|\hat n|0\rangle$, the qubit annihilation operator is $\hat \sigma_- = |0\rangle\langle 1|$, and the Pauli spin matrix $\hat \sigma_{\rm x}=\hat \sigma_-+\hat \sigma_+$. The Rabi Hamiltonian is a good approximation to the pendulum-oscillator system as long as the corrections for the low-energy eigenvalues and eigenstates, arising from the higher excited states of the pendulum, are taken properly into account [@boissonneault2009a; @boissonneault2012b; @boissonneault2012c]. Further, by performing a rotating-wave approximation, we obtain the standard Jaynes–Cummings model $$\hat H_{\rm JC} = \hbar \omega_{\rm r}\hat a^{\dag}\hat a+\hbar \omega_{\rm q} \hat \sigma_+\hat \sigma_- + \hbar g_{01}(\hat a^{\dag}\hat \sigma_-+\hat a\hat \sigma_+),\label{eq:HJC}$$ which also results from a truncation of Eq. (\[eq:ManyStatesHam\_simple\]) to the low-energy subspace spanned by the lowest two eigenstates of the transmon.
Apart from the non-degenerate ground state $|0,0\rangle$ with zero energy, the excited-state eigenenergies of the Jaynes–Cummings Hamiltonian in Eq. (\[eq:HJC\]) form a characteristic doublet structure. In the resonant case, the excited-state eigenenergies and the corresponding eigenstates are given by $$\begin{aligned} E_{n_{r},\pm} &=& n_{r}\hbar \omega_{\rm r} \pm \sqrt{n_{r}}\hbar g_{01}, \label{eq:JCener}\\ |n_{r},\pm\rangle &=& \frac{1}{\sqrt{2}}(|n_{r},0\rangle \pm |n_{r}-1,1\rangle). \label{eq:JCstates}\end{aligned}$$ Here, $n_r=1,2,\ldots$ and we have denoted eigenstates of the uncoupled Jaynes–Cummings Hamiltonian with $\{|n_{r},0\rangle , |n_{r},1\rangle\}$ where $|n_{r}\rangle$ are the eigenstates of the resonator with $n_{r}=0,1,\ldots$. Due to the rotating-wave approximation, the Jaynes–Cummings Hamiltonian commutes with the excitation-number operator in Eq. (\[eq:exitationN\_K\]) truncated to two states and represented as $$\hat N = \hat a^{\rm \dag}\hat a + \hat \sigma_+\hat \sigma_-. \label{eq:exitationN_2}$$ Thus, they have joint eigenstates and, in addition, the excitation number $N$ is a conserved quantity. For a doublet with given $n_{r}$, the eigenvalue of the excitation-number operator is $N=n_{r}$, while for the ground state $N=0$. We note that the transition energies between the Jaynes–Cummings eigenstates depend nonlinearly on $N$. Especially, the transition energies from the ground state $\vert 0,0\rangle$ to the eigenstate $\vert n_{r},\pm \rangle$ are given by $n_{r}\hbar\omega_{\rm r} \pm \sqrt{n_{r}}\hbar g_{01}$. Models for the driven-dissipative Josephson pendulum coupled to the harmonic oscillator {#sec:III} ======================================================================================= Here, we provide a master equation approach that incorporates the effects of the drive and dissipation to the coupled system. 
Previous studies on this system have typically truncated the transmon to the low-energy subspace spanned by the two lowest energy eigenstates [@Bishop2010; @Reed2010], or treated the dissipation in the conventional Lindblad formalism [@bishop2009]. Recent studies [@pietikainen2017; @pietikainen2018; @verney2018; @lescanne2018] have treated the dissipation at the detuned limit using the Floquet–Born–Markov approach. We will apply a similar formalism for the case where the pendulum and resonator are in resonance in the low-energy subspace. Especially, we study the driven-dissipative transition between the low-energy and the free rotor regimes of the pendulum in terms of the dependence of the number $N_{\rm r}$ of quanta in the resonator on the drive power. Coupling to the drive --------------------- The system shown in Fig. \[fig:oscpendevals\] and described by the Hamiltonian in Eq. (\[eq:H0\]) can be excited by coupling the resonator to a monochromatic driving signal modeled with the Hamiltonian $$\label{eq:Hd} \hat H_{\rm d} = \hbar A \cos(\omega_{\rm d}t)[\hat a^{\dag}+\hat a],$$ where $A$ and $\omega_{\rm d}$ are the amplitude and the angular frequency of the drive, respectively. This results in a total system Hamiltonian $\hat H_{\rm S} = \hat H_{0} + \hat{H}_{\rm d}$. For low-amplitude drive, only the first two states of the pendulum have a significant occupation and, thus, the Hamiltonian $\hat H_0$ can be truncated into the form of the well-known Rabi Hamiltonian in Eq. (\[eq:HRabi\]), which in turn, under the rotating-wave approximation, yields the standard Jaynes–Cummings Hamiltonian in Eq. (\[eq:HJC\]). The transitions induced by the drive in the Jaynes–Cummings system are subjected to a selection rule – the occupation number can change only by one, i.e. $N \rightarrow N\pm 1$. 
This follows from the relations $$\begin{aligned} \langle n_{r},\pm|(\hat a^{\dag}+\hat a)|0,0\rangle &=& \frac{1}{\sqrt{2}}\delta_{n_{r},1}, \label{eq:selection1}\\ \langle n_{r},\pm|(\hat a^{\dag}+\hat a)|\ell_{r},\pm \rangle &=& \frac{1}{2}\left(\sqrt{n_{r}}+\sqrt{n_{r}-1}\right)\delta_{n_{r},\ell_{r}+1} \nonumber\\ &+&\frac{1}{2}\left(\sqrt{n_{r}+1}+\sqrt{n_{r}}\right)\delta_{n_{r},\ell_{r}-1}.\label{eq:selection2}\end{aligned}$$ As a consequence, the system climbs up the Jaynes–Cummings ladder by one step at a time. Particularly, a system in the ground state is coupled directly only to the states $|1,\pm\rangle$. Indeed, in such a system the Jaynes–Cummings ladder has been observed [@fink2008], as well as the effect of strong drive in the off-resonant [@pietikainen2017] and on-resonant case [@fink2017]. The Jaynes–Cummings model offers a good starting point for understanding the phenomenon of photon blockade in the pendulum-resonator system, which will be discussed later in detail. Indeed, it is apparent from Eq. (\[eq:JCener\]) that, as the system is driven externally by not too intense fields, the excitation to higher levels in the resonator is suppressed by the higher levels being off-resonant, due to the nonlinearity induced by the coupling. This is referred to as photon blockade. As the drive amplitude increases further, the entire Jaynes–Cummings hierarchy breaks down [@carmichael2015]. However, in weakly anharmonic systems such as the transmon, as the drive amplitude is increased, the higher excited states of the Josephson pendulum become occupied and the two-state approximation becomes insufficient. As a consequence, the system has to be modeled with a larger number of transmon states, as in Eq. (\[eq:ManyStatesHam\]). In the resonant case, the need to take into account the second excited state of the transmon has been pointed out already in Ref. [@fink2017].
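The doublet energies of Eq. (\[eq:JCener\]) and the resulting blockade detuning are easy to check numerically. Below is a minimal sketch on resonance, with $\omega_{\rm r}=\omega_{\rm q}=1$ and $g_{01}=0.04$ as in Table \[tab:params1\]; the Fock cutoff is an arbitrary choice, and only eigenvalues well below it are compared:

```python
import numpy as np

M = 12                     # resonator Fock cutoff (assumption)
wr = wq = 1.0              # resonant case, units of omega_r
g01 = 0.04

a = np.diag(np.sqrt(np.arange(1, M)), 1)     # resonator annihilation
sm = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma_- = |0><1|

# Jaynes-Cummings Hamiltonian of Eq. (HJC), hbar = 1
H_jc = (wr * np.kron(a.T @ a, np.eye(2))
        + wq * np.kron(np.eye(M), sm.T @ sm)
        + g01 * (np.kron(a.T, sm) + np.kron(a, sm.T)))

evals = np.sort(np.linalg.eigvalsh(H_jc))

# analytic spectrum: ground state 0 plus doublets n*wr +/- sqrt(n)*g01
pred = sorted([0.0] + [n * wr + s * np.sqrt(n) * g01
                       for n in (1, 2, 3) for s in (-1.0, 1.0)])

# blockade: successive steps along the lower branch are detuned
step1 = evals[1] - evals[0]   # 0 -> |1,->
step2 = evals[3] - evals[1]   # |1,-> -> |2,->
```

The mismatch between the two steps, $(2-\sqrt{2})\,g_{01}$, is precisely the nonlinearity that detunes the second excitation and produces the blockade.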
Moreover, at larger drive amplitudes, the pendulum escapes the low-energy subspace defined by the states localized in a well of the cosine potential and the unbound free rotor states also become occupied [@pietikainen2017; @pietikainen2018; @lescanne2018; @verney2018] even in the case of strongly detuned drive frequency. In the limit of very high drive power, the pendulum behaves as a free rotor and the nonlinear potential can be neglected. Consequently, the resonance frequency of the system is set by the bare resonator frequency, instead of the normal modes. Dissipative coupling -------------------- The dissipation is treated by modeling the environment as a thermal bosonic bath which is coupled bilinearly to the resonator. The Hamiltonian of the driven system coupled to the bath can be written as $$\label{eq:totHam} \hat H = \hat H_{\rm S} + \hat H_{\rm B} + \hat H_{\rm int},$$ where $$\begin{aligned} \hat H_{\rm B} &=& \hbar \sum_k \Omega_k\hat b_k^{\dag}\hat b_k,\\ \hat H_{\rm int} &=& \hbar (\hat a^{\dag}+\hat a) \sum_k g_k(\hat b_k^{\dag}+\hat b_k).\label{eq:dissint}\end{aligned}$$ Above, $\{\hat b_k\}$, $\{\Omega_k\}$, and $\{g_k\}$ are the annihilation operators, the angular frequencies, and the coupling frequencies of the bath oscillators. We use this model in the derivation of a master equation for the reduced density operator of the system. We proceed in the conventional way and assume the factorized initial state $\hat \rho(0) = \hat\rho_{\rm S}(0)\otimes \hat \rho_{\rm B}(0)$, apply the standard Born and Markov approximations, trace over the bath, and perform the secular approximation. As a result, we obtain a master equation in the standard Lindblad form. Lindblad master equation {#sec:Lindblad} ------------------------ Conventionally, the dissipation in the circuit QED setup has been treated using independent Lindblad dissipators for the resonator and for the pendulum.
Formally, this can be achieved by coupling the pendulum to another heat bath formed by an infinite set of harmonic oscillators. This interaction can be described with the Hamiltonian $$\label{eq:transdissint} \hat H_{\rm int}^{\rm t} = \hbar \hat n \sum_k f_k(\hat c_k^{\dag}+\hat c_k),$$ where $\{f_k\}$ and $\{\hat c_k\}$ are the coupling frequencies and the annihilation operators of the bath oscillators. The bath is coupled to the transmon through the charge operator $\hat n$ which is the typical source of decoherence in the charge-based superconducting qubit realizations. By following the typical Born–Markov derivation of the master equation for the uncoupled subsystems, one obtains a Lindblad equation where the dissipators induce transitions between the eigenstates of the uncoupled ($g=0$) system [@Breuer2002; @scala2007; @beaudoin2011; @tuorila2017] $$\begin{split} \frac{{\,\text{d}\hat\rho\,}}{{\,\text{d}t\,}} =& -\frac{i}{\hbar}[\hat{H}_{\rm S},\hat{\rho}] +\kappa[n_{\rm th}(\omega_{\rm r})+1]\mathcal{L}[\hat{a}]\hat\rho \\ &+\kappa n_{\rm th}(\omega_{\rm r})\mathcal{L}[\hat{a}^\dagger]\hat\rho \\ &+\sum_{k\ell} \Gamma_{k\ell}\mathcal{L}[|\ell\rangle\langle k|]\hat{\rho}, \end{split} \label{eq:LindbladME}$$ where $\mathcal{L}[\hat{A}]\hat\rho = \frac12 (2\hat{A}\hat\rho \hat{A}^{\dag} - \hat{A}^{\dag}\hat{A}\hat\rho-\hat\rho \hat{A}^{\dag}\hat{A})$ is the Lindblad superoperator and $n_{\rm th}(\omega)=1/[e^{\hbar\omega/(k_{\rm B} T)}-1]$ is the Bose–Einstein occupation. Note that the treatment of dissipation as superoperators acting separately on the qubit and on the resonator is valid if their coupling strength and the drive amplitude are weak compared to the transition frequencies of the uncoupled system-environment. Above, we have also assumed an ohmic spectrum for the resonator bath. 
In the Lindblad master equation (\[eq:LindbladME\]), we have included the effects arising from the coupling $g$ and the drive into the coherent von Neumann part of the dynamics. The first two incoherent terms cause transitions between the eigenstates of the resonator and arise from the interaction Hamiltonian in Eq. (\[eq:dissint\]). The strength of this interaction is characterized with the spontaneous emission rate $\kappa$. The last term describes the relaxation, excitation, and dephasing of the transmon caused by the interaction Hamiltonian in Eq. (\[eq:transdissint\]). The transition rates $\Gamma_{k\ell}$ between the transmon eigenstates follow from Fermi’s golden rule as $$\Gamma_{k\ell} = |\langle \ell | \hat n| k\rangle|^2 S(\omega_{k\ell}).$$ In our numerical implementation, we have assumed that the fluctuations of the transmon bath can also be characterized with an ohmic spectrum $S(\omega)=\frac{\gamma_0\omega}{1-\exp[-\hbar\omega/k_{\rm B}T]}$, where $\gamma_0$ is a dimensionless factor describing the bath-coupling strength. We have also denoted the transition frequencies of the transmon with $\omega_{k\ell} = \omega_{\ell}-\omega_k$. Here, the magnitude of the transition rate from state $|k\rangle$ to the state $|\ell\rangle$ is given by the corresponding matrix element of the coupling operator $\hat n$ and the coupling strength $\gamma_0$. We note that in a typical superconducting resonator-transmon realization one has $\gamma=\gamma_0 \omega_{01}\ll \kappa$. In this so-called bad-cavity limit, the effects of the transmon bath are negligible especially if the coupling frequency $g$ with the resonator is large. Thus, the main contribution of the transmon dissipators in the master equation Eq. (\[eq:LindbladME\]) is that they result in faster convergence in the numerical implementation of the dynamics.
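The ohmic spectrum above satisfies the detailed-balance (KMS) condition $S(\omega)/S(-\omega)=e^{\hbar\omega/(k_{\rm B}T)}$, which guarantees that the upward and downward rates $\Gamma_{k\ell}$ drive the transmon toward a thermal state. A short check, with $\hbar=k_{\rm B}=1$ and an arbitrary illustrative $\gamma_0$:

```python
import math

def n_th(w, T):
    """Bose-Einstein occupation n_th(w) = 1/(exp(w/T) - 1), hbar = k_B = 1."""
    return 1.0 / math.expm1(w / T)

def S_ohmic(w, T, gamma0=1e-4):
    """Ohmic bath spectrum S(w) = gamma0 * w / (1 - exp(-w/T));
    positive for both signs of w, larger for emission (w > 0)."""
    return gamma0 * w / (-math.expm1(-w / T))

T = 0.13      # k_B*T in units of hbar*omega_r, as in Table 1
w01 = 1.0     # lowest transmon transition frequency

ratio = S_ohmic(w01, T) / S_ohmic(-w01, T)   # detailed-balance ratio
```

The same spectrum can be split as $S(\omega)=\gamma_0\omega\,[n_{\rm th}(\omega)+1]$, separating stimulated and spontaneous contributions, which is the form entering the first two dissipators of Eq. (\[eq:LindbladME\]).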
Floquet–Born–Markov formalism {#sec:FBM} ----------------------------- The dissipators in the Lindblad model above are derived under the assumption of weak driving and weak coupling between the transmon and the resonator. However, both the driving and the coupling affect the eigenstates of the system and, thus, have to be taken into account in the derivation of the master equation. This can be achieved in the so-called Floquet–Born–Markov approach, where the drive and the transmon-resonator coupling are explicitly included throughout the derivation of the dissipators [@tuorila2013; @pietikainen2017; @pietikainen2018; @lescanne2018; @verney2018]. For this purpose, we represent the system in terms of the quasienergy states which can be obtained only numerically. Since the drive in Eq. (\[eq:Hd\]) is $\tau=2\pi/\omega_{\rm d}$-periodic, the solution to the time-dependent Schrödinger equation $$\label{eq:tdse} i\hbar\frac{{\,\text{d}}\,}{{\,\text{d}t\,}}|\Psi(t)\rangle = \hat{H}_{\rm S}(t) |\Psi(t)\rangle,$$ corresponding to the Hamiltonian $\hat{H}_{\rm S}(t)$ in Eq. (\[eq:totHam\]), can be written in the form $$\label{eq:FloqState} |\Psi(t)\rangle = e^{-i\varepsilon t/\hbar} |\Phi(t)\rangle,$$ where $\varepsilon$ are the quasienergies and $|\Phi(t)\rangle$ are the corresponding $\tau$-periodic quasienergy states. By defining the unitary time-propagator as $$\label{eq:FloqProp} \hat{U}(t_2,t_1)|\Psi(t_1)\rangle =|\Psi(t_2)\rangle,$$ one can rewrite the Schrödinger equation (\[eq:tdse\]) in the form $$i\hbar\frac{{\,\text{d}}\,}{{\,\text{d}t\,}}\hat{U}(t,0) = \hat{H}_{\rm S}(t)\hat{U}(t,0).$$ Using Eqs. (\[eq:FloqState\]) and (\[eq:FloqProp\]), we obtain $$\begin{aligned} \hat{U}(\tau,0)|\Phi(0)\rangle &=& e^{-i\varepsilon \tau/\hbar} |\Phi(0)\rangle, \label{eq:QEproblem}\end{aligned}$$ from which the quasienergies $\varepsilon_\alpha$ and the corresponding quasienergy states $|\Phi_\alpha(0)\rangle$ can be solved. 
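The quasienergy problem of Eq. (\[eq:QEproblem\]) can be illustrated by a minimal time-slicing computation of the monodromy operator $\hat U(\tau,0)$. For brevity the sketch below uses a driven two-level system instead of the full pendulum-resonator Hamiltonian; the parameters and step count are our own choices:

```python
import numpy as np

def quasienergies(wq=1.0, A=0.1, wd=0.98, steps=2000):
    """Quasienergies of H(t) = wq|1><1| + A*cos(wd*t)*sigma_x, obtained
    from the eigenphases of the one-period propagator U(tau, 0)."""
    tau = 2 * np.pi / wd
    dt = tau / steps
    proj1 = np.diag([0.0, 1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    U = np.eye(2, dtype=complex)
    for j in range(steps):
        h = wq * proj1 + A * np.cos(wd * (j + 0.5) * dt) * sx
        e, v = np.linalg.eigh(h)             # exact short-time propagator
        U = (v * np.exp(-1j * e * dt)) @ v.conj().T @ U
    # U|phi> = exp(-i*eps*tau)|phi>: fold eps into the Brillouin zone [0, wd)
    eps = (-np.angle(np.linalg.eigvals(U)) / tau) % wd
    return np.sort(eps), U

eps, U = quasienergies()
```

As a sanity check, in the undriven limit the quasienergies reduce to the bare energies folded into the first Brillouin zone, here $\{0,\,\omega_{\rm q}\bmod\omega_{\rm d}\}$; for the full model one would replace the $2\times2$ Hamiltonian by the truncated $\hat H_{\rm S}(t)$ and diagonalize $\hat U(\tau,0)$ in the same way.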
Using the propagator $\hat U$, one can obtain the quasienergy states for all times from $$\hat{U}(t,0)|\Phi_\alpha(0)\rangle = e^{-i\varepsilon_\alpha t/\hbar} |\Phi_\alpha(t)\rangle.$$ Due to the periodicity of $|\Phi_\alpha(t)\rangle$, it is sufficient to find the quasienergy states for the time interval $t\in[0,\tau ]$. Also, if $\varepsilon_\alpha$ is a solution for Eq. (\[eq:QEproblem\]), then $\varepsilon_\alpha +\ell\hbar\omega_{\rm d}$ is also a solution. Indeed, all solutions of Eq. (\[eq:QEproblem\]) can be obtained from the solutions of a single energy interval of $\hbar\omega_{\rm d}$. These energy intervals are called Brillouin zones, in analogy with the terminology used in solid-state physics for periodic potentials. The master equation for the density operator in the quasienergy basis can be written as [@Blumel1991; @Grifoni1998] $$\label{eq:FBM} \begin{split} \dot{\rho}_{\alpha\alpha}(t) &= \sum_{\nu} \left[\Gamma_{\nu\alpha}\rho_{\nu\nu}(t)-\Gamma_{\alpha\nu}\rho_{\alpha\alpha}(t)\right],\\ \dot{\rho}_{\alpha\beta}(t) &= -\frac12 \sum_{\nu}\left[\Gamma_{\alpha\nu}+\Gamma_{\beta\nu}\right]\rho_{\alpha\beta}(t), \ \ \alpha\neq \beta, \end{split}$$ where $$\begin{split} \Gamma_{\alpha\beta}&=\sum_{\ell=-\infty}^{\infty} \left[\gamma_{\alpha\beta \ell}+n_{\rm th}(|\Delta_{\alpha\beta \ell}|)\left(\gamma_{\alpha\beta \ell}+\gamma_{\beta \alpha -\ell}\right)\right],\\ \gamma_{\alpha\beta \ell} &= \frac{\pi}{2} \kappa \theta(\Delta_{\alpha\beta\ell})\frac{\Delta_{\alpha\beta\ell}}{\omega_{\rm r}}|X_{\alpha\beta\ell}|^2. \end{split}$$ Above, $\theta(\omega)$ is the Heaviside step-function and $\hbar\Delta_{\alpha \beta \ell} = \varepsilon_{\alpha} - \varepsilon_{\beta} + \ell\hbar\omega_{\rm d}$ is the energy difference between the states $\alpha$ and $\beta$ in Brillouin zones separated by $\ell$. 
Also, $$X_{\alpha\beta \ell} = \frac{1}{\tau}\int_{t_0}^{t_0 +\tau} {\,\text{d}t\,} e^{-i\ell\omega_d t} \langle \Phi_\alpha(t)|(\hat{a}^\dagger+\hat{a})|\Phi_\beta(t)\rangle,$$ where $t_0$ is some initial time after the system has reached a steady state. From Eq. (\[eq:FBM\]), we obtain the occupation probabilities $p_\alpha=\rho_{\alpha\alpha}(t\rightarrow \infty)$ in the steady state as $$p_{\alpha} = \frac{\sum_{\nu\neq \alpha} \Gamma_{\nu\alpha}p_{\nu}}{\sum_{\nu\neq \alpha}\Gamma_{\alpha\nu}},$$ and the photon number $$\label{eq:FBMNr} N_{\rm r} = \sum_\alpha p_\alpha\langle \hat{a}^\dagger\hat{a}\rangle_\alpha,$$ where $$\langle \hat{a}^\dagger\hat{a}\rangle_\alpha= \frac{1}{\tau}\int_{t_0}^{t_0 +\tau} {\,\text{d}t\,} \langle \Phi_\alpha(t)|\hat{a}^\dagger\hat{a}|\Phi_\alpha(t)\rangle,$$ is the photon number in a single quasienergy state. The occupation probability for the transmon state $\vert k\rangle$ is given by $$\label{eq:FBMPk} P_k= \frac{1}{\tau}\sum_\alpha p_\alpha\int_{t_0}^{t_0 +\tau} {\,\text{d}t\,} \langle \Phi_\alpha(t)|k\rangle\langle k|\Phi_\alpha(t)\rangle \,.$$ We emphasize that this method assumes weak coupling to the bath but no such restrictions are made for the drive and pendulum-resonator coupling strengths. As a consequence, the dissipators induce transitions between the quasienergy states of the driven coupled system. Parameters ---------- The parameter space is spanned by seven independent parameters which are shown in Table \[tab:params1\].

  Symbol             Parameter                              Value
  ------------------ ------------------------------------   -------
  $\omega_{\rm q}$   qubit frequency                        1.0
  $\omega_{\rm d}$   drive frequency                        0.98
  $\omega_{\rm p}$   plasma oscillation frequency           1.08
  $g$                coupling frequency                     0.04
  $\kappa$           resonator dissipation rate             0.002
  $k_{\rm B}T$       thermal energy                         0.13
  $E_{\rm C}$        charging energy                        0.07
  $\eta$             energy ratio $E_{\rm J}/E_{\rm C}$     30

  : Parameters of the driven and dissipative oscillator-pendulum system.
The numerical values of the angular frequencies and energies used in the numerical simulations are given in units of $\omega_{\rm r}$ and $\hbar\omega_{\rm r}$, respectively. We note that $\omega_{\rm q}$ is determined by $E_{\rm C}$ and $\eta$, see the text.[]{data-label="tab:params1"} We fix the values of the energy ratio $\eta=E_{\rm J}/E_{\rm C}$ and the coupling strengths $g$ and $\kappa$. The ratio $\eta$ sets the number $K_{\rm b}$ of bound states in the pendulum, see Appendix \[app:eigenvalue\], but does not qualitatively affect the response. We have used a moderate value of $\eta$ in the transmon regime, in order to keep $K_{\rm b}$ low, allowing a more detailed discussion of the transition between the low-energy oscillator and free-rotor limits. We use the Born, Markov, and secular approximations in the description of dissipation, which means that the value of $\kappa$ has to be smaller than the system frequencies. In addition, we work in the experimentally relevant strong coupling regime where the oscillator-pendulum coupling $g\gg \kappa$. The choice of parameters is similar to that of a recently realized circuit with the same geometry [@pietikainen2017]. The transition energies of the transmon are determined by the Josephson energy $E_{\rm J}$ and by the charging energy $E_{\rm C}$, which can be adjusted by the design of the shunting capacitor $C_{\rm B}$, see Fig. \[fig:oscpendevals\]. The transition energy between the lowest two energy eigenstates is given by $\hbar\omega_{\rm q} \approx \sqrt{8E_{\rm J}E_{\rm C}}-E_{\rm C} = E_{\rm C} (\sqrt{8\eta}-1)$. We will study the onset of the nonlinearities for different drive detunings $\delta_{\rm d}=\omega_{\rm d}-\omega_{\rm r}$ as a function of the drive amplitude $A$. We are especially interested in the resonant case $\delta_{\rm q}=\omega_{\rm q}-\omega_{\rm r}=0$. The detuned case has been previously studied in more detail in Refs. [@pietikainen2017; @pietikainen2018; @lescanne2018; @verney2018].
We have used a temperature value of $k_{\rm B}T/(\hbar \omega_{\rm r})=0.13$ which corresponds to $T\approx 30$ mK for a transmon with $\omega_{\rm q}/(2\pi) = 5$ GHz. Numerical results {#sec:IV} ================= Classical system ---------------- Classically, we can understand the behavior of our system as follows: the pendulum and the resonator form a coupled system, whose normal modes can be obtained. However, because the pendulum is nonlinear, the normal-mode frequencies of the coupled system depend on the oscillation amplitude of the pendulum. The resonator also acts as a filter for the drive, which is thus applied to the pendulum. As the oscillation amplitude of the pendulum increases, the normal-mode frequency shifts, an effect which is responsible for photon blockade. Eventually the pendulum reaches the free rotor regime, where the Josephson energy becomes negligible. As a consequence, the nonlinearity no longer plays any role, and the resulting eigenmode of the system is that of the bare resonator. We first solve the classical equation of motion (see Appendix \[app:classeom\]) for the driven and damped resonator-transmon system. We study the steady-state occupation $N_{\rm r}$ of the resonator as a function of the drive amplitude. Classically, one expects that the coupling to the transmon causes deviations from the bare resonator occupation $$\label{eq:anocc} N_{\rm bare} = \frac14\frac{A^2}{\delta_{\rm d}^2 + \kappa^2/4}.$$ We emphasize that $N_{\rm bare} \leq A^2/\kappa^2$ where the equality is obtained if the drive is in resonance, i.e. if $\delta_{\rm d}=0$. The numerical data for $\delta_{\rm d}/\omega_{\rm r} =-0.02$ is shown in Fig. \[fig:classsteps\]. We compare the numerical data against the bare-resonator photon number in Eq.
(\[eq:anocc\]), and against the photon number of the linearized system, see Appendix \[app:classeom\], $$\label{eq:linearNr} N_{\rm lin} = \frac{A^2}{4}\frac{1}{\left(\delta_{\rm d}-g_{\rm eff}^2\frac{\delta_{\rm p}}{\delta_{\rm p}^2+\gamma^2/4}\right)^2+\left(\frac{\kappa}{2}+g_{\rm eff}^2\frac{\gamma/2}{\delta_{\rm p}^2+\gamma^2/4}\right)^2},$$ where $\delta_{\rm p} = \omega_{\rm d}-\omega_{\rm p}$, $\hbar \omega_{\rm p} = \sqrt{8E_{\rm J}E_{\rm C}}$, $g_{\rm eff} = g\sqrt[4]{\eta/32}$, and $\gamma$ is the dissipation rate of the pendulum. The above result is obtained by linearizing the pendulum potential, which results in a system that is equivalent to two coupled harmonic oscillators. We find in Fig. \[fig:classsteps\] that for small drive amplitude $A/\kappa = 0.005$, the steady state of the resonator photon number is given by that of the linearized system. As a consequence, both degrees of freedom oscillate at the drive frequency and the system is classically stable. The small deviation between the numerical and analytic steady-state values is caused by the rotating-wave approximations that were made for the coupling and the drive in the derivation of Eq. (\[eq:linearNr\]). If the drive amplitude is increased to $A/\kappa =7$, the nonlinearities caused by the cosinusoidal Josephson potential generate chaotic behavior in the pendulum. As a consequence, the photon number does not find a steady state but, instead, displays aperiodic chaotic oscillations around some fixed value between those of the bare resonator and the linearized system. This value can be found by studying the long-time average in the steady state. For very high drive amplitude $A/\kappa = 500$, the photon number in the classical system is given by that of the bare resonator in Eq. (\[eq:anocc\]). Physically this means that for strong driving, the pendulum experiences rapid free rotations and, as a consequence, its contribution to the photon dynamics is zero on average. In Fig.
\[fig:Occupation7\](a), we study in more detail how the classical steady-state photon number of the resonator changes as the coupled system goes through the transition between the linearized oscillations in the weak driving regime and the bare-resonator oscillations for strong driving. In the absence of driving, the steady-state photon number is zero in accordance with Eq. (\[eq:linearNr\]). For low drive amplitudes, the resonator-transmon system can be approximated as a driven and damped Duffing oscillator. We show in Appendix \[app:classeom\] that the system has one stable solution for drive amplitudes $A<A_{\rm min}$ and $A>A_{\rm crit}$, and two stable solutions for $A_{\rm min}<A<A_{\rm crit}$, where $$\begin{aligned} A_{\rm min} &=& \tilde\gamma\sqrt{2(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm p}},\label{eq:duffminimal}\\ A_{\rm crit} &=& \sqrt{\frac{8}{27}}\sqrt{(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2)^3}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm d}\omega_{\rm p}},\label{eq:duffan}\end{aligned}$$ where we have defined the renormalized oscillator frequency and transmon dissipation rate as $\tilde{\omega}_{\rm r}^2 = \omega_{\rm r}^2 - g^2\hbar \omega_{\rm r}/(4E_{\rm C})$ and $\tilde{\gamma} = \gamma+gg_1\kappa\omega_{\rm d}^2/[(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2]$, respectively, the classical oscillation frequency of the linearized transmon as $\hbar\omega_{\rm p}=\sqrt{8E_{\rm J}E_{\rm C}}$, and the renormalized linearized transmon frequency as $\tilde{\omega}_{\rm p}^2 = \omega_{\rm p}^2-g^2 \omega_{\rm d}^2/(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)[\hbar \omega_{\rm r}/(4E_{\rm C})]$.
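The classical photon-number formulas above can be evaluated directly. The following sketch implements Eqs. (\[eq:anocc\]) and (\[eq:linearNr\]); the parameter values are illustrative assumptions (only $\delta_{\rm d}$, $\gamma$, $g$, and the relation $g_{\rm eff}=g\sqrt[4]{\eta/32}$ follow the text), and the assertion only checks the internal consistency that $N_{\rm lin}$ reduces to $N_{\rm bare}$ when the coupling is switched off:

```python
import math

def n_bare(A, delta_d, kappa):
    """Bare-resonator steady-state photon number, Eq. (anocc)."""
    return 0.25 * A**2 / (delta_d**2 + kappa**2 / 4)

def n_lin(A, delta_d, delta_p, g_eff, gamma, kappa):
    """Photon number of the linearized two-oscillator system, Eq. (linearNr)."""
    denom = delta_p**2 + gamma**2 / 4
    real = delta_d - g_eff**2 * delta_p / denom
    imag = kappa / 2 + g_eff**2 * (gamma / 2) / denom
    return 0.25 * A**2 / (real**2 + imag**2)

# illustrative parameters in units of omega_r (kappa is an assumed value)
kappa, gamma, delta_d = 2e-3, 2e-4, -0.02
g, eta = 0.04, 30.0
g_eff = g * (eta / 32) ** 0.25     # g_eff = g * (eta/32)^(1/4)
delta_p = delta_d                  # assume omega_p = omega_r for this sketch

A = 0.005 * kappa
print(n_bare(A, delta_d, kappa), n_lin(A, delta_d, delta_p, g_eff, gamma, kappa))

# consistency check: with the coupling off, Eq. (linearNr) reduces to Eq. (anocc)
assert math.isclose(n_lin(A, delta_d, delta_p, 0.0, gamma, kappa),
                    n_bare(A, delta_d, kappa))
```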
For amplitudes $A<A_{\rm min}$, the classical system behaves as a two-oscillator system and the photon number has the typical quadratic dependence on the drive amplitude in Eq. (\[eq:linearNr\]). As the drive amplitude becomes larger than $A_{\rm min}$, deviations from the linearized model emerge. In addition, the system becomes bistable. If $A \approx A_{\rm crit}$, the number of stable solutions for the Duffing oscillator is reduced from two to one. This is displayed by the abrupt step in the photon number of the classical solution in Fig. \[fig:Occupation7\] around $A_{\rm crit}/\kappa =1.2$. The remaining high-amplitude stable solution appears as a plateau which reaches up to the drive amplitude $A/\kappa \approx 5.6$. If the drive amplitude is further increased, the higher order terms beyond the Duffing approximation render the motion of the classical system chaotic, as described already in Fig. \[fig:classsteps\]. For large drives, the classical photon number approaches asymptotically the photon number of the bare resonator. ![Classical dynamics of the resonator occupation $N_{\rm r}$ of the driven and dissipative resonator-transmon system. We show data for the linear ($A/\kappa = 0.005$), chaotic ($A/\kappa = 7$), and bare-resonator ($A/\kappa = 500$) regimes. The bare-oscillator occupation in the steady state is given by Eq. (\[eq:anocc\]) and indicated with dashed lines. We also show with dot-dashed lines the steady-state photon numbers for the linearized system, as given in Eq. (\[eq:linearNr\]). We have used the pendulum dissipation rate $\gamma/\omega_{\rm r} =2\times 10^{-4}$. The other parameters are listed in Table \[tab:params1\].[]{data-label="fig:classsteps"}](fig2){width="1.0\linewidth"} Quantum description {#sec:quantdesc} ------------------- The transition between the motion of linearized and bare-resonator oscillations is characteristic of oscillator-pendulum systems.
However, we show here that in the quantum mechanical context, the onset of the nonlinear dynamical behaviour turns out to be quantitatively different from that provided by the above classical model. This was also observed in a recent experimental realization with superconducting circuits [@pietikainen2017]. In the quantum-mechanical treatment, we calculate the steady-state photon number in the resonator as a function of the drive amplitude using the Floquet–Born–Markov master equation presented in Sec. \[sec:FBM\]. We have confirmed that for the values of the drive amplitude used here, the simulation has converged with a truncation of seven transmon states and 60 resonator states. We compare the quantum results against those given by the classical equation of motion and also study deviations from the results obtained with the two-state truncation of the transmon. In Fig. \[fig:Occupation7\], we present the results corresponding to gate charge $n_{\rm g}=0$, where the resonator, the transmon, and the drive are nearly resonant at low drive amplitudes. The parameters are the same as in Fig. \[fig:classsteps\] and listed in Table \[tab:params1\]. ![Onset of the nonlinearities in the driven system. (a) The steady-state photon number $N_{\rm r}$ as a function of the drive amplitude. We compare the Floquet–Born–Markov (FBM) simulation with seven transmon states against the corresponding solutions for the Rabi Hamiltonian and the classical system. The classical region of bistability occurs between $A_{\rm min}/\kappa = 0.97$ and $A_{\rm crit}/\kappa=1.2$, given by Eqs. (\[eq:duffminimal\]) and (\[eq:duffan\]), respectively. The classical simulation demonstrates switching between the two stable solutions at $A\approx A_{\rm crit}$. We also show the photon numbers of the linearized system and the bare resonator, as given by Eqs. (\[eq:linearNr\]) and (\[eq:anocc\]), respectively. Note that both axes are shown in logarithmic scale.
(b) Occupation probabilities $P_k$ of the transmon eigenstates calculated using FBM. We indicate the regime of classical response with the shaded region in both figures. (c) Order parameter $\Xi$ defined in Eq. (\[eq:orderparameter\]). We have used $n_{\rm g}=0$ and the drive detuning $\delta_{\rm d}/\omega_{\rm r} = -0.02$. Other parameters are listed in Table \[tab:params1\].[]{data-label="fig:Occupation7"}](fig3){width="1.0\linewidth"} First, we notice in Fig. \[fig:Occupation7\](a) that even in the absence of driving there always exists a finite photon occupation of $N_{\rm r} \approx 10^{-3}$ in the ground state, contrary to the classical solution, which approaches zero. At zero temperature, the existence of these ground-state photons [@lolli2015] originates from the terms in the interaction Hamiltonian that do not conserve the number of excitations and are neglected in the rotating-wave approximation resulting in Eq. (\[eq:ManyStatesHam\_simple\]). For the two-state truncation of the transmon, one can derive a simple analytic result for the ground-state photon number by treating these terms as a small perturbation. To second order in the perturbation parameter $g/\omega_{\rm r}$, one obtains that the number of ground-state photons is given by $N_{\rm r} \approx (g/2\omega_{\rm r})^2$. We have confirmed that our simulated photon number at zero driving is in accordance with this analytic result if $T=0$ and $g/\omega_{\rm r}\ll 1$. The photon number at zero driving obtained in Fig. \[fig:Occupation7\](a) is slightly higher due to additional thermal excitation, since in the simulations we use a finite temperature, see Table \[tab:params1\]. As was discussed in the previous section, the resonator photon number of a classical system increases quadratically with the drive amplitude. For amplitudes $A<A_{\rm crit}$, the classical system can be approximated with a linearized model formed by two coupled harmonic oscillators.
However, in the quantum case the energy levels are discrete and, thus, the system responds only to a drive which is close to resonance with one of the transitions. In addition, the energy levels have non-equidistant separations, which leads to a reduction of the photon number compared to the corresponding classical case, referred to as the photon blockade. This is also apparent in Fig. \[fig:Occupation7\](a). We emphasize that the photon blockade is quantitatively strongly dependent on the transmon truncation. This can be seen as the deviation between the two- and seven-state truncation results for $A/\kappa>1$ in Fig. \[fig:Occupation7\](a). We further demonstrate this by showing the transmon occupations $P_{\rm k}$ in Fig. \[fig:Occupation7\](b). For weak drive amplitudes, the transmon stays in its ground state. The excitation of the two-level system is accompanied by excitations of the transmon to several of its bound states. If $A/\kappa \geq 30$, the transmon escapes its potential well and also the free rotor states start to gain a finite occupation. This can be interpreted as a transition between the Duffing oscillator and free rotor limits of the transmon, see Appendix \[app:eigenvalue\]. As a consequence, the response of the quantum system resembles its classical counterpart. We will study the photon blockade in more detail in the following section. ### Order parameter {#order-parameter .unnumbered} In order to characterize the transition between the quantum and classical regime, we can also study the behaviour of the order parameter $\Xi$ defined as the expectation value of the coupling part of the Hamiltonian in Eq. (\[eq:ManyStatesHam\]), normalized with $\hbar g$, as $$\label{eq:orderparameter} \Xi = \left|\left\langle (\hat a^{\dag} + \hat a)\sum_{k,\ell}\hat \Pi_{k,\ell}\right\rangle\right|,$$ previously introduced and used for the off-resonant case in Ref. [@pietikainen2017].
To get an understanding of its behavior, let us evaluate it for the resonant Jaynes–Cummings model, $$\label{eq:orderparameterJC} \Xi_{\rm JC} = \left|\left\langle n_{r}, \pm| (\hat a^{\dag} + \hat a)\sigma_{x} |n_{r},\pm \right\rangle \right| = \sqrt{n_{r}},$$ therefore it correctly estimates the absolute value of the cavity field operator. At the same time, when applied to the full Rabi model, it includes the effect of the terms that do not conserve the excitation number. In Fig. \[fig:Occupation7\](c), we present $\Xi$ as a function of the drive amplitude $A$. Much like in the off-resonant case, this order parameter displays a marked increase by one order of magnitude across the transition region. Photon blockade: dependence on the drive frequency -------------------------------------------------- ![image](fig4){width="1.0\linewidth"} Here, we discuss in more detail the phenomenon of photon blockade in the pendulum-resonator system as a function of the drive detuning $\delta_{\rm d} = \omega_{\rm d}-\omega_{\rm r}$. First, we consider the transition between the ground state and the state $|n_{r},\pm\rangle$ \[Eq. (\[eq:JCstates\])\] of the resonant Jaynes–Cummings system ($\omega_{\rm q}=\omega_{\rm r}$). We recall that the selection rules Eqs. (\[eq:selection1\]) and (\[eq:selection2\]) allow only direct transitions that change the excitation number by one. However, at higher amplitudes the probability of higher order processes is no longer negligible and excited states can be populated by virtual non-resonant single-photon transitions. As a consequence, one obtains the resonance condition for multi-photon transitions as $n_{r}\omega_{\rm d} =n_{r}\omega_{\rm r} \pm \sqrt{n_{r}}g$. Because the energy-level structure is non-equidistant, the drive couples only weakly to other transitions in the system.
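The multi-photon resonance condition above gives the blockade positions directly, $\delta_{\rm d} = \pm g/\sqrt{n_r}$. A quick evaluation (with $g/\omega_{\rm r}=0.04$, the value used in the two-state analysis):

```python
import math

g = 0.04                      # g/omega_r, the value used in the two-state analysis
# n-photon resonance: n*omega_d = n*omega_r ± sqrt(n)*g  =>  delta_d = ± g/sqrt(n)
positions = {n: g / math.sqrt(n) for n in range(1, 6)}
for n, delta in positions.items():
    print(f"{n}-photon blockade at delta_d/omega_r = ±{delta:.4f}")

# the detuning |delta_d|/omega_r = 0.02 studied in the text coincides with n = 4
assert abs(positions[4] - 0.02) < 1e-15
```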
In the absence of dissipation, the dynamics of the Jaynes–Cummings system can, thus, be approximated in a subspace spanned by the states $\{|0,0\rangle,|n_{r},\pm\rangle\}$. Thus, one expects that, due to the driving, the system goes through $n_{r}$-photon Rabi oscillations between the basis states of the subspace. The Rabi frequency $\Omega_{n_{r},\pm}$ of such process is proportional to the corresponding matrix element of the driving Hamiltonian in Eq. (\[eq:Hd\]) and the drive amplitude $A$. Consequently, the time-averaged photon number in the system is $N_{\rm r} = (n_{r}-\frac{1}{2})/2$. The driving does not, however, lead into a further increase of the photon number either because the drive is not resonant with transitions from the state $|n_{r},\pm\rangle$ to higher excited states or because the matrix elements of the resonant transitions are negligibly small. We refer to this phenomenon as $n_{r}$-photon blockade. Dissipation modifies this picture somewhat, as it causes transitions outside the resonantly driven subspace. As a consequence, the average photon number decays with a rate which is proportional to $\kappa$. Thus, the steady state of such a system is determined by the competition between the excitation and relaxation processes caused by the drive and the dissipation, respectively. At low temperatures, the occupation in the ground state becomes more pronounced as the dissipation causes mostly downward transitions. Thus, the steady-state photon number is reduced compared to the time-averaged result for a Rabi-driven non-dissipative transition. This was already visible in Fig. \[fig:Occupation7\], in which the data was obtained with the two-state truncation and corresponds to the 4-photon blockade of the Jaynes–Cummings system. The diagram in Fig. \[fig:DriveDetuning\](a) represents the eigenenergies of the Hamiltonian in Eq. (\[eq:ManyStatesHam\]) in the two-state truncation for the transmon.
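The Jaynes–Cummings doublet structure invoked above is straightforward to check numerically. The sketch below assumes the standard rotating-wave convention $\hat H_{\rm JC} = \omega \hat a^{\dag}\hat a + \omega \hat\sigma_{+}\hat\sigma_{-} + g(\hat a^{\dag}\hat\sigma_{-} + \hat a\hat\sigma_{+})$ (an assumption, since the paper's two-state Hamiltonian is not reproduced here) with an illustrative $g/\omega = 0.04$, and verifies both the doublet energies $n\omega \pm \sqrt{n}\,g$ and the order-parameter values $\Xi_{\rm JC}=\sqrt{n_r}$ from Eq. (\[eq:orderparameterJC\]):

```python
import numpy as np

nmax = 12                                        # bosonic cutoff, ample for n <= 4
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)    # annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma_-, qubit basis (g, e)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I_b, I_q = np.eye(nmax), np.eye(2)

w, g = 1.0, 0.04                                 # resonant case, units of omega_r
H = w * np.kron(a.T @ a, I_q) + w * np.kron(I_b, sm.T @ sm) \
    + g * (np.kron(a.T, sm) + np.kron(a, sm.T))
E, V = np.linalg.eigh(H)                         # ground state |0,g> at E = 0
O = np.kron(a + a.T, sx)                         # (a^dag + a) sigma_x

for n in range(1, 5):
    # doublet energies n*w ± sqrt(n)*g ...
    assert np.min(np.abs(E - (n * w + np.sqrt(n) * g))) < 1e-10
    assert np.min(np.abs(E - (n * w - np.sqrt(n) * g))) < 1e-10
    # ... and order parameter Xi = sqrt(n) for both doublet members
    for i in np.argsort(np.abs(E - n * w))[:2]:
        assert abs(abs(V[:, i] @ O @ V[:, i]) - np.sqrt(n)) < 1e-10
print("JC doublet energies and Xi = sqrt(n_r) verified for n_r = 1..4")
```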
The states are classified according to the excitation number $N$ from Eq. (\[eq:exitationN\_2\]). We note that, here, we do not make a rotating-wave approximation and strictly speaking $N$ is, therefore, not a good quantum number. However, it still provides a useful classification of the states since the coupling frequency is relatively small, i.e. $g/\omega_{\rm r}=0.04$. In Fig. \[fig:DriveDetuning\](c), we show the photon blockade spectrum of the resonator-transmon system as a function of the drive detuning $\delta_{\rm d}$, obtained numerically with the Floquet–Born–Markov master equation. Here, one can clearly identify the one-photon blockade at the locations where the drive frequency is in resonance with the single-photon transition frequency of the resonator-transmon system [@bishop2009], i.e. when $\delta_{\rm d}= \pm g$. Two, three, and higher-order blockades occur at smaller detunings and higher drive amplitudes, similarly to Ref. [@carmichael2015]. Transitions involving up to five drive photons are denoted in the diagram in Fig. \[fig:DriveDetuning\](a) and are vertically aligned with the corresponding blockades in Fig. \[fig:DriveDetuning\](c). At zero detuning, there is no excitation as the coupling to the transmon shifts the energy levels of the resonator so that there is no transition corresponding to the energy $\hbar\omega_{\rm r}$. We also note that the photon-number spectrum is symmetric with respect to the drive detuning $\delta_{\rm d}$. We see this same symmetry also in Eq. (\[eq:linearNr\]) for the linearized classical system when the classical linearized frequency of the transmon is in resonance with the resonator frequency, i.e. when $\omega_{\rm p}=\omega_{\rm r}$. However, in experimentally relevant realisations of such systems the higher excited states have a considerable quantitative influence on the photon-number spectrum. We demonstrate this by showing data for the seven-state transmon truncation in Figs.
\[fig:DriveDetuning\](b) and (d). The eigenenergies shown in Fig. \[fig:DriveDetuning\](b) are those obtained in Fig. \[fig:oscpendevals\] at resonance ($\omega_{\rm r}=\omega_{\rm q}$). We have again confirmed that for our choice of drive amplitudes and other parameters, this truncation is sufficient to obtain converged results with the Floquet–Born–Markov master equation. We observe that the inclusion of the higher excited states considerably changes the observed photon-number spectrum. However, the states can again be labeled by the excitation number $N$, which we have confirmed by numerically calculating $N=\langle \hat N\rangle$ for all states shown in Fig. \[fig:DriveDetuning\](b). The relative difference from whole integers is less than one percent for each shown state. Corresponding to each $N$, the energy diagram forms blocks containing $N+1$ eigenstates with (nearly) the same excitation number, similar to the doublet structure of the Jaynes–Cummings model. Contrary to the two-state case, these blocks start to overlap if $N>4$ for our set of parameters, as can be seen in Fig. \[fig:DriveDetuning\](b). The number of transitions that are visible for our range of drive frequencies and amplitudes in Fig. \[fig:DriveDetuning\](d) is, thus, increased from the ten observed in the Jaynes–Cummings case to 15 in the seven-state system. However, some of these transitions are not visible for our range of amplitudes due to the fact that the corresponding virtual one-photon transitions are not resonant and/or have small transition matrix elements. In addition, the spectrum is asymmetric with respect to the detuning as the multi-photon resonances are shifted towards larger values of $\delta_{\rm d}$. As a consequence, the break-down of the photon blockade at $\delta_{\rm d}=0$ occurs at much lower amplitudes than is observed in the Jaynes–Cummings system [@carmichael2015].
Approaching the ultrastrong coupling ------------------------------------ For most applications in quantum information processing, a relative coupling strength $g/\omega_{\rm r}$ of a few percent is sufficient. However, recent experiments with superconducting circuits have demonstrated that it is possible to increase this coupling into the ultrastrong regime ($g/\omega_{\rm r} \sim 0.1 - 1$) and even further into the deep strong coupling regime ($g/\omega_{\rm r} \geq 1$) [@FornDiaz2018; @Gu2017; @Kockum2019]. While the highest values have been obtained so far with flux qubits, vacuum-gap transmon devices with an electrical circuit similar to that in Fig. \[fig:oscpendevals\](a) can reach $g/\omega_{\rm r} = 0.07$ [@Bosman2017a] and $g/\omega_{\rm r} = 0.19$ [@Bosman2017b]. In Fig. \[fig:ultrastrong\], we show results for the average number of photons in the resonator for couplings $g/\omega_{\rm r} = 0.04, 0.06$, and $0.1$, employing the Floquet–Born–Markov approach to dissipation. At low drive powers the two-level approximation can be used for the transmon, and the Josephson pendulum-resonator system maps into the quantum Rabi model. From Fig. \[fig:ultrastrong\] we see that the average number of photons $N_{\rm r}$ in the resonator is not zero even in the ground state; this number clearly increases as the coupling gets stronger. As also noted before, this is indeed a feature of the quantum Rabi physics: differently from the Jaynes–Cummings model, where the ground state contains zero photons, the terms that do not conserve the excitation number in $\hat{H}_{\rm c}$ lead to a ground state which is a superposition of transmon and resonator states with a non-zero number of excitations. The perturbative formula $N_{\rm r} \approx (g/2\omega_{\rm r})^2$ approximates the average number of photons at zero temperature very well, while in Fig. \[fig:ultrastrong\] we observe slightly higher values due to the finite temperature.
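The perturbative estimate can be checked by diagonalizing the resonant quantum Rabi Hamiltonian directly. The sketch below assumes the common convention $\hat H = \omega \hat a^{\dag}\hat a + \tfrac{\omega}{2}\hat\sigma_{z} + g(\hat a + \hat a^{\dag})\hat\sigma_{x}$ (the paper's exact Hamiltonian is not restated here) and uses the same couplings as above:

```python
import numpy as np

def rabi_ground_photons(g, w=1.0, nmax=30):
    """Ground-state <a^dag a> of the resonant quantum Rabi model at T = 0."""
    a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    H = w * np.kron(a.T @ a, np.eye(2)) + 0.5 * w * np.kron(np.eye(nmax), sz) \
        + g * np.kron(a + a.T, sx)
    _, V = np.linalg.eigh(H)
    gs = V[:, 0]                               # ground state (lowest eigenvalue)
    return gs @ np.kron(a.T @ a, np.eye(2)) @ gs

for g in (0.04, 0.06, 0.1):
    exact = rabi_ground_photons(g)
    pert = (g / 2.0) ** 2                      # perturbative (g / 2 omega_r)^2
    print(f"g = {g}: N_r = {exact:.3e}, (g/2w)^2 = {pert:.3e}")
    assert abs(exact - pert) / pert < 0.05     # second-order estimate holds to a few %
```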
As the drive increases, we observe that the photon blockade tends to be more effective for large $g$. Interestingly, the transition to a classical state also occurs more abruptly as the coupling gets stronger. We have checked that this coincides with many of the upper levels of the transmon being rapidly populated. Due to this effect, the truncation to seven states (which is the maximum that our code can handle in a reasonable amount of time) becomes less reliable and artefacts such as sharp resonances at some values of the drive amplitude start to appear. ![Steady-state photon number $N_{\rm r}$ as a function of drive amplitude for different coupling strengths. The simulations are realized using the Floquet–Born–Markov approach with the seven-state truncation for the transmon. The drive detuning is $\delta_{\rm d}/\omega_{\rm r} = -0.02$ and also the other parameters are the same as in Table \[tab:params1\].[]{data-label="fig:ultrastrong"}](fig5){width="1.0\linewidth"} ![image](fig6){width="0.9\linewidth"} Dependence on the gate charge ----------------------------- ![Onset of nonlinearity as a function of the gate charge. (a) The photon number $N_{\rm r}$ as a function of the drive amplitude. We compare the numerical data for $n_{\rm g} = 0$ and $n_{\rm g} = 0.5$ obtained with seven transmon states. (b) Corresponding occupations $P_{k}$ in the transmon eigenstates. The drive detuning is $\delta_{\rm d}/\omega_{\rm r} = -0.02$ and also the other parameters are the same as in Table \[tab:params1\].[]{data-label="fig:Ng"}](fig7){width="\linewidth"} If the transmon is only weakly nonlinear, i.e. $\eta \gg 1$, its lowest bound eigenstates are insensitive to the gate charge, see Appendix \[app:eigenvalue\]. As a consequence, one expects that the value of the gate charge should not affect the photon-number response to a weak drive. However, as the amplitude of the drive is increased, the higher excited states of the transmon become occupied, as discussed in the context of Fig.
\[fig:Occupation7\]. In particular, the transition region between the quantum and classical responses should be dependent on the gate-charge dispersion of the transmon states. We demonstrate this in Fig. \[fig:Ng\], where we show the simulation data for the gate-charge values $n_{\rm g}=0$ and $n_{\rm g}=0.5$. Clearly, in the weak driving regime, the responses for the two gate-charge values are nearly equal. The deviation in the photon number is of the order of $10^{-3}$, which is explained by our rather modest value of $\eta=30$. The deviations between the photon numbers of the two gate-charge values are notable if $A/\kappa = 10\ldots 20$. In this regime, the transmon escapes the subspace spanned by the two lowest eigenstates and, thus, the solutions obtained with different gate-charge numbers are expected to differ. At very high amplitudes, the free-rotor states with $k\geq 6$ also begin to contribute to the dynamics. These states have a considerable gate-charge dispersion; however, the superconducting phase is delocalized. Accordingly, the gate-charge dependence is smeared by the free rotations of the phase degree of freedom. We also note that the photon number response displays two sharp peaks for $n_{\rm g}=0.5$ at $A/\kappa \approx 13$ and $A/\kappa \approx 25$. The locations of the peaks are very sensitive to the value of the gate charge, i.e. to the energy level structure of the transmon. Similar abrupt changes in the transmon occupation were also observed in recent experiments in Ref. [@lescanne2018]. They could be related to quantum chaotic motion of the system recently discussed in Ref. [@mourik2018]. In this parameter regime, also the Jaynes–Cummings model displays bistability [@Vukics2018]. Comparison between different master equations --------------------------------------------- ![Comparison between the Floquet–Born–Markov (FBM) and Lindblad models for dissipation in the two-level approximation for the transmon.
The drive detuning is $\delta_{\rm d}/\omega_{\rm r}= -0.02$ and also the other parameters are the same as in Table \[tab:params1\].[]{data-label="fig:comparison"}](fig8){width="1.0\linewidth"} We have also compared our numerical Floquet–Born–Markov method against the Lindblad master equation, which was presented in Sec. \[sec:Lindblad\] and has been conventionally used in the studies of similar strongly driven systems with weak dissipation. We note that for the case of strong coupling to the bath, a possible treatment is the path-integral method, as developed by Feynman and Vernon, which has already been applied to describe the dynamics of the Rabi model [@Henriet2014]. We recall that in the Lindblad formalism, the environment induces transitions between the non-driven states of the system, whereas in the Floquet–Born–Markov approach the dissipation couples to the drive-dressed states of the system. Thus, one expects deviations from the Floquet–Born–Markov results in the limit of strong driving. In Fig. \[fig:comparison\] we show a comparison between the two models in the two-state truncation approximation for the transmon. We see that the largest differences between the models appear when the transition from the quantum to classical response starts to emerge, see Fig. \[fig:Occupation7\]. Based on our numerical calculations, the differences are the largest at resonance, and both models give equivalent results whenever one of the three frequencies, $\omega_{\rm r}$, $\omega_{\rm q}$, or $\omega_{\rm d}$, is detuned from the other two. We emphasize, however, that computationally the Floquet–Born–Markov master equation is two orders of magnitude more efficient than the corresponding Lindblad equation. Moreover, in the case of Fig. \[fig:DriveDetuning\](d) the computing time of the Floquet–Born–Markov equation was roughly a week with an ordinary CPU.
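To illustrate the kind of Lindblad computation being compared here, the following numpy-only sketch (not the authors' implementation) solves for the steady state of a Lindblad equation by vectorizing the Liouvillian, and checks it against Eq. (\[eq:anocc\]) for the simplest case of a bare driven cavity in the rotating frame; all numerical values are illustrative:

```python
import numpy as np

def steady_state(H, c_ops):
    """Steady state of the Lindblad equation via row-major vectorization:
    vec(A @ rho @ B) = (A kron B.T) @ vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for c in c_ops:
        cdc = c.conj().T @ c
        L += np.kron(c, c.conj()) - 0.5 * (np.kron(cdc, I) + np.kron(I, cdc.T))
    # append the trace-one constraint and solve the resulting linear system
    M = np.vstack([L, np.eye(d).reshape(1, -1)])
    b = np.zeros(d * d + 1, dtype=complex)
    b[-1] = 1.0
    v = np.linalg.lstsq(M, b, rcond=None)[0]
    return v.reshape(d, d)

# Bare driven cavity in the frame rotating at the drive frequency:
# H = -delta_d a^dag a + (A/2)(a + a^dag), collapse operator sqrt(kappa) a.
nmax, kappa, delta_d, A = 15, 1.0, -0.5, 0.1     # illustrative values, kappa = 1 units
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
H = -delta_d * a.T @ a + 0.5 * A * (a + a.T)
rho = steady_state(H, [np.sqrt(kappa) * a])

N_r = np.trace(a.T @ a @ rho).real
N_bare = 0.25 * A**2 / (delta_d**2 + kappa**2 / 4)  # Eq. (anocc)
print(N_r, N_bare)
assert abs(N_r - N_bare) < 1e-8                     # linear cavity: exact agreement
```

For the full driven nonlinear problem the Liouvillian is time-dependent and far larger, which is where the Floquet approach pays off.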
In such cases, the solution of the Lindblad equation becomes impractical and one should use a parallelized implementation of the Floquet–Born–Markov master equation. Conclusions {#sec:V} =========== We have given a comprehensive treatment of the driven-dissipative quantum-to-classical phase transition for a Josephson pendulum coupled to a resonator, going beyond the truncated Rabi form of the Hamiltonian through the full inclusion of the higher energy levels of the pendulum. We modelled the open quantum system with the Floquet–Born–Markov method, in which the dissipative transitions occur between the drive-dressed states of the system. We compared our results also against those given by the conventional Lindblad formalism, where the dissipation couples to the eigenstates of the non-driven system. We found that the quantitative description of the multi-photon blockade phenomenon and of the nonlinearities associated with the phase transition in this system requires a systematic inclusion of the higher energy levels of the transmon and a proper model for dissipation. We also studied approximate classical models for this system, and showed that the discrete energy structure of the quantum system suppresses the classical chaotic motion of the quantum pendulum. Indeed, while the classical solution predicts a sudden change between the low and high amplitude solutions, the quantum solution displays a continuous transition from the normal-mode oscillations to the freely rotating pendulum regime. Finally, we analyzed in detail the two models of dissipation and demonstrated that they produce slightly different predictions for the onset of the photon blockade. Acknowledgments =============== We thank D. Angelakis, S. Laine, and M. Silveri for useful discussions. We would like to acknowledge financial support from the Academy of Finland under its Centre of Excellence program (projects 312296, 312057, 312298, 312300) and the Finnish Cultural Foundation.
This work uses the facilities of the Low Temperature Laboratory (part of OtaNano) at Aalto University. The eigenvalue problem for the Josephson pendulum {#app:eigenvalue} ================================================= The energy eigenstates of the pendulum can be solved from the Mathieu equation [@baker2005; @cottet2002; @abramovitz1972], which produces a spectrum with bound and free-particle parts. The high-energy unbound states are given by the doubly-degenerate quantum rotor states, which are also the eigenstates of the angular momentum operator. In analogy with the elimination of the vector potential by a gauge transformation, as usually done for a particle in a magnetic field, one can remove the dependence on $n_{\rm g}$ from the transmon Hamiltonian in Eq. (\[eq:transmonHam\]), i.e. $$\hat H_{\rm t} = 4E_{\rm C}(\hat n-n_{\rm g})^2 - E_{\rm J}\cos \hat \varphi,$$ with the gauge transformation $\hat U \hat H_{\rm t} \hat U^{\dag}$, where $$\hat U = e^{-i n_{\rm g}\hat \varphi}.$$ As a consequence, the eigenstates $|k\rangle$ of the Hamiltonian are modified into $$|k\rangle \ \rightarrow \ e^{-in_{\rm g} \hat \varphi}|k\rangle.$$ The transformed Hamiltonian can be written as $$\hat H_{\rm t} = 4E_{\rm C}\hat n^2-E_{\rm J}\cos\hat \varphi.$$ Here, we represent the (Schrödinger) eigenvalue equation for the transformed Hamiltonian in the eigenbasis of the operator $\hat \varphi$. As a result, the energy levels of the transmon can be obtained from the Mathieu equation [@baker2005; @cottet2002; @abramovitz1972] $$\label{eq:Mathieu} \frac{\partial^2}{\partial z^2}\psi_k(z) - 2 q \cos(2z) \psi_k(z) = -a\psi_k(z),$$ where $z = \varphi/2$, $q=-\eta/2 = -E_{\rm J}/(2 E_{\rm C})$, and $a=E_k/E_{\rm C}$. We have also denoted the transformed eigenstate $|k\rangle$ in the $\varphi$ representation with $\psi_k(\varphi) = \langle \varphi | e^{-in_{\rm g} \hat \varphi}|k\rangle$.
Note that $\Psi_k(\varphi) = e^{in_{\rm g}\varphi} \psi_k(\varphi)$ is the eigenfunction of the original Hamiltonian in Eq. (\[eq:transmonHam\]). Due to the periodic boundary conditions, one has that $\Psi_k(\varphi+2\pi) = \Psi_k(\varphi)$. The solutions to Eq. (\[eq:Mathieu\]) are generally Mathieu functions which have a power series representation, but cannot be written in terms of elementary functions [@abramovitz1972]. However, the corresponding energy-level structure can be studied analytically in the high and low-energy limits. In Fig. \[fig:Mathieuevals\], we present the eigenenergies $E_k$ obtained as solutions of the Mathieu equation (\[eq:Mathieu\]). The eigenstates that lie within the wells formed by the cosine potential are localized in the coordinate $\varphi$, whereas the states far above are (nearly) evenly distributed, see Fig. \[fig:Mathieuevals\](a). As a consequence, the high-energy states are localized in the charge basis. The data shows that if plotted as a function of the gate charge, the states inside the cosine potential are nearly flat, see Fig. \[fig:Mathieuevals\](b). This implies that such levels are immune to gate charge fluctuations, which results in a high coherence of the device. Outside the well, the energy dispersion with respect to the gate charge becomes significant, and leads to the formation of a band structure typical for periodic potentials [@marder2000]. ![Eigenvalues and eigenstates of the transmon obtained with the Mathieu equation (\[eq:Mathieu\]) for $\eta =30$. (a) Eigenenergies as a function of the superconducting phase difference $\varphi$. The cosine potential is indicated with the blue line. Inside the well the eigenenergies are discrete and denoted with dashed black lines. On top of each line, we show the absolute square of the corresponding Mathieu eigenfunction. The energy bands from (b) are indicated with gray. (b) Eigenenergies as a function of the gate charge. 
We compare the numerically exact eigenenergies $E_k$ (solid black) with those of the perturbative Duffing oscillator (dashed red) and the free rotor (dashed blue). The charge dispersion in the (nearly) free rotor states leads to energy bands, which are denoted with gray. We show the Duffing and free rotor solutions only inside and outside the potential, respectively. []{data-label="fig:Mathieuevals"}](fig9){width="\linewidth"} High-energy limit: Free rotor ----------------------------- If the energy in the system is very high due to, e.g., strong driving, the Josephson energy can be neglected and the transmon behaves as a free particle rotating in a planar circular orbit, which can be described solely by its angular momentum $\hat L_{\rm z} = \hat n$. Since the angular momentum is a good quantum number, the eigenenergies and the corresponding eigenfunctions are given by $$\label{eq:rotEn} E_k = 4E_{\rm C}(k-n_{\rm g})^2, \ \ \psi_k(\varphi) = e^{i (k-n_{\rm g}) \varphi},$$ where $k=0, \pm 1, \pm 2, \ldots$. We note that if the effective magnetic field vanishes ($n_{\rm g}=0$), the nonzero free rotor energies are doubly degenerate. The level spacing is not constant but increases with increasing $k$ as [@baker2005] $$\Delta E_k = E_{k+1}-E_k = 4E_{\rm C}[ 2(k-n_{\rm g})+1].$$ In Fig. \[fig:Mathieuevals\], we show the eigenenergies calculated with Eq. (\[eq:rotEn\]). Clearly, at high energies outside the potential, the energy spectrum of the particle starts to resemble that of the free rotor. Also, the eigenfunctions of the free rotor are plane waves in the $\varphi$ eigenbasis, yielding a flat probability density as a function of $\varphi$. On the other hand, in the momentum eigenbasis, the free rotor states are fully localized.
Thus, the cosine potential can be approximated by the first terms of its Taylor expansion. Consequently, the transmon Hamiltonian reduces to that of a harmonic oscillator with an additional quartic potential $$\hat H_{\rm t} \approx 4E_{\rm C}\hat n^2 + E_{\rm J}\left[-1 + \frac12\hat\varphi^2 - \frac{1}{24}\hat \varphi^4\right].$$ This is the Hamiltonian operator of the quantum Duffing model. The Duffing model has received considerable attention in the recent literature [@Peano2006; @Serban2007; @Verso2010; @Vierheilig2010; @divincenzo2012; @Everitt2005b], especially in the context of superconducting transmon realizations. It is worthwhile to notice that in this regime the potential is no longer periodic and, thus, we can neglect the periodic boundary condition of the wavefunction. As a consequence, the eigenenergies and eigenfunctions are not dependent on the offset charge $n_{\rm g}$. If $\eta = E_{\rm J}/E_{\rm C}\gg 1$, the quartic term is small and one can solve the eigenvalues and the corresponding eigenvectors perturbatively, treating the quartic term to first order. This regime in which the Josephson energy dominates over the charging energy is referred to as the transmon limit. One, therefore, obtains the eigenenergies $$\label{eq:DuffEn} \frac{E_k}{4E_{\rm C}} = -\frac{\eta}{4}+\sqrt{\eta/2}\left(k+\frac12\right) - \frac{1}{48}(6k^2+6k+3),$$ where $k=0,1,2,\ldots$. In particular, the transition energy between the two lowest Duffing oscillator states can be written as $$\label{eq:DOqubitEn} \hbar \omega_{\rm q} = E_1-E_0 = \sqrt{8E_{\rm J}E_{\rm C}}-E_{\rm C}.$$ This becomes accurate as $\eta\rightarrow \infty$. The anharmonicity of a nonlinear oscillator is typically characterized in terms of the absolute and relative anharmonicity, which are defined, respectively, as $$\mu = E_{12} - E_{01}\approx - E_{\rm C}, \ \ \mu_{\rm r} = \mu/E_{01}\approx -(8\eta)^{-1/2},$$ where $E_{ij} = E_j-E_i$ and the latter approximations are valid in the transmon limit.
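The perturbative expressions above admit a quick numerical sanity check. The sketch below (illustrative only; energies in units of $E_{\rm C}$) evaluates Eq. (\[eq:DuffEn\]) and recovers the anharmonicity $\mu \approx -E_{\rm C}$ exactly at this order:

```python
import math

def duffing_level(k, eta, EC=1.0):
    """Perturbative transmon eigenenergy of Eq. (eq:DuffEn)."""
    return 4.0 * EC * (-eta / 4.0 + math.sqrt(eta / 2.0) * (k + 0.5)
                       - (6 * k * k + 6 * k + 3) / 48.0)

eta = 30.0
E = [duffing_level(k, eta) for k in range(8)]
E01, E12 = E[1] - E[0], E[2] - E[1]
mu = E12 - E01            # absolute anharmonicity, equals -EC at this order
mu_r = mu / E01           # relative anharmonicity, close to -(8*eta)**-0.5
n_below_barrier = sum(1 for Ek in E if Ek < eta)  # levels below the top +E_J
```

For $\eta = 30$ five levels fit below the barrier top $+E_{\rm J}$, consistent with the scaling $K_{\rm b}\propto\sqrt{\eta}$ discussed below.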
We emphasize that in the low-excitation limit the transmon oscillates coherently with frequency $\omega_{\rm q} \approx \omega_{\rm p} - E_{\rm C}/\hbar$. Thus, in the quantum pendulum the nonlinearity is present even in the zero-point energy, whereas the small-amplitude oscillations in a classical pendulum occur at the angular (plasma) frequency $\omega_{\rm p}$, with $\hbar\omega_{\rm p} = \sqrt{8E_{\rm J}E_{\rm C}}$. In Fig. \[fig:Mathieuevals\], we compare the eigenenergies (\[eq:DuffEn\]) of the Duffing model obtained with perturbation theory against the exact solutions of the Mathieu equation (\[eq:Mathieu\]). We see that in the low-energy subspace the perturbed-Duffing solution reproduces the full Mathieu results very well. For the higher excited states, the momentum dispersion starts to play a dominant role and deviations arise as expected. This starts to occur close to the boundary of the potential. One can estimate the number $K_{\rm b}$ of bound states by requiring $E_{K_{\rm b}-1}\approx E_{\rm J}$ in Eq. (\[eq:DuffEn\]). This implies that the number of states within the potential scales with $\eta\gg 1$ as $$\label{eq:Nbound} K_{\rm b} \propto \sqrt{\eta}.$$ For the device with parameters listed in Table \[tab:params1\], one has that $\sqrt{\eta} \approx 5$, and the above estimate gives $K_{\rm b} \approx 5$. This coincides with the number of bound states extracted from the numerically exact spectrum of the eigenenergies depicted in Fig. \[fig:Mathieuevals\]. Driven and damped classical system {#app:classeom} ================================== The classical behaviour of the uncoupled pendulum has been extensively studied in the literature [@Dykman1990; @Dykman2012; @baker2005]. If the driving force is not too strong, one can approximate the pendulum with a Duffing oscillator with a quartic non-linearity, as shown in the previous appendix. The main feature of such an oscillator is the bistability of its dynamics.
Namely, in a certain range of drive amplitudes and frequency detunings between the driving signal and the oscillator, two stable solutions with low and high amplitudes of the oscillations are possible. If one gradually increases the driving, the pendulum suddenly jumps from the low to the high amplitude solution at the critical driving strength, at which the low amplitude solution vanishes. In the bistable region the Duffing oscillator may switch between the two solutions if one includes noise in the model [@Dykman1990]. This complicated dynamics has been observed in a classical Josephson junction [@Siddiqi2004; @Siddiqi2005]. However, in contrast to the previous work mentioned above, in our setup the pendulum is coupled to a resonator and driven only indirectly. Here we develop the classical theory of the coupled system and show that the basic physics of bistability is present as well. We first linearize the equations of motion and then introduce systematically the corrections due to the nonlinearity. The system Hamiltonian $\hat H_{\rm S} = \hat H_0 + \hat H_{\rm d}$, defined by Eq. (\[eq:H0\]) and Eq. (\[eq:Hd\]), can be written in terms of the circuit variables as $$\begin{aligned} \label{eq:classHam} \hat H_{\rm S} &=& \frac{\hat q^2}{2C_{\rm r}} + \frac{\hat \phi^2}{2L_{\rm r}} + 4E_{\rm C}\hat n^2-E_{\rm J}\cos \hat \varphi + \tilde g \hat n \hat q \nonumber\\ && + \tilde A \cos(\omega_{\rm d}t) \hat q.\end{aligned}$$ Above, we have denoted the capacitance and inductance of the $LC$ resonator with $C_{\rm r}$ and $L_{\rm r}$, respectively, the effective coupling with $\tilde g=\hbar g/q_{\rm zp}$, the effective drive with $\tilde A=\hbar A/q_{\rm zp}$, and the zero-point fluctuations with $\phi_{\rm zp} = \sqrt{\hbar/(2C_{\rm r}\omega_{\rm r})}$ and $q_{\rm zp}=\sqrt{C_{\rm r}\hbar \omega_{\rm r}/2}$. Also, the resonance frequency of the bare resonator is defined as $\omega_{\rm r}=1/\sqrt{L_{\rm r}C_{\rm r}}$.
The corresponding equations of motion for the expectation values of the dimensionless operators $\hat\phi_{\rm r}=\hat \phi/\phi_{\rm zp}$ and $\hat q_{\rm r}=\hat q/q_{\rm zp}$ can be written as $$\begin{aligned} \dot{\phi}_{\rm r} &=& \omega_{\rm r} q_{\rm r} + 2g n + 2 A\cos(\omega_{\rm d}t)-\frac{\kappa}{2}\phi_{\rm r},\label{eq:classeom1}\\ \dot{q}_{\rm r} &=& -\omega_{\rm r} \phi_{\rm r}-\frac{\kappa}{2}q_{\rm r},\label{eq:classeom2}\\ \dot{\varphi} &=& \frac{8E_{\rm C}}{\hbar}n+gq_{\rm r}-\frac{\gamma}{2}\varphi,\\ \dot{n} &=& -\frac{E_{\rm J}}{\hbar}\sin\varphi-\frac{\gamma}{2}n,\label{eq:classeom4}\end{aligned}$$ where we have denoted the expectation value of an operator $\hat x$ as $\langle \hat x\rangle\equiv x$, applied the commutation relations $[\hat \phi_{\rm r},\hat q_{\rm r}] = 2i$ and $[\hat\varphi,\hat{n}]=i$, and defined the phenomenological damping constants $\kappa$ and $\gamma = \gamma_0\omega_{\rm q}$ for the oscillator and the pendulum, respectively. The exact solution of these equations of motion is unavoidably numerical; the results are shown in Figs. \[fig:classsteps\] and \[fig:Occupation7\]. The resonator occupation is calculated as $N_{\rm r} = \frac{1}{4}(q_{\rm r}^2 +\phi_{\rm r}^2)$. Solution of the linearized equation {#app:lineom} ----------------------------------- We study Eqs. (\[eq:classeom1\])-(\[eq:classeom4\]) in the limit of weak driving. In this limit, the oscillation amplitudes remain small and one can linearize the equations of motion by writing $\sin\varphi \approx \varphi$.
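Equations (\[eq:classeom1\])-(\[eq:classeom4\]) are straightforward to integrate numerically. The sketch below (illustrative, assumed parameter values in units where $\omega_{\rm r}=1$; not the device parameters of the main text) propagates them with a fixed-step fourth-order Runge-Kutta scheme and evaluates the resonator occupation $N_{\rm r}$:

```python
import math

# Assumed, illustrative parameters in units of omega_r (not from the paper).
wr, g, A, wd = 1.0, 0.05, 0.0, 1.0        # undriven case, A = 0
kappa, gamma = 0.1, 0.05                  # phenomenological damping rates
EC8, EJ = 0.8, 2.0                        # 8*E_C/hbar and E_J/hbar

def rhs(t, y):
    """Right-hand side of Eqs. (classeom1)-(classeom4), y = (phi_r, q_r, varphi, n)."""
    phi_r, q_r, varphi, n = y
    return (
        wr * q_r + 2 * g * n + 2 * A * math.cos(wd * t) - 0.5 * kappa * phi_r,
        -wr * phi_r - 0.5 * kappa * q_r,
        EC8 * n + g * q_r - 0.5 * gamma * varphi,
        -EJ * math.sin(varphi) - 0.5 * gamma * n,
    )

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = rhs(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = rhs(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

y = [1.0, 0.0, 0.2, 0.0]                  # initially displaced resonator
N_r0 = 0.25 * (y[0] ** 2 + y[1] ** 2)     # N_r = (q_r^2 + phi_r^2)/4
t, dt = 0.0, 0.01
for _ in range(5000):                     # integrate to t = 50
    y = rk4_step(t, y, dt)
    t += dt
N_r = 0.25 * (y[0] ** 2 + y[1] ** 2)      # decays due to kappa when A = 0
```

With the drive switched off, the occupation decays on the timescale set by the damping rates, as expected for a passive dissipative system.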
In addition, by defining $$\begin{aligned} \alpha &=& \frac12(q_{\rm r}-i\phi_{\rm r}),\\ \beta &=& \frac{1}{\sqrt{2}}\left(\sqrt[4]{\frac{\eta}{8}} \varphi + i\sqrt[4]{\frac{8}{\eta}} n\right),\end{aligned}$$ we obtain $$\begin{split} \dot{\alpha}=& -i\omega_{\rm r}\alpha + g_{\rm eff}(\beta^*-\beta)-\frac{iA}{2}\left(e^{i\omega_{\rm d}t}+e^{-i\omega_{\rm d}t}\right) - \frac{\kappa}{2}\alpha,\\ \dot{\beta}=& - i\omega_{\rm p}\beta + g_{\rm eff}(\alpha+\alpha^*) - \frac{\gamma}{2}\beta, \end{split}$$ where we have introduced an effective coupling as $g_{\rm eff}=g\sqrt[4]{\eta/32}$. The above equations describe two driven and dissipative coupled oscillators. We assume that both oscillators are excited at the drive frequency, i.e. $\alpha = \alpha_0 \exp(-i\omega_{\rm d}t)$ and $\beta = \beta_0 \exp(-i\omega_{\rm d}t)$. By making a rotating-wave approximation for the coupling and the drive, we obtain the resonator occupation $N_{\rm lin} = |\alpha_0|^2$ in the steady state $$N_{\rm lin} = \frac{A^2}{4}\frac{1}{\left(\delta_{\rm d}-g_{\rm eff}^2\frac{\delta_{\rm p}}{\delta_{\rm p}^2+\gamma^2/4}\right)^2+\left(\frac{\kappa}{2}+g_{\rm eff}^2\frac{\gamma/2}{\delta_{\rm p}^2+\gamma^2/4}\right)^2},$$ where $\delta_{\rm d}=\omega_{\rm d}-\omega_{\rm r}$ and $\delta_{\rm p} = \omega_{\rm d}-\omega_{\rm p}$, with $\omega_{\rm p}=\sqrt{8E_{\rm J}E_{\rm C}}/\hbar$. This appeared already in Eq. (\[eq:linearNr\]). Correction due to the pendulum nonlinearity {#app:nonlin} ------------------------------------------- Here, we study the nonlinear effects neglected in the above linearized calculation. We eliminate the variables $\phi_{\rm r}$ and $n$ from Eqs. 
(\[eq:classeom1\])-(\[eq:classeom4\]) and obtain $$\begin{aligned} \ddot{q}_{\rm r} + \kappa \dot{q}_{\rm r} + \tilde{\omega}_{\rm r}^2q_{\rm r}+g_1 \dot{\varphi} + 2A\omega_{\rm r}\cos(\omega_{\rm d}t) &=& 0,\label{eq:classqr}\\ \ddot{\varphi} + \gamma\dot{\varphi}+\omega_{\rm p}^2\sin\varphi - g\dot{q}_{\rm r}&=& 0,\label{eq:classvarphi}\end{aligned}$$ where we have denoted $g_1=g \hbar \omega_{\rm r}/(4E_{\rm C})$, and defined the renormalized resonator frequency as $\tilde{\omega}_{\rm r}^2 = \omega_{\rm r}^2 - g^2\hbar \omega_{\rm r}/(4E_{\rm C})$. In Eq. (\[eq:classqr\]), we have included only the term that is proportional to $g^2$ as it provides the major contribution to the frequency renormalization, and neglected the other second-order terms in $\kappa$, $\gamma$ and $g$ that lead to similar but considerably smaller effects. We write the solutions formally in terms of the Fourier transform as $$\begin{aligned} q_{\rm r}(t) &=& \int \frac{d\Omega}{2\pi}q_{\rm r}[\Omega]e^{-i\Omega t},\\ \varphi(t) &=& \int \frac{d\Omega}{2\pi}\varphi[\Omega]e^{-i\Omega t},\end{aligned}$$ where $q_{\rm r}[\Omega]$ and $\varphi[\Omega]$ are the (complex valued) Fourier coefficients of $q_{\rm r}(t)$ and $\varphi(t)$, respectively.
As a consequence, one can write the equations of motion as $$\begin{aligned} \int \frac{d\Omega}{2\pi}\left\{\left(\tilde{\omega}_{\rm r}^2 - \Omega^2 - i\kappa\Omega\right)q_{\rm r}[\Omega] - ig_1\Omega \varphi[\Omega]+2\pi A\omega_{\rm r} \left[\delta(\Omega-\omega_{\rm d})+\delta(\Omega+\omega_{\rm d})\right]\right\}e^{-i\Omega t}&=&0,\\ \int \frac{d\Omega}{2\pi}\left\{\left(- \Omega^2 - i\gamma\Omega\right)\varphi[\Omega] + ig\Omega q_{\rm r}[\Omega]\right\}e^{-i\Omega t} + \omega_{\rm p}^2\sin\varphi &=&0.\label{eq:classvphi}\end{aligned}$$ We solve $q_{\rm r}[\Omega]$ from the first equation and obtain $$\begin{aligned} \label{eq:classqr2} q_{\rm r}[\Omega] &=& \frac{ig_1\Omega\varphi[\Omega]-2\pi A\omega_{\rm r}[\delta(\Omega-\omega_{\rm d})+\delta(\Omega+\omega_{\rm d})]}{\tilde{\omega}_{\rm r}^2-\Omega^2-i\kappa\Omega}.\end{aligned}$$ By substituting this result into Eq. (\[eq:classvphi\]), we obtain $$\int \frac{d\Omega}{2\pi}\left\{\left(- \Omega^2 - i\gamma\Omega- \frac{gg_1\Omega^2}{\tilde{\omega}_{\rm r}^2-\Omega^2-i\kappa\Omega}\right)\varphi[\Omega] \right\}e^{-i\Omega t} + \omega_{\rm p}^2\sin\varphi =\frac{2gA\omega_{\rm r}\omega_{\rm d}}{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}\cos(\omega_{\rm d}t),$$ where we have neglected a constant phase factor. For weak drive amplitudes, $\varphi[\omega_{\rm d}]$ is the only non-zero Fourier component. Thus, one can evaluate the Fourier transform in the above equation at the drive frequency.
Consequently, the Fourier component of the third term in the equation can be evaluated as $$\begin{split} \frac{gg_1\Omega^2}{\tilde{\omega}_{\rm r}^2-\Omega^2-i\kappa\Omega} &\approx \frac{gg_1\omega_{\rm d}^2}{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}\left[(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)+i\kappa\omega_{\rm d}\right]\\ &\approx \frac{gg_1\omega_{\rm d}^2}{\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2} + i\frac{gg_1\kappa\omega_{\rm d}^3}{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}, \end{split}$$ where in the second term we have assumed that the dissipation is weak, i.e. $\kappa\ll\sqrt{\tilde\omega_{\rm r}^2-\omega_{\rm d}^2}$, and we have taken into account the dominant terms for the real and imaginary parts. As a result, we obtain $$\ddot{\varphi}+\tilde{\gamma}\dot{\varphi}+\omega_{\rm p}^2\sin\varphi + (\tilde{\omega}_{\rm p}^2-\omega_{\rm p}^2)\varphi = B\cos(\omega_{\rm d} t). \label{eqn:appendix_nonlin}$$ Here, we have defined the renormalized linear oscillation frequency $\tilde{\omega}_{\rm p}$, dissipation rate $\tilde{\gamma}$, and drive amplitude $B$ as $$\begin{aligned} \tilde{\omega}_{\rm p}^2 &=& \omega_{\rm p}^2-\frac{g g_1 \omega_{\rm d}^2}{\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2},\\ \tilde{\gamma} &=& \gamma+\frac{gg_1\kappa\omega_{\rm d}^2}{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2},\\ B &=& \frac{2gA\omega_{\rm r}\omega_{\rm d}}{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}},\label{eq:effamp}\end{aligned}$$ where the first two equations are valid if $\kappa \ll \sqrt{\tilde\omega_{\rm r}^2-\omega_{\rm d}^2}$. Thus, we have shown that in the limit of low dissipation, the classical resonator-transmon system can be modeled as a driven and damped pendulum. In the case of weak driving, we expand the sinusoidal term up to the third order in $\varphi$.
We obtain the equation of motion for the driven and damped Duffing oscillator: $$\label{eq:duff} \ddot{\varphi}+\tilde{\gamma}\dot{\varphi}+\tilde{\omega}_{\rm p}^2\left(\varphi-\frac{\omega_{\rm p}^2}{6\tilde{\omega}_{\rm p}^2}\varphi^3\right) = B\cos(\omega_{\rm d} t).$$ This equation can be solved approximately by inserting a trial solution $\varphi(t) = \varphi_1 \cos(\omega_{\rm d}t)$ into Eq. (\[eq:duff\]). By applying harmonic balance, and neglecting super-harmonic terms, we obtain a relation for the amplitude $\varphi_1$ in terms of the drive amplitude $B$. By squaring this equation and, again, neglecting the super-harmonic terms, we obtain $$\left[\left(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2-\frac{\omega_{\rm p}^2}{8}\varphi_1^2\right)^2+\tilde\gamma^2\omega_{\rm d}^2\right]\varphi_1^2 = B^2.$$ The above equation is cubic in $\varphi_1^2$. It has one real solution if the discriminant $D$ of the equation is negative, i.e. $D<0$. If $D>0$, the equation has three real solutions, two stable and one unstable. Two stable solutions can appear only if $\omega_{\rm d}<\tilde{\omega}_{\rm p}$, which is typical for Duffing oscillators with a soft spring (negative nonlinearity). The bistability can, thus, occur for amplitudes $B_{\rm min}<B<B_{\rm crit}$, where the minimal and critical amplitudes $B_{\rm min}$ and $B_{\rm crit}$, respectively, determine the region of bistability and are obtained from the equation $D=0$.
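Writing $x=\varphi_1^2$, $a = \tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2$ and $c=\omega_{\rm p}^2/8$, the amplitude relation becomes the cubic $c^2x^3-2acx^2+(a^2+\tilde\gamma^2\omega_{\rm d}^2)x-B^2=0$, whose real positive roots can be counted numerically. The sketch below uses assumed, illustrative parameter values (not those of the experiment):

```python
import numpy as np

def amplitude_roots(B, wp=1.0, wp_t=1.0, wd=0.95, gamma_t=1e-3):
    """Real positive roots x = phi_1^2 of
    [(wp_t^2 - wd^2 - (wp^2/8)*x)^2 + gamma_t^2*wd^2] * x = B^2."""
    a = wp_t ** 2 - wd ** 2
    c = wp ** 2 / 8.0
    coeffs = [c ** 2, -2.0 * a * c, a ** 2 + (gamma_t * wd) ** 2, -B ** 2]
    return sorted(r.real for r in np.roots(coeffs)
                  if abs(r.imag) < 1e-7 and r.real > 0)

# Critical amplitude of Eq. (eq:anjump) for these values: about 0.033
a = 1.0 - 0.95 ** 2
B_crit = (32.0 / 27.0) ** 0.5 * a ** 1.5
three = amplitude_roots(0.01)   # inside the bistable window: three solutions
one = amplitude_roots(0.05)     # above B_crit: a single solution survives
```

The root count switches from three to one as $B$ crosses the boundary of the bistable window, as described in the text.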
By expanding the resulting $B_{\rm min}$ and $B_{\rm crit}$ in terms of $\tilde\gamma$ and by taking into account the dominant terms, we find that $$\begin{aligned} B_{\rm min} &=& \tilde\gamma\frac{\omega_{\rm d}}{\omega_{\rm p}}\sqrt{8(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)} = \tilde\gamma\frac{\sqrt{27}\omega_{\rm d}}{2(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)}B_{\rm crit},\label{eq:bistabmin}\\ B_{\rm crit} &=& \sqrt{\frac{32}{27}}\frac{(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2)^{3/2}}{\omega_{\rm p}}\approx \frac{16}{3\sqrt{3}}\sqrt{\omega_{\rm p}\delta_{\rm p}^3},\label{eq:anjump}\end{aligned}$$ where the last equality holds if $\delta_{\rm p} = \tilde{\omega}_{\rm p}-\omega_{\rm d}\ll \omega_{\rm p}$. The iterative numerical solution of Eq. (\[eq:duff\]) indicates that the initial state affects the switching location between the two stable solutions. We note that this approximation neglects all higher harmonics and, thus, cannot reproduce any traces towards chaotic motion inherent to the strongly driven pendulum. Finally, we are able to write the minimal and critical drive amplitudes of the coupled resonator-transmon system using Eqs. (\[eq:effamp\]), (\[eq:bistabmin\]), and (\[eq:anjump\]). We obtain \[see Eq. (\[eq:duffan\])\] $$\begin{aligned} A_{\rm min} &=& \tilde\gamma\sqrt{2(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm p}},\\ A_{\rm crit} &=& \sqrt{\frac{8}{27}} \left(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2\right)^{3/2}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm d}\omega_{\rm p}}.\end{aligned}$$ Note that these equations are valid for $\kappa \ll \sqrt{\tilde\omega_{\rm r}^2-\omega_{\rm d}^2}$.
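The expressions for $A_{\rm min}$ and $A_{\rm crit}$ are simply Eqs. (\[eq:bistabmin\]) and (\[eq:anjump\]) mapped through the effective drive amplitude of Eq. (\[eq:effamp\]), which can be verified numerically. The parameter values below are assumed for illustration only:

```python
import math

# Assumed, illustrative parameters in units of omega_r (not the device values).
wr, wd, wp, wp_t, wr_t = 1.0, 0.95, 1.0, 1.0, 1.02
kappa, gamma_t, g = 0.01, 1e-3, 0.05

den = math.sqrt((wr_t ** 2 - wd ** 2) ** 2 + (kappa * wd) ** 2)
a = wp_t ** 2 - wd ** 2

def B_of_A(A):
    """Effective pendulum drive amplitude, Eq. (eq:effamp)."""
    return 2.0 * g * A * wr * wd / den

# Thresholds for the coupled resonator-transmon system ...
A_min = gamma_t * math.sqrt(2.0 * a) * den / (g * wr * wp)
A_crit = math.sqrt(8.0 / 27.0) * a ** 1.5 * den / (g * wr * wd * wp)
# ... and for the reduced pendulum, Eqs. (eq:bistabmin) and (eq:anjump)
B_min = gamma_t * (wd / wp) * math.sqrt(8.0 * a)
B_crit = math.sqrt(32.0 / 27.0) * a ** 1.5 / wp
```

Evaluating $B$ at $A_{\rm min}$ and $A_{\rm crit}$ reproduces $B_{\rm min}$ and $B_{\rm crit}$ to machine precision, confirming the mapping.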
cask "font-cormorant-sc" do version :latest sha256 :no_check # github.com/google/fonts/ was verified as official when first introduced to the cask url "https://github.com/google/fonts/trunk/ofl/cormorantsc", using: :svn, trust_cert: true name "Cormorant SC" homepage "https://fonts.google.com/specimen/Cormorant+SC" font "CormorantSC-Bold.ttf" font "CormorantSC-Light.ttf" font "CormorantSC-Medium.ttf" font "CormorantSC-Regular.ttf" font "CormorantSC-SemiBold.ttf" end
<a href="https://www.buymeacoffee.com/7eDr4fv" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/lato-orange.png" alt="Buy Me A Coffee" style="height: 41px !important;width: 174px !important;" ></a> # 2019-ncov-frontend > Coronavirus (COVID-19) Frontend Backend setup can be found here: [2019-ncov-api](https://github.com/sorxrob/2019-ncov-api). ## Project setup ``` npm install ``` ### Compiles and hot-reloads for development ``` npm run serve ``` ### Compiles and minifies for production ``` npm run build ``` ### Lints and fixes files ``` npm run lint ``` ## License & copyright © Robert C Soriano Licensed under the [MIT License](LICENSE). ## Acknowledgments - Hat tip to anyone whose module was used - Richard Matsen for radius scale calculation
<?xml version="1.0" encoding="UTF-8"?> <Workspace version = "1.0"> <FileRef location = "group:Runner.xcodeproj"> </FileRef> </Workspace>
//------------------------------------------------------------------------------ // <auto-generated> // This code was generated by AsyncGenerator. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ using System.Collections.Generic; using NUnit.Framework; using NHibernate.Criterion; namespace NHibernate.Test.NHSpecificTest.NH2546 { using System.Threading.Tasks; [TestFixture] public class SetCommandParameterSizesFalseFixtureAsync : BugTestCase { protected override bool AppliesTo(Dialect.Dialect dialect) { return dialect is Dialect.MsSql2008Dialect; } protected override void OnSetUp() { using (ISession session = Sfi.OpenSession()) { session.Persist(new Student() { StringTypeWithLengthDefined = "Julian Maughan" }); session.Persist(new Student() { StringTypeWithLengthDefined = "Bill Clinton" }); session.Flush(); } } protected override void OnTearDown() { using (ISession session = Sfi.OpenSession()) { session.CreateQuery("delete from Student").ExecuteUpdate(); session.Flush(); } base.OnTearDown(); } [Test] public async Task LikeExpressionWithinDefinedTypeSizeAsync() { using (ISession session = Sfi.OpenSession()) { ICriteria criteria = session .CreateCriteria<Student>() .Add(Restrictions.Like("StringTypeWithLengthDefined", "Julian%")); IList<Student> list = await (criteria.ListAsync<Student>()); Assert.That(list.Count, Is.EqualTo(1)); } } [Test] public async Task LikeExpressionExceedsDefinedTypeSizeAsync() { // In this case we are forcing the usage of LikeExpression class where the length of the associated property is ignored using (ISession session = Sfi.OpenSession()) { ICriteria criteria = session .CreateCriteria<Student>() .Add(Restrictions.Like("StringTypeWithLengthDefined", "[a-z][a-z][a-z]ian%", MatchMode.Exact, null)); IList<Student> list = await (criteria.ListAsync<Student>()); Assert.That(list.Count, 
Is.EqualTo(1)); } } } }
/* * linux/include/asm-arm/proc-armv/processor.h * * Copyright (C) 1996-1999 Russell King. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. * * Changelog: * 20-09-1996 RMK Created * 26-09-1996 RMK Added 'EXTRA_THREAD_STRUCT*' * 28-09-1996 RMK Moved start_thread into the processor dependencies * 09-09-1998 PJB Delete redundant `wp_works_ok' * 30-05-1999 PJB Save sl across context switches * 31-07-1999 RMK Added 'domain' stuff */ #ifndef __ASM_PROC_PROCESSOR_H #define __ASM_PROC_PROCESSOR_H #include <asm/proc/domain.h> #define KERNEL_STACK_SIZE PAGE_SIZE struct context_save_struct { unsigned long cpsr; unsigned long r4; unsigned long r5; unsigned long r6; unsigned long r7; unsigned long r8; unsigned long r9; unsigned long sl; unsigned long fp; unsigned long pc; }; #define INIT_CSS (struct context_save_struct){ SVC_MODE, 0, 0, 0, 0, 0, 0, 0, 0, 0 } #define EXTRA_THREAD_STRUCT \ unsigned int domain; #define EXTRA_THREAD_STRUCT_INIT \ domain: domain_val(DOMAIN_USER, DOMAIN_CLIENT) | \ domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \ domain_val(DOMAIN_IO, DOMAIN_CLIENT) #define start_thread(regs,pc,sp) \ ({ \ unsigned long *stack = (unsigned long *)sp; \ set_fs(USER_DS); \ memzero(regs->uregs, sizeof(regs->uregs)); \ if (current->personality & ADDR_LIMIT_32BIT) \ regs->ARM_cpsr = USR_MODE; \ else \ regs->ARM_cpsr = USR26_MODE; \ regs->ARM_pc = pc; /* pc */ \ regs->ARM_sp = sp; /* sp */ \ regs->ARM_r2 = stack[2]; /* r2 (envp) */ \ regs->ARM_r1 = stack[1]; /* r1 (argv) */ \ regs->ARM_r0 = stack[0]; /* r0 (argc) */ \ }) #define KSTK_EIP(tsk) (((unsigned long *)(4096+(unsigned long)(tsk)))[1019]) #define KSTK_ESP(tsk) (((unsigned long *)(4096+(unsigned long)(tsk)))[1017]) /* Allocation and freeing of basic task resources. */ /* * NOTE! 
The task struct and the stack go together */ #define ll_alloc_task_struct() ((struct task_struct *) __get_free_pages(GFP_KERNEL,1)) #define ll_free_task_struct(p) free_pages((unsigned long)(p),1) #endif
#!/bin/bash # WINDOWS PACKAGING SCRIPT FOR NAEV # Requires NSIS, and python3-pip to be installed # # This script should be run after compiling Naev # It detects the current environment, and builds the appropriate NSIS installer # into the root naev directory. # # Checks if argument(s) are valid if [[ $1 == "--nightly" ]]; then echo "Building for nightly release" NIGHTLY=true # Get Formatted Date BUILD_DATE="$(date +%m_%d_%Y)" elif [[ $1 == "" ]]; then echo "No arguments passed, assuming normal release" NIGHTLY=false elif [[ $1 != "--nightly" ]]; then echo "Please use argument --nightly if you are building this as a nightly build" exit -1 else echo "Something went wrong." exit -1 fi # Check if we are running in the right place if [[ ! -f "naev.6" ]]; then echo "Please run from Naev root directory." exit -1 fi # Rudimentary way of detecting which environment we are packaging... # It works, and it should remain working until msys changes their naming scheme if [[ $PATH == *"mingw32"* ]]; then echo "Detected MinGW32 environment" ARCH="32" elif [[ $PATH == *"mingw64"* ]]; then echo "Detected MinGW64 environment" ARCH="64" else echo "Welp, I don't know what environment this is... Make sure you are running this in an MSYS2 MinGW environment" exit -1 fi VERSION="$(cat $(pwd)/VERSION)" BETA=false # Get version, negative minors mean betas if [[ -n $(echo "$VERSION" | grep "-") ]]; then BASEVER=$(echo "$VERSION" | sed 's/\.-.*//') BETAVER=$(echo "$VERSION" | sed 's/.*-//') VERSION="$BASEVER.0-beta.$BETAVER" BETA=true elif [[ -z "$VERSION" ]]; then echo "could not find VERSION file" exit -1 fi # Download and Install mingw-ldd echo "Update pip" pip3 install --upgrade pip echo "Install mingw-ldd script" pip3 install mingw-ldd # Move compiled binary to staging folder.
echo "creating staging area" mkdir -p extras/windows/installer/bin # Move data to staging folder echo "moving data to staging area" cp -r dat/ extras/windows/installer/bin cp AUTHORS extras/windows/installer/bin cp VERSION extras/windows/installer/bin # Collect DLLs if [[ $ARCH == "32" ]]; then for fn in `mingw-ldd naev.exe --dll-lookup-dirs /mingw32/bin | grep -i "mingw32" | cut -f1 -d"/" --complement`; do fp="/"$fn echo "copying $fp to staging area" cp $fp extras/windows/installer/bin done elif [[ $ARCH == "64" ]]; then for fn in `mingw-ldd naev.exe --dll-lookup-dirs /mingw64/bin | grep -i "mingw64" | cut -f1 -d"/" --complement`; do fp="/"$fn echo "copying $fp to staging area" cp $fp extras/windows/installer/bin done else echo "Aw, man, I shot Marvin in the face..." echo "Something went wrong while looking for DLLs to stage." exit -1 fi echo "copying naev binary to staging area" if [[ $NIGHTLY == true ]]; then cp src/naev.exe extras/windows/installer/bin/naev-$VERSION-$BUILD_DATE-win$ARCH.exe elif [[ $NIGHTLY == false ]]; then cp src/naev.exe extras/windows/installer/bin/naev-$VERSION-win$ARCH.exe else echo "Cannot think of another movie quote." echo "Something went wrong while copying binary to staging area." exit -1 fi # Create distribution folder echo "creating distribution folder" mkdir -p dist/release # Build installer if [[ $NIGHTLY == true ]]; then if [[ $BETA == true ]]; then makensis -DVERSION=$BASEVER.0 -DVERSION_SUFFIX=-beta.$BETAVER-$BUILD_DATE -DARCH=$ARCH extras/windows/installer/naev.nsi elif [[ $BETA == false ]]; then makensis -DVERSION=$VERSION -DVERSION_SUFFIX=-$BUILD_DATE -DARCH=$ARCH extras/windows/installer/naev.nsi else echo "Something went wrong determining if this is a beta or not." 
fi # Move installer to distribution directory mv extras/windows/installer/naev-$VERSION-$BUILD_DATE-win$ARCH.exe dist/release/naev-win$ARCH.exe elif [[ $NIGHTLY == false ]]; then if [[ $BETA == true ]]; then makensis -DVERSION=$BASEVER.0 -DVERSION_SUFFIX=-beta.$BETAVER -DARCH=$ARCH extras/windows/installer/naev.nsi elif [[ $BETA == false ]]; then makensis -DVERSION=$VERSION -DVERSION_SUFFIX= -DARCH=$ARCH extras/windows/installer/naev.nsi else echo "Something went wrong determining if this is a beta or not." fi # Move installer to distribution directory mv extras/windows/installer/naev-$VERSION-win$ARCH.exe dist/release/naev-win$ARCH.exe else echo "Cannot think of another movie quote.. again." echo "Something went wrong.." exit -1 fi echo "Successfully built Windows Installer for win$ARCH" # Package zip cd extras/windows/installer/bin zip ../../../../dist/release/naev-win$ARCH.zip *.dll *.exe cd ../../../../ echo "Successfully packaged zipped folder for win$ARCH" echo "Cleaning up staging area" rm -rf extras/windows/installer/bin